Military Embedded Systems

GUEST BLOG: The promises and pitfalls of an AI-enhanced battlefield


June 20, 2024

George Kamis


In October 2023, the Biden administration issued an Executive Order focused on ensuring that artificial intelligence (AI) – which has captured headlines of late largely due to the rise of ChatGPT – is developed safely and securely. In a military context, this move will include the drafting of a memorandum, produced by the National Security Council and the White House Chief of Staff, that details how the U.S. military can ethically and effectively use AI to carry out its missions while also countering its use by adversaries.

Such guidance couldn’t come at a more opportune time. As AI continues to advance at an extraordinary pace, both the military and industry must clearly understand the tools, processes, and procedures required to use AI responsibly in a military context.

AI-enabled military tools in use

One promising area for AI on the battlefield is autonomous defense. Currently, Ukraine is using autonomous drones to identify and strike Russian targets. According to the Center for European Policy Analysis, AI automates the takeoff, targeting, and landing of uncrewed aerial systems (UASs). But automated defense systems have been around for some time now – and were doing AI-adjacent things before the label existed. The Patriot Missile System, which the U.S. sent to Ukraine in the spring of 2023, was actually first used in combat in the 1991 Gulf War.

However, such systems are not immune to error. For example: During the 2003 Iraq War, the Patriot misidentified a U.K. fighter jet as an Iraqi missile. This instance of so-called friendly fire offers an important lesson for the AI-enabled battlefield: A human should always be kept in the loop to vet the data and decision-making before a kinetic action takes place. Another area well-suited for AI is identifying changes in surveillance video; again, however, AI should not be making the final decision. Instead, it should simply alert someone to look more closely at the change in question.

Processing AI-generated insights

Artificial intelligence also has the potential to play an important supporting role on the battlefield, particularly when it comes to enabling and securing information-sharing. It’s no secret that the ability to access and exchange data is integral to collaborative warfighting. In Ukraine, for instance, social media content is being used to identify Russian assets. Open-source weather data can also play a crucial role in executing military missions.

To that end, the AI-enabled battlefield cannot exist without cross-domain solutions to serve as the information highway between data that exists at different classification levels. Cross-domain solutions ensure that all unclassified data is vetted and sanitized before it’s pushed up to a higher classification level. The benefits of this approach are twofold: the AI engine has a richer overall data set to analyze, while the algorithm’s integrity is maintained.
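The vet-and-sanitize step can be pictured as a filter that strips anything outside an approved schema before data moves to a higher classification level. The sketch below is purely illustrative, not any accredited cross-domain product: the field names, schema, and rules are invented for this example.

```python
from typing import Optional

# Invented schema and limits for illustration only.
ALLOWED_FIELDS = {"timestamp", "lat", "lon", "observation"}
MAX_OBSERVATION_LEN = 280  # assumed cap on free-text length

def sanitize_record(record: dict) -> Optional[dict]:
    """Vet one unclassified record before pushing it up a level.

    Returns a cleaned copy, or None if the record fails vetting.
    """
    # Drop any field outside the approved schema (no hidden payloads).
    cleaned = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    # Reject records missing required fields.
    if ALLOWED_FIELDS - cleaned.keys():
        return None

    # Enforce simple constraints on free text: length and printable ASCII.
    obs = str(cleaned["observation"])
    if len(obs) > MAX_OBSERVATION_LEN or any(ord(c) > 126 for c in obs):
        return None

    return cleaned
```

A record with an unapproved extra field has that field silently dropped, while a record missing required fields or carrying oversized text is rejected outright; real cross-domain solutions apply far deeper inspection, but the shape of the gate is the same.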

Because AI is so adept at analyzing large data sets, it also holds tremendous potential to improve cybersecurity, which is particularly important as cyberattacks play an ever-growing role in warfare. Russia, for one, has been carrying out cyberattacks against Ukraine for nearly a decade now. AI should definitely be part of the process of continuously monitoring military networks. Generally speaking, AI can spot anomalies and threats far faster than human analysts, including advanced phishing and malware attacks.

Establishing proper AI procedures

AI is undoubtedly going to impact the battlefield, but it’s important to ensure that the hype around it doesn’t distort our expectations. While generative AI in particular created a lot of buzz this year, a recent report from Gartner asserts that we’re at the peak of its hype cycle. Many existing machine learning technologies have already been rebranded as AI because that’s where the money is. But while AI is transformative, it should still be conceptualized as a tool that must be vetted and supervised by trained military personnel.

Despite its immense computing power, AI should not be considered a silver bullet. It’s an extremely useful tool, but much like a calculator, AI is only as good as the data it’s given. To reap the benefits of AI, the military cannot focus myopically on the technology itself. Instead, the emphasis should be on building sociotechnical systems where people and computers work together to make decisions.

Put another way, AI should be treated as another knowledgeable person on the team, as opposed to a piece of technology that takes precedence over military decision-making. Thus, it’s crucial to train soldiers and other military personnel on the proper procedures for working alongside AI.

George Kamis is CTO of Everfox (formerly Forcepoint Federal).
