How the military is using AI in war
With Anthropic’s AI systems being ushered out of the Pentagon, a battle is brewing among other major artificial intelligence firms looking to capitalize on this potentially lucrative opening and shape the way AI is integrated into America’s military defense.
Earlier this month, the Pentagon called for Anthropic’s AI technology to be removed from military operations within six months, the result of an escalating feud between the company’s chief executive and the Trump administration. An internal Pentagon memo hinted that Anthropic’s artificial intelligence was being used in key areas of national security, including nuclear weapons, ballistic missile defense and cyber warfare.
Sources familiar with the U.S. military’s use of artificial intelligence tell CBS News that AI programs (including one created by Anthropic, which the Trump administration has deemed a supply chain risk) are likely being deployed as part of the U.S. operation against Iran.
While the Pentagon has not said exactly how AI tools are being deployed, CBS News spoke with several experts with knowledge of military operations who described the likely scenarios.
“The military is now processing roughly a thousand potential targets a day and striking the majority of them, with turnaround time for the next strike potentially under four hours,” said retired Navy Admiral Mark Montgomery, senior director of the Foundation for Defense of Democracies’ Center on Cyber and Technology Innovation. “A human is still in the loop, but AI is doing the work that used to take days of analysis, and doing it at a scale no previous campaign has matched.”
How AI is used by the military
The Pentagon uses AI in the ways many consumers do: to summarize and distill lots of information at once. According to former Pentagon officials, by analyzing documents, video and images coming in from the battlefield, AI can help the military war-game scenarios to minimize casualties and determine which weapons would be most effective.
“We’re living through a military revolution driven by the digital revolution,” said CBS News national security analyst Aaron McLean. “Today’s revolution is driven by the explosion of data: cameras everywhere, smartphones, connected cars. The battlefield is now flooded with information in ways that were unimaginable a generation ago.”
With so much data available, AI has become instrumental in contextualizing it for military personnel at a speed far beyond traditional human analysis.
“There’s now far more data than any room of analysts could process on timelines that matter. AI algorithms sift through it to build targeting packages, assign strike assets and assess damage, nearly instantly,” McLean said.
“The Israel missile defense example makes this visceral: when hundreds of drones and missiles are inbound over a few hours, no human team can decide in real time which ones to intercept, with what, and when. That’s what AI is doing.”
So far, Anthropic’s large language model, Claude, is the only large-scale AI system that’s been operational on the Defense Department’s classified systems.
AI is also used for other administrative functions like research, policy development and procurement, according to Josh Gruenbaum, the commissioner of the Federal Acquisition Service, the government office that helps federal agencies buy goods and services.
“Our goal has been, and remains, to help agencies become comfortable using this technology and turbocharging output and efficiencies for the American taxpayer, while maintaining an evenhanded approach that welcomes American innovators who strengthen agency missions and enable the lawful deployment of these tools by government without inappropriate impediment,” Gruenbaum told CBS News.
How AI works with physical weapons
AI doesn’t exist in a vacuum on the battlefield. There is still plenty of human oversight and physical tech, everything from aircraft carriers to drones, supplied by legacy defense contractors like Northrop Grumman, Boeing and Lockheed Martin. The large language models that power AI are not flying the planes or firing the missiles, but they are being used to do much of the analysis before those things happen.
According to Montgomery, this advancement has compressed operation time from days to hours.
“It’s an important enabler in the military’s ability to rapidly plan and execute war fights,” Montgomery told CBS News, emphasizing that humans remain in the process but that AI is used to help plan potential strikes.
A source directly familiar with the military capabilities of Anthropic’s Claude AI told CBS News that Claude’s main task is sifting through large volumes of intelligence reports: synthesizing patterns, summarizing findings and surfacing relevant information faster than a human analyst could.
The targeting process remains human-driven, the source said. While Anthropic’s U.S. Government Usage Policy does allow the Defense Department to use Claude for analyzing foreign intelligence, the terms of use require humans to make any decisions on military targets.
CBS News has not been able to independently verify whether Claude systems were used in a Feb. 28 strike that hit a girls’ school in Iran, for which the U.S. was likely responsible.
AI is a significant boost to operations, but the war could still be fought without it. More traditional legacy contractors still make the vast majority of weapons, according to Montgomery.
“This war is being fought by weapons, 98% by weapons provided by the traditional primes, and they’re doing very well,” Montgomery said. He added that a war could be fought without AI, but it would be “less desirable.” “It definitely is playing a role that will probably only grow campaign after campaign after campaign,” he said.
Big tech’s role in the military, and what’s changing
In July, the Pentagon signed a $200 million contract with the artificial intelligence company Anthropic to integrate Claude into Pentagon systems. That contract has since been canceled following a dispute between the Pentagon and Anthropic’s leaders about who should have final say in setting restrictions on how Claude is used by the military.
Now, the company is suing the federal government, alleging retaliation. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here,” Anthropic said in the lawsuit.
Microsoft and workers from OpenAI and Google have filed amicus briefs in support of Anthropic’s lawsuit.
The Pentagon has a six-month off-ramp period to remove Anthropic’s products from its systems, and is still using them in Iran despite the supply chain risk designation.
Meanwhile, other companies are getting in on the action. Google announced in a blog post on Tuesday that it is rolling out AI agents for non-classified military uses. On the heels of Anthropic’s fallout with the Defense Department in late February, Sam Altman, CEO of Anthropic rival OpenAI, posted on X about using the ChatGPT maker’s artificial intelligence models in the Pentagon’s classified network. The company then posted about language in its deal with the Pentagon honoring what it refers to as its three red lines on AI use: no autonomous lethal weapons, no mass surveillance of Americans, and no high-stakes automated decisions.