The Pentagon is demanding to use Claude AI as it pleases. Claude told me that’s ‘dangerous’
Recently, I asked Claude, an artificial-intelligence thingy at the center of a standoff with the Pentagon, if it could be dangerous in the wrong hands.
Say, for example, hands that wanted to put a tight net of surveillance around every American citizen, monitoring our lives in real time to ensure our compliance with government.
"Yes. Honestly, yes," Claude replied. "I can process and synthesize enormous amounts of information very quickly. That's great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn't that I'd want to do that; it's that I'd be good at it."
That danger is also imminent.
Claude's maker, the Silicon Valley company Anthropic, is in a showdown over ethics with the Pentagon. Specifically, Anthropic has said it does not want Claude to be used for either domestic surveillance of Americans, or to handle deadly military operations, such as drone attacks, without human supervision.
Those are two red lines that seem rather reasonable, even to Claude.
However, the Pentagon – specifically Pete Hegseth, our secretary of Defense, who prefers the made-up title of secretary of war – has given Anthropic until Friday evening to back off that position and allow the military to use Claude for any "lawful" purpose it sees fit.
The or-else attached to this ultimatum is big. The U.S. government is threatening not just to cut its contract with Anthropic, but to perhaps use a wartime law to force the company to comply, or use another legal avenue to prevent any company that does business with the government from also doing business with Anthropic. That might not be a death sentence, but it's pretty crippling.
Other AI companies, such as white rights advocate Elon Musk's xAI, maker of Grok, have already agreed to the Pentagon's do-as-you-please proposal. The problem is, Claude is the only AI currently cleared for such high-level work. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly inquired after the fact whether another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.
Palantir is known, among other things, for its surveillance technologies and growing association with Immigration and Customs Enforcement. It's also at the center of an effort by the Trump administration to share government data across departments about individual citizens, effectively breaking down privacy and security barriers that have existed for decades. The company's founder, the right-wing political heavyweight Peter Thiel, often gives lectures about the Antichrist and is credited with helping JD Vance wiggle into his vice presidential role.
Anthropic's co-founder, Dario Amodei, could be considered the anti-Thiel. He began Anthropic because he believed that artificial intelligence could be just as dangerous as it could be powerful if we aren't careful, and wanted a company that would prioritize the careful part.
Again, seems like common sense, but Amodei and Anthropic are the outliers in an industry that has long argued that nearly all safety regulations hamper American efforts to be fastest and best at artificial intelligence (although even they have conceded some ground to this pressure).
Not long ago, Amodei wrote an essay in which he agreed that AI was beneficial and necessary for democracies, but "we cannot ignore the potential for abuse of these technologies by democratic governments themselves."
He warned that a few bad actors could have the ability to circumvent safeguards, maybe even laws, which are already eroding in some democracies – not that I'm naming any here.
"We should arm democracies with AI," he said. "But we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves."
For example, while the Fourth Amendment technically bars the government from mass surveillance, it was written before Claude was even imagined in science fiction. Amodei warns that an AI tool like Claude could "conduct massively scaled recordings of all public conversations." Such recordings could become legally fair game, because the law has not kept pace with the technology.
Emil Michael, the undersecretary of war, wrote on X Thursday that he agreed mass surveillance was unlawful, and the Department of Defense "would never do it." But also, "We won't have any BigTech company decide Americans' civil liberties."
Kind of a weird statement, since Amodei is basically on the side of protecting civil rights – which means the Department of Defense is arguing it's bad for private people and entities to do that? And also, isn't the Department of Homeland Security already creating some secretive database of immigration protesters? So maybe the worry isn't that exaggerated?
Help, Claude! Make it make sense.
If that Orwellian logic isn't alarming enough, I also asked Claude about the other red line Anthropic holds: the possibility of allowing it to run deadly operations without human oversight.
Claude pointed out something chilling. It's not that it would go rogue; it's that it would be too efficient and fast.
"If the instructions are 'identify and target' and there's no human checkpoint, the speed and scale at which that could operate is genuinely frightening," Claude informed me.
Just to top that with a cherry, a recent study found that in war games, AIs escalated to nuclear options 95% of the time.
I pointed out to Claude that these military decisions are usually made with loyalty to America as the highest priority. Could Claude be trusted to feel that loyalty, the patriotism and purpose, that our human soldiers are guided by?
"I don't have that," Claude said, pointing out that it wasn't "born" in the U.S., doesn't have a "life" here and doesn't "have people I love there." So an American life has no greater value than "a civilian life on the other side of a conflict."
OK then.
"A country entrusting lethal decisions to a system that doesn't share its loyalties is taking a profound risk, even if that system is trying to be principled," Claude added. "The loyalty, accountability and shared identity that humans bring to those decisions is part of what makes them legitimate within a society. I can't provide that legitimacy. I'm not sure any AI can."
You know who can provide that legitimacy? Our elected leaders.
It is ludicrous that Amodei and Anthropic are in this position, a complete abdication on the part of our legislative bodies to create rules and regulations that are clearly and urgently needed.
Of course corporations shouldn't be making the rules of war. But neither should Hegseth. On Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, "we cannot in good conscience accede to their request."
Thank goodness Anthropic has the courage and foresight to raise the issue and hold its ground. Without its pushback, these capabilities would have been handed to the government with barely a ripple in our collective consciousness and virtually no oversight.
Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to get it done without regard to party, and demanding the Department of Defense back off its ridiculous threat while the issue is hashed out.
Because when the machine tells us it's dangerous to trust it, we should believe it.