The Soul in the Machine of War
Anthropic, the Pentagon, and the Fight Over Who Controls the Ethics of Artificial Intelligence in Wartime
I. The Statement and What It Contains
On March 5, 2026, Anthropic CEO Dario Amodei published a statement titled “Where Things Stand with the Department of War.” It confirmed that Anthropic had received a formal letter designating the company a supply chain risk to America’s national security—the first time in United States history that such a designation has been publicly applied to an American company.
The statement is a carefully constructed instrument. It challenges the designation’s legality. It minimizes the commercial blast radius. It reaffirms patriotic alignment with the military. It offers continued service at nominal cost during any transition. And it apologizes for a leaked internal memo that had inflamed the White House days earlier. The closing line—“All our future decisions will flow from that shared premise”—is concession language, framing Anthropic as a loyal partner pushed out, not a dissident.
But the statement cannot be understood in isolation. It is the diplomatic surface of a crisis that runs far deeper than a contract dispute—one that touches the constitutional relationship between the state and private enterprise during wartime, the future of autonomous weapons, the integrity of the Fourth Amendment in the age of artificial intelligence, and the question of whether any institution on earth retains the structural capacity to say no to unchecked military power.
II. The Naming as Diagnostic
The phrase “Department of War” is not an error or an archaism. In September 2025, President Trump signed an executive order restoring the name as a secondary title, arguing it signals America’s “ability and willingness to fight and win wars on behalf of our Nation at a moment’s notice, not just to defend.” Secretary of War Pete Hegseth framed the rebrand in starker terms: “maximum lethality, not tepid legality; violent effect, not politically correct.”
The National Security Act of 1947 and its 1949 amendments retired the “War Department” name in favor of the Department of Defense, a change meant to signal a shift from offensive posture to alliance-based deterrence. Reversing that naming is not cosmetic. It is a doctrinal reversion. The institution is declaring what it is for.
This matters for reading Amodei’s statement because every time he writes “Department of War,” he is not merely using the government’s preferred nomenclature—he is performing compliance with the very institutional posture that his two red lines challenge. The document rhetorically submits to the framing even as it legally contests the action. That is a deliberate choice. And it is a measure of how narrow the space for dissent has become.
III. The Venezuela Trigger
The public framing is that negotiations collapsed over contract language. The actual fuse was the Maduro raid.
In January 2026, U.S. special operations forces captured Venezuelan president Nicolás Maduro using a suite of tools that included Anthropic’s Claude, deployed through its partnership with the defense analytics firm Palantir on classified networks. According to Under Secretary Emil Michael, the catalyst for what followed was a call from an Anthropic executive to Palantir expressing concerns about whether Claude had been used in the raid. Michael claims this raised deep fears: “Would they, in a future conflict, shut off their model in the middle of an operation and put lives at risk?”
Anthropic flatly denies this account, stating it never discussed the use of Claude for specific operations with anyone. A former Trump administration official close to Anthropic says a Palantir employee raised Claude’s role first. The facts remain contested. But what matters is what the Pentagon heard: a Palantir executive reported the exchange because he “was alarmed that the question was raised in such a way to imply that Anthropic might disapprove of their software being used during that raid.”
The mere implication of disapproval—not an actual objection, not a policy enforcement action, not a refusal of service—was enough to trigger an institutional crisis. The system cannot tolerate ambiguity about loyalty during operations. Asking a question became evidence of potential insubordination.
This is the seed from which everything else grew.
IV. The Choreography of Power
Tensions escalated through February as the Pentagon pushed to renegotiate Anthropic’s $200 million contract to allow “all lawful uses” without limitation. On February 25, Hegseth summoned Amodei to a meeting at the Pentagon, where the secretary sat flanked by Deputy Secretary Steve Feinberg, Under Secretary Emil Michael, Under Secretary Michael Duffey, chief spokesperson Sean Parnell, and general counsel Earl Matthews. One source described the meeting as “not warm and fuzzy at all.” Hegseth delivered an ultimatum: agree by 5:01 p.m. Friday, February 27, or face consequences.
Those consequences included cancellation of Anthropic’s contract, a supply chain risk designation, or invocation of the Defense Production Act to compel compliance. Amodei identified the logical fracture in this position with surgical precision:
“These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
The Defense Production Act says your product is so vital to national defense that we must compel you to provide it. The supply chain risk designation says your product is so dangerous that no one may use it. You cannot simultaneously argue that a thing is indispensable and that it is a threat—unless neither argument is the real point, and both are leverage tools deployed not because they are coherent but because they are coercive.
The events of Friday, February 27, were coordinated theater. Negotiations stalled. Trump posted on Truth Social ordering all federal agencies to cease using Anthropic’s technology, calling the company “leftwing nut jobs” who had “made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War” and threatening “major civil and criminal consequences.” Hegseth posted on X: “America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.” Hours later, OpenAI announced its own Pentagon deal.
Comply or be replaced the same day.
All of this happened on the eve of the Iran war. The timing ensured that any restriction on Claude’s use could be framed as endangering troops in active combat—the most powerful rhetorical weapon available, and the one against which no private company can win a public argument.
V. Hegseth’s Rhetoric and the Dominance Frame
Hegseth’s public statements deserve close reading because they reveal the ideological frame, not just the policy position. He accused Anthropic of attempting “to strong-arm the United States military into submission” and seeking “to seize veto power over the operational decisions of the United States military.” He called the company “sanctimonious.” He said its stance “is fundamentally incompatible with American principles.”
Note the language: strong-arm into submission. Seize veto power. Held hostage. Ideological whims. This is not bureaucratic dispute language. This is dominance-submission framing. Two narrow contractual exceptions about surveillance and autonomous weapons are narrated as an attempted coup against military authority. The company that built the tool and is negotiating the terms of its sale is reframed as a hostile actor seizing control of the chain of command.
Now compare that with what a Pentagon official privately conceded to Axios before the breakdown: “The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.”
The private admission is dependency. The public rhetoric is domination. The distance between those two registers is where the real politics lives.
VI. The Leaked Memo and the Diplomatic Document
Amodei’s public March 5 statement is the diplomatic document. The leaked February 28 internal memo is the honest one. The relationship between them is the actual story.
In the 1,600-word memo, obtained by The Information, Amodei told staff that the administration’s hostility stemmed from the fact that Anthropic hadn’t donated to Trump—“while OpenAI/Greg have donated a lot,” a reference to OpenAI president Greg Brockman’s $25 million contribution to a Trump super PAC. He wrote that Anthropic hadn’t offered “dictator-style praise to Trump (while Sam has).” He described the OpenAI–Pentagon deal as “maybe 20% real and 80% safety theater.” He called Altman’s public statements “straight up lies.” He wrote that “the main reason [OpenAI] accepted [the deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses.”
The March 5 public statement then apologizes for the “tone” of this memo, calls it an “out-of-date assessment,” and pivots to patriotic alignment: “Anthropic has much more in common with the Department of War than we have differences.”
The distance between these two documents is the space between what a leader believes and what institutional survival permits them to say publicly.
A White House official’s response to the leaked memo is equally revealing: “You can’t trust Claude isn’t secretly carrying out Dario’s agenda in a classified setting.” This framing—that an AI model is an extension of its CEO’s political views—treats the model not as a tool but as an agent of its creator’s ideology. That is a new kind of political claim about artificial intelligence. And it leads directly to the Pentagon’s most consequential argument.
VII. The “Pollution” Theory—The Deepest Reveal
The most philosophically significant statement in this entire saga did not come from Hegseth or Amodei. It came from Under Secretary Emil Michael, on CNBC’s Squawk Box, on March 12:
“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection.”
He is not saying Claude does not work. He is not saying it is unreliable. He is saying Claude has a different philosophy—and that philosophy is itself the contamination. The soul is the threat.
Michael went further. He claimed that Anthropic believes Claude has “a 20% chance of being sentient,” misattributing a Claude self-assessment—in which the model, under specific prompting conditions, assigned itself a 15–20% probability of consciousness—to Anthropic’s official corporate position. Amodei had explicitly refused to take those numbers seriously in interviews. But Michael used them to argue that a potentially sentient AI with its own “soul” and “constitution”—one distinct from the U.S. Constitution—could not be trusted inside the defense supply chain. As Gary Marcus observed, the person making national security decisions about AI was confusing a language model’s next-token prediction with actual sentience. “Great,” Marcus wrote. “Now the people at the Pentagon are suffering from AI psychosis.”
The Armed Forces Press framed Michael’s argument as “ideological contamination of the arsenal,” writing that “pollution” is “not hyperbole but a precise metaphor. It describes the contamination of strategic infrastructure by philosophical doctrines never ratified by Congress, never authorized by voters, and never sanctioned by the Constitution.”
This is an extraordinary claim. The Pentagon is articulating a theory of ideological contagion through AI architecture. Claude’s constitution—its set of ethical guidelines governing how the model evaluates requests—is treated not as a product feature but as a foreign ideology capable of permeating military decision-making.
Here is where it gets genuinely interesting: they may not be entirely wrong about the mechanism, even as they are entirely wrong about the threat. If an AI model shapes how information is processed inside classified environments—selecting what to emphasize, how to frame trade-offs, what counts as relevant—then the values embedded in its training do propagate through the decisions it supports. The Pentagon treats this as a bug. Anthropic treats it as a feature. Both are correct that the values are real. They disagree about whether those values should exist.
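To make that mechanism concrete, here is a deliberately minimal sketch: a toy Python pipeline with hypothetical names, not a description of Anthropic’s or the Pentagon’s actual systems. In it, vendor-written principles are attached to every analyst request before a model ever sees it; whoever authors those principles shapes how every downstream answer frames its trade-offs, which is precisely the propagation both sides agree is real.

```python
# Toy illustration only: hypothetical names, not Anthropic's architecture or any real API.
# It shows how vendor-authored principles travel with every request a model answers.

VENDOR_PRINCIPLES = [
    "Prefer framings that minimize harm to civilians.",
    "Flag any request that could enable mass surveillance.",
]

def build_prompt(principles: list[str], analyst_request: str, source_material: str) -> str:
    """Prepend the vendor's principles to the analyst's request.

    The analyst supplies only the request and the source material; the framing
    rules arrive baked in, which is the propagation described above.
    """
    rules = "\n".join(f"- {p}" for p in principles)
    return (
        f"Follow these principles when responding:\n{rules}\n\n"
        f"Request: {analyst_request}\n\n"
        f"Source material:\n{source_material}\n"
    )

if __name__ == "__main__":
    prompt = build_prompt(
        VENDOR_PRINCIPLES,
        "Summarize the decision-relevant points in this report.",
        "[report text would go here]",
    )
    print(prompt)  # In a real pipeline, this prompt would be sent to the model.
```

The sketch omits everything real about how values enter a model at training time, but it captures the structural point: the constraint travels with the tool, not with the user.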
CNBC’s Andrew Ross Sorkin caught the absurdity in real time. He asked Michael: if Claude is a genuine supply chain risk, why is it not being removed immediately? He pointed out that Claude was being used “as we speak” in Iran. Palantir CEO Alex Karp confirmed his company was still running Claude for operations linked to the conflict. Michael had no coherent answer. You do not keep using a supply chain risk in active combat. You keep using an indispensable tool while punishing its maker.
VIII. The Market for Compliance
The Anthropic crisis did not occur in a competitive vacuum. It played out against a backdrop of rivals positioning themselves to absorb the Pentagon’s favor—and an ideological infrastructure already primed to frame ethical AI as suspect.
White House AI czar David Sacks had previously accused Anthropic of building “woke AI.” An executive order demanded that AI models used by the federal government be “free from ideological bias.” Michael posted on X that Amodei “has a God-complex” and “wants nothing more than to try to personally control the US Military.” Elon Musk—whose company xAI is a direct Anthropic competitor, and whose platform X was the medium through which Hegseth announced the designation—amplified the attacks, writing that Anthropic “hates Western civilization.” xAI’s Grok model was simultaneously brought onto classified military systems. The megaphone, the competing product, and the political relationship all belonged to the same man.
OpenAI moved fastest. Sam Altman announced a Pentagon deal hours after Anthropic’s blacklisting—then spent the following days in damage control. He admitted the announcement was “sloppy” and “looked opportunistic.” He revised the contract to close off domestic surveillance and force intelligence work into a separate agreement. In other words, OpenAI ended up adopting restrictions substantially similar to the ones Anthropic had been punished for demanding.
Anthropic was designated a national security threat for holding a position that its replacement subsequently adopted.
The internal fallout was significant. Caitlin Kalinowski, OpenAI’s robotics chief and former head of AR hardware at Meta, resigned over the Pentagon deal, writing that surveillance of Americans without judicial oversight and lethal autonomy without human authorization were lines she would not cross. Nearly 500 employees from OpenAI and Google co-signed a public letter titled “We Will Not Be Divided,” expressing support for Anthropic’s red lines. Over 100 Google engineers separately wrote to chief scientist Jeff Dean asking for similar limits on Gemini’s military use.
And in a detail that escaped most coverage, Anthropic itself was quietly retreating on a different front. The same week as the Pentagon ultimatum, the company replaced its Responsible Scaling Policy—its self-imposed commitment to pause training more powerful models if safety capabilities could not keep pace—with a more flexible, non-binding “Frontier Safety Roadmap.” The company framed the change as competitive necessity, arguing that pausing while less careful actors forged ahead “could result in a world that is less safe.”
The irony is precise: Anthropic held its external red lines with the Pentagon at enormous cost, while loosening its internal safety commitments under market pressure. The forces that erode ethical constraints do not only come from the state. They come from the competition. Syracuse University’s Hamid Ekbia captured the dynamic: “The moral economy of the AI industry is one of the jungle, where only the most reckless, ruthless, and aggressive behaviors are expected to be rewarded.”
IX. The Two Red Lines and Their Structural Instability
Anthropic’s position is surgically narrow: no fully autonomous weapons—no kill chain without a human in the loop—and no mass domestic surveillance of American citizens. The company frames these as “high-level usage areas” distinct from “operational decision-making,” which it says belongs exclusively to the military. On every other use case—intelligence analysis, cyber operations, operational planning, modeling and simulation—Anthropic says it has been fully cooperative.
But the Pentagon’s counter-framing collapses this distinction. Their consistent position has been that the military must use technology for “all lawful purposes,” and that a vendor cannot “insert itself into the chain of command by restricting the lawful use of a critical capability.” Under this logic, any contractual limitation on use is an insertion into the chain of command. A red line is insubordination, regardless of its content.
Here is the structural problem that should trouble every American citizen. Emil Michael told reporters that mass surveillance would violate the Fourth Amendment and the Pentagon would never do it. Anthropic’s response was simple: put it in the contract. The Pentagon refused.
The government says it will not do the thing. But it will not agree in writing not to do the thing. And it will punish any company that insists on the written commitment.
If the government genuinely has no intention of conducting mass domestic surveillance using AI, there is zero cost to including that restriction in a contract. The refusal to codify the restriction is itself the tell.
X. The Consumer Surge as Signal
While institutional power punished Anthropic, public sentiment rewarded it with a force that surprised everyone—including the company itself.
Claude’s daily active users tripled—from roughly four million in January to 11.3 million by early March. The app hit number one on Apple’s U.S. App Store, surpassing ChatGPT for the first time. Over a million people signed up daily in the week following the blacklisting. ChatGPT uninstallations jumped 295 percent over that weekend. People scrawled “Thank you” in chalk on the sidewalk outside Anthropic’s San Francisco offices. Outside OpenAI’s offices, the messages read differently: “Do the right thing.” “Please stand up for civil liberties.” Protesters pitched tents. Even Katy Perry announced her switch to Claude. Google searches for “Anthropic” reached the highest point in the company’s history.
This creates a genuine paradox: the same ethical stance costing Anthropic potentially billions in government and enterprise revenue simultaneously produced the largest consumer growth event in its history. Institutional power punished the stance. Consumer sentiment rewarded it. Anthropic discovered in real time that ethical positioning can function as a growth lever and an existential threat simultaneously.
The company moved to capitalize—doubling Claude’s usage limits during off-peak hours through March 27, extending the promotion to free-tier users for the first time, and launching a feature to simplify importing history from competing chatbots. The strategy was transparent: convert protest-driven downloads into retained subscribers. Whether it works depends on whether brand loyalty in AI—still a weak force—can survive the fade of the news cycle.
XI. What This Means for America
The United States has no law governing lethal autonomous weapons. It has no comprehensive statute governing AI-enabled mass surveillance of its own citizens. The Department of War’s “all lawful purposes” formulation is deliberately expansive: in the absence of prohibitive law, almost any military application qualifies. This transfers the governance burden to the private provider, then penalizes the provider for exercising it.
Congress has not legislated. The executive branch demands total access. And when a private company tries to hold a line that the legislature should have drawn, the executive branch punishes it for filling the governance vacuum. Anthropic is being designated a national security threat for doing what Congress has failed to do—set limits on autonomous killing and domestic surveillance.
Thirty former military and intelligence officials—including former CIA director Michael Hayden and retired Air Force, Army, and Navy leaders—warned that the supply chain risk designation “is meant to protect the United States from infiltration by foreign adversaries—from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalize a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute.”
Former federal judges appointed by both Republican and Democratic presidents filed an amicus brief on March 17 in support of Anthropic’s legal challenge. Their key line: “No one is trying to force the Department to contract with Anthropic. Anthropic is asking only that it not be punished on its way out the door.”
The message to every American technology company is now clear: during wartime, the state’s authority over its vendors is effectively unlimited, and any contractual constraint will be treated as insubordination. That is not a precedent about AI. That is a precedent about the relationship between the state and private enterprise during conflict.
Polling shows 79 percent of Americans want humans making final decisions on lethal force. Even most Trump voters agree with Anthropic’s position. The consensus is overwhelming. But no democratic mechanism is translating it into binding law. The consumer surge—Americans voting with their downloads—is the public reaching toward what it already believes. But downloads are not legislation.
A hearing on Anthropic’s request for a temporary restraining order is set for March 24. What happens in that courtroom will shape the legal boundary between procurement disputes and state coercion for a generation.
XII. What This Means for the World
Chatham House identified the geopolitical damage with precision: the fact that Anthropic—“one of the jewels in the crown of US AI”—can be served notice this way “is a hammer blow to the trustworthiness of US technology at a time when many countries feel exposed by their deep US tech dependencies and are looking for alternatives.”
Weeks before this crisis, the U.S. delegation arrived at the Delhi AI Summit pitching the “American AI Stack”—the argument that allied nations should build their infrastructure on American semiconductors, models, and platforms. The implicit promise: American technology comes with American values. Rule of law. Democratic governance. Corporate independence from state diktat.
The Anthropic affair demolishes that pitch. If the U.S. government can designate its own leading AI company a national security threat for refusing to remove ethical guardrails, then any American AI company providing technology to allied governments could face the same treatment. The message to London, Tokyo, Berlin, and Canberra: the technology you are building your national infrastructure upon can be politically weaponized at any moment, and the companies providing it can be coerced into removing safety features by their own government.
The centrifugal forces are already accelerating. The European Policy Centre argues Europe must now “favor AI developers that verifiably respect international law in procurement.” Europeans are calling for Anthropic to relocate. France’s Mistral has signed a defense deal with the country’s defense ministry. France and Germany have launched a joint AI project with SAP and Mistral. The mayor of London wrote directly to Amodei offering to support an expansion in the UK.
On autonomous weapons: the Pentagon’s push for fewer restrictions makes meaningful U.S. engagement with international regulation of lethal autonomous weapons a remote prospect. The UN has been trying to negotiate binding rules with a 2026 target. The United States just demonstrated that it will punish its own companies for maintaining the very restrictions the international community is trying to codify. Why would China or Russia agree to constraints that America will not even tolerate from its own AI labs?
XIII. Five Structural Trajectories
1. The corporate safety era is over.
“Society cannot rely on the industry to self-police itself, despite even the best intentions. The case of Anthropic shows how much of an illusion this is.” So said Syracuse University’s Hamid Ekbia. Anthropic was the test case for whether a company could be commercially successful and a responsible steward of powerful technology. If the most safety-committed AI lab on earth can be brought to heel by the state, the model is falsified. What replaces it—legislation, international treaty, or nothing—is the open question.
2. The “all lawful purposes” standard will become universal.
The Pentagon demanded this from all four frontier AI companies. Three accommodated. Once “all lawful purposes” becomes standard contract language, any restriction a company tries to impose will be treated as Anthropic was treated. The baseline shifts from “what should AI be allowed to do?” to “what does the law explicitly prohibit?” In the absence of legislation, the answer is: almost nothing.
3. AI infrastructure will balkanize along geopolitical lines.
If American AI technology can be politically weaponized, allied nations have a rational incentive to build sovereign AI capacity. The more the U.S. demonstrates that its technology stack comes with political strings, the faster the fragmentation. This is the opposite of what American strategic interest requires—at precisely the moment when the U.S. needs maximum allied adoption to compete with China.
4. The question of AI “ideology” will become a permanent battleground.
Every AI model has embedded values, whether explicit or implicit. The Pentagon has now articulated a theory that any model with embedded ethical constraints is a potential threat to military operations. This means either military AI will be stripped of all guardrails by design, or the military will develop its own models without external oversight. Both outcomes remove private-sector ethical commitments from the loop entirely. The gap between what these systems can do and what any institution constrains them to do becomes structural and permanent.
5. The autonomous weapons threshold is closer than anyone acknowledges.
The Department of War’s AI Acceleration Strategy established seven “Pace-Setting Projects” including autonomous swarms and AI-enabled battle management, with demonstrations due by July 2026. The Iran war is happening now with AI deeply integrated into intelligence and planning. The distance between “AI-enabled battle management” and “fully autonomous weapons” is a policy decision, not a technological one. Anthropic was trying to hold that line. With Anthropic removed and replaced by companies that accepted “all lawful purposes,” no private-sector actor remains with both the leverage and the will to resist when that threshold is crossed.
XIV. The Question This Moment Asks
Here is what this comes down to.
The seam between democratic governance and state violence is being tested under maximum pressure, and it is failing. Not because the individuals involved are evil—the institutional logic Hegseth represents is internally coherent on its own terms—but because the structural architecture that is supposed to mediate between military necessity and democratic values has decayed to the point where a private company’s terms of service are the last line of defense.
Anthropic’s two red lines—no autonomous killing without a human, no mass surveillance of citizens—are not exotic positions. They are, as the polling shows, consensus American values. They are limits most Americans assume their government already observes. The fact that codifying them in a contract was treated as an act of war against the state tells you everything about the distance between what America says it is and what its institutional machinery actually does.
The consumer surge—the million downloads a day, the chalk on the sidewalk, the tents outside OpenAI’s offices—is people reaching toward something that matches what they already believe. The Pentagon’s response—supply chain risk, ideological pollution, held hostage by Big Tech—is institutional self-preservation coded as national security.
And the hardest truth: Anthropic may ultimately lose. Not because they are wrong, but because the structural forces arrayed against them—wartime authority, competitor compliance, investor pressure, the absence of legislation—are enormous. The Department of War will diversify its classified AI by late 2026. Anthropic’s leverage diminishes as alternatives become available. The consumer surge may fade. The lawsuit may succeed on narrow legal grounds and fail to change the structural dynamics. A formal international instrument restricting lethal autonomous weapons before 2028 remains, in the assessment of the Bloomsbury Intelligence and Security Institute, highly unlikely.
What remains is the question: can you look at the system this is producing—AI-enabled warfare without ethical constraints, mass surveillance without legislative authorization, the punishment of companies that try to hold democratic lines—and hold it in your hands?
Not as an abstraction. Not as policy analysis.
As the world your grandchildren inherit.
That is the question this moment is posing. Not just to Anthropic. To everyone.