I spent decades inside the Pentagon watching technology reshape warfare. I saw precision munitions change the battlefield. I watched satellites compress decision cycles. But nothing compares to what is happening now.
Artificial intelligence has moved from the lab to the kill chain.
And the showdown between Secretary of War Pete Hegseth and AI firm Anthropic is not a contract dispute. It is the opening battle over who controls the most powerful military technology of the 21st century.
AI is already transforming war
Look at Ukraine.
Western officials report that drones now account for roughly 70–80% of battlefield casualties in that war. But the real revolution comes when AI is added. Reports indicate AI-guided navigation can increase drone strike accuracy from 10–20% to as high as 70–80%.
That is not incremental change. That is a transformation in battlefield lethality.
The same dynamic is emerging in U.S. operations involving Iran and other theaters. AI tools are being used for intelligence analysis, targeting refinement, pattern recognition, and operational simulations. These systems compress time, reduce uncertainty and accelerate decisions.
AI is not theoretical. It is operational.
Which brings us to Washington.
What the Hegseth–Anthropic standoff is really about
On Feb. 27, Hegseth designated Anthropic a "supply chain risk to national security." President Donald Trump ordered federal agencies to cease using its Claude AI model after Anthropic refused to remove two guardrails:
A prohibition on fully autonomous weapons.
A prohibition on mass domestic surveillance.
Those are firm boundaries.
But here is the other boundary: no private corporation should hold an effective veto over how America defends itself.
Washington’s contractor addiction
For decades, the federal government has grown dependent on contractors for critical defense functions — logistics, cyber infrastructure, analytics and intelligence support. AI is simply the next frontier in that pattern.
But frontier AI models are not spare parts or uniforms. They are strategic infrastructure. They influence targeting, operational tempo and potentially deterrence modeling.
That level of sensitivity cannot remain under corporate ownership.
During World War II, the United States built the atomic bomb through the Manhattan Project under centralized national authority. It was not governed by venture-backed boards setting independent usage policies. It was directed by the U.S. government with a clear strategic mandate.
We need a similar mindset for our most sensitive AI systems.
Government must own core military algorithms. Not lease them. Not subscribe to them. Own them.
