The Pentagon relied on Anthropic's Claude AI to carry out the operation to capture Maduro
For the Pentagon, AI appears to offer clear advantages in data processing, operational planning, and speed of analysis.

A figure stands in front of the logo of the artificial intelligence assistant Claude.
Artificial intelligence is already an operational reality within the U.S. military apparatus. According to an exclusive report in The Wall Street Journal, the Pentagon employed Claude, the model developed by Anthropic, in the mission that culminated in the capture of dictator Nicolás Maduro last January, an operation that reportedly included air strikes against several military targets in the capital, Caracas, and elsewhere in Venezuela.
Although specific details of the technology's role in the offensive were not disclosed, the report suggests that advanced AI models are becoming an established part of the Defense Department's strategic machinery.
Neither Anthropic nor the Pentagon commented directly on the operation to the WSJ or the public. A spokesman for the company said only that it cannot confirm whether its system was used in specific missions, classified or otherwise, and noted that any deployment must adhere to the company's internal policies and usage rules.
Those rules, at least in theory, prohibit the model from being used to facilitate violent acts, develop weaponry, or conduct surveillance.
Claude's deployment was reportedly made possible by a partnership between Anthropic and Palantir Technologies, a company known for its close ties to defense and federal law enforcement agencies. According to the WSJ, Anthropic's misgivings about the military use of its technology have created tensions with the federal government, to the point that some officials have reportedly considered reviewing a contract worth up to $200 million.
The case also reflects a broader debate within the technology industry. Dario Amodei, CEO of Anthropic, has repeatedly warned about the risks of using artificial intelligence in autonomous lethal operations and domestic surveillance systems. His stance, closer to regulation than to unlimited expansion, has set him apart from other leaders in the sector and caused friction with those who advocate a more flexible approach to accelerate the adoption of these tools.
For the Pentagon, however, AI appears to offer clear advantages in data processing, operational planning, and speed of analysis. It is no coincidence that the War Department is already working with several companies in the industry to adapt models capable of assisting millions of troops with tasks such as document review, report generation, and investigative support.