Pentagon Pushes AI Firms to Expand Tools on Classified Networks, Sources Say
WASHINGTON — The Pentagon is urging leading artificial intelligence companies, including OpenAI and Anthropic, to make their AI systems available on classified government networks — potentially with fewer user restrictions than typically applied in commercial settings, according to sources familiar with the discussions.
During a White House event on Tuesday, Pentagon Chief Technology Officer Emil Michael told technology executives that the military intends to deploy advanced AI models across both unclassified and classified domains, two sources said.
“The Pentagon is moving to deploy frontier AI capabilities across all classification levels,” a U.S. official, speaking on condition of anonymity, told Reuters.
The push marks the latest chapter in negotiations between the Department of Defense and major generative AI firms as the U.S. military explores how to integrate artificial intelligence into modern warfare. Emerging battlefields increasingly feature autonomous drone swarms, robotic systems and sophisticated cyber operations — areas where AI could play a pivotal role.
Classified Access Raises Stakes
Currently, most AI tools developed for the U.S. military are accessible only on unclassified networks used primarily for administrative functions. Anthropic is the only AI company with technology available in classified environments through third parties, though the U.S. government remains bound by the company’s usage policies.
Classified systems handle highly sensitive operations, including mission planning and weapons targeting. Reuters could not determine when or how the Pentagon plans to deploy AI chatbots on such networks.
Military officials believe AI can help synthesize massive volumes of intelligence data, potentially improving the speed and quality of decision-making. However, experts warn that AI systems are not infallible. They can produce inaccurate or fabricated information — a risk that could carry severe consequences in classified or combat settings.
Tensions Over Guardrails
The Pentagon’s push may intensify tensions between defense officials and AI companies over usage restrictions.
Tech firms have built safeguards into their AI models to prevent misuse and require users to adhere to specific guidelines. Some companies, including Anthropic, have drawn red lines against autonomous weapons targeting and domestic surveillance applications.
Anthropic executives have previously told military officials they do not want their technology used for autonomous weapons systems or U.S. domestic surveillance. The company’s chatbot, Claude, is already used in certain national security missions.
“Anthropic is committed to protecting America’s lead in AI and helping the U.S. government counter foreign threats by giving our warfighters access to the most advanced AI capabilities,” a company spokesperson said. “Claude is already extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work.”
President Donald Trump has ordered the Department of Defense to be renamed the Department of War, though the change would require congressional approval.
OpenAI Agreement on Unclassified Network
This week, OpenAI reached an agreement with the Pentagon allowing the military to use its tools, including ChatGPT, on an unclassified network known as genai.mil, which has reportedly been rolled out to more than 3 million Defense Department personnel.
As part of the deal, OpenAI agreed to relax many of its standard user restrictions for that environment, though some safeguards remain. The company clarified that the agreement applies only to unclassified usage.
“Expanding that agreement would require a new or modified arrangement,” an OpenAI spokesperson said.
Alphabet’s Google and Elon Musk’s xAI have also entered similar agreements with the Pentagon.
The Future of AI in Warfare
The debate underscores a broader question confronting governments and technology firms worldwide: how far should artificial intelligence be allowed to operate in military contexts?
Defense officials argue that commercial AI tools should be deployable as long as they comply with U.S. law. AI companies, however, face mounting ethical scrutiny over how their technologies are used — particularly in high-stakes environments where errors could prove deadly.
As negotiations continue, the outcome could shape not only U.S. defense policy but also the global standards governing AI use in warfare.