

NSA’s Quiet Use of Anthropic’s Mythos Exposes a Pentagon Contradiction

Published by Vivek Gupta · Updated Apr 21, 2026 · 8 min read

The National Security Agency is reportedly using Anthropic’s restricted Mythos Preview model for cybersecurity work, creating an awkward contradiction at the heart of Washington’s AI policy. The reported use comes even as the Pentagon has formally labeled Anthropic a supply chain risk and taken steps to distance itself from the company after a dispute over how its AI systems could be used in military and surveillance settings.

That tension matters because Mythos Preview is not a routine model release. Anthropic built it as a frontier system tailored for cybersecurity, particularly for scanning networks and systems for vulnerabilities. The company chose not to release it publicly, arguing that the model’s offensive cyber potential made broad access too dangerous. Instead, it restricted availability to roughly 40 handpicked organizations worldwide, only a limited number of which have been publicly identified. According to the reporting, the NSA is one of the unnamed organizations using the model, alongside the U.K.’s AI Security Institute (formerly the AI Safety Institute).

What Mythos Preview Is and Why It Matters

Mythos stands out because it sits at the uncomfortable intersection of cyber defense and cyber offense. In practical terms, the model is being used to scan internal systems, test defenses, and identify exploitable weaknesses before attackers do. That makes it valuable for agencies and critical organizations trying to secure complex digital infrastructure. But the same kind of capability can also raise fears that a sufficiently powerful cyber model could help develop sophisticated exploits if it were more widely distributed.

Anthropic’s decision to keep Mythos under tight control reflects that risk calculation. By limiting access to a small group of vetted organizations, the company appears to be trying to balance utility with restraint. Even so, the model’s existence shows how quickly the frontier AI race is moving beyond chatbots and content tools into areas with direct implications for state security, cyber operations, and infrastructure defense.

Inside the NSA’s Reported Use of the Model

According to the reports, the NSA is using Mythos Preview internally, with some usage reportedly extending more broadly across other parts of the Department of Defense. While the exact details remain classified, the model is said to be deployed in much the same way as it is for other approved organizations: scanning networks, testing security posture, and surfacing vulnerabilities in complicated environments.

That reported usage fits a broader pattern inside defense and intelligence circles, where agencies are increasingly looking to advanced AI systems to strengthen cyber defense and signals intelligence workflows. In that sense, the NSA’s interest in Mythos is not surprising. What makes it striking is not the use case itself, but the institutional backdrop. At the very moment some parts of the U.S. government are treating Anthropic as a risk, other parts appear to see its technology as too useful to ignore.


How a Pentagon Dispute Escalated Into a Blacklist

The conflict between Anthropic and the Pentagon reportedly traces back to contract renegotiations earlier this year. According to the reports, the Department of Defense pushed Anthropic to permit its Claude model to be used for all lawful purposes. Anthropic refused to allow certain applications, particularly mass domestic surveillance of U.S. citizens and the development or deployment of autonomous weapons systems.

That refusal appears to have triggered a serious breakdown in the relationship. The Pentagon responded by designating Anthropic a supply chain risk, beginning steps to sever contracts, and instructing suppliers to stop using its technologies. Reports also say President Donald Trump later ordered federal agencies to phase out Anthropic’s tools within six months, though the scope of implementation and the existence of exceptions remain unclear.

The result is a strange and highly visible split. On paper, Anthropic is being treated as a company whose tools raise national security concerns. In practice, at least one of the country’s most sensitive intelligence agencies is reportedly still using one of its most powerful systems.

Why the NSA’s Role Creates a Policy Contradiction

The controversy stems from the NSA’s position inside the same defense structure that is now arguing Anthropic cannot be fully trusted. If the Defense Department is treating the company as a supply chain risk, the continued use of Mythos by an agency under that umbrella raises obvious questions about consistency.

The contradiction is more than bureaucratic. It reveals a deeper policy divide in Washington over how to handle frontier AI. Pentagon lawyers and officials are reportedly arguing in court and in policy forums that Anthropic’s tools could threaten national security and that the company may not be dependable for defense needs. At the same time, agencies like the NSA appear to be expanding their reliance on Anthropic’s models because they see them as essential to urgent cyber defense work.

That is the central tension in the current debate. Governments want control, predictability, and full alignment from the companies building advanced AI. But they also want the strongest available tools, especially when the stakes involve critical infrastructure, cyber defense, and intelligence operations. When those two goals clash, principle often runs into operational reality.

The Broader Divide Over AI in National Security

Mythos is quickly becoming a symbol of a larger struggle over who gets to control advanced AI and under what conditions. Anthropic has tried to position itself as a company willing to build highly capable systems while still drawing hard boundaries around certain uses. That stance may appeal to safety advocates, but it becomes much harder to sustain once national security agencies see strategic value in the underlying technology.

The international angle adds another layer. Beyond the NSA, reports note that allied security bodies, including the U.K.’s AI Security Institute, are also using the model. Media reports have also highlighted how unusual the Pentagon’s supply chain risk label is in this context, since the designation has more often been applied to foreign or adversarial suppliers than to a U.S. AI startup.

At the same time, there are indications that Anthropic’s relationship with the Trump administration may be thawing. According to reports, Anthropic CEO Dario Amodei recently met with White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent, suggesting that communication channels remain open even amid the public conflict.

What This Means for Defense AI Adoption

The Mythos story is ultimately about more than one model or one company. It highlights how difficult it is for governments to draw clean boundaries around AI once these systems begin delivering meaningful value in high-stakes environments. Cybersecurity is one of the clearest examples. A model that can find vulnerabilities faster, test defenses more effectively, and help secure complex systems will attract interest, even from institutions publicly warning about the risks of relying on it.

That is why the NSA’s reported use of Mythos matters. It suggests that inside the U.S. national security apparatus, practical capability may already be outrunning formal policy. Washington may still be debating procurement rules, safety lines, and vendor trust. But on the ground, agencies appear to be making a simpler calculation: if a tool materially strengthens cyber defense, they may be reluctant to give it up, even when the politics around it become messy.

For now, the contradiction remains unresolved. The Pentagon is trying to push Anthropic away. The NSA, according to reports, is still pulling its technology closer. That split captures the uneasy reality of AI in government today: the same tools seen as risky, politically inconvenient, or strategically problematic can also be the ones officials most want in the room when the mission becomes urgent.