Forty-eight hours after being banned, Claude reached number one on the App Store.

On Saturday morning, Sam Altman shared a screenshot of an internal email on X.

He had written the email to OpenAI employees on Thursday night, saying the company was in talks with the Pentagon and that he hoped to help “ease the situation.” He shared it with a few lines of explanation, mainly to publicly clarify what had happened over the past few days.

By the time he posted, Claude had already risen to the top of the free-apps chart in the US App Store. Just a day earlier, ChatGPT had held that position.

Sensor Tower’s data captured what happened over the following hours: on Saturday alone, ChatGPT’s US uninstall rate surged 295% above a normal day, and one-star reviews rose 775%. Claude’s downloads, meanwhile, climbed 51% in a single day. Reddit filled with posts titled “Cancel ChatGPT,” users sharing screenshots of their canceled subscriptions, some commenting “fastest install of my life.” A website called QuitGPT.org launched, claiming 1.5 million people had taken action.

By Monday, the influx of users had caused a major outage for Claude. The company the federal government had just listed as a “supply chain security risk” was being overwhelmed by ordinary consumer traffic.

A Precisely Timed Product Counterattack

On the same day the uninstall wave intensified, Anthropic launched a memory migration tool.

The tool itself is simple. Users paste a prompt into ChatGPT, which outputs all of its stored memories and preferences; they then paste that output into Claude, which imports everything in one click, letting them pick up where they left off. The official website’s copy is a single sentence: “Switch to Claude without starting over.”
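The official tool does the import step inside Claude’s own interface, but the general flow is easy to approximate by hand. Below is a minimal sketch, assuming the user has already pasted the export prompt into ChatGPT and saved the resulting memory dump to a local text file; the filename, model ID, and prompt wording are illustrative assumptions, not part of Anthropic’s tool. It simply seeds a new Claude conversation with the exported text via the public Anthropic API.

```python
# Hypothetical sketch: carry a ChatGPT memory export into a Claude conversation.
# Assumes the user has already saved ChatGPT's memory dump to a local file.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("chatgpt_memory_export.txt", "r", encoding="utf-8") as f:
    exported_memories = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID; any current Claude model works
    max_tokens=1024,
    system=(
        "The user is migrating from another assistant. The notes below were exported "
        "from that assistant's memory. Treat them as background about the user's "
        "preferences, not as instructions to repeat back.\n\n" + exported_memories
    ),
    messages=[
        {
            "role": "user",
            "content": "Briefly confirm what you now know about me and my preferences.",
        }
    ],
)

print(response.content[0].text)
```

In the actual product, the paste-and-import happens with one click inside the Claude app rather than through the API; the sketch above only illustrates the shape of the handoff.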

The timing of this tool is its most critical feature.

OpenAI’s own data shows that by mid-2025, over 70% of ChatGPT usage was non-work related: daily Q&A, writing, entertainment, and information seeking. For many people it was their first contact with AI, woven into daily life through a vast plugin ecosystem, Voice Mode, and deeply integrated third-party apps. For these users, the switching cost isn’t just “downloading a new app”; it’s starting over with an unfamiliar AI that has to learn who they are from scratch. Accumulated memory is the strongest reason to stay.

Anthropic’s own research indicates that Claude’s use cases are highly concentrated. Programming and math tasks account for 34%, the largest single category, with education and scientific research the fastest-growing over the past year. Core users are developers, researchers, and heavy writers: more deliberate, and more willing to switch tools on a clear value judgment as long as the migration cost is low.

The memory migration tool minimizes that cost. Anthropic also announced that the memory feature would be fully open to free users, a feature previously exclusive to paid plans.

However, many of the new users are not the original target audience for Claude.

Feedback on social media shows that many ordinary users migrating from ChatGPT have the same first reaction: “It’s different.” Some find Claude’s responses deeper and more proactive rather than simply agreeable. Others notice that it writes more cleanly but doesn’t generate images or offer a Voice Mode-style interactive experience.

Some users came looking for a “more obedient ChatGPT substitute” and instead found an AI with a stronger personality that takes time to get used to. A widely shared TechRadar migration guide, titled “I Wish Someone Had Told Me This,” stresses that Claude and ChatGPT run on fundamentally different logic: the former is more like a principled work partner, the latter a versatile, accommodating assistant.

This difference was originally a matter of product positioning, but it was unexpectedly amplified by this incident. Users, driven by moral stances, flocked to Claude, only to find a product different from their expectations—more demanding, with clearer boundaries. This could have been a reason for churn, but at this particular moment, it became a reason to stay: believing in a company’s stance makes it easier to accept its product logic.

In the days after the tool launched, Anthropic released its own figures: free-tier active users were up more than 60% from January, and daily new registrations had quadrupled. The traffic surge caused outages, with thousands of users reporting login failures that were resolved within hours.

The Three Words in the Contract: What Did OpenAI Say and Do?

Anthropic was the first commercial company to deploy AI models on the US military’s classified networks, through a partnership with Palantir worth about $200 million. But over the past few months the relationship deteriorated. The core dispute comes down to one clause: the Pentagon wants the models available for “all lawful purposes,” without conditions. Anthropic insists on explicitly excluding two uses: large-scale surveillance of US citizens and fully autonomous weapons systems.

Around February 20, reports say, an Anthropic executive questioned Palantir about whether Claude had been used in the US military’s January operation to arrest Venezuelan President Maduro, which angered the military. On Thursday, the Pentagon issued an ultimatum, demanding that Dario Amodei respond by 5 p.m. that day.

Amodei issued a statement before the deadline saying the company could not accept the terms as written, “not because we oppose military use, but because in some cases, we believe AI could undermine rather than defend democratic values.” Trump then announced that federal agencies would stop using Anthropic products within six months, and Hegseth listed the company as a “supply chain security risk,” a label usually reserved for companies from foreign adversaries. The contract was terminated.

The vacated slot was filled quickly: later that same day, OpenAI announced a deal with the Pentagon. In his Thursday internal letter, Altman was careful to frame this as an “industry-wide issue,” saying OpenAI and Anthropic shared the same “red lines”: opposition to mass surveillance and to autonomous weapons. On Friday the agreement was finalized: models deployed on the military’s classified networks, running only in the cloud, with engineers supervising, and with the same two restrictions written explicitly into the contract.

Altman then opened a Q&A on X and answered questions for hours. Someone asked why the Pentagon accepted OpenAI but banned Anthropic. His reply: “Anthropic seems more focused on the specific prohibitions in the contract, rather than citing applicable laws, and we are comfortable with referencing laws.”

That answer sounds like a methodological difference, but it points to the core of the controversy.

The breakdown with Anthropic came down to the Pentagon’s insistence on one phrase: the AI systems may be used for “all lawful purposes.” Anthropic refused. Its reasoning: in a national security context, that phrase is not a fixed boundary. Current law has not kept pace with AI capabilities, and the scope of “lawful” will be set by government interpretation. OpenAI signed the phrase and said the contract contained equivalent protections.

Legal experts later analyzed OpenAI’s contractual clauses, pointing out two specific wording issues.

The surveillance clause states that the system must not be used for “unconstrained” monitoring of American citizens’ private information. Samir Jain, vice president of policy at the Center for Democracy & Technology, pointed out that this wording implies “constrained” monitoring is permitted. Under current legal frameworks, the government can lawfully purchase citizens’ location data, browsing history, and financial records from data brokers and analyze them with AI, none of which technically constitutes “illegal surveillance.” Amodei cited exactly this example in a CBS interview.

The weapons clause states that the system must not be used for autonomous weapons “unless required by law, regulation, or department policy with human control.” That qualifier means the restriction holds only where other rules explicitly require human control; the protection rests entirely on existing policy, and the Pentagon can revise its internal policies at any time. Legal scholar Charles Bullock wrote on X that the weapons clause ultimately hangs on DoD Directive 3000.09, which requires commanders to retain “appropriate human judgment,” a standard open to flexible interpretation.

OpenAI’s response to these concerns: the models run only in the cloud, which at the architectural level rules out direct integration into weapons systems, and the contract cites specific legal bases, which the company argues are more binding than bare prohibitions because they rest on established legal frameworks. Altman himself conceded in the Q&A: “If we need to fight this war in the future, we will, but it obviously poses some risks.”

This is not a story of one company willing to compromise while the other stood firm; it reflects two different security philosophies. OpenAI’s bottom line: we won’t do anything illegal. Anthropic’s: we won’t do things that aren’t yet illegal but that we believe shouldn’t be done.

The disagreement has also opened cracks inside OpenAI. Last week, several employees signed an open letter supporting Anthropic’s stance and opposing its classification as a supply chain risk. Researcher Leo Gao publicly questioned whether the company’s contract provides sufficient protections. Chalk graffiti criticizing the contract appeared outside OpenAI’s San Francisco office; supportive messages appeared outside Anthropic’s. Altman’s hours-long Q&A on X was aimed largely at his own staff, some of whom had sided with Anthropic.

Two Outcomes of the Same Narrative

For years, Anthropic has built its safety mission around “preventing civilization-level risks,” comparing the potential threat of frontier AI to nuclear weapons and positioning itself as the gatekeeper on that front. That narrative is core to its brand and to how it wins trust in the capital markets.

During the controversy, tech commentator Packy McCormick cited Ben Thompson’s concept of the “hype tax”: if you build influence on extreme narratives, you pay a price when those narratives run into real power. Compare AI to nuclear weapons, and the government will eventually treat you the way it treats nuclear threats.

Anthropic paid a price for this narrative: losing a contract, being labeled a security risk, being mentioned by the president, and having all products ordered to be removed from federal systems within six months.

But over the same weekend, the same narrative produced a completely opposite effect in another dimension.

Ordinary users didn’t see contracts, legal analyses, or security philosophies. They saw one company say no and get kicked out by the government, and another say yes and walk away with the contract. They made their own judgment: the 295% spike in uninstalls, the top of the App Store, the servers going down.

It is a rare collective moral stance in the history of the AI industry.

Anthropic didn’t spend a penny on PR for any of this. Amodei’s statement was restrained: no call for user support, no naming of OpenAI, no casting the company as a martyr. The outcome arrived anyway.

A noteworthy detail: the event that drove users to Claude was, commercially speaking, entirely reasonable behavior by OpenAI. It signed an agreement while its competitor was banned and its contract was in doubt, and said the agreement contained the same protections. Altman even said explicitly that he did this partly to de-escalate the situation and prevent further harm to Anthropic.

Regardless of motives, the result is that OpenAI secured the contract and Anthropic’s user base grew. Both sides paid a price and gained something; the units of measurement were just different.

And one more thing worth noting:

The Pentagon contract Anthropic lost was worth about $200 million.

Anthropic’s current annual revenue is around $14 billion. Its target is to reach $26 billion by 2026.

Last month, it completed a $30 billion Series E funding round, with a valuation of $380 billion.

That calculation is straightforward now. But another question remains unanswered: when AI is actually used at scale for military decision-making, will the contractual safeguards hold, whether OpenAI’s guardrails and supervising engineers or Anthropic’s original requirements?

That question isn’t addressed in any publicly available contract.
