Unprecedented demand: Claude briefly goes down as Anthropic reports free users up 60% since January and paid users doubled since October
Anthropic’s AI chatbot Claude experienced a major service outage on Monday morning due to “unprecedented demand,” highlighting the infrastructure pressure behind the startup’s recent surge in users.
The peak of the outage occurred around 6:40 a.m. New York time. According to the service monitoring website Downdetector, nearly 2,000 users reported Claude service disruptions at that time.
Anthropic stated that the outage affected products such as claude.ai and consumer-facing applications under the company, but enterprise clients who have integrated the Claude model into their systems were unaffected. By 10:50 a.m. New York time, the company announced that the issue had been resolved and all systems were back to normal.
According to Bloomberg, the surge in users is linked to ongoing tensions between Anthropic and the U.S. Department of Defense, which has listed Anthropic as a supply chain risk. The move is seen as an unprecedented action against a domestic company and could have far-reaching implications for Anthropic's business prospects. Meanwhile, the Claude app has remained at the top of the Apple App Store charts for several days, with Silicon Valley professionals voicing support for Anthropic's stance.
Explosive user growth and outages reveal capacity bottlenecks
Anthropic disclosed that the number of free Claude users has grown by over 60% since January of this year, and that paid subscribers have more than doubled since October of last year. The recent outage is a direct reflection of that growth at the infrastructure level.
In a statement, Anthropic said, “Thank you for your patience. We are working hard to restore service. The demand on Claude over the past week has been unprecedented.” The company posted this update via WhatsApp and provided real-time progress on the status update page.
Notably, the outage affected only consumer products while enterprise API services remained available, suggesting that Anthropic provisions infrastructure separately for its enterprise and consumer tiers.
Conflict with the Pentagon as an unexpected traffic catalyst
Anthropic’s rapid user growth is closely linked to its public confrontation with the U.S. Department of Defense. The Pentagon’s designation of Anthropic as a supply chain risk has attracted widespread attention in the industry.
In an interview with CBS News, Anthropic CEO Dario Amodei described this move as “retaliatory and punitive,” and said the company would challenge any formal supply chain risk designation through legal channels. The company previously stated that its products are not to be used for monitoring U.S. citizens or developing fully autonomous weapons, and last Friday reaffirmed that “no matter how the Department of Defense pressures or punishes us, our stance will not change.”
Just hours after being listed as a supply chain risk, Anthropic’s larger competitor, OpenAI, announced a deal with the Department of Defense to deploy its AI models within classified networks. OpenAI stated that the agreement includes multiple safeguards to ensure the models are used in accordance with principles such as prohibiting large-scale domestic surveillance and requiring “human accountability for the use of force.” However, this agreement quickly sparked controversy online, with some users calling for the cancellation of ChatGPT subscriptions.
The stark contrast between the two companies' choices has raised Anthropic's brand standing among certain user groups and, in effect, accelerated user migration to the Claude platform.
Risk Warning and Disclaimer
Market risks are present; investments should be made cautiously. This article does not constitute personal investment advice and does not consider individual users’ specific investment goals, financial situations, or needs. Users should consider whether any opinions, viewpoints, or conclusions herein are suitable for their particular circumstances. Investment carries risks, and responsibility rests with the individual.