On March 1st, Iran’s missiles and drones struck the Gulf region, one of which hit an Amazon data center in the UAE.
The server room caught fire, lost power, and about 60 cloud services were interrupted.
One of the world’s largest AI systems, Claude, running on Amazon’s cloud, went down globally on the same day.
Anthropic’s official statement was that a surge in users overwhelmed the servers.
As of press time, complaints that Claude is unavailable are still pouring in on social media; on the well-known prediction market Polymarket, there are already bets that "Claude will be down several times in March."
If Iran is ultimately confirmed to be responsible, this will be a first in human history:
a commercial data center physically destroyed in a war.
But why would a civilian data center be targeted?
Going back two days. On February 28th, the US and Israel jointly carried out airstrikes on Iran, killing Supreme Leader Khamenei and several senior officials.
Much of the intelligence analysis, target identification, and battlefield simulation for this airstrike was assisted by Claude. Through cooperation with the military and data analysis company Palantir, Claude has long been integrated into the US military’s intelligence systems.
Ironically, a few hours before the airstrike, Trump had just ordered a comprehensive ban on Anthropic because Anthropic refused to hand over AI to the Pentagon without restrictions. But despite the ban, the war still had to be fought.
It would take at least six months to extract Claude from military systems.
So, before the ban was even fully enforced, the US military used Claude to carry out the strike on Iran. Then Iran retaliated, and missiles hit the data center running Claude.
Image source: Bloomberg
The data center was probably not targeted deliberately but hit incidentally. But whether or not the missiles were aimed at it, one thing is certain:
Truth lies within cannon range, and so does AI. The side firing the cannon and the side being hit are both in range.
AI Infrastructure Built on the Middle East Powder Keg
Over the past three years, Silicon Valley has moved half of its AI industry to the Middle East Gulf.
The reason is simple. The UAE and Saudi Arabia have the world’s wealthiest sovereign funds, cheap electricity, and a regulation:
If you want to serve my clients, the data must stay on my territory.
So Amazon has set up data centers in the UAE and Bahrain, and has invested $5.3 billion to open another in Saudi Arabia; Microsoft has nodes in the UAE and Qatar, and its Saudi data center has also been completed.
OpenAI, in partnership with Nvidia and SoftBank, is building a $30 billion AI park in the UAE, claimed to be the largest computing power base outside the US.
In January this year, the US signed a "Pax Silica" agreement with the UAE and Qatar. The name translates to "Silicon Peace," and it sounds reassuring.
The core of the agreement is to control the flow of chips, ensuring advanced chips do not fall into Chinese hands.
In exchange, the UAE received permission to import hundreds of thousands of Nvidia’s most advanced processors annually. Abu Dhabi’s G42 cut ties with Huawei, and Saudi AI companies pledged not to buy Huawei equipment…
The entire Gulf AI infrastructure—from chips to data centers to models—is shifting fully towards the US.
These agreements cover everything, from chip export controls, data sovereignty, reciprocal investments, to technology leakage risks.
But no one considered that someone might use missiles to destroy data centers.
After the Amazon data center caught fire, an international security scholar at Qatar University made a remark that I find quite apt:
“These security frameworks are designed for supply chain management and political alignment; physical security has never been on the agenda.”
For a decade, cloud computing has told a story of resilience, redundancy, and decentralization. But data centers are physical buildings, with addresses, walls, roofs, and coordinates. No matter how advanced your chips are, once the building that houses them is bombed, they are gone.
“Cloud” is a metaphor; data centers are not.
AI may seem intangible, running in code, floating in the cloud. But the code runs on chips, chips are housed in data centers, and data centers are on Earth.
Who Will Protect AI?
This time, Amazon's data center was hit as, arguably, collateral damage, or perhaps a misdirected strike.
But what about next time?
In a world of escalating geopolitical conflict, if your data center hosts AI models used to identify targets against an adversary, that adversary has every reason to treat your data center as a military target.
This issue has no clear answer in international law.
The current laws of war do regulate "dual-use facilities," but those clauses were written with factories and bridges in mind; no one anticipated data centers.
A data center that handles banking transactions during the day and military intelligence analysis at night—does it count as civilian or military?
In peacetime, data center location decisions are based on latency, electricity prices, and policy incentives… but in wartime, none of that matters. What matters is how close your data center is to the nearest military base.
This bombing has shifted everyone’s focus.
Previously, everyone was worried about AI replacing their jobs; but no one considered another question:
Before AI replaces you, how vulnerable is it?
A regional conflict caused the largest cloud provider’s Middle East nodes to go down for an entire day; and that was just one data center.
There are nearly 1,300 large-scale data centers worldwide, with 770 under construction. These data centers consume increasing amounts of electricity, water, and money, and host more and more critical data—your savings, medical records, food delivery orders, and even military intelligence of some countries…
But so far, the protection schemes for these data centers are still just fire suppression systems and backup generators.
When AI becomes national infrastructure, its security is no longer just one company's concern. Who will protect AI? Cloud providers? The Pentagon? Or the UAE's air defense systems?
Three days ago, this was just a theoretical question. Now, it’s real.
AI is within cannon range. Actually, it’s not just AI. In this era, what isn’t within cannon range?
How can a single missile cause global AI giants to go offline instantly?
Author: David, Deep Tide TechFlow
Original Title: AI Within Cannon Range