Something strange has happened in recent weeks. I’ve started to notice a radical shift in the global AI landscape, not just at the model level, but across the entire infrastructure.
Eight years ago, the ZTE story was a harsh lesson: a giant company stopped operating overnight because of an American chip ban. But this time, the picture is completely different. When the U.S. began tightening restrictions on NVIDIA exports to China, everyone assumed the Chinese AI dream would end there.
They were wrong.
The real problem wasn’t the chips themselves, but CUDA, NVIDIA’s software platform that has become the backbone of the entire AI industry. Today, over 90% of AI developers worldwide are tied to this environment. It’s a self-reinforcing flywheel: the more it’s used, the stronger it becomes.
But instead of trying to break this wheel directly, Chinese companies chose a different path. They started with algorithms.
DeepSeek is a mixture-of-experts model with 671 billion parameters, but it activates only 37 billion of them per token. The total training cost: just $5.576 million. Compare this to GPT-4, which reportedly cost $78 million. The difference isn’t incremental; it’s an order of magnitude.
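That 671B-total / 37B-active split is what a mixture-of-experts (MoE) architecture buys you: a router sends each token to a small subset of expert networks, so only a fraction of the parameters do work on any given token. Here is a minimal sketch of the idea in plain NumPy, with toy sizes; nothing here reflects DeepSeek's actual implementation.

```python
# Minimal Mixture-of-Experts (MoE) routing sketch. Toy sizes throughout;
# real models use far more experts and much larger hidden dimensions.
import numpy as np

rng = np.random.default_rng(0)

n_experts = 8    # total expert networks
top_k = 2        # experts activated per token
d_model = 16     # hidden dimension

# Each expert is a small feed-forward matrix. Together they hold the
# model's "total" parameters, but only top_k experts run per token.
experts = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
           for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)

def moe_forward(x):
    """Route a token vector x to its top_k experts and mix their outputs."""
    logits = x @ router                   # score every expert for this token
    top = np.argsort(logits)[-top_k:]     # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)

total_params = n_experts * d_model * d_model   # parameters the model stores
active_params = top_k * d_model * d_model      # parameters touched per token
print(f"total: {total_params}, active per token: {active_params}")
```

With these toy numbers, only a quarter of the expert parameters run per token; scale the same ratio up and you get the 671B-stored / 37B-active economics described above.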
And the prices? DeepSeek charges between $0.028 and $0.28 per million tokens. GPT-4 costs $5.00; Claude Opus reaches $15.00. Simply put, DeepSeek is one to two orders of magnitude cheaper. This gap has caused a seismic shift in the developer market.
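To make the spread concrete, here is a back-of-the-envelope ratio using the per-million-token prices quoted above. These are illustrative snapshots from the text, not current list prices, and real bills also depend on input/output token splits and caching discounts.

```python
# Price ratios per million tokens, using the figures quoted in the text.
prices = {
    "DeepSeek low tier": 0.028,
    "DeepSeek high tier": 0.28,
    "GPT-4": 5.00,
    "Claude Opus": 15.00,
}

low = prices["DeepSeek low tier"]
high = prices["DeepSeek high tier"]

for name in ("GPT-4", "Claude Opus"):
    p = prices[name]
    # Range: versus DeepSeek's most expensive and cheapest tiers.
    print(f"{name}: {p / high:.0f}x to {p / low:.0f}x the price of DeepSeek")
```

Even against DeepSeek's most expensive tier, the competitors quoted here come out roughly 18x to 54x pricier, which is what drives the developer migration described next.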
By February 2026, Chinese model usage on OpenRouter increased by 127% in just three weeks. A year ago, Chinese models accounted for no more than 2%. Now, they’re approaching 60%.
But here’s the most important part: reducing inference costs alone doesn’t solve the problem. Training is the real black hole of computational power.
In Qiansuo, in southeastern China, a complete local production line was built in just 180 days. The main components? Loongson 3C6000 processors and T100 accelerator cards from Taichu Yuanqi, 100% Chinese chips. Throughput: one server off the line every five minutes.
And that’s when the story started to change. In January 2026, Zhipu, in collaboration with Huawei, launched the GLM-Image model, the first advanced image-generation model trained entirely on local Chinese chips. A month later, the “Stars” telecom large model was trained on a domestic Chinese computing pool with tens of thousands of processing units.
This means one thing: local chips have moved from inference to training. A real qualitative shift.
The driving force behind all this is Huawei Ascend. By the end of 2025, the number of developers in the Ascend ecosystem exceeded 4 million. More than 3,000 companies work with it, and 43 major models have been trained on Ascend. And in March 2026, Huawei launched a new computing architecture called SuperPoD.
With Huawei’s latest software update, Ascend 910B processors have reached the processing power level of the NVIDIA A100. The gap hasn’t disappeared, but the chips have shifted from unusable to effectively usable.
Now, here’s the most exciting part: energy.
The U.S. faces a real electricity crisis. American data centers consumed 183 terawatt-hours in 2024, about 4% of total electricity. This is expected to double by 2030. Arm’s CEO predicts that AI data centers will consume 20-25% of U.S. electricity by 2030.
China generates 10.4 trillion kilowatt-hours annually; the U.S. generates 4.2 trillion. That is roughly 2.5 times America’s output. Most importantly: only 15% of China’s electricity consumption goes to households, compared to 36% in the U.S. That leaves enormous industrial capacity that can be redirected toward computing.
Electricity costs in U.S. AI company hubs range between $0.12 and $0.15 per kilowatt-hour. In western China, industrial prices are around $0.03 — a quarter to a fifth of the American price.
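The generation and price figures above can be checked directly. A quick calculation, using only the numbers quoted in the text (the U.S. price is taken at the midpoint of the stated range):

```python
# Electricity figures quoted in the text, in TWh (1 trillion kWh = 1,000 TWh).
china_twh = 10_400
us_twh = 4_200
china_household_share = 0.15
us_household_share = 0.36

print(f"generation ratio: {china_twh / us_twh:.2f}x")

# Non-household supply: the rough ceiling on power available to industry.
china_industrial = china_twh * (1 - china_household_share)
us_industrial = us_twh * (1 - us_household_share)
print(f"non-household supply: China {china_industrial:.0f} TWh vs US {us_industrial:.0f} TWh")

# Price gap, midpoint of the quoted U.S. range vs western China.
us_price, cn_price = 0.135, 0.03   # $/kWh
print(f"US price is {us_price / cn_price:.1f}x the western-China price")
```

The non-household gap is the striking part: on these figures, China has over three times the industrial electricity headroom of the U.S. before a single new plant is built.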
Huawei’s software stack and this new infrastructure mean that Chinese AI isn’t exported as products or factories, but as tokens: small units of data processing, produced in Chinese computing factories, then transmitted via submarine cables to the world.
DeepSeek alone has 26,000 global companies as clients, and 3,200 institutions have used the enterprise version. In China, it captured 89% of the market. In sanctioned countries, between 40-60%. In 2025, 58% of emerging AI companies integrated DeepSeek into their tech stacks.
This reminds me of another war for industrial independence. In 1986, Japan signed a semiconductor agreement with the U.S. Japan controlled 51% of the global market in 1988. Today, its share is less than 7%.
Why? Because Japan accepted being the best producer in a global system dominated by one power, but didn’t build an independent ecosystem.
This time, China is choosing a completely different path. From algorithm improvements, to the leap of local chips from inference to training, to 4 million Ascend developers, to a global spread of tokens.
Every step is building an independent industrial system.
On February 27, 2026, three Chinese chip companies released performance reports on the same day. Revenues soared: 453%, 243%, 121%. But losses are also significant. Half the results are fire, half are water.
Fire: the market desperately needs an alternative to NVIDIA, whose roughly 95% share of the market leaves a gap that is gradually being filled.
Water: every loss is a real investment in building an independent ecosystem. R&D, software support, engineers solving translation problems one by one.
These losses aren’t mismanagement. They are a toll paid in a war.
Eight years ago, we asked: can we survive?
Today, the question is different: how much do we have to pay to survive?
And that price is itself progress.