Just caught something interesting that most people are still sleeping on. Everyone's been obsessed with GPU supply for years, but quietly, CPUs have become the real constraint in AI infrastructure. And this shift is happening faster than most realize.
Last month, Google and Intel announced a massive multi-year deal specifically to address this CPU bottleneck. Intel's messaging was clear: AI doesn't run on GPUs alone—CPUs and system orchestration are now the limiting factor. Meanwhile, server CPU prices jumped roughly 30% in Q4 last year, which is wild for a mature market. AMD's delivery times stretched from 8 weeks to over 10 weeks, with some parts facing 6-month delays. This isn't hype—it's real supply pressure.
The irony is brutal: AI labs have plenty of GPUs sitting idle but can't get enough high-end CPUs to actually run them. TSMC's 3nm capacity is getting squeezed by GPU orders, so wafer capacity earmarked for CPUs keeps getting diverted. Even Elon Musk jumped into the CPU game, commissioning Intel to design custom chips for his Terafab project in Texas. That's how tight things have gotten.
Why the sudden shift? Because agent workloads are completely different from traditional inference. Chatbots mostly offload compute to GPUs. But agents need to orchestrate APIs, manage databases, execute code, and coordinate results, and all of that is CPU-intensive. Georgia Tech researchers found that CPU-side work now accounts for 50-90% of total latency in agent systems. The GPU sits there ready to go while the CPU is still handling tool calls.
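To make that concrete, here's a toy sketch of a single agent iteration in Python. Everything in it is invented for illustration (the model call, the tool, the sleep-based timings; this is not any real framework's API). The point is the shape of the loop: one GPU-bound inference call wrapped in CPU-bound parsing, tool execution, and context management.

```python
import json
import time

# Toy stand-ins: a "model call" that would run on the GPU, and a CPU-side
# tool (a fake database lookup). Names and timings are invented for
# illustration only.
def llm_generate(prompt: str) -> str:
    time.sleep(0.05)  # pretend GPU forward pass
    return json.dumps({"tool": "lookup", "args": {"key": "orders"}})

def lookup(key: str) -> str:
    time.sleep(0.20)  # pretend API/DB round-trip: pure CPU/IO time
    return f"rows for {key}"

TOOLS = {"lookup": lookup}

def agent_step(prompt: str) -> tuple[float, float]:
    t0 = time.perf_counter()
    action = llm_generate(prompt)                 # GPU-bound: model inference
    gpu_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    call = json.loads(action)                     # CPU: parse the tool call
    result = TOOLS[call["tool"]](**call["args"])  # CPU: execute the tool
    _ = prompt + "\nObservation: " + result       # CPU: merge into context
    cpu_s = time.perf_counter() - t0
    return gpu_s, cpu_s

gpu_s, cpu_s = agent_step("find recent orders")
print(f"GPU share: {gpu_s / (gpu_s + cpu_s):.0%}, "
      f"CPU share: {cpu_s / (gpu_s + cpu_s):.0%}")
```

In this toy version the CPU side dominates wall time simply because tool round-trips are slow relative to one inference call; real agent systems see the same pattern at scale.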
Exploding context windows don't help either. Models now support over 1 million tokens, and the KV cache alone can hit ~200GB, far beyond the 80GB of HBM on a single H100. CPUs have to offload and manage this memory, so they're not just orchestrating anymore; they're doing serious data management.
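A quick back-of-envelope shows how a number like ~200GB falls out. The model dimensions below are assumptions picked for illustration, not the specs of any particular model:

```python
# Back-of-envelope KV-cache sizing. All dimensions are assumed values.
layers      = 96         # transformer layers
kv_heads    = 8          # grouped-query attention: K/V heads, not query heads
head_dim    = 128        # dimension per head
dtype_bytes = 1          # fp8 KV cache; use 2 for fp16
tokens      = 1_000_000  # 1M-token context window

# K and V each store (kv_heads * head_dim) values per layer per token.
bytes_per_token = 2 * layers * kv_heads * head_dim * dtype_bytes
total_gb = bytes_per_token * tokens / 1e9

print(f"{bytes_per_token / 1024:.0f} KiB/token -> "
      f"{total_gb:.0f} GB at {tokens:,} tokens")
# ~192 KiB/token -> ~197 GB, vs. 80 GB of HBM on a single H100
```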
Look at how manufacturers are responding. AMD CEO Lisa Su has been pretty direct about this: agent workloads are pushing tasks back onto traditional CPUs, and it's driving their growth. AMD's data center revenue hit $5.4B in Q4, up 39% year-over-year, with EPYC CPUs doing the heavy lifting, and its server CPU market share crossed 40% for the first time. But AMD still lacks the tight CPU-GPU interconnect that NVIDIA is building with NVLink.
NVIDIA took a different angle. Their Grace CPU has only 72 cores, versus 128 on AMD's top parts and Intel's comparable configs. Instead of chasing core counts, NVIDIA optimized for collaboration: NVLink C2C pushes bandwidth to 1.8TB/s, letting the CPU directly access GPU memory. They've started selling Grace as a standalone product, and Meta just did a massive "pure Grace deployment" without pairing it with GPUs. That's a signal.
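To see why that kind of bandwidth matters, here's rough math on moving a spilled 200GB KV cache over a standard PCIe link versus the coherent-interconnect figure quoted above (nominal peaks; real throughput is lower):

```python
# Rough transfer-time math for shuttling a spilled ~200GB KV cache between
# host DRAM and GPU memory. Link speeds are nominal peak figures.
kv_cache_gb = 200
links_gbps = {
    "PCIe 5.0 x16": 64,    # approx one-direction peak
    "NVLink-C2C":   1800,  # figure cited in the post
}

for name, bw in links_gbps.items():
    print(f"{name:>12}: {kv_cache_gb / bw:.2f} s to move {kv_cache_gb} GB")
# PCIe 5.0 x16: ~3.13 s; NVLink-C2C: ~0.11 s. That gap is why tight
# CPU-GPU coupling matters once the CPU is managing model memory.
```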
Intel's playing both sides—pushing Xeon processors deep into hyperscaler partnerships while also collaborating with SambaNova on hybrid solutions that run agent inference without GPUs. The 18A process and Xeon 6 Granite Rapids roadmap will be critical for them.
Here's the bigger picture: Amazon's $38B OpenAI partnership explicitly mentions deploying "tens of millions of CPUs." That's a shift from the old playbook of "hundreds of thousands of GPUs." Bank of America projects the CPU market could more than double, from $27B to $60B by 2030, almost entirely driven by AI.
What we're really seeing is a complete infrastructure rebuild. Companies aren't just scaling GPUs anymore; they're building out an entire CPU orchestration layer purpose-built for AI agents. When compute becomes abundant, system-level efficiency becomes the differentiator. The next winners in AI won't be determined by raw GPU counts; they'll be determined by who solves the CPU bottleneck first.