The most important event today is NVIDIA's GTC conference, basically an AI version of A Brief History of Humankind.
Jensen Huang hasn't even taken the stage yet, but the early leaks alone could already fill a book.
Wanwán has pulled together the three biggest highlights. Come along, friends, follow me.
The previous-generation Blackwell was already very impressive, right? Next up, the new-generation Vera Rubin chip is about to enter mass production.
So what makes Vera Rubin so strong? In plain terms, one word: cheap.
Running the same AI model, the chip count drops to a quarter and inference compute costs fall by 90%.
Ninety percent, friends.
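As a rough illustration of that claim, here is a back-of-envelope sketch. All figures below are assumed for illustration; they are not NVIDIA's actual cluster sizes or prices.

```python
# Back-of-envelope sketch of the claimed Vera Rubin savings.
# All numbers are illustrative assumptions, not NVIDIA data.

blackwell_chips = 100           # hypothetical Blackwell cluster size
blackwell_cost_per_query = 1.0  # normalized inference cost per query

# "chip count drops to a quarter"
rubin_chips = blackwell_chips / 4

# "inference compute costs fall by 90%"
rubin_cost_per_query = blackwell_cost_per_query * (1 - 0.90)

print(rubin_chips)           # a quarter of the chips
print(rubin_cost_per_query)  # ~0.1, i.e. one tenth of the cost
```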
AWS, Microsoft, and Google, the three major cloud providers, are on board in the first batch.
Earlier, on the earnings call, Jensen Huang said Groq would be folded into the NVIDIA ecosystem as an expansion of its architecture, much like the Mellanox acquisition back then rounded out its networking capabilities.
Groq's LPU and NVIDIA's GPU sit in the same data center: the GPU understands the problem, while the LPU rapidly spits out the answer.
With the two chips dividing the work, latency in Agent scenarios drops sharply.
AI Agents do work on people's behalf. A single task might call the model dozens of times, and every round burns inference compute while the user sits there waiting; if it's even slightly slow, the experience collapses.
Inference has two phases: first understand your question, then output the answer token by token.
GPUs excel at the first phase; for the second, how fast and steadily the tokens come out, Groq's LPU is stronger.
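The two-phase split described above can be sketched with a toy latency model. The timings here are made-up assumptions purely for illustration; they are not benchmark numbers for either chip.

```python
# Toy model of the two inference phases: prompt understanding, then
# token-by-token output. All timings are assumed, not measured.

def response_latency(prefill_s: float, tokens: int, s_per_token: float) -> float:
    """Total time = understand the prompt + emit the answer tokens."""
    return prefill_s + tokens * s_per_token

# Assume both chips understand the prompt equally fast, but the LPU
# emits tokens an order of magnitude faster (illustrative numbers).
gpu = response_latency(prefill_s=0.2, tokens=500, s_per_token=0.02)
lpu = response_latency(prefill_s=0.2, tokens=500, s_per_token=0.002)

print(f"GPU-only: {gpu:.1f}s, GPU prefill + LPU decode: {lpu:.1f}s")
```

Under these assumptions, the long tail of the response time is entirely in the token-output phase, which is why speeding up that phase matters so much when a user is waiting on an Agent.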
Is 20 billion USD expensive?
Think about it: in the future every company will run hundreds of Agents, and each Agent will call the model thousands of times a day.
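The scale implied by that claim is easy to put in numbers. The specific figures below are assumptions chosen to match "hundreds" and "thousands"; the point is the multiplication, not the exact values.

```python
# Rough scale of the inference demand implied above (assumed figures).
agents_per_company = 300         # "hundreds of Agents"
calls_per_agent_per_day = 2000   # "thousands of times a day"

calls_per_company_per_day = agents_per_company * calls_per_agent_per_day
print(calls_per_company_per_day)  # hundreds of thousands of model calls
                                  # per company, every single day
```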
It's basically an open-source platform: enterprises can install it to deploy AI employees that run workflows, handle data, and manage projects. Word is they're already in talks with Salesforce and Adobe.
The interesting part is that NemoClaw doesn't require you to use NVIDIA chips. Think about that logic: selling chips only earns money at the hardware layer, while setting the rules lets you earn across the entire chain. Jensen Huang has done the math very clearly.
Most likely, the generation-after-next architecture, Feynman, will make its first appearance, with mass production in 2028 on TSMC's most advanced 1.6nm process.
There's also one more under-the-radar detail I find pretty interesting.
NVIDIA has released two laptop processors aimed at gaming. The graphics-card company is now coming for the CPU market's share.
Wanwán thinks Jensen Huang is on his way to becoming one of the defining figures of this era.