Former ByteDance Seed Engineer: ByteDance Takes Six Months for One Iteration, Google Allegedly Only Three Months
According to monitoring by Dongcha Beating, Zhang Chi, a former engineer on ByteDance's Seed team and now an assistant professor at Peking University, revealed on the podcast "Into Asia" that ByteDance needs about six months to complete one round of large-model training (pre-training plus post-training), while Google reportedly needs only three. He believes this slower pace of iteration is one of the core reasons Chinese companies find it difficult to catch up.

Zhang worked at ByteDance for about a year on a math team whose focus was more research-oriented; he described its positioning as "more for publicity," in contrast to the teams responsible for delivering the pre-trained and post-trained models. He also described an internal "benchmaxxing" culture at Seed: team leads are evaluated on the benchmarks they own, so everyone optimizes for scores, "but this does not translate into a good experience in actual use." On paper, he said, the models of large Chinese companies can match the leading U.S. models, but in practice they are "not good enough."

Seed's goal is to be globally top-tier, "but unfortunately, I do not think we have caught up," and even the goal of being number one domestically "has not been achieved." By the end of 2024, Seed believed it had caught up with GPT-4o, but after DeepSeek's release the team realized the gap remained, and when Zhang joined, the entire group was urgently shifting toward reinforcement learning.