ByteDance's Seed Team Releases Seed3D 2.0, Upgrading Geometric Precision and the Material Generation Framework
According to monitoring by Dongcha Beating, ByteDance's Seed team has released Seed3D 2.0, a model that generates 3D assets with materials from a single input image. The upgrade focuses on geometric precision and material realism, and the API is now live on Volcano Ark.

Geometry generation uses a coarse-to-fine two-stage strategy: a large-parameter DiT first establishes a coarse topology, and a second stage then recovers sharp edges and fine surface detail. On the material side, an MoE architecture enhances high-resolution detail, and VLM priors are introduced to stabilize material decomposition under unknown lighting conditions, producing complete PBR texture sets that can be integrated directly into standard rendering pipelines.

In blind evaluations, sixty evaluators with 3D modeling experience compared Seed3D 2.0 against Hunyuan3D-2.5/3.1, Tripo 3.0, Rodin Gen2, HiTem v2.0, and the previous Seed3D 1.0 on approximately 200 test cases. Preference rates for geometry generation ranged from 65.1% to 98.3%, and the preference rate for 3D assets with materials exceeded 69%.

For downstream tasks, Seed3D 2.0 can decompose a 3D asset into independent components by function, add joint information, and export to URDF compatible with simulation engines such as Isaac Sim, making it suitable for dynamic interaction scenarios like robotic grasping. At the scene level, it accepts text, multi-view images, or video input and composes multiple assets into complete scenes.
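The URDF format mentioned above is a plain XML description of a robot's links and joints, which is why simulators such as Isaac Sim can load it directly. As an illustrative sketch (a hand-written example, not actual Seed3D 2.0 output), a minimal URDF with one revolute joint can be inspected with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal hand-written URDF: two links connected by one revolute joint.
# Illustrative only -- not output generated by Seed3D 2.0.
URDF = """<?xml version="1.0"?>
<robot name="gripper_demo">
  <link name="base"/>
  <link name="finger"/>
  <joint name="finger_joint" type="revolute">
    <parent link="base"/>
    <child link="finger"/>
    <axis xyz="0 0 1"/>
    <limit lower="0.0" upper="1.2" effort="5.0" velocity="1.0"/>
  </joint>
</robot>"""

robot = ET.fromstring(URDF)

# Collect link names and joint types, the articulation info Seed3D 2.0
# is said to attach when decomposing an asset into components.
links = [link.get("name") for link in robot.findall("link")]
joints = {joint.get("name"): joint.get("type") for joint in robot.findall("joint")}

print(links)   # ['base', 'finger']
print(joints)  # {'finger_joint': 'revolute'}
```

A grasping simulation would read exactly this structure: each `<joint>` element tells the engine which parts of the mesh move relative to each other and within what limits.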