Augment Code's practical testing of AGENTS.md's impact on code generation: the best files are equivalent to a one-level model upgrade, the worst are no better than not writing one at all
News report, April 23 (UTC+8): according to BlockBeats monitoring, AI coding tool company Augment Code extracted dozens of AGENTS.md files from its monorepo and used its internal evaluation suite, AuggieBench, to measure their actual impact on code-generation agents' output. The method used a high-quality merged PR as the baseline, then had the agent redo the same task under two conditions, with and without the AGENTS.md file, and compared the scores.

The spread was much larger than expected. The best AGENTS.md files delivered a quality boost equivalent to upgrading the model from Haiku to Opus, while the worst were no better than having no AGENTS.md at all. The same file could even have opposite effects on different tasks: it raised specification compliance on a bug fix by 25%, but cut the completion rate of a complex feature in the same module by 30%.

Effective writing practices include keeping the main file between 100 and 150 lines and providing a few focused reference documents, which brought a 10% to 15% overall improvement in medium-sized modules with roughly a hundred core files. Structuring processes into numbered steps worked best: a six-step deployment process cut the share of PRs with missing files from 40% to 10%, improving accuracy by 25%. Using decision tables to help the agent choose the correct plan before acting also improved compliance by 25%. Prohibitions must be paired with alternatives: a bare "do not" makes agents hesitate, and more than 15 consecutive warnings significantly degrade performance.

The most common pitfall is too much documentation. Once an agent is pulled into a large set of architecture documents and loads hundreds of thousands of tokens, its output actually deteriorates; one module had accumulated 226 documents totaling more than 2MB, at which point even the best AGENTS.md was useless.
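The writing practices described in the report above can be sketched as a fragment of a hypothetical AGENTS.md. The section names, steps, paths, and table contents below are illustrative examples of the patterns (numbered steps, a decision table, prohibitions paired with alternatives), not excerpts from Augment Code's actual files:

```markdown
## Deployment workflow (follow the steps in order)

1. Update the service manifest in `deploy/manifest.yaml`.
2. Regenerate client stubs from the updated manifest.
3. Add or update tests for every changed endpoint.
4. Run the full lint and type-check suite.
5. Update the changelog entry for this release.
6. Verify every new file is included in the PR before requesting review.

## Choosing a migration strategy

| Situation                            | Plan to use          |
| ------------------------------------ | -------------------- |
| Schema change, backwards-compatible  | Online migration     |
| Schema change, breaking              | Versioned migration  |
| Data-only backfill                   | Off-peak batch job   |

## Prohibitions (each paired with an alternative)

- Do not edit generated files under `gen/`; change the source templates instead.
- Do not query the database directly from handlers; go through the repository layer.
```

Note that each "do not" line names what to do instead, matching the finding that bare prohibitions cause agents to hesitate.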
Additionally, AGENTS.md is the only document location that agents will read 100% of the time; unreferenced documents under _docs/ are discovered less than 10% of the time. (Source: BlockBeats)
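Given that only AGENTS.md itself is reliably read, one way to apply the 100-to-150-line guideline is a compact main file that explicitly links a handful of focused reference documents rather than inlining or merely co-locating them. The paths below are hypothetical:

```markdown
<!-- AGENTS.md: keep this file short; link details instead of inlining them -->

## Reference documents (read only the one relevant to your task)

- Testing conventions: `docs/agents/testing.md`
- API design rules: `docs/agents/api-style.md`
- Release checklist: `docs/agents/release.md`
```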