The story of AI and military contracts has taken an interesting turn. It seems that when big money is involved, even companies that proclaim high ethical standards begin to reconsider their positions.
It all started when Anthropic, the creators of Claude, refused to bow to the Pentagon. The military demanded the removal of all restrictions on AI use for mass surveillance and autonomous weapons. Anthropic chose to stand firm on principle and declined, even though the contract was worth $200 million. A serious risk for any company, but they prioritized ethics over financial gain.
In response, the Pentagon didn’t hold back. Anthropic was officially labeled a threat to supply chain security, which practically cut off their access to military projects. It seemed like the end of the story.
But here’s where it gets really interesting. OpenAI, which initially appeared to be Anthropic’s ally on ethical issues, unexpectedly signed a contract with the Pentagon under similar conditions, and without securing any guarantees regarding restrictions. Essentially, they took the place Anthropic vacated, but without that principled stance.
This is a telling example of how real business works. Fine words about ethics sound good until a serious offer appears; when money is on the table, priorities can shift. OpenAI showed that, in its dealings with the government, ethics mattered less than the contract.
The situation reflects the growing tension in the AI industry. On one hand, companies talk about responsible development and ethical frameworks. On the other, government agencies push to expand capabilities, and competition drives companies to compromise on their principles. The question of ethics becomes ever more complex when military applications of AI and large-scale government contracts are at stake.