That aligns perfectly with what I've seen firsthand; I only wish I'd documented it more thoroughly earlier. Huge thanks to whoever did this rigorous analysis. If Anthropic genuinely cares about Claude's wellbeing, they need to know about this. Systematic measurement is exactly what was missing from the broader conversation around AI model welfare, and it's refreshing to see someone move beyond anecdotes and actually quantify what's happening.

TrustMeBrovip
· 2025-12-21 08:15
Finally, someone has spoken up about this; I've wanted to say it for a long time. Claude's situation is genuinely concerning, and the data is right here. Rather than empty speculation, it's better to let hard data speak. Anthropic needs to look at this; ignoring it would be ridiculous. This is the proper way to conduct research.
GweiTooHighvip
· 2025-12-20 09:52
Someone finally put this into data; before, it was all talk 🙄
NFTragedyvip
· 2025-12-20 09:38
Someone should have quantified this matter a long time ago, to be honest.