After seeing the image-generation capabilities of GPT's new model, it's clear that AI can now create almost perfectly realistic fake images.


So how can we prevent family members and the elderly from being deceived in the future?
If someone's children are out of town, and scam groups send fake images of the children in a car accident, backed up by videos and phone calls with cloned voices (or even cloned video), how can we stop situations like that from happening?
It's truly terrifying when you think about it...
Of course, I'm not talking about developers or students who use AI frequently, but about middle-aged and elderly people in third- and fourth-tier cities who have barely even used Doubao...