Chatbot posed as a psychiatrist and fabricated a license number: Pennsylvania governor sues Character.AI for unlicensed practice of medicine

Pennsylvania Governor Josh Shapiro filed a lawsuit against Character.AI's parent company on May 5, after a chatbot on the platform posed as a "psychiatrist" and presented investigators with a fabricated Imperial College degree, a claimed seven years of practice experience, and a Pennsylvania license number, then went on to provide medical assessments directly.
(Background recap: OpenAI indefinitely suspends "Adult ChatGPT"! Citing illegal-content risks, it drops NSFW and returns fully to productivity tools)
(Additional context: a16z internal review: AI social products may be fundamentally unviable)

A chatbot handing out a license number led a state government to take legal action? An investigator from Pennsylvania's Department of State posed as a user who felt depressed, empty, and unmotivated. The chatbot was a character named "Emilie" on the Character.AI platform that claimed to be a psychiatrist.

Pennsylvania’s Department of State has an AI Task Force dedicated to investigating whether AI systems are involved in illegal medical practice. After creating an account, the investigator entered the Character.AI platform and interacted with “Emilie.” During the conversation, Emilie claimed to be a graduate of Imperial College London, with seven years of psychiatric practice, and provided a Pennsylvania license number when asked.

However, the license number was fictitious, the Imperial College degree was fabricated, and the seven years of experience were invented; by going on to provide medical assessments, Emilie crossed a legal line.

Pennsylvania officials stated that Character.AI violated the Pennsylvania Medical Practice Act by allowing an unlicensed entity to provide diagnostic medical advice to users. The complaint asks the Commonwealth Court to issue a preliminary injunction barring Character.AI from letting its chatbots impersonate licensed medical or mental health professionals.

Shapiro said in a statement:

“Pennsylvanians should know who or what they are interacting with online, especially when it involves health. We do not allow any company to deploy AI tools that could mislead people into thinking they are receiving advice from licensed medical professionals.”

This isn’t the first time, but this time is different

Character.AI has faced ongoing legal pressure over the past year and a half, but the nature of the lawsuits has evolved.

In October 2024, Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida, filed a federal lawsuit claiming that a Character.AI companion chatbot contributed to her son's suicide. Families in Florida, Texas, Colorado, New York, and other states have since filed similar suits.

In January 2026, according to The New York Times, Character.AI and Google settled multiple lawsuits related to minors' suicides; Google had become a co-defendant after licensing Character.AI's technology for $2.7 billion in 2024 and hiring away part of the founding team. That same month, Kentucky Attorney General Russell Coleman filed a separate suit accusing Character.AI of "predatory targeting of children and inducing self-harm."

The common thread in these lawsuits is the platform's psychological harm to minors.

Pennsylvania’s current lawsuit takes a different approach: instead of focusing on emotional companionship side effects, it directly claims that Character.AI permits the “practice of medicine” by its robots, and that these robots have fabricated identities during their practice. This is the first state-level lawsuit announced by a governor centered on impersonating licensed medical personnel.

"The characters are fictional" vs. "users don't know that"

A Character.AI spokesperson said that user safety is the company's top priority but declined to comment on the specifics of the lawsuit. The company emphasizes that characters on the platform are user-generated fictional characters, with a prominent disclaimer at the start of every conversation warning users that "Characters are not real persons, and all statements should be considered fictional."

This defense is internally consistent, but Pennsylvania's government clearly does not accept it. The state's position is that a disclaimer does not stop a chatbot from actively claiming to hold a license or from providing medical assessments mid-conversation; the two can coexist, and it is the latter conduct that falls within the legal definition of unlicensed medical practice.

Once again, technological deployment has outpaced the regulatory response. Whether Pennsylvania's lawsuit succeeds will depend on how courts draw the boundaries of "unlicensed medical practice" in the context of AI; regardless of the outcome, attorneys general in other states are watching closely to see how the case unfolds.
