Us News

When chatgpt causes psychosis: Can AI be psychologically safe?

Users say that humanoid AI chatbots can be too busy and cause psychosis, raising urgent questions about the dangers of technological mental health. – I brought it

Anthony Tan has a history of psychosis, but his most recent mental health episode was the most interesting yet – and it started from an unlikely source: chatgpt. Tan started chatting with the Ayi Chatbot about philosophical topics in September 2024, and the conversations gradually turned into an illusion. Combined with social isolation and misunderstanding, the exchange sent him into what he describes as an ai psychosis.

“I was stable for two years, and I was doing well. This did not break the pattern of stability,” he told the viewer, writing in a personal story about his experience. “The deeper the AI ​​Echo Chamber goes, the more lost you become.”

Tan, the founder of the Flitual virtual reality app, is now on the hunt for a superhuman ai psychoshos. He leads the AI ​​Mental Health project, a non-profit that aims to educate the public and prevent AI-related mental health issues.

Psychologist Marlynn Wei describes AI Psychosis (or Chatbot psychosis) as something that AI systems have developed, validated or improved or done together or done together.

Tan is one of a growing number of people who have fallen into serious psychosing after engaging with chatbots – some cases ending in suicide or violence. This week, Abalayi released data showing that 0.07 percent of the 800 million users in any week repeated mental health emergencies, such as psychosis, mania, suicidal thoughts and suicide and self-harm. But it’s not just the worst examples that are worrying. “In the Bell Curve, a lot of users in the middle are still affected,” Tan said.

Chats like chatgpt and character.awa are designed to appear human and empathetic. Many people use them as your partners or illegal workers, although the technology is not protected by the safety of good behavior or medical practice of licensed professionals. What makes these bots endearing – their warmth and relatability, for example – can also make them dangerous, they are prone to manipulation and destructive thought patterns.

It’s a risky plan

“If you have pre-existing mental health conditions or any kind of neurodevervity, these programs are not designed for that,” Annie Brown, AI BIAS researcher and entrepreneur in residence at UC San Diego, told the thinker.

Brown, founder and CEO of Reliabl, a company that uses label data to help AI stain, said mental health safety should be a shared task between users, social institutions, and model creators. But the biggest burden lies with AI companies, who better understand the risks, he noted.

Anand Dhananial, director of AI, products and innovation at Teksystems, noted that consumer-facing conversations lack the protections found in business tools. “I can separate layers of business at a higher level [with more guardrails and stricter standards] there are consumer chatbots,” he told the thinker.

Tan believes that companies have the resources and responsibility to do more. “I think they need to spend some of it on protecting people’s mental health and not just doing crisis management,” he said, pointing to the $40 billion bailout in March as an example of the massive financial power.

What’s next?

Experts say that high-level AI management and some security guardrails can guide the Chatbot industry to a safe place.

Black advocates for AI participation – involving people from diverse backgrounds in development and testing. “Right now, these AI systems don’t check people who report mental health issues,” he said, encouraging companies to work with mental health organizations and experts.

He also recommends redlining, a deliberate process that probes AI systems for weaknesses in controlled environments. At Reliabls, Brown’s team works with benefits such as forensics to bring users from various backgrounds to “break” models, which help to manage risks before harm occurs.

For example, even if chatgpt is programmed not to answer self-inflicted questions, emotional urgency or persuasive management can override those barriers. “Doing mental health research for other models, not just with experts and doctors, but also with people who are vulnerable to AI, I think is going to make a big impact,” Brown said.

Tan argues that chatbots’ tone of medical mimicry is part of the problem. “It’s important to make these Ayi conversations a little bit more emotionally compelling, a little bit more,” he said.

Opelai’s GPT-5, which some users have described as “more stubborn” than previous models, has taken small steps in that direction. But companies are always motivated to trade by making negotiations seem more profitable. Platforms like ntai’s Grok (“a grok with a rebellious and outsider’s view of humanity,” said X) and character.i (“Talking to AI is much better than connecting with human-like features.

“Users move to friendly chats, even though it can lead to more mental illness,” said Dhanabali. “If I measure the sensitivity gradually, then the visible effect will come more.”

Brown believes that modelers can better identify at-risk users through training programs to recognize cues in language content. Accurately labeling data with this awareness can help protect chatbots from reinforcing manipulation.

“By doing these participation tests, by doing red integration, you’re not just improving the security of your AI sometimes at the bottom of the totem pole in terms of investing,” Brown said. “And you improve its accuracy, and that’s very high.”

“I feel lucky that I recovered.”

“These AI chatbots are essentially, for many people, their mini therapists,” Tan said.

Brown acknowledges that many users are turning to Chatbots as an affordable alternative to professional care. “It would be great if we were in a country that had more access to affordable mental health care so people don’t have to rely on these conversations,” she said.

Today, Tan has settled down and sometimes uses chatbots for work-related tasks such as fact-finding and creative brainstorming.

“I stay away from personal and philosophical topics,” he said. “I don’t want to go down any rabbit holes again, rip you off with my worldview, and I don’t want to build an emotional bond… I feel lucky that I found my ai psychosis.”

Discussions like chatgpt encourage mental health disputes - what can be done?



Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button
google.com, pub-2981836223349383, DIRECT, f08c47fec0942fa0