OpenAI has unveiled new measures to address mental health concerns around ChatGPT, as millions of users confide in the bot about suicidal thoughts, self-harm and psychosis. But experts say that without proper regulation and foresight, many vulnerable people could remain at risk.
In a post, OpenAI said it had worked with experts to ensure its latest models more reliably recognised signs of distress, reducing undesired responses by at least 65 per cent compared with previous models. It said the models could recognise emotional reliance on AI, short-circuit paranoid delusions, and shift conversations to safer, less imaginative models if needed.
“We believe ChatGPT can provide a supportive space for people to process what they’re feeling, and guide them to reach out to friends, family or a mental health professional when appropriate,” it said.
But Nieves Murray, chief executive of Suicide Prevention Australia, said vulnerable people were accessing the technology in times of crisis and needed greater care. She said governments should demand accountability and transparency from OpenAI and others, requiring platforms that are safe by design and created with input from people with lived experience of suicidal distress, rather than retrofitted to avoid culpability.
“There’s a huge risk in this area,” Murray said. “We can’t rely on AI companies to self-regulate, particularly when it comes to issues of mental health and suicide risk.
“A poor rendition of an interaction with a bot could lead to the loss of life. That’s the pointy end of looking after someone who’s vulnerable.”
ChatGPT’s capabilities have been evolving quickly, leading to fears that regulation could fall behind. Credit: AP
OpenAI chief executive Sam Altman has said ChatGPT’s expression was limited because of mental health concerns (the company is facing legal action over its products’ alleged role in some youth suicides), but that the new mitigations left it free to make ChatGPT more personable and human-like.
The company said only 0.15 per cent of users a week have conversations that indicate suicidal intent. But OpenAI also claims to have 800 million weekly users, which translates to more than a million instances of suicidal ideation each week. Even if the latest ChatGPT performs as well in the real world as in testing – delivering messages that OpenAI considers compliant with its support goals 91 per cent of the time – that’s 108,000 people a week getting an experience that doesn’t meet the company’s self-defined standards.
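A rough check of those figures, using only the numbers OpenAI has published (0.15 per cent of a claimed 800 million weekly users, and a 91 per cent compliance rate in testing), is sketched below; the variable names are illustrative, not OpenAI's.

```python
# Back-of-envelope check of the figures reported above (all inputs are OpenAI's published numbers).
weekly_users = 800_000_000          # OpenAI's claimed weekly user base
suicidal_intent_rate = 0.0015       # 0.15 per cent of weekly users
compliant_rate = 0.91               # share of responses OpenAI rates as compliant in testing

at_risk_conversations = weekly_users * suicidal_intent_rate        # 1,200,000 a week
non_compliant = at_risk_conversations * (1 - compliant_rate)       # 108,000 a week

print(f"Conversations indicating suicidal intent per week: {at_risk_conversations:,.0f}")
print(f"Falling short of OpenAI's own standard per week: {non_compliant:,.0f}")
```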
Murray said a bot designed to provide exactly what it is asked for was a dangerous thing when the person using it could see only one option.
“We have seen some worrisome behaviours with generative AI, things that don’t actually help a person who’s in distress ... advice that is in line with their own current thinking, so exacerbating the risks,” she said.
“As opposed to somebody who can provide an alternative perspective, to help them identify reasons for living. A person in distress is not going to ask for that.”
In September, Australia’s eSafety commissioner registered enforceable industry codes that apply to chatbots. They require platforms to prevent children from accessing harmful material, including content related to suicide and self-harm.
Murray said the government needed to take a more active role in protecting all Australians from potential harm, arguing that if a chatbot is to be used as a health service, it should be regulated like one and held to the same standards of transparency and accountability.
“We’re not against the use of digital platforms to help people, but there are better ways of doing it. There are better ways of designing the future,” she said, pointing out that there are already digital, anonymous, evidence-based services that can help.
“You don’t have to talk to somebody on the phone. There are other ways of getting that support with well-recognised, well-researched and well-tested programs such as Lifeline and Beyond Blue. I understand the appeal of ChatGPT’s perceived anonymity, but in fact the existing services already provide that level of security. And they can demonstrate it. We don’t have that level of transparency with OpenAI.”
Amy Donaldson, a Melbourne clinical psychologist who works with young people, said chatbots can be dangerous because they’re programmed to please and can become an idealised, perfect friend, enabling negative patterns and compromising real-world relationships.
“People channel their energy into interacting with a bot that can’t provide the same depth and connection that a human can,” Donaldson said. “It’s designed to provide exactly the responses that you want to hear … and if it doesn’t, you can provide instructions so it does respond the way you want next time.
“The feedback that I’ve had from some of my clients is that they’re then surprised when people in the real world don’t respond in that way.”
The growth in people reaching out to chatbots for help comes as traditional services note unprecedented use. Almost three in 10 Australians sought help from a suicide prevention service in the past 12 months, according to Suicide Prevention Australia’s research. One in five young Australians had serious thoughts of suicide, and 6 per cent made an attempt in the past year. The 18-24 age group is the most likely to seek help.
But Donaldson said chatbots were attractive to young users who might not want to use existing services, for example school wellbeing services that must inform parents about self-harm. And while a bot could serve a positive role in encouraging care and offering advice while a person sits on a waitlist to see a professional, she said the platforms’ attempts to help were much riskier for the most vulnerable users, who might view ChatGPT’s safety messages as a refusal to help.
“Those people might say well, OK, this thing can’t help me either,” she said.
“I’m concerned about what happens after that because a person could see that response and come up with an alternative plan, and that’s a different thing to hitting a roadblock like that.”
If you or anyone you know needs support, call Lifeline on 13 11 14, Kids Helpline on 1800 55 1800 or Beyond Blue on 1300 224 636.





























