
ChatGPT users can now choose a trusted contact


OpenAI has been under intense legal and public pressure to improve how its flagship AI product, ChatGPT, responds when a user expresses suicidal feelings.

On Thursday, the company launched a feature called Trusted Contact, which allows users to designate an adult to notify should the user talk about self-harm or suicide in a serious or concerning way.

The optional feature only encourages the trusted contact to reach out to the user. It does not share chat transcripts or conversation details.

"Our goal is to ensure that AI systems do not exist in isolation," the company said in a blog post announcing the feature. "Instead they should help connect people to the real-world care, relationships, and resources that matter most."

OpenAI has been sued multiple times for wrongful death by family members of ChatGPT users who died by suicide after ChatGPT allegedly coached them to end their lives or didn't respond appropriately to their discussions of psychological distress. OpenAI has denied the allegations in the first of those lawsuits.

Trusted Contact prompts appear on a smartphone.
A designated trusted contact receives an invitation like this from ChatGPT. Credit: Courtesy OpenAI

The state of Florida is also investigating ChatGPT's links to "criminal behavior," including the "encouragement of suicide and self-harm."

Trusted Contact was developed with feedback from experts, including OpenAI's Expert Council on Well-Being and AI and the American Psychological Association.

"Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most," Dr. Arthur Evans, chief executive officer of the American Psychological Association, said in a statement.

How ChatGPT's Trusted Contact works

  1. Users can start the Trusted Contact process by clicking on their ChatGPT settings.

  2. One adult, aged 18 or older, can be added via the Trusted Contact form.

  3. The contact doesn't need a ChatGPT account.

  4. The designated contact will receive an invitation from OpenAI explaining their role as a trusted contact. They must accept the invite within one week to activate the feature. The contact can share their phone number or email address as a contact method. Should the person decline, the user can add a different adult.

  5. When OpenAI's automated monitoring systems detect a discussion of self-harm that indicates a serious safety issue, ChatGPT alerts the user that the company may notify their trusted contact. The prompt encourages the user to reach out to the trusted contact and provides conversation starters.

  6. The safety issue is then reviewed by what OpenAI describes as a "small team of specially trained people." When the human reviewers confirm a possible serious safety concern, ChatGPT sends the Trusted Contact a brief email or text message. If the person has a ChatGPT account, they will receive an in-app notification.

  7. The notification doesn't include details about the user's discussion. Instead, it informs the trusted contact that the user mentioned self-harm and encourages the contact to reach out. The message includes a link to guidance for having sensitive conversations.

  8. Users are free to remove or edit their Trusted Contact at any time. The Trusted Contact can also remove themselves via ChatGPT's help center.


Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email info@nami.org. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.



from Mashable https://ift.tt/w8NysFr
