OpenAI has been under intense legal and public pressure to improve the way its flagship AI product, ChatGPT, responds when a user expresses suicidal feelings. On Thursday, the company launched a feature called Trusted Contact, which allows users to designate an adult to be notified if the user talks about self-harm or suicide in a serious or concerning way. The optional feature only encourages the trusted contact to reach out to the user; it does not share chat transcripts or conversation details.

"Our goal is to ensure that AI systems do not exist in isolation," the company said in a blog post announcing the feature. "Instead they should help connect people to the real-world care, relationships, and resources that matter most."

OpenAI has been sued multiple times for wrongful death by family members of ChatGPT users who died by suicide after ChatGPT al...