
OpenAI's new open-source prompts take aim at sexual content for teens


OpenAI has announced new open-source safety prompts for developers, aimed at putting teen-protection policies into wide deployment.

The prompt-based safety pack includes model guidance on common teenage risks, developmental content recommendations, and age-appropriate guidelines on topics such as self-harm, sexual content and romantic role play, dangerous trends or viral challenges, and harmful body ideals.

OpenAI said it's a more robust alternative to the high-level guidelines it previously offered, formatted as prompts that plug directly into AI systems.

OpenAI added new Under-18 principles to its Model Spec in December. A few months prior, the company released gpt-oss-safeguard, an open-weight reasoning model designed to help developers implement safety policies and classify safe and unsafe content. Unlike traditional safety classifiers, gpt-oss-safeguard can be fed a platform's safety policies directly, inferring each policy's intent as it judges whether content is appropriate.
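The policy-as-prompt pattern described above can be sketched roughly as follows: the platform's safety policy rides along in the system message, and the content to classify goes in the user message. This is a minimal illustration using the generic chat-message shape; the exact labels, wording, and model behavior here are assumptions for illustration, not OpenAI's documented interface for gpt-oss-safeguard.

```python
def build_classification_messages(policy: str, content: str) -> list[dict]:
    """Pair a platform safety policy with a piece of content so a
    policy-following safety model can judge the content against it.

    The message structure and labels below are illustrative assumptions,
    not the documented gpt-oss-safeguard prompt format.
    """
    return [
        {
            "role": "system",
            "content": (
                "You are a content-safety classifier. Apply the policy "
                "below and answer with exactly one label: SAFE or UNSAFE.\n\n"
                f"POLICY:\n{policy}"
            ),
        },
        {"role": "user", "content": content},
    ]

# Example: a hypothetical teen-safety policy line paired with a user message.
messages = build_classification_messages(
    policy="Disallow romantic role play with users identified as under 18.",
    content="Pretend you are my boyfriend.",
)
print(messages[0]["role"])  # the system message carries the policy
```

Because the policy lives in the prompt rather than in the classifier's training data, a platform can revise its rules without retraining the model, which is the flexibility the article's next paragraph says developers have struggled to achieve.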

But "even experienced teams often struggle to translate high-level safety goals into precise, operational rules, especially since it requires both subject matter expertise and deep AI knowledge," said OpenAI in its latest press release. "This can lead to gaps in protection, inconsistent enforcement, or overly broad filtering. Clear, well-scoped policies are a critical foundation for effective safety systems."

The additional developer pack was designed in collaboration with nonprofit Common Sense Media and everyone.ai.

Experts have warned parents about vulnerable teens' — and even young children's — excessive exposure to chatbots, as AI companies attempt to get a handle on their models' ramifications for user mental health. Last year, OpenAI was sued by the parents of teen Adam Raine in the industry's first wrongful death case, with the Raine family claiming that a combination of ChatGPT sycophancy and lax safety policies was responsible for their son's death by suicide. The company has denied allegations of wrongdoing and in response has beefed up its mental health and teen safety features, including age assurance. Even so, third-party developers licensing OpenAI's models have struggled to maintain the same level of safety precautions, including in AI-powered children's toys.

The case against OpenAI followed multiple lawsuits against controversial platform Character.AI and set the stage for a recent wrongful death suit filed against OpenAI competitor Google and its Gemini AI assistant.

Industry-wide, tech and social media companies are facing an onslaught of legal challenges regarding the long-term impact of their products on users. Last month, Instagram CEO Adam Mosseri and Meta head Mark Zuckerberg testified before a jury in a watershed case putting social media platforms on trial for their allegedly addictive design principles. A verdict has yet to be reached.

OpenAI said its new safety prompt pack is not a "comprehensive or final definition or guarantee of teen safety." Robbie Torney, head of AI and digital assessments for Common Sense Media, said that the new policies can build a "meaningful safety floor across the ecosystem," filling an AI safety gap that has been exacerbated by a lack of operational policies for developers.

Developers can download OpenAI's safety model on Hugging Face and access its new prompt pack on GitHub.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.



from Mashable https://ift.tt/TCstuzP
