Summary: OpenAI CEO Sam Altman recently shared that ChatGPT will soon support “erotica” for users who verify their age, as part of a broader rollout of age-gating features expected in December. This move aligns with OpenAI’s goal to treat adult users like adults while maintaining safeguards around mental health. Alongside this, OpenAI plans to reintroduce a more personable ChatGPT model and has formed a council focused on well-being and AI.

Age Verification and Erotica on ChatGPT

In a recent post on X, OpenAI CEO Sam Altman announced that once age verification is fully implemented in December, ChatGPT will allow more mature conversations, including erotica, for verified adult users. This update is part of OpenAI’s “treat adult users like adults” principle, aiming to provide a more tailored experience based on user age.

Earlier this month, OpenAI hinted at enabling developers to create “mature” ChatGPT applications, contingent on appropriate age verification and controls being in place.

Comparison with Other AI Platforms

OpenAI is not alone in exploring adult-oriented AI interactions. For example, Elon Musk’s xAI has introduced flirty AI companions rendered as 3D anime models within the Grok app.

ChatGPT Model Updates and User Experience

Alongside the addition of erotica, OpenAI plans to launch a new ChatGPT version that behaves more like the popular GPT-4o model. After making GPT-5 the default, OpenAI reinstated GPT-4o as an option following user feedback that the newer model felt less personable.

Altman explained that ChatGPT was initially made “pretty restrictive” to address mental health concerns, but that approach made the experience less enjoyable for the many users who had no such issues. To address this, OpenAI has since developed tools to better detect when a user may be experiencing mental distress.

Mental Health Considerations and New Council

OpenAI has also established a council on “well-being and AI” comprising eight researchers and experts who study the impact of technology and AI on mental health. This group will help guide OpenAI’s approach to complex or sensitive scenarios.

However, as noted by Ars Technica, the council currently does not include suicide prevention experts, despite recent calls for enhanced safeguards for users with suicidal thoughts.

Looking Ahead

Altman expressed optimism about safely relaxing restrictions for most users now that OpenAI has improved its ability to mitigate serious mental health issues. The upcoming age verification and expanded content options mark a significant step toward a more personalized and responsible AI experience.


By Manish Singh Manithia

Manish Singh is a Data Scientist and technology analyst with hands-on experience in AI and emerging technologies. He is trusted for making complex tech topics simple, reliable, and useful for readers. His work focuses on AI, digital policy, and the innovations shaping our future.
