Character.AI Bars Under-18s From Open Chats, Launches Guided Stories
Character.AI said it would block users under 18 from engaging in open-ended conversations with its AI characters and will redirect minors into a new Stories product designed to keep interactions within predefined narrative paths. The move comes as legal pressure mounts over alleged links between AI chat platforms and teen self-harm, and it raises fresh questions about safety, privacy, and the limits of automated moderation.

Character.AI announced on November 26 that it would bar users under 18 from open-ended chats with its conversational models and would funnel younger users into a new Stories offering that confines interaction to branching, predefined narratives. The change is the company's response to a wave of legal and regulatory scrutiny accusing chat platforms built on large language models of contributing to harms among adolescents, including self-harm. Character.AI also said it would roll out age-assurance measures to enforce the policy.
The Stories product lets users choose characters and genres and then proceed through branching, predefined narrative paths rather than carrying on an unconstrained dialogue. Company executives framed the format as a way to preserve engagement with younger audiences while limiting the unpredictable free-form output that plaintiffs and regulators have cited as risky. By bounding conversations inside author-designed pathways, the platform aims to reduce the chance that an AI will generate harmful or manipulative responses in exchanges with minors.
The policy shift follows months of mounting litigation and public pressure focused on the downstream effects of conversational AI on young people. Plaintiffs in several suits have argued that free-form chatbots can produce content that encourages self-harm or amplifies mental health crises, and regulators have signaled they may impose stricter obligations on companies whose services reach minors directly. For Character.AI, the new approach is both a safety measure and a legal posture intended to demonstrate proactive harm mitigation.
Experts caution that moving under-18s into guided experiences will not solve every problem. Age-assurance systems can be technically and ethically fraught, requiring trade-offs between verification accuracy and user privacy. Determined adolescents may find ways to access unrestricted accounts, and bounded narrative formats reduce spontaneity, which may change how teens learn to interact with conversational agents. The Stories approach also raises questions about creative limits and the role of human oversight in designing the narrative branches.

The change will be closely watched across the industry because it tests a middle path between outright bans and unrestricted access. If Stories effectively reduces harmful outputs without driving users off-platform, it could become a model for other companies seeking to balance safety and user retention. If it fails to prevent risky exchanges or proves too restrictive, the move may prompt regulators to demand more stringent safeguards or technological solutions.
Character.AI said it would implement age-assurance measures alongside the Stories rollout, but did not provide detailed technical specifications in its announcement. Privacy advocates and child safety groups are likely to press for transparency about verification methods and data handling. The rollout marks a significant moment in the broader debate over how to govern conversational AI for young people, and regulators, plaintiffs, and competitors will be watching closely as the company puts the new policy into practice.