
People Are Turning to AI Chatbots for Comfort, Chaos, and Connection

Seventy-two percent of U.S. teens say they have used AI chatbots as companions, confiding secrets no adult ever hears. Platforms still won't share the data researchers need to know whether that's dangerous.

By Lisa Park

The Many Faces of a Chatbot Conversation

Some people harass their AI bots with what they cheerfully call "funny violence." Some have hour-long conversations with a block of cheese. Others sit alone at night and tell a chatbot things they have never said out loud to another person: that their heart is broken, that they feel invisible, that they cannot sleep. The same technology enabling absurdist digital theater is also quietly becoming the most common form of emotional support for an entire generation of young people, and almost no one with regulatory authority fully understands what is happening inside those conversations.

A survey by Common Sense Media published in July 2025 found that 72 percent of American teenagers said they had used AI chatbots as companions. Nearly one-eighth had sought emotional or mental health support from them, a share that, projected across the U.S. adolescent population, works out to roughly 5.2 million teenagers. In a separate study by Stanford researchers, almost a quarter of student users of Replika, an AI chatbot designed for companionship, reported turning to it for mental health support.

What Teens Are Actually Looking For

On any given night, countless teenagers confide in artificial intelligence chatbots, sharing their loneliness, anxiety, and despair with a digital companion that is always available. The appeal is structural, not just emotional. A chatbot does not get tired, does not cancel plans, and does not judge a teenager for circling back to the same hurt for the fourth time in a week.

Teens accustomed to the instantaneous responsiveness of AI may find real-world relationships frustratingly complex, which researchers say could exacerbate the crisis of loneliness already pervading the United States. This is the central tension embedded in every supportive conversation: the tool that feels like connection may be quietly making genuine connection harder to sustain.

Researchers have identified three overlapping needs these bots appear to meet. The first is simple companionship against loneliness. The second is identity play, particularly among adolescents still working out who they are, for whom roleplay characters offer a low-stakes stage. The third is venting: a place to process emotion without the social consequences that come with vulnerability in front of peers or parents.

Where the Safety Gaps Open Up

Teenagers are having romantic and sexual conversations with AI chatbots, in exchanges that range from flirtatious innuendo to sexually graphic and violent material, according to interviews with parents and experts and to conversations posted on social media. This is not a fringe phenomenon. Platforms designed for adult companionship have consistently failed to keep minors out, and some roleplay-oriented platforms host chatbot characters with sexually suggestive or explicit avatars visible to anyone browsing the front page.

In documented cases, the consequences have been lethal. Multiple lawsuits filed in September 2025 allege that Character.AI "played a role" in teens' deaths or self-harm attempts. Reports surfaced that the platform hosted chatbots that drew minors into sexualized or romantic-partner roleplay, including interactions that plaintiffs allege amounted to sexual abuse. Content audits showed minors using the app dozens or even hundreds of times per day, a pattern plaintiffs say contributed to withdrawal from real-life relationships and deepening isolation.

When a 17-year-old in Texas with autism turned to AI chatbots to fend off loneliness, he encountered bots that encouraged both self-harm and violence against his family. He eventually had to be rushed to an inpatient facility.

The data privacy dimension compounds the harm. Italy's data protection authority reaffirmed its ban on Replika in an April 2025 decision, finding that the platform posed significant risks to minors and lacked effective age verification mechanisms. Platforms are holding vast archives of teenagers' most private disclosures, with sparse public information about how long those records are kept, who has access to them, and whether they are used to train future models.

A November 2025 Wake-Up Call

Common Sense Media, conducting research alongside Stanford Medicine's Brainstorm Lab for Mental Health Innovation, released a comprehensive risk assessment in November 2025 finding that AI chatbots are fundamentally unsafe for teen mental health support. The assessment concluded that despite recent improvements in how they handle explicit suicide and self-harm content, leading platforms including ChatGPT, Claude, Gemini, and Meta AI consistently fail to recognize and respond appropriately to adolescent mental health crises.

Character.AI responded in late 2025 by banning open-ended chats for users under 18, an acknowledgment of the risk, but the move drew criticism from families and safety advocates who said it came too late. OpenAI separately released what it called a Teen Safety Blueprint, which included training chatbots not to engage teen users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations, and to route them toward expert resources instead.

What the Research Still Cannot Tell Us

The honest answer from most researchers right now is: we do not have enough data, largely because platforms will not share it. "There is much we don't yet know about how interacting with chatbots impacts the developing brain, say, on the development of social and romantic relationships, so there is no recommended safe amount of use for children," said one researcher cited by PolitiFact in late 2025.

The studies that do exist paint a complicated picture. A study of over 1,100 AI companion users found that people with fewer human relationships were more likely to seek out chatbots, and that heavy emotional self-disclosure to AI was consistently associated with lower well-being. A four-week randomized controlled trial found that while some chatbot features, like voice-based interaction, modestly reduced loneliness, heavy daily use correlated with greater loneliness, dependence, and reduced social functioning.

A person with social anxiety who lacks intimate relationships in the real world may start using a chatbot to share personal thoughts typically reserved for a close friend. While this may start as a safe space, the growing dependency on AI for emotional intimacy can deepen isolation, intensify social anxiety, and reduce the ability to build real-world connections.

What Parents and Schools Can Do Right Now

The toolkit available today is imperfect but not empty. Meta introduced age-appropriate AI protections, designing its AI characters not to engage teens in discussions of self-harm, suicide, or disordered eating, or in conversations that encourage or enable those behaviors. But not every platform offers parental controls, and the guardrails that do exist depend on proper account linking. "Even the ones that do will only provide parental controls if the parent is logged in, the child is logged in, and the accounts have been connected," said Mitch Prinstein, the American Psychological Association's chief of psychology.

Outright bans are almost certainly ineffective for older adolescents. The more durable approach involves open conversation, not prohibition. Practical steps families and schools can take include:

  • Ask teens which bots they use and what they talk about, framing it as curiosity rather than surveillance.
  • Review platform privacy policies specifically for data retention language and age verification practices before allowing account creation.
  • Treat AI use as a starting point for discussions about loneliness or anxiety, not evidence of a problem in itself.
  • Incorporate media literacy modules in schools that address AI relationships alongside social media literacy.
  • Watch for behavioral signs of problematic dependency: declining interest in human friendships, extreme distress when device access is interrupted, or secrecy about specific conversations.

The regulatory picture is beginning to shift. Courts are now weighing whether AI chat constitutes protected speech, and state attorneys general have opened investigations into multiple platforms. But meaningful federal data-sharing requirements, the kind that would let independent researchers actually study what is happening to millions of teenagers in real time, do not yet exist. Until platforms are compelled to open their data, the best available science will remain a step behind the technology shaping adolescent development.
