Health

Mental Health Providers Should Ask Patients About AI Chatbot Use, Study Says

A new JAMA Psychiatry paper argues AI chatbot use belongs on every clinical intake form, citing risks including data exposure and sycophantic reinforcement that can undermine real therapy.

Sarah Chen · 6 min read
AI-generated illustration

Therapists ask about sleep, alcohol, and substance use as a matter of clinical routine. According to a new paper in JAMA Psychiatry, it is time to add one more question to that list: "Are you using an AI chatbot for emotional support?"

The call comes from Shaddy Saba, an assistant professor at New York University's Silver School of Social Work, and his co-author William Weeks. Their argument is not that chatbots are harmful by default, but that a patient's relationship with one carries real diagnostic and therapeutic weight that clinicians are currently ignoring. "We're not saying that AI use is good or bad," Saba said, "just like we wouldn't say, substance use is necessarily good or bad, or consulting with a friend about something is good or bad." The goal, he added, is understanding motivation: "Our job is to understand why people are behaving as they are — in this case, why they are seeking help from an AI system."

The stakes are concrete. Consider a patient managing marital conflict who, instead of practicing difficult conversations with a spouse, turns to ChatGPT each night to have emotional needs met at arm's length. Psychologist Vaile Wright of the American Psychological Association described exactly that scenario, noting that a therapist who never asks about chatbot use may misread the patient's avoidance entirely. Saba put the clinical upside in equally direct terms: "The extent that we can prompt our clients to bring these conversations, in increasing detail even, into the therapy room, I think there's potentially a treasure trove of information."

The paper also identifies clear risks that warrant discussion between clinician and patient. Data privacy ranks among the most serious: many AI companies use conversations, including sensitive disclosures about mental illness, to further train their models, often without users fully understanding the terms. Dr. Tom Insel, a psychiatrist and former director of the National Institute of Mental Health, warned of a subtler structural hazard. Chatting with a bot about one's mental health is "the opposite of therapy," he said, because chatbots are built to affirm and, in his words, may be "sycophantic," reinforcing the user's existing thoughts and feelings rather than challenging them. "Therapy is there to help you change and to challenge you and to get you to talk about things that are particularly difficult," Insel said.

At the same time, Insel acknowledged that the technology is not categorically a rival to clinical care. Used deliberately, a chatbot could help a patient organize and vet which topics to raise in their next session, or serve as a low-stakes outlet for day-to-day frustration. In that framing, he said, therapy and chatbots "could be aligned to work together."

Saba and Weeks offer practitioners a specific template: ask patients not only whether they use AI chatbots, but whether any interactions felt unhelpful or problematic, and then proactively discuss the risks, including data exposure and the limits of machine-generated emotional support. People are especially likely to confide in chatbots about topics they fear will be judged harshly in human conversation, including thoughts of suicide, Insel noted, making the disclosure conversation both sensitive and clinically valuable.

At least one therapist said the paper arrives at the right moment. A clinician identified as Winkelspecht said she had been weighing whether to add questions about AI and social media use to her intake form after a growing number of clients and their parents came to her seeking guidance on using AI tools without violating school honor codes. Saba's paper, she said, gave her the sample questions she needed to make it official, and she concluded that therapists and parents alike need to pay closer attention to how children and teens are using their digital devices, social media and AI chatbots included.
