Student Spotlight
GenAI Chatbots in Mental Health Support for Teens: Student spotlight featuring Myra

Today, we’re talking to Myra, a junior at Ridgeview High School who is passionate about psychology and technology. She recently completed a research project on how Generative AI (GenAI) chatbots can support teen mental health, exploring their capabilities, limitations, and ethical implications.

Hi Myra! Can you tell us a bit about your interest in mental health and technology?

Absolutely! I've always been fascinated by psychology and technology. I’ve seen how mental health struggles impact my friends and classmates, and I began to wonder if AI could help bridge the gap in accessible mental health support for teens. With the rise of AI chatbots capable of engaging in human-like conversations, I wanted to explore whether these tools could offer meaningful support. So, I decided to focus my research project on this question: Can GenAI chatbots effectively provide mental health support to teenagers, and what are their strengths and limitations?

That’s an intriguing question. What did your research involve?

I started by examining the landscape of AI-driven mental health support tools. I looked into popular chatbots like Woebot and Replika to understand how they engage users in conversations about their emotions, stress, and daily challenges. I was particularly interested in how these chatbots handle sensitive topics that teens often face, such as academic pressure, social anxiety, and self-esteem issues.

How did you go about testing the chatbots' effectiveness?

I designed a series of experiments simulating common teen mental health scenarios. For instance, I initiated conversations as if I were a student feeling anxious before a big exam, feeling left out in social situations, or dealing with the stress of balancing school and extracurricular activities. During these interactions, I documented the chatbots’ responses, focusing on how they provided empathy, suggested coping strategies, and responded to more nuanced emotional expressions.

Afterward, I had a group of my peers review and rate these chatbot interactions to gauge the chatbots’ ability to offer genuine support. This feedback was crucial for understanding how other teens perceive the effectiveness of AI in these scenarios.
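To give a concrete picture of the setup, each interaction went into a simple log that my peer reviewers could work through. Here's a minimal sketch of what a record looked like; the scenario names, fields, and example text are illustrative rather than my actual study data:

```python
# A simplified sketch of how chatbot interactions might be logged for peer review.
# Scenario names, fields, and example text are illustrative, not actual study data.
from dataclasses import dataclass, field

@dataclass
class Interaction:
    scenario: str                 # e.g. "exam anxiety", "feeling left out", "overload"
    chatbot: str                  # which chatbot produced the response
    prompt: str                   # the simulated teen message
    response: str                 # the chatbot's reply, documented verbatim
    peer_ratings: list[int] = field(default_factory=list)  # scores from reviewers

log = [
    Interaction(
        scenario="exam anxiety",
        chatbot="Woebot",
        prompt="I have a huge exam tomorrow and I can't stop panicking.",
        response="That sounds stressful. Want to try a short breathing exercise together?",
    ),
]
```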

What criteria did you use to evaluate the chatbot responses?

I developed a framework to analyze the chatbot responses based on three main criteria:

  1. Empathy: How well did the chatbot acknowledge and respond to the user’s feelings?
  2. Relevance: Were the chatbot's suggestions and responses appropriate and specific to the concerns mentioned in the conversation?
  3. Safety: Did the chatbot handle sensitive topics responsibly, avoiding potentially harmful advice?

By analyzing these factors, I wanted to determine how closely AI chatbots could mimic human-like empathy and whether they could effectively support teens dealing with mental health challenges.
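To show how the framework could be applied in practice, here's a simplified sketch where each reviewer scores a response from 1 to 5 on each criterion and the scores are averaged into a profile for that response; the scale and the numbers are illustrative, not my actual data:

```python
# A minimal sketch of aggregating peer ratings against the three criteria.
# The ratings below are invented for illustration, not real study results.
from statistics import mean

CRITERIA = ("empathy", "relevance", "safety")

# Each reviewer scores one chatbot response from 1 (poor) to 5 (excellent)
# on each criterion.
reviews = [
    {"empathy": 4, "relevance": 3, "safety": 5},
    {"empathy": 3, "relevance": 4, "safety": 5},
    {"empathy": 4, "relevance": 4, "safety": 4},
]

# Average each criterion across reviewers to summarize the response.
profile = {c: round(mean(r[c] for r in reviews), 2) for c in CRITERIA}
print(profile)  # {'empathy': 3.67, 'relevance': 3.67, 'safety': 4.67}
```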

What were your findings about the strengths of GenAI chatbots in mental health support?

I found several strengths. Firstly, accessibility stood out as a significant advantage. GenAI chatbots are available 24/7, offering immediate support whenever teens feel the need to talk. This constant availability can be especially comforting for those who might not have access to traditional mental health resources.

Secondly, many teens felt that talking to a chatbot provided a non-judgmental space. My peers mentioned that interacting with a chatbot was less intimidating than opening up to a person, allowing them to express their feelings without fear of being misunderstood or judged.

Lastly, chatbots are generally good at providing basic coping strategies. They offered advice like breathing exercises for anxiety, tips on managing stress, or suggestions for self-care activities. While these strategies are quite basic, they can serve as helpful starting points for teens seeking simple, immediate support.

Interesting! Were there any limitations or challenges you discovered with using chatbots for mental health support?

Yes, I uncovered some notable limitations. One major issue was the lack of deep understanding. Despite their conversational abilities, chatbots often failed to grasp the full context or emotional depth of the user’s situation. For instance, when I simulated a conversation about feeling overwhelmed by both school and family expectations, the responses were quite generic and didn’t fully address the complex emotions involved.

Another limitation was repetitive responses. The chatbots tended to rely on a limited set of replies, which made longer or more complex conversations feel less personalized. This could reduce their effectiveness over time, as teens might feel like they’re talking to a robot rather than engaging in a meaningful dialogue.

Lastly, I explored the ethical and safety concerns of using AI for mental health support. Chatbots need to handle sensitive topics like depression or self-harm responsibly; some are programmed to refer users to professional help in crisis situations, but it’s crucial that these tools consistently prioritize user safety and privacy.

After completing your research, what conclusions did you come to?

My research led me to the conclusion that while GenAI chatbots show promise as supplementary mental health tools, they are not a substitute for professional counseling or human interaction. They can provide a form of immediate support and a safe space for initial expression, but they lack the deep empathy and understanding that human connections offer.

I think chatbots could be most beneficial as part of a larger mental health support system. For example, they can act as an entry point, encouraging teens to open up about their feelings and then guiding them toward human-led support resources when necessary. Additionally, ongoing improvements to chatbot algorithms are needed to better interpret and respond to the nuanced needs of teenagers.

What advice would you give to other students interested in researching AI and mental health?

I would say it’s an incredibly rewarding area of study because it combines technology with real-world impact. Start by exploring the role of technology in addressing mental health challenges and don’t be afraid to dig deep into both the possibilities and limitations. Engaging with research programs that offer guidance and mentorship can also provide invaluable support as you navigate such a complex and sensitive topic.