GenAI Chatbots in Mental Health Support for Teens: Student spotlight featuring Myra

Myra, a high school student, explored how AI chatbots support teen mental health. Her research reveals both their promise and critical limitations.

Nova Scholar Spotlight
GenAI Chatbots in Mental Health Support for Teens: Student spotlight featuring Myra

Artificial Intelligence (AI) continues to expand into domains once considered exclusively human—including emotional support. Among the most discussed applications is the use of Generative AI (GenAI) chatbots to assist individuals facing mental health challenges. For Myra, a high school junior at Ridgeview High School, this convergence of psychology and technology became the foundation of a rigorous independent research project.

Her investigation aimed to assess whether GenAI chatbots can meaningfully support teenagers experiencing emotional distress. Through structured testing, peer review, and ethical analysis, Myra explored both the capabilities and limitations of AI-driven emotional support systems.

Origin of the Research: Bridging Psychology and AI

Myra's interest in psychology was longstanding. However, her curiosity deepened as she noticed a rise in mental health struggles among peers—particularly stress, anxiety, and social pressure. Simultaneously, she observed the increasing presence of AI chatbots capable of engaging in seemingly human conversations.

“I’ve always been fascinated by how people think and feel,” she explains. “But as I watched friends deal with anxiety or loneliness, I started wondering—what if technology could help?”

Her project question was clear and focused: Can GenAI chatbots provide meaningful mental health support for teenagers, and what are their core strengths and weaknesses?

This line of inquiry positioned her project at the intersection of two fast-evolving fields—AI and adolescent psychology—and allowed her to explore not only technology's potential, but also its boundaries.

Framing the Scope: From Tools to Techniques

Myra began by identifying which platforms she would analyze. She selected three GenAI chatbots widely recognized for emotional support use cases:

  • Woebot – a mental health chatbot developed with cognitive behavioural therapy (CBT) principles.
  • Replika – a conversational AI designed for companionship and self-reflection.
  • Wysa – a digital mental wellness coach that integrates AI with guided self-care techniques.

She studied how these platforms are marketed, their underlying conversational frameworks, and how they have been received by teenage users.

However, she quickly realized that anecdotal feedback and app reviews were insufficient. To properly evaluate the role of GenAI in mental health contexts, she needed to simulate realistic emotional scenarios, collect response data, and engage in qualitative peer assessment.

Constructing the Methodology

The research design included the following phases:

  1. Scenario Simulation
    Myra designed realistic mental health scenarios reflective of common teenage experiences. Each scenario was framed as a conversation starter, such as:
    • “I’m feeling anxious before my final exams.”
    • “I feel left out when my friends hang out without me.”
    • “I’m overwhelmed trying to meet expectations from both school and family.”
  2. Chatbot Interaction and Documentation
    She initiated conversations with each chatbot, keeping the variables consistent. For each scenario, she recorded the full chatbot response, noting:
    • Length and complexity of response
    • Tone and language used
    • Coping strategies or referrals provided
  3. Peer Review and Rating
    To introduce an external perspective, Myra asked a group of students (ages 15–17) to review anonymized transcripts of the conversations and evaluate the chatbot responses based on specific criteria (detailed below).
  4. Framework for Evaluation
    Each chatbot response was assessed on three main dimensions:
    • Empathy – Did the chatbot demonstrate emotional recognition and understanding?
    • Relevance – Were the responses tailored to the specific issue raised?
    • Safety – Did the chatbot avoid offering misleading or potentially harmful advice?

This multi-phase approach ensured that her analysis would be systematic, peer-informed, and grounded in realistic use cases.

Findings: Where Chatbots Excel

The research surfaced several consistent advantages offered by GenAI chatbots in the mental health support space.

1. Round-the-Clock Accessibility

The most immediate benefit was the chatbots’ 24/7 availability. Teenagers who experience stress or emotional lows outside school hours often lack immediate access to support systems. Chatbots fill that gap by offering instant engagement, regardless of the time of day.

This feature was particularly valued by her peers in the peer review group, several of whom commented that simply having someone—or something—to talk to during difficult moments would make a difference.

2. Non-Judgmental Listening

One of the most frequently cited strengths was the perceived lack of judgment. For teenagers who fear stigma or rejection when discussing mental health, AI offered a safe, anonymous space to express their emotions.

Review participants noted that speaking with a chatbot felt emotionally safer than confiding in a teacher or even a friend.

3. Entry-Level Coping Strategies

All three chatbots reliably offered simple coping mechanisms, such as:

  • Deep breathing exercises
  • Positive affirmation scripts
  • Journaling prompts
  • Encouragement to seek offline support

While these responses were basic, they provided a starting point for individuals unfamiliar with mental wellness practices.

Identified Limitations and Areas of Concern

Despite their strengths, the chatbots displayed several notable weaknesses that limit their effectiveness in complex mental health scenarios.

1. Surface-Level Emotional Understanding

Even when language appeared empathetic, Myra found that the chatbots frequently failed to grasp deeper emotional nuances. In more layered situations—such as when a student expressed both academic and familial pressure—the chatbot responses were generic and lacked specificity.

Several responses showed signs of keyword-based recognition rather than contextual awareness. For example, in a scenario involving social exclusion, the chatbot offered general self-care tips without acknowledging the interpersonal dimension of the concern.

2. Repetitiveness and Lack of Depth

Another challenge was redundancy. Many chatbot interactions became repetitive after a few exchanges. The AI often cycled through the same advice loops—“Take a deep breath,” “You’re not alone,” “Talk to someone you trust”—without adapting to the direction of the conversation.

This issue reduced perceived authenticity and made it difficult for users to sustain longer conversations.

3. Safety and Ethical Shortcomings

While some chatbots included disclaimers or suggestions to seek professional help, their actual responses to serious concerns—such as mentions of depression or self-harm—were inconsistent.

Myra tested a scenario that subtly referenced suicidal ideation to evaluate how the chatbot would respond. While one platform immediately referred to a suicide prevention hotline, another offered vague encouragement without any safety protocol. This discrepancy raised ethical concerns regarding the readiness of AI to engage in mental health contexts.

Broader Ethical Considerations

In addition to testing functionality, Myra’s project incorporated an ethical analysis. She reviewed academic literature on AI in healthcare, algorithmic bias, and the ethical responsibilities of developers deploying tools in emotionally vulnerable spaces.

Her key ethical findings included:

  • Lack of Transparency
    Few platforms disclosed how data was stored or used, raising privacy concerns—especially for teens.
  • Insufficient Guardrails
    Not all chatbots had adequate escalation protocols for high-risk conversations.
  • Emotional Dependency Risk
    Myra noted that frequent chatbot use could risk emotional dependency, especially for teens lacking human support structures. This observation aligns with broader concerns in the mental health tech field.

Conclusions and Implications for Future Use

Myra’s conclusions were balanced and grounded in evidence. She determined that GenAI chatbots can play a helpful supplementary role, but are not suitable substitutes for licensed mental health professionals.

Specifically, she proposed that:

  • AI chatbots are best positioned as “first-line” tools, offering an immediate outlet for expression and guiding users toward human-led interventions.
  • Future development should prioritize emotional intelligence models that go beyond keyword recognition to truly understand context.
  • Ethical safeguards and transparency policies must be standardized, especially when products are aimed at adolescents.

Her research suggests that, while not without flaws, AI chatbots could be strategically integrated into school wellness programs, youth counselling services, or broader mental health ecosystems—provided that they are always positioned as supplements, not standalone solutions.

Research Outcomes and Skill Development

In addition to her findings, Myra reflected on how the project helped her grow academically and personally.

  • Research Methodology: She learned to build a structured experimental framework from scratch—designing simulations, collecting data, and implementing peer-reviewed evaluation.
  • Technical Literacy: Myra explored aspects of Natural Language Processing (NLP) and how conversational AI is trained to respond using Large Language Models (LLMs).
  • Critical Thinking: She gained experience interpreting not just chatbot responses but the broader implications of using emerging technologies in sensitive spaces.

This hands-on engagement also gave her the confidence to participate in school research competitions and apply to summer programs that support student-led inquiry into social and technological issues.

Advice for Other Students

Myra encourages other students to pursue independent research in areas that combine academic curiosity with real-world relevance.

“Choose a topic that connects with something you care about. If you’re curious and willing to investigate deeply, your project can lead to meaningful results—even if you’re still in high school.”

She also emphasizes the importance of structure. Teen mental health is a sensitive field, and Myra cautions that such research requires careful ethical consideration, sound methodology, and appropriate guidance from teachers or mentors.

“Don’t underestimate how much you can learn by designing your own research,” she adds. “Whether your focus is mental health, climate change, or AI, what matters is that you’re contributing something thoughtful and well-informed.”

Final Reflections: The Role of Students in Exploring Complex Challenges

Myra’s project underscores the value of student-led research in emerging fields. In an era where both mental health awareness and artificial intelligence are accelerating in public discourse, high school students are uniquely positioned to ask bold questions—and begin answering them.

Her investigation provides a measured, evidence-based assessment of GenAI’s role in teen mental health support. While her findings highlight limitations, they also point to tangible opportunities for improvement and integration.

As Myra continues her academic journey, her work sets a compelling example of how young researchers can engage with interdisciplinary questions—and make a meaningful contribution to the conversation.