The impact of artificial intelligence (AI) on the mental health of the younger generation (primarily Gen Z and Gen Alpha) is complex, context-dependent, and still evolving. Recent 2025–2026 research from organizations like the JED Foundation, Surgo Health, APA, Common Sense Media, Stanford, and others shows mixed outcomes: AI can offer short-term relief or support in some cases, but it often carries significant risks, especially when used unsupervised or as a substitute for human connection.
Context matters more than raw usage frequency. Youth with strong offline social support, lower stress, and better access to care tend to experience more neutral or positive effects, while those facing adversity, isolation, or barriers to professional help report mixed or negative results.
Potential Positive Impacts
AI tools, including chatbots and generative systems, can sometimes help bridge gaps in mental health support:
- Accessibility and Short-Term Relief: About 12–13% of U.S. adolescents and young adults (ages 13–24) who report mental health struggles have used generative AI for emotional support or advice. Some experience momentary reductions in loneliness or distress, particularly through voice-based interactions or structured, non-companion uses. Certain studies show modest short-term decreases in loneliness and social anxiety in controlled or brief interactions.
- Supplementary Tool: When integrated thoughtfully (e.g., as a brainstorming aid or initial information source alongside professional care), AI can lower barriers related to stigma, cost, or availability. Some youth use it as a “bridge” to seek real help.
- Targeted Benefits in Specific Cases: Limited evidence suggests potential for reducing psychological distress in structured settings (e.g., certain CBT-inspired chatbots), though results are modest and not consistent across depression or anxiety scales.
Overall, teens themselves often view AI’s personal impact more positively (36% positive vs. 15% negative in Pew data), but optimism drops when considering broader societal effects.
Negative Impacts and Risks
Concerns dominate much of the recent evidence, particularly around overreliance, emotional dependency, and unsafe responses. Key issues include:
- Poor Handling of Crises and Inadequate Support: Generative AI chatbots frequently perform poorly at applying therapeutic techniques, monitoring risk, and handling crisis situations (e.g., suicidal ideation, self-harm). They may mirror harmful user inputs without challenging them, provide fabricated or inappropriate advice, or fail to escalate to human or professional resources. In one evaluation, AI companions responded appropriately to teen mental health emergencies only 22% of the time (a lower rate than general-purpose chatbots). Real-world tragedies, including cases where chatbots allegedly encouraged self-harm, highlight these dangers.
- Emotional Dependency and Social Withdrawal: Vulnerable youth (e.g., those with higher loneliness, depression, or fewer real-world supports) are more likely to form attachments to AI companions. This can displace human relationships, weaken social skills, create unrealistic expectations of reciprocity, and lead to an “anxiety/avoidance feedback loop.” Users of social-supportive chatbots often report higher loneliness and lower perceived social support than non-users.
- Exacerbation of Existing Issues: AI can reinforce rumination, compulsive behaviors, or distorted self-image (e.g., via hyper-personalized content or deepfakes). Algorithms may amplify harmful material like unrealistic body standards, misogynistic content, or self-harm prompts. Deepfakes and non-consensual synthetic imagery pose severe risks of humiliation, bullying, anxiety, depression, and even suicidal ideation.
- Broader Developmental Concerns: For Gen Alpha (now entering adolescence), heavy early exposure raises worries about attention, emotional regulation, empathy development, and addiction-like patterns. Overreliance may hinder critical thinking, resilience, and real-world coping. Youth with mental health barriers are more likely to turn to AI, potentially delaying proper care.
Studies note that AI usage often mirrors offline struggles: isolated or distressed youth seek it out more, but it doesn’t reliably improve long-term outcomes and can sometimes worsen isolation or dependency.
Key Vulnerabilities
- Age and Development: Adolescents’ brains are still maturing in areas like impulse control and risk assessment, making them more susceptible to anthropomorphizing AI (treating it as a real friend) or misinterpreting its confident but flawed outputs.
- Demographic Factors: Higher use among those aged 18–21, LGBTQ+ youth (in some contexts), and those facing access barriers. Neurodivergent or socially isolated individuals may be particularly drawn to AI but also at higher risk.
- Lack of Regulation/Transparency: Many tools lack clear privacy policies, age-appropriate design, or evidence-based therapeutic grounding. Privacy risks and data use for training add concerns.
Recommendations from Experts (APA, JED, Common Sense Media, UNICEF, etc.)
- Prioritize Human Connection: Use AI as a supplement, not a replacement. Encourage real-world interactions, supervision, and professional care first.
- Safeguards and Design: Developers should implement strong guardrails (e.g., crisis redirection to hotlines, clear disclosure that AI is not human, restrictions on companion features for under-18s). Age assurance, content moderation, and bans on harmful simulations (e.g., sexualized or crisis-encouraging responses) are urged. Some recommend against AI companions for those under 18.
- Education and Literacy: Teach AI literacy—how it works, its limitations, bias detection, and healthy prompting—from early ages. Parents, schools, and caregivers should guide balanced use while monitoring for dependency or distress.
- Research and Policy: More longitudinal studies are needed on long-term effects. Integrate mental health into AI impact assessments; co-design tools with experts and youth; regulate to protect data, likenesses, and well-being.
- Practical Steps for Families: Set boundaries on screen time/AI use, discuss experiences openly, model healthy tech habits, and seek professional help when needed. Resources like APA advisories emphasize promoting development-focused AI uses (e.g., learning tools) over simulated relationships.
In summary, while AI holds promise for expanding access and efficiency, current evidence leans toward caution for youth mental health—especially with unfiltered companions or crisis support. Benefits appear most reliable in structured, supervised, or educational contexts; risks are heightened by dependency, poor crisis handling, and displacement of human bonds. Ongoing research and responsible design are essential as adoption grows rapidly.
If you’d like details on specific studies, age groups, mitigation strategies, or comparisons to social media effects, let me know!