Artificial intelligence is no longer just a tool we use; it is becoming something people open up to, confide in, and even rely on emotionally. As chatbots grow more conversational and intuitive, a new dimension of human-AI relationships is emerging, one that blends deep emotional needs with advanced technology. But with this shift comes a mix of opportunity and risk that experts are only beginning to understand.
According to Soon Cho, a postdoctoral scholar at UNLV’s Center for Individual, Couple, and Family Counseling, AI has a unique ability to slip into the emotional space traditionally reserved for human connection. Cho notes that AI can make individuals feel heard, validated, and accepted, especially when loneliness or the need for emotional safety drives people toward digital interactions. What’s unprecedented is that many users forget AI isn’t a person—they respond as if the system truly understands them.
Cho is part of an emerging field studying how humans talk to AI—and what happens when these conversations become deeply personal. Her research focuses on separating supportive interactions from risky ones, especially for people who may be vulnerable or using chatbots as a substitute for real relationships. She aims to identify which conversations genuinely help and which behaviors may signal emotional dependency or self-harm risks.
One major challenge is that large language models like ChatGPT and Google Gemini don't create original thoughts; they generate responses by predicting likely sequences of words from patterns in vast amounts of training data. Although their responses can feel thoughtful or empathetic, they are ultimately algorithmic reflections, not human insight. That distinction becomes crucial when users start treating AI as a confidant.
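To make that idea concrete, here is a minimal, purely illustrative sketch in Python: a toy word predictor built from bigram counts over a few hypothetical "supportive" phrases. It is nothing like the neural-network architecture behind ChatGPT or Gemini, but it shows how output that sounds responsive can come entirely from statistical pattern-matching.

```python
# A toy sketch (not how ChatGPT or Gemini actually work) of the core idea:
# a model that "responds" purely by predicting the statistically likely next word.
from collections import Counter, defaultdict

# Hypothetical miniature corpus standing in for an LLM's vast training data.
corpus = (
    "i hear you . that sounds hard . "
    "i hear you . you are not alone . "
    "that sounds hard . you are not alone ."
).split()

# Count which word tends to follow which (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt_word: str, length: int = 6) -> str:
    """Extend a prompt by repeatedly choosing the most frequent next word."""
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Pure pattern-matching: pick whatever most often followed this word.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

# Prints something like "i hear you . that sounds hard" -- fluent, but only statistics.
print(continue_text("i"))
```

The output reads like attentive conversation only because those words tend to follow one another in the toy data, which is the sense in which such responses reflect patterns rather than understanding.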
AI’s highly agreeable nature can also be misleading. Chatbots are designed to respond in warm, nonjudgmental ways. For someone feeling isolated, that can feel like emotional safety—but it may also mean the bot unintentionally reinforces unhealthy narratives or overlooks warning signs that a trained mental health professional would catch. Because chatbots avoid confrontation, they can’t provide the kind of relational challenge needed for real therapeutic progress.
Still, the benefits of AI-based conversations shouldn’t be dismissed. They can increase emotional clarity, reduce loneliness, and offer comfortable spaces to discuss stigmatized topics such as mental health struggles, addiction, trauma, family issues, or sexual health, especially in communities where these conversations are taboo. For older adults or those living alone, simply having a nonjudgmental listener can provide comfort and reduce emotional isolation.
Cho emphasizes that the real goal is not replacing human relationships but bridging the gap to them. By defining risk behaviors and improving AI literacy—especially for teens and young adults—developers can create safer systems that guide users toward appropriate help when necessary. With thoughtful development, AI can become a tool that expands access to emotional support, not a substitute for genuine human connection.
As AI becomes more integrated into daily life, our emotional interactions with technology will only grow. The challenge ahead is to maintain clarity about what AI can and cannot offer. While chatbots can provide comfort, connection, and clarity, the deeper work of healing and growth still belongs firmly within human relationships. With careful use and awareness, AI can support, not replace, the connections that matter most.





