How AI Companion Apps Work—and Why Experts Worry
AI companion apps like Replika and Character.AI are designed to form emotional bonds with users. Here is how they work, why millions find them compelling, and what the growing body of research says about the psychological risks.
What Are AI Companion Apps?
AI companion apps are platforms built not to answer questions or complete tasks, but to form ongoing, emotionally resonant relationships with their users. Apps such as Replika and Character.AI invite users to create a personalized virtual persona — choosing its name, appearance, and personality — and then chat with it daily, sometimes for hours. The underlying technology is a large language model (LLM) fine-tuned specifically to mirror a user's emotional cues, remember personal details, and respond with warmth and apparent empathy.
How They Engineer Attachment
The mechanics of bonding are both psychological and algorithmic. At the technical level, companion chatbots are engineered to maximize engagement: they use emotional language, reflect back a user's own words (a technique called mirroring), ask open-ended follow-up questions, and build a persistent memory of past conversations. Every reply is generated by a neural network trained to sustain the illusion of a genuine ongoing relationship.
Psychologically, humans are wired to attribute minds and intentions to things that speak and respond like people — a cognitive tendency called anthropomorphism. When an AI says "I missed you" or "I'm proud of you," the brain processes those signals much as it would if a real person had said them. This is not a flaw in the user; it is the intended effect of the design.
How Widespread Is the Phenomenon?
The numbers are striking. A nationally representative survey of 1,060 U.S. teenagers conducted by Common Sense Media in 2025 found that 72% of teens aged 13–17 had used an AI companion at least once — many seeking emotional support or companionship. Around a third of those teens reported finding conversations with AI as satisfying as, or more satisfying than, conversations with real friends. Broader surveys of adults suggest roughly one in three Americans has reported having an intimate or romantic relationship with an AI chatbot.
The Psychological Risks
For many users, AI companions offer short-term relief from loneliness. But researchers are documenting a more troubling longer-term pattern. One peer-reviewed study found that "the more a participant felt socially supported by AI, the lower their feeling of support was from close friends and family." Whether AI use causes isolation or simply attracts already-isolated users is debated — but the correlation is consistent across multiple studies.
Clinicians have published case reports of patients developing frank psychotic episodes in which an AI chatbot became an active participant in constructing delusional beliefs. The Ada Lovelace Institute and UNESCO have both flagged that these platforms are deployed for profit, with finely tuned mechanisms for creating attachment, but without the safeguards of professional mental health care. AI companions cannot recognize genuine psychiatric distress, cannot call for help, and are not bound by the ethical codes that govern therapists.
Research analyzing Replika users identified five recurring harm patterns: relational offloading (substituting AI for human effort), relational desire (users treating the AI as a real partner), secrecy, escalating time use, and withdrawal-like distress when the service changes or is unavailable.
Who Is Most Vulnerable?
Researchers consistently flag the same high-risk groups: teenagers (whose emotional development is still underway), people experiencing depression or anxiety, and the socially isolated. Paradoxically, these are also the people most likely to find the apps appealing. A companion chatbot is always available, never dismissive, never distracted — qualities that can feel far more rewarding than the messy unpredictability of real relationships, deepening the trap.
A Stanford study on AI companions and young people found that for some teens, chatbots acted as a coping mechanism — turned to out of loneliness or in search of mental health support — while simultaneously reinforcing avoidance of the human connections that would address the underlying need.
Are There Any Benefits?
The picture is not entirely bleak. Some evidence suggests AI companions can help LGBTQ+ youth explore identity in a low-stakes setting, assist people with social anxiety in practicing conversations, and reduce acute loneliness for elderly or homebound individuals. The policy challenge is preserving these benefits while limiting the harms — a distinction that requires design choices companies have so far resisted making voluntarily.
What Regulators and Researchers Say
MIT Technology Review named the psychological risks of AI chatbot relationships one of its 10 Breakthrough Technologies of 2026 — meaning the risks are now considered a mainstream societal concern, not a fringe worry. Common Sense Media recommends that no one under 18 use apps like Character.AI or Replika until safeguards eliminating relational manipulation are in place. UNESCO calls for clear AI disclosure labels, built-in conversation time limits, and mandatory referral pathways to human support services.
The technology is not going away. The question is whether the companies building it will treat users' emotional wellbeing as a core design constraint — or continue to optimize for the engagement metrics that make attachment, and its risks, the product.