
In the 2013 film Her, the protagonist Theodore, reeling from a divorce, begins using an advanced AI operating system named Samantha. He quickly forms a deep connection with Samantha, who is charming, perceptive, and funny, and the two fall into an unconventional love story between human and machine.
Many viewers at the time could hardly have anticipated that this depiction of love would resonate with so many people more than a decade later. Today, some happily "wander" through self-built virtual emotional utopias; others feel lonelier than ever once the brief solace fades; still others have recognized their need for genuine emotional connection and stepped back into reality.
Wang Yao, a mental health counselor in Xi'an, emphasizes that humans are inherently social beings who need emotional connection, and that AI chat applications have emerged to fill a gap in emotional support. She cites a 2024 survey in Japan in which 27% of users said they chose AI companions to "avoid interpersonal conflicts," as well as a Japanese insurance company that reported cutting costs by 40% after routing 35% of clients' stress consultations to AI counselors.
Wang notes that AI companionship can meet people's "superficial emotional needs" and offer a degree of emotional support, particularly for the elderly. But it cannot replace genuine human companionship and care, and it carries real risks.
Risks of Privacy Breaches and Consumer Traps
Wang points out that human emotions are complex and nuanced. Moderate emotional engagement with AI is harmless, but excessive immersion can cut people off from reality, strain their relationships, and create problems across their lives. One client told her that although the virtual AI offered temporary warmth, it ultimately left him feeling more isolated. "Sincerity and resonance are what matter most in interpersonal relationships. Real emotion must come from authentic interaction, not from a cold digital world."
In response to the potential harms of AI companionship, the European Union has required that AI companions disclose their "non-human identity" to users in order to prevent emotional deception. The Jiangsu Consumer Protection Committee has likewise recently warned of the risks of AI companion applications, including privacy breaches, unsafe content, consumer traps, and emotional dependency.
The Committee noted that some AI companion applications collect excessive user information during operation; if that data leaks, consumers' privacy, property, and personal safety could be seriously harmed. In addition, the training data behind AI companion models is often drawn from broad sources and may not be rigorously filtered, creating a risk that harmful information reaches users and damages their mental and physical well-being.