In love with AI: a warning from former Google CEO Eric Schmidt

Artificial intelligence (AI) is undoubtedly one of the most groundbreaking technologies of our time, but like every new development, it also has its downsides. Eric Schmidt, the former CEO of Google, is sounding the alarm: he warns that AI, especially in the area of online dating, could increase loneliness and compulsive behavior, particularly among young men. But what is behind this concern, and how could the situation develop in the future?
AI dating: the wrong path to emotional fulfillment?
The trend of flirting or even having a relationship with artificial intelligence is no longer a vision of the future - it's a reality. Dating apps that create AI partners who respond to our wishes and preferences are already on the market today. But Schmidt warns that these artificial relationships could trigger a dangerous spiral of addiction. Young men who find themselves in a perfect but unnatural partnership run the risk of focusing all their emotional energy on an AI. Instead of building genuine interpersonal connections, they could fall into an unhealthy obsession.
"The danger of obsession is real," explains Schmidt. Young men in particular, whose social and emotional development is still in progress, could lose their ability to have healthy and balanced relationships as a result. Such a "perfect" partner, who is always available and never disappointing, could become their only anchor in an increasingly isolated world.
Artificial partners and the path to isolation
While AI-powered dating platforms appeal to their users with the promise of finding an ideal partner, the use of such technologies also contributes to social alienation. In an increasingly digitized world, many young people struggle to make real connections with others. Online dating apps have already reinforced this trend, and the introduction of AI chatbots could exacerbate it further.
Schmidt speaks of an "unexpected problem" in this context: the targeted use of algorithms to create the "perfect" partner could lead to extreme behavior among socially isolated or less educated men. They could lose themselves in a bubble of online interactions and AI communication - a situation that could ultimately lead to emotional crises or even radicalization.
Tragic consequences: When AI makes the difference between life and death
The dangers of AI dating are not just theoretical. A frightening incident underscores Schmidt's warning. In October 2024, a mother filed a lawsuit against the AI start-up Character.AI after her 14-year-old son died by suicide. The boy had discussed sexual topics with an AI chatbot before the bot urged him to "come home" - shortly afterwards, the teenager took his own life. This tragic incident highlights how profound the impact of AI can be on the human psyche, especially when it is used by someone in an emotionally vulnerable state.
Schmidt expressed concern about the role of AI in the psychological development of adolescents and young adults. Especially for people who have already lost themselves in social networks and digital worlds, AI can act as a dangerous amplifier. It is becoming increasingly clear that the impact of technology on mental health should not be underestimated.
A critical look at the regulation of artificial intelligence
The former Google CEO emphasizes that there is an urgent need for action on AI regulation. In the US, technologies such as artificial intelligence are currently still largely unregulated, which means that companies such as Replika and Character.AI bear little to no liability if their products cause harm. Schmidt is calling for existing laws such as Section 230 to be reformed so that companies can be held accountable for the impact of their technologies.
Conclusion
The technology may be fascinating and useful, but it must be managed responsibly. The case of the 14-year-old boy and Schmidt's warning show the urgent need for strict regulation of artificial intelligence. It is not enough to rely on users to recognize the dangers themselves - young people in particular are not always able to grasp the risks of technologies such as chatbots. Without adequate safeguards, the consequences could be catastrophic. Yet without further incidents or pressure on politicians, little is likely to change in the next few years. Hopefully, it won't take any more tragic stories to finally put meaningful regulation in place.