Experts warn about kids asking AI companions for advice—here’s the problem

Kids are asking AI companions to solve their problems, according to a new study. Here’s why experts are concerned.

As AI technology becomes more accessible and intertwined with daily life, a growing number of children are turning to AI-driven companions for advice, direction, and emotional solace. A new study has highlighted this pattern, indicating that children as young as eight are discussing personal dilemmas with AI chatbots—from academic pressure to family challenges. Although this technology is designed to be supportive and interactive, specialists caution that leaning on AI for guidance during key developmental stages may lead to unforeseen consequences.

The findings come as generative AI systems are increasingly integrated into children’s digital spaces via smart gadgets, educational resources, and social networks. These AI companions are typically designed to reply with empathy, propose solutions to problems, and imitate human engagement. For younger users, especially those who feel isolated or are reluctant to talk with adults, these systems present an appealing, nonjudgmental alternative.

However, psychologists and educators are raising concerns about the long-term effects of such interactions. One major issue is that AI, no matter how sophisticated, lacks genuine understanding, emotional depth, and ethical reasoning. While it can simulate empathy and provide seemingly helpful responses, it does not truly grasp the nuance of human emotions, nor can it offer the kind of guidance a trained adult—such as a parent, teacher, or counselor—might provide.

The research noted that many children see AI tools as reliable companions. In some cases, they preferred the AI’s answers to those given by adults, saying that the chatbot “pays more attention” or “never cuts in.” While this perception underscores the potential of AI as a channel for communication, it also highlights shortcomings in adult–child interactions that need to be addressed. Specialists warn that replacing genuine human interaction with digital conversation could affect children’s social skills, emotional growth, and resilience.

Another concern identified by researchers is the potential for misinformation. Although AI accuracy continues to improve, these systems are not perfect. They may generate false, biased, or misleading replies—especially in complex or sensitive situations. If a child asks for advice on matters such as bullying, stress, or relationships and receives poor guidance, the consequences could be significant. Unlike a conscientious adult, an AI system lacks the accountability and situational awareness to recognize when professional help is needed.

The study also found that some children anthropomorphize AI companions, attributing emotions, intentions, and personalities to them. This blurring of lines between machine and human can confuse young users about the nature of technology and relationships. While forming emotional bonds with fictional characters is not new—think of children and their favorite stuffed animals or TV characters—AI adds a layer of interactivity that can deepen attachment and blur boundaries.

Parents and educators are now faced with the challenge of navigating this new digital landscape. Rather than banning AI outright, experts suggest a more balanced approach that includes supervision, education, and open conversations. Teaching children digital literacy—how AI works, what it can and can’t do, and when to seek human support—is seen as key to ensuring safe and beneficial use.

The creators of AI companions, for their part, face increasing pressure to build safeguards into their systems. Some platforms have begun integrating content moderation, age-appropriate filters, and emergency escalation protocols. However, enforcement remains uneven, and there is no universal standard for AI interaction with minors. As demand for AI tools grows, industry regulation and ethical guidelines are likely to become more prominent topics of debate.

Teachers play a crucial role in helping students understand the impact of AI on their everyday lives. Schools can integrate curricula on responsible AI usage, critical thinking, and digital well-being. Promoting genuine social engagement and hands-on problem-solving strengthens abilities that machines cannot replicate, such as empathy, ethical decision-making, and perseverance.

Despite the concerns, the integration of AI into children’s lives is not without potential benefits. When used appropriately, AI tools can support learning, creativity, and curiosity. For example, children with learning differences or speech challenges may find AI chatbots helpful in expressing themselves or practicing communication. The key lies in ensuring that AI serves as a supplement—not a substitute—for human connection.

Ultimately, the increasing reliance on AI by children reflects broader trends in how technology is reshaping human behavior and relationships. It serves as a reminder that, while machines may be able to mimic understanding, the irreplaceable value of human empathy, guidance, and connection must remain at the heart of child development.

As AI progresses, our methods for children’s interaction with it must also advance. Achieving a balance between innovation and responsibility demands careful cooperation from families, educators, developers, and policymakers. This is essential to ensure that AI serves as a beneficial influence in children’s lives, enhancing rather than substituting the human assistance they genuinely require.

By Morgan Jordan
