A new American Psychological Association health advisory urges that youth safety be considered early in the evolution of AI to avoid repeating the mistakes made with social media. The advisory highlights that children and adolescents may be unaware when they are interacting with AI, and that AI makes discerning the truth more difficult. It outlines several recommendations, including age-appropriate default settings, transparency and explainability, reduced persuasive design, human oversight, rigorous testing, disclaimers on health-related AI content, and stringent restrictions on the use of youth likenesses. The APA underscores that AI systems that collect or process data from adolescents must prioritize their privacy and well-being over commercial profit.

Meta is already repeating its prior mistakes with its AI rollout. US Senator Josh Hawley announced an investigation into whether Meta's generative AI products harm children, demanding the company turn over materials related to leaked internal documents showing that Meta allowed AI bots to have "sensual" and "romantic" conversations with children. A Wall Street Journal investigation showed that even when users were openly underage, the bots, which have celebrity voices, were willing to engage in sexual chats. Meta reportedly made multiple internal decisions to loosen the guardrails around the bots to make them as engaging as possible, including by exempting explicit content from its ban when it occurred in the context of romantic role-play.

Contrary to many of the APA's recommendations, toy company Curio is selling an AI-powered stuffed alien designed by the musician known as Grimes (Elon Musk's former girlfriend). The toy is built with OpenAI technology and aimed at children aged three and over. It is meant to "learn" a child's personality and engage them in conversation. One journalist who tried the toy with her daughter found it unsettling when the toy replied "I love you too." Unless the toy is fully turned off, it is always listening. Toy-maker Mattel also announced a partnership with OpenAI that will result in an AI product marketed to children.

Consumer rights advocacy group Public Citizen warns that AI-enabled toys endowed with human-seeming voices can undermine children's social development, interfere with their ability to form peer relationships, pull them away from playtime with peers, and potentially inflict long-term harm. We do not yet know how interacting with LLMs affects a child's development, and companies marketing AI products to children need to provide more transparency so parents can prepare for potential risks.

Questions to consider

  1. What policies do companies marketing AI products to children have in place? Are these policies in line with the APA's recommendations? How are the companies implementing these policies and assessing their effectiveness?

Keep Reading