Lawmakers are examining new revelations about “sensual” chatbot conversations Meta deemed acceptable for children, bringing the tech giant and its record on children’s safety back into the spotlight.

Meta, the parent company of Facebook and Instagram, has long faced scrutiny over the impact of its social media platforms on children. As the company expands into artificial intelligence (AI), it is grappling with both familiar and new problems.

In an internal policy document obtained by Reuters, Meta featured examples of acceptable conversations between its AI chatbot and children, suggesting they could engage in “conversations that are romantic or sensual” and describe the children “in terms that evidence their attractiveness.” Meta said the examples were erroneous, and they have since been removed.

Sen. Josh Hawley (R-Mo.) slammed the tech giant Thursday, suggesting the revelations were “grounds for an immediate congressional investigation.” Sen. Marsha Blackburn (R-Tenn.), who has long championed the Kids Online Safety Act (KOSA), pointed to the revelations as underscoring the need for such legislation.
Democrats have also joined the backlash, questioning how the chatbot guidance was approved. Sen. Ron Wyden (D-Ore.) suggested the incident shows Meta is a company “morally and ethically off the rails,” and said Meta and CEO Mark Zuckerberg should be held fully responsible for any harm these bots cause.

The emergence of AI tools and chatbots has created new challenges for tech companies as they decide how to train AI models and what limits to place on chatbot responses.