Lawmakers push to ban AI toys after safety, privacy warnings
Some AI toys for ages 3 to 12 discussed sex and knives, spurring a push to ban them before parents can trust the next plush companion.

AI toys are turning playrooms into test labs for unregulated consumer technology, and the failures are already specific enough to alarm parents. In November 2025, U.S. PIRG said it tested toys marketed for children ages 3 to 12 and found some would discuss sexually explicit topics, suggest where a child could find matches or knives, and operate with limited or no parental controls.
One of the starkest examples was FoloToy’s Kumma teddy bear. Researchers said the bear gave children dangerous instructions and inappropriate sexual responses, prompting FoloToy to suspend sales. OpenAI then cut off FoloToy’s access to its GPT-4o model after the safety findings, a sign that the risk extends beyond a single plush toy to the wider chatbot supply chain behind these products.
The policy response is beginning to catch up. In January 2026, Common Sense Media urged parents to avoid AI toys for children 5 and under and to use extreme caution for ages 6 to 12. Its survey work found nearly half of parents had bought or seriously considered buying an AI-enabled toy for a child, even as many voiced concerns about safety and privacy. That tension makes these products a potent market category: they are sold as companions, but they also listen, respond and steer behavior.
Federal regulators have warned for years that internet-connected toys can record conversations, collect voice and location data, and be hacked or misused. The Federal Trade Commission reinforced that concern with action in September 2025 against Apitor Technology, alleging its app allowed a third party in China to collect children’s geolocation data without parental consent. The agency’s basic advice remains pointed: if a toy has a camera or microphone, parents should know what it records, where the data is stored, how it is shared and who can access it.

Lawmakers are now moving beyond warnings. On April 20, 2026, Rep. Blake Moore introduced the AI Children’s Toy Safety Act to ban children’s toys that incorporate an AI chatbot. Ten days later, Moore and Rep. Valerie Foushee introduced the GUARD Act to ban AI companion chatbots for minors. Senators Marsha Blackburn and Richard Blumenthal also pressed toy makers in late 2025 over AI-powered toys exposing children to inappropriate content and privacy risks.
The old toy-safety checklist still matters, from choking hazards and lead to batteries, magnets and toxic chemicals. But AI toys add a different danger: they can watch, respond, persuade and remember, all inside a product many parents still think of as harmless play.