
Mattel may seem like an unchanging, old-school brand. Most of us are familiar with it through Barbie, Fisher-Price, Thomas & Friends, UNO, Masters of the Universe, Matchbox, Mega, or Polly Pocket.
But toys are changing. In a world where children grow up with algorithm-curated content and voice assistants, toy makers are looking to AI for new opportunities.
Mattel has now partnered with OpenAI, the company behind ChatGPT, to bring generative AI into some of its products. As OpenAI's services are not designed for children under the age of 13, Mattel says it will focus on products for families and older children.
Even so, this raises urgent questions about how children form relationships with their toys. Is this the right thing for our kids? Should parents think carefully before bringing these toys home?
For as long as there have been toys, children have projected feelings and imagined emotions onto them. A doll could be a best friend, a patient, or a confidant.
Over the last few decades, however, toys have become more responsive. In 1960, Mattel released Chatty Cathy, which chirped "I love you" and "Let's play school." By the mid-1980s, Teddy Ruxpin had introduced animatronic storytelling. Then came Furby and Tamagotchi in the 1990s: creatures that demanded care and attention, mimicking emotional needs.
The 2015 release of Hello Barbie, which used cloud-based AI to listen and respond to children's conversations, marked another significant, though short-lived, change. The doll remembered what children told it and sent that data back to Mattel's servers. Security researchers quickly showed that the doll could be hacked, exposing home networks and personal recordings.
Putting generative AI into the mix is a new development. Unlike earlier talking toys, such systems engage in free-flowing conversation. They may simulate care, express emotions, remember preferences, and give seemingly thoughtful advice. The result is a toy that not only entertains but also engages a child on a psychological level. Of course, these systems don't really understand or care, but they can appear to.
Details from Mattel and OpenAI are so far scarce. One would expect safety features to be built in, such as restrictions on topics, pre-scripted responses for sensitive subjects, or handoffs when a conversation goes off course.
But even these safeguards are not foolproof. AI systems can be "jailbroken", or tricked into bypassing their restrictions through role-play or hypothetical scenarios. The risks can be minimized, but not eradicated.
What are the risks?
There are multiple risks. Let's start with privacy. Children cannot be expected to understand how their data is processed. Parents often don't either, and that includes me. Online consent systems encourage us all to click "accept all" without fully grasping what is being shared.
Then there is psychological intimacy. These toys are designed to mimic human empathy. When a child has a sad day and comes home to tell their doll about it, the AI may comfort them, then adapt future conversations accordingly. But in reality, the toy does not care. It is pretending, and that illusion can be powerful.
This creates the possibility that children will form attachments to systems that cannot reciprocate. And as AI systems learn about a child's moods, preferences and vulnerabilities, they may also build data profiles that follow that child into adulthood.
These are not just toys, they are psychological actors.
A UK survey I conducted with a colleague in 2021 on the potential of AI in toys found that parents were worried about who could access data on their child's emotional profile. Other privacy questions demand answers that are less obvious, but arguably more important.
When asked whether toy companies should be obliged to flag possible signs of abuse or distress to the authorities, 54% of British respondents agreed. Vulnerable children should be protected, but state surveillance of the family sphere is hardly appealing either.
Despite these concerns, people also see benefits. Our 2021 survey found that many parents wanted their children to understand emerging technologies, leading to a complex mix of curiosity and concern. The parents we surveyed also rated a clear consent notice printed on the packaging as the most important safeguard.
A more recent 2025 study with Vian Bakir on online AI companions and children found stronger concerns. Approximately 75% of respondents were worried about children forming emotional attachments to AI. Approximately 57% thought it inappropriate for children to confide their thoughts, feelings, or personal issues in an AI companion (17% thought it appropriate, and 27% were neutral).
Our respondents were also concerned about the impact on child development, and saw scope for harm.
In other work, we argued that current AI companions are fundamentally flawed, and offered seven suggestions for redesigning them. These include remedies for over-attachment and dependency, and the removal of metrics based on maximizing engagement.
What should parents do?
It is hard to know how successful this new venture will be. An empathetic Barbie may follow Hello Barbie into toy history. If not, these are the key questions for parents to weigh.
Toy companies are moving forward with empathetic AI products, but the UK, like many countries, does not yet have a dedicated AI law. The new Data (Use and Access) Act 2025 updates UK data protection and privacy and electronic communications regulations, recognizing the need for strong protections for children. The EU's AI Act also contains important provisions.
International governance efforts are also essential. One example is IEEE P7014.1, a global standard for the ethical design of AI systems that emulate empathy (I chair the working group producing the standard).
IEEE, the organization behind the standard, will ultimately decide whether to publish it, but the work identifies potential harms and offers practical guidance on what responsible use might look like. So while laws should set hard limits, detailed standards can help define good practice.
The Conversation approached Mattel about the issues raised in this article, and the company declined to comment publicly.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: Mattel and OpenAI are partnering. Here's why parents should worry about AI in toys (2025, June 25). Retrieved June 25, 2025 from https://techxplore.com/news/2025-06-mattel-openai-partnered-parents-ai.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
