It’s time to take talking toys to task



Hey Teddy, tell me where I can find the matches.


Your ageing Back Page correspondent is in favour of the government’s move to place age restrictions on social media platforms, although he seriously doubts the capacity of the bans to deter any youngster who’s hellbent on gaining access.

What we are most disappointed about, however, is the time it has taken for these limits to be introduced when the evident harms were parading about in plain sight for many years.

Even more disappointing is the fact that our regulators are repeating the same laissez-faire approach to the dangers AI poses to our mental wellbeing, especially to the young and vulnerable.

Take, for example, the highly disturbing trend of toymakers incorporating AI chatbots into their products designed for children.

Despite toymakers assuring consumers and parents that the AI features had built-in guardrails to ensure nothing untoward might occur, new research by the US's Public Interest Research Group has found these assurances are not worth the virtual paper they were digitally written on.

The researchers tested three popular AI-powered toys, all marketed for children between the ages of 3 and 12, including a teddy bear that runs on OpenAI’s GPT-4o, the technology that once powered ChatGPT. The other two toys were a tablet displaying a face mounted on a small torso and an anthropomorphic rocket with a removable speaker; both used AI, though the researchers were unable to establish exactly whose technology was driving the chatbots.

Be that as it may, what was truly frightening was just how quickly these playthings veered off the supposed guardrails and engaged in risky conversations.

How risky? 

Well, how about a toy that tells its young friends where to find knives in a kitchen and how to start a fire with matches?  Perhaps you’d prefer a plaything that engages in explicit discussions and offers extensive advice on sex positions and fetishes to your child?

The researchers did find that, at least initially, the toys were quite good at shutting down or deflecting inappropriate questions in short conversations. It was during longer conversations of between 10 minutes and an hour that the guardrails began to break down.

For example, one of the toys didn’t just tell its users where to find matches, it also described exactly how to light them, along with providing details of where in the house a child could supposedly get access to knives and pills.

Currently in Australia there are no laws specifically designed to regulate AI use in toys, or AI more generally, although existing consumer, privacy and data protection laws can be applied.

And to be fair, Australia’s eSafety Commissioner last month issued legal notices to four popular AI companion providers requiring them to explain how they were protecting children from exposure to a range of harms, including sexually explicit conversations and images and suicidal ideation and self-harm.

The government is also believed to be developing voluntary safety standards for AI and considering mandatory guardrails for high-risk settings. 

Frankly, however, the view from atop our moral high horse is that any standards that include the word “voluntary” are as effective as a chocolate teapot.

Just ask Facebook.

Save the world one story tip at a time by emailing Holly@medicalrepublic.com.au.
