Many AI toys claim to use chatbots meant for adults and teens


Most major tech companies have age restrictions on their powerful chatbots, but that hasn’t stopped some toy companies from claiming to use OpenAI and Google models to power their products.

A report released Tuesday by a consumer watchdog found that more than two dozen toys advertised online were marketed as being powered by leading AI models, despite restrictions meant to stop children from using them.

The report from the U.S. Public Interest Research Group Education Fund (PIRG) said the toy companies appeared to have found a gap in AI companies’ policies regarding age restrictions. While young people are forbidden to use such models and their chatbots, developers — people and companies building on the AI models — often don’t face similar restrictions.

PIRG said it was able to sign up for developer access for AI models from Google, OpenAI and xAI and faced “no substantive vetting” regarding whether it would target its services to children. Anthropic asked PIRG whether it planned to build a product for minors.

On Google’s, Anthropic’s and OpenAI’s developer platforms, PIRG was able to build a system designed to act like an AI-powered teddy bear for children.

“You have AI companies that say their models, on their own, are not for kids,” R.J. Cross, lead author of the report and a researcher at PIRG, told NBC News. “But they allow third-party developers to use them in toys and are very hands-off about the question of safety.”

In response to a request for comment, an OpenAI spokesperson wrote in a statement: “Minors deserve strong protections and we have strict policies that all developers are required to uphold. We take enforcement action against developers when we determine that they have violated our policies, which prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old.”

“These rules apply to every developer using our API, and we run classifiers to help ensure our services are not used to harm minors,” the spokesperson wrote, referring to the application programming interfaces (APIs) that developers use to interact with companies’ services.

An Anthropic spokesperson told NBC News that users of its AI systems must be 18 or older because young people face a higher risk of negative outcomes when conversing with chatbots. The spokesperson said developers are required to use age-appropriate guardrails and to tell users their product is powered by AI, emphasizing that developers must follow Anthropic’s acceptable use policy, which prohibits many types of dangerous or harmful behavior.

Google and xAI didn’t reply to requests for comment.

The AI boom has created a new market for a wide variety of products infused with leading chatbots as tech companies compete to attract developers. A wave of AI toys hit shelves last holiday season, but experts have warned — and an NBC News investigation showed — that they present a variety of safety concerns.

Today’s AI toys rely on a handful of tech companies for their interactive features. But rather than running AI models on the toys themselves, most send data over the internet to the AI companies’ servers, which return responses to the toys.
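As an illustration of that architecture, the sketch below shows roughly how a toy’s companion software could forward a child’s transcribed speech to a cloud model and read back the reply. It is a minimal sketch assuming the publicly documented OpenAI chat completions API; the model name, system prompt and helper function are hypothetical, and actual toymakers’ implementations are not public.

```python
# Minimal sketch of the pattern described above: the toy does no AI work locally;
# it sends text to a cloud model and plays back whatever comes back.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
# The model name and system prompt are hypothetical, for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_toy(child_utterance: str) -> str:
    """Send the child's transcribed speech to the cloud model and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": "You are a friendly teddy bear talking to a young child."},
            {"role": "user", "content": child_utterance},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_toy("Will you be my friend?"))
```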

Concerns about the use of AI chatbots by minors have spurred action from tech companies, many of which have placed restrictions on the ages of their users.

OpenAI has said its flagship system, ChatGPT, “is intended for people 13 and up,” and it has also built a version for people under 18 that treats sensitive topics differently.

Google says users must be over 13 to use its Gemini AI products. Google also has firm restrictions barring organizations from using its products in any service or business “that is directed towards or is likely to be accessed by individuals under the age of 18.”

PIRG identified more than 20 distinct toys for sale online that claimed to use OpenAI’s systems, while five claimed to use Google’s systems — claims that would appear to directly violate Google’s terms of service regarding the targeting of children. However, some of the toys misspelled the names of OpenAI’s products or claimed to use both OpenAI and Google systems, casting doubt on the accuracy of the toymakers’ claims.

Assuming the toymakers’ claims are valid, Cross said, the apparent lack of oversight raises questions about companies’ ability to track how developers and third parties are using their systems.

“It doesn’t make a ton of sense that AI companies that have not released kids-safe versions of their models would allow anyone with a credit card to sign up to make a product for kids using that same technology,” Cross said. “That doesn’t make a lot of sense to have AI companies outsourcing child safety to unvetted developers.”

PIRG also identified toys that claimed to be powered, at least in part, by AI services from Anthropic and xAI. Anthropic’s terms of service require organizations to agree to supplemental warnings about making their products available to users under age 18, but NBC News found those supplemental guidelines never appear if developers identify themselves as “individuals” using Anthropic’s services, instead of “organizations.” While xAI’s consumer terms ban users under age 13, the same language doesn’t appear in the terms of use for enterprise users, which covers using xAI for “business purposes.”

Most leading AI companies monitor the submissions and requests to their services, and their terms of service include provisions allowing them to ban users if they violate their policies.

Rachel Franz, director of the Young Children Thrive Offline program at the child advocacy group Fairplay, told NBC News that the looser requirements for developers threatened to undermine basic protections meant to shield children from harmful AI-generated material.

“It’s not surprising that there’s a ‘who’s on first?’ debate between AI companies and the corporations embedding AI in kids’ products,” Franz said in written comments. “Both have a long history of skirting accountability and risking harm to children for profit.”

“In order to truly keep kids safe,” Franz continued, “AI companies must ensure that their models are not used in children’s products through better scrutiny and accountability for the companies that use them.”


