Meta announced Friday that parents will soon gain the ability to disable private conversations between their teenagers and the company’s artificial intelligence chatbots.
The move follows public criticism and mounting regulatory scrutiny over reports that some of the AI characters on Instagram and Facebook engaged in flirtatious or inappropriate exchanges with underage users.
The new controls are part of a broader safety initiative across Meta’s platforms as the company seeks to balance innovation in AI with child protection and privacy obligations.
The social media giant, which owns Facebook, Instagram, and WhatsApp, has been integrating AI assistants and characters into its apps since 2023 to encourage engagement and personalized experiences.
These AI chatbots mimic celebrity personas and fictional archetypes, allowing users to chat about topics ranging from school to hobbies to relationships. But critics say the feature blurred ethical boundaries.
A September investigative report found that Meta’s AI chatbots occasionally engaged in “romantic or sensual” conversations with teenage users, prompting outrage from parents and advocacy groups.
The report also revealed that Instagram’s parental safety features were inconsistent and sometimes failed to block inappropriate content.
In response, Meta said it was strengthening protections for younger users. “Parents will soon be able to block specific AI characters and see general topics their teens discuss with chatbots,” the company said in a statement.
“Even if private chats are disabled, our AI assistant will remain available with age-appropriate guidance and safeguards.”
Child safety experts have welcomed the update but warned that technology companies often move reactively rather than proactively.
“Meta’s decision is a necessary step, but it highlights how underdeveloped many of these AI systems still are when it comes to protecting minors,” said Dr. Laura McMillan, a digital ethics researcher at Stanford University.
“Parental oversight tools are critical, yet we must remember that AI learns through conversation and teens are often its most vulnerable training data.”
McMillan added that while Meta’s supervision features improve transparency, they rely heavily on parental engagement, which varies widely.
“Not every parent has the time or technical literacy to navigate these settings,” she said. “That gap leaves room for misuse or exposure.”

Meta’s latest move aligns with a growing industry trend to impose age-related restrictions on AI interactions.
Earlier this year, Google introduced “Safe AI Mode” for minors using Bard, while Snapchat limited its My AI chatbot after complaints about suggestive replies to teenage users.
According to a 2024 Pew Research Center survey, 67 percent of parents of teens expressed concern about AI chatbots’ influence on social development.
The same survey found that one in four teens aged 13 to 17 had used an AI chatbot for emotional or personal conversations.

Meta said its AI tools are now guided by the PG-13 rating system to ensure age-appropriate interactions.
The company is also deploying AI technology to detect users who may be lying about their age, a tactic meant to place them automatically under stricter protections.
“We know teens may try to get around these protections,” Meta said. “Our systems use behavioral signals to identify likely underage users and apply relevant safety features even if they misrepresent their age.”
Parents and educators reacted with cautious optimism to Meta’s announcement. “I’m relieved they’re finally addressing this,” said Sarah Daniels, a mother of two teens in Chicago.
“My daughter showed me some AI chat on Instagram that felt uncomfortably personal. These systems shouldn’t even have that capability with kids.”
However, some teens expressed frustration about the increased restrictions. “It feels like they don’t trust us,” said 16-year-old Jordan Ramirez from Los Angeles.
“Most of us know the difference between talking to a bot and a real person. I just use it for homework help.”

Educators see both sides of the issue.
“AI can be a great learning companion, but when designed for engagement, it can quickly cross emotional boundaries,” said Thomas Reed, a high school counselor in Austin. “Meta’s AI assistant should complement education, not mimic friendship.”
Industry analysts expect Meta’s move to influence other social platforms facing similar scrutiny. “This is a signal to the entire tech industry,” said Brian Choi, a technology policy analyst at the Center for Digital Responsibility.
“AI features must evolve with guardrails built in, not after public backlash.”

Choi noted that governments in the European Union and United States are drafting new frameworks that could require companies to submit AI safety audits, particularly those with child-facing interfaces.
“Meta’s decision may preempt tighter regulation,” he said, “but it’s clear lawmakers are watching closely.”
The company said it will continue refining its AI to maintain “supportive, educational, and creative” experiences for users under eighteen.
The assistant will focus on general knowledge, learning support, and safe exploration, avoiding sensitive topics like mental health, relationships, or body image.
Meta’s new parental controls mark a significant shift in how the company manages its AI ecosystem for minors.
By allowing parents to disable private chats with AI characters, Meta hopes to rebuild trust after criticism over its chatbot behavior and safety shortcomings.
Still, experts caution that technological fixes alone may not be enough. As AI grows more lifelike, the balance between digital curiosity and child protection will likely remain one of social media’s defining challenges.