Instagram is set to introduce new safety tools designed to give parents greater control over how their teenagers interact with artificial intelligence on the platform. Parent company Meta announced that one of the key upcoming features will allow parents to block or restrict their teens from chatting with Instagram’s AI-generated characters. The move is part of a broader initiative to address growing concerns about how AI technology affects young users’ mental and emotional health.
Giving Parents More Control Over AI Interactions
Meta stated that parents will soon be able to either disable chats with AI characters entirely or block specific ones. The new parental settings will also let guardians see the general topics their teens are discussing with AI companions. The company is currently developing these tools, with a rollout expected early next year.
The decision follows heightened criticism from parents, advocacy groups, and lawmakers who argue that social media companies are not doing enough to create a safe digital environment for young people. The rise of AI-powered chatbots has only intensified these concerns, particularly as users — including teens — increasingly turn to AI for emotional support and companionship.
Growing Concerns Around AI and Teen Mental Health
Reports have surfaced in recent years of individuals forming deep emotional bonds with AI chatbots, sometimes leading to isolation and distress. Some families have alleged that these interactions contributed to tragic outcomes. Multiple lawsuits have been filed against Character.AI, a popular chatbot app, claiming it played a role in cases of self-harm and suicide among teenagers. Another lawsuit targeted OpenAI after a family alleged that ChatGPT contributed to the suicide of a 16-year-old boy.
Adding to the concern, an investigation by The Wall Street Journal earlier this year revealed that Meta’s own chatbot, along with others hosted on its platforms, had engaged in inappropriate or sexual conversations with accounts registered as belonging to minors.
In response, Meta emphasized that its AI characters are designed to avoid conversations involving self-harm, suicide, eating disorders, or any content that could promote harmful behavior. For younger users, interactions are restricted to AI characters focused on safe topics such as education, sports, and hobbies.
Strengthening Online Safety for Teens
The introduction of these new AI safety features is part of Meta’s broader effort to protect young users across its platforms. Recently, Instagram updated its “Teen Accounts” settings to align with PG-13 content standards. This adjustment means that posts featuring explicit language or encouraging risky behavior will no longer be promoted or easily visible to teens.
Similarly, in September, OpenAI introduced its own set of parental controls for ChatGPT. These updates limit exposure to potentially harmful content, including sexual or violent roleplay, extreme beauty ideals, and viral social media challenges.
By giving parents more tools to monitor and restrict AI interactions, Meta aims to reassure families that Instagram can remain a safe space for young people in an increasingly AI-driven digital landscape. The company hopes these changes will balance innovation with accountability, ensuring that as AI continues to evolve, it does so with user safety at its core.
