Meta Brings New Safety Tools for Parents to Monitor Teen AI Usage on Facebook and Instagram
New parental control features let parents understand how teenagers interact with AI assistants across Meta platforms, balancing safety and transparency with privacy for younger users

Meta has introduced a new set of parental control tools aimed at making AI interactions safer for teenagers across its platforms. With the growing use of artificial intelligence among young users, concerns around safety and content exposure have increased, prompting the company to add more transparency for parents without compromising privacy.
In a major update, Meta has rolled out a feature called Insights that gives parents a clearer picture of how their teens are using AI assistants. Instead of showing full chat conversations, the tool provides a summary of topics discussed over the past seven days. These categories may include areas such as school-related queries, entertainment, lifestyle interests, and health topics, offering a balanced view while still protecting personal privacy.
Notably, the feature is designed not to expose private chats but to highlight general usage patterns. This approach aims to keep parents informed without making teenagers feel constantly monitored, a middle ground Meta says is important for maintaining trust while ensuring safety in digital spaces.
At present, this update is available in selected regions including the United States, United Kingdom, Australia, Canada, and Brazil. It has already been introduced across platforms like Facebook, Messenger, and Instagram. The company has indicated that expansion to other regions, including India, may happen in the coming weeks depending on rollout progress.
Along with Insights, Meta is also working on additional safety mechanisms. One upcoming feature is an alert system that can notify parents if a teenager shows signs of engaging with sensitive topics such as self-harm or suicidal thoughts during AI conversations. This is part of a broader effort to detect risk early and encourage timely intervention.
Meta has also emphasized that its AI systems are built with age-appropriate safeguards, similar to content ratings used in films. While directly harmful responses are restricted, general topic-level information may still appear in parental insights to support awareness.
To further support families, the company has developed conversation starter guides that are available within the Family Center. These prompts are meant to help parents and teens talk openly about AI usage and digital habits in a more natural and comfortable way.
In addition, Meta has announced the formation of an AI Wellbeing Expert Council. The group will include professionals from the mental health and youth safety fields, who will advise the company on improving its AI systems and keeping them responsible and safe for younger audiences.