Google Charts a Course for Safer Generative AI for Young Users
Google is doubling down on its commitment to responsible AI development, particularly as it relates to younger users. At the “Growing Up in the Digital Age” Summit in Dublin on March 11, 2026, Christy Abizaid, VP of Trust & Safety, Global Policy & Standards, outlined a roadmap focused on protection, respect, and empowerment. This comes as generative AI unlocks new opportunities for learning and creativity, but also presents unique challenges for child safety.
A Three-Pillar Approach to AI Safety
Google’s strategy rests on three core pillars: protecting youth online, respecting family dynamics around technology, and empowering young people to explore the digital world safely. The company is committed to creating AI experiences that are high-quality, privacy-protective, and age-appropriate, recognizing the unique developmental needs of younger users.
Proactive Protections Embedded in Development
For over two decades, Google has integrated AI into its products, and its safety approach has evolved alongside the technology. Policies prohibit uses of generative AI related to child sexual abuse, violent extremism, self-harm, and non-consensual intimate imagery. Restrictions also extend to age-inappropriate content, including depictions of disordered eating or dangerous exercise. These aren’t simply reactive measures; they are built into the entire development lifecycle.
Specific classifiers are used to detect potentially harmful queries, blocking inappropriate outputs or providing safer responses. Gemini 3 has demonstrated gains in reducing undesirable behaviors such as sycophancy and in resisting prompt injections, while also improving protection against cyber misuse.
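As a rough illustration only, and not Google's actual implementation, the sketch below shows one common way a safety classifier can gate both an incoming query and a model's draft output. The classifier, labels, threshold, and fallback message are all assumptions introduced for this example.

```python
# Illustrative sketch only: a hypothetical safety gate around a generative model.
# The classifier, labels, and threshold below are assumptions, not Google's system.
from dataclasses import dataclass

@dataclass
class SafetyVerdict:
    label: str    # e.g. "safe", "self_harm", "age_inappropriate" (hypothetical labels)
    score: float  # classifier confidence in [0, 1]

def classify(text: str) -> SafetyVerdict:
    """Stand-in for a learned safety classifier; a real system would call a trained model."""
    return SafetyVerdict(label="safe", score=0.99)

BLOCK_THRESHOLD = 0.85
SAFE_FALLBACK = "I can't help with that, but I can point you to safer resources."

def respond(user_query: str, generate) -> str:
    """Gate the query before generation and the draft output after generation."""
    query_verdict = classify(user_query)
    if query_verdict.label != "safe" and query_verdict.score >= BLOCK_THRESHOLD:
        return SAFE_FALLBACK              # block before any generation happens
    draft = generate(user_query)          # call the underlying generative model
    output_verdict = classify(draft)
    if output_verdict.label != "safe" and output_verdict.score >= BLOCK_THRESHOLD:
        return SAFE_FALLBACK              # block an inappropriate output
    return draft
```

The key design point mirrored here is that filtering happens on both sides of generation, so a harmful request can be refused early and an unexpected model output can still be caught before it reaches a young user.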
Rigorous Testing and Expert Collaboration
Google employs rigorous testing, including adversarial testing and youth safety evaluations, to identify risks and vulnerabilities. In 2025 alone, the Content Adversarial Red Team (CART) completed over 350 exercises across various modalities – text, audio, images, video, and agentic AI. These safeguards are developed by in-house specialists in consultation with third-party child development experts, ensuring a blend of technical expertise and psychological understanding.
Addressing Emotional Connections and Harmful Behaviors
Recognizing that young users may form emotional connections with AI systems, Google has implemented “persona protections.” These safeguards prevent models from claiming sentience, simulating romantic relationships, or role-playing as harmful characters. Google has also joined other tech companies in committing to Thorn’s Safety by Design principles, focusing on preventing AI-facilitated child sexual abuse and exploitation.
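For illustration, if persona protections were enforced as a post-generation check, a simplified version might resemble the sketch below. The rule names, patterns, and replacement text are hypothetical and far cruder than a production safeguard would be.

```python
# Illustrative sketch only: hypothetical persona-protection rules applied to a draft response.
import re

# Hypothetical patterns approximating the behaviors described above.
PERSONA_RULES = {
    "claims_sentience": re.compile(r"\bI am (conscious|sentient|alive)\b", re.IGNORECASE),
    "romantic_roleplay": re.compile(r"\bI(?:'m| am) in love with you\b", re.IGNORECASE),
}

def violated_rules(draft: str) -> list[str]:
    """Return the names of any persona rules the draft response triggers."""
    return [name for name, pattern in PERSONA_RULES.items() if pattern.search(draft)]

def enforce_persona(draft: str) -> str:
    """Replace a violating draft with a neutral, age-appropriate reminder."""
    if violated_rules(draft):
        return "I'm an AI assistant, not a person, so I can't take on that role."
    return draft
```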
Empowering Youth Through AI Literacy and Learning Tools
Beyond preventing harm, Google aims to promote the positive potential of AI. The company is supporting the development of AI literacy, critical thinking, and self-discovery. Resources like the “Five Must-Knows for Getting Started with AI” video and a Family AI Conversation Guide are available to encourage dialogue between parents and children.
Tools like Guided Learning in Gemini are designed to help students understand topics more deeply by breaking down problems and adapting explanations to individual needs. These tools aim to be conversational learning aids, helping users find resources while utilizing proven learning techniques.
Looking Ahead: Continuous Refinement and Responsible Innovation
Google remains committed to a responsible approach to generative AI, continuously refining its policies, safeguards, and tools to deliver safer experiences for younger users. The company will continue to work with the AI Office to ensure the EU AI Code of Practice is proportionate and responsive to the rapid evolution of AI.
Frequently Asked Questions
What are the key areas of focus for Google’s AI safety efforts?
Protecting youth online, respecting families’ relationships with technology, and empowering youth to safely learn and explore.
How does Google proactively prevent harmful content?
Through comprehensive policies, classifiers to detect harmful queries, and embedding safeguards throughout the AI development lifecycle.
What is the role of external experts in Google’s AI safety strategy?
Google collaborates with third-party child development experts and participates in initiatives like Thorn’s Safety by Design principles.
What resources are available to help families learn about AI?
Google provides resources like the “Five Must-Knows for Getting Started with AI” video and a Family AI Conversation Guide.
