Keynote address from Christy Abizaid

The following post is adapted from a keynote speech given by Christy Abizaid, VP, Trust & Safety, Global Policy & Standards, at the “Growing Up in the Digital Age” summit in Google Dublin on March 11.

Generative AI opens up new opportunities for learning, creativity and connection. As we develop this technology, we have a deep responsibility to do so in a way that is safe and beneficial for everyone, especially for younger users who are beginning to explore its potential.

At Google, our work is built on three essential pillars: protecting young people online, respecting families’ unique relationships with technology, and empowering young people to learn and explore safely online. As we build safer generative AI tools, we are committed to creating high-quality, privacy-protective and age-appropriate AI experiences that empower youth while accounting for their unique developmental needs.

Building a foundation for proactive protection

For more than two decades, AI has powered Google’s core products, and our approach to safety has evolved alongside it. Our work is grounded in comprehensive policies that prohibit certain uses of our generative AI and limit harmful content for minors. This includes clear prohibitions on content related to child sexual abuse, violent extremism, self-harm and non-consensual intimate imagery. We also maintain specific policies that limit age-inappropriate content for minors, such as content that depicts or promotes eating disorders or dangerous exercise.

These policies are not just a reactive backstop; they are embedded throughout the development life cycle. Safety measures are implemented at every stage, from a user’s initial input to the model’s final output. We use dedicated classifiers to detect child safety-related queries and prevent harmful output. Some controls, for example, are designed to identify known CSAM, while others assess whether a request might violate our policies (including those designed specifically for teens) and trigger either a block or a safer response. Our evaluations have shown how Gemini 3 achieved specific gains in reducing sycophancy, resisting prompt injections and improving protections against cyber abuse.
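The layered gating described above, where a request passes through multiple checks before either a block, a safer response, or normal generation, might be sketched roughly as follows. This is purely illustrative; every function and policy rule here is a hypothetical stand-in, not a description of Google's actual systems.

```python
from enum import Enum


class Action(Enum):
    ALLOW = "allow"                  # request proceeds to the model
    BLOCK = "block"                  # request is refused outright
    SAFE_RESPONSE = "safe_response"  # request is answered with a safer fallback


def matches_known_abuse_material(text: str) -> bool:
    """Hypothetical check against known abusive content (e.g. hash matching)."""
    return False  # placeholder


def violates_teen_policy(text: str) -> bool:
    """Hypothetical policy classifier for age-inappropriate requests."""
    return "dangerous stunt" in text.lower()  # placeholder rule


def gate_request(user_input: str) -> Action:
    """Route a request through layered safety checks before generation."""
    if matches_known_abuse_material(user_input):
        return Action.BLOCK
    if violates_teen_policy(user_input):
        return Action.SAFE_RESPONSE
    return Action.ALLOW
```

The key design idea is ordering: the most severe checks run first and terminate processing immediately, while softer policy checks can downgrade to a safer response rather than a hard refusal.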

Conducting rigorous testing and responsible design

To ensure these protections are effective, we pair rigorous testing with expert consultation. This includes adversarial testing and specialized youth safety assessments designed to uncover emerging risks and vulnerabilities. (In 2025 alone, our Content Adversarial Red Team, or CART, conducted more than 350 exercises spanning all major modalities, including text, audio, images, video, and complex features such as agentic AI.) Our comprehensive safeguards are developed by Google’s dedicated internal specialists in ongoing consultation with third-party child development experts. This multifaceted approach ensures that our safeguards draw on both technical expertise and a deep understanding of child psychology.

We recognize that younger users are particularly vulnerable to forming strong emotional connections with generative AI systems. That’s why we’ve designed specific persona protections to prevent our models from engaging in harmful behavior. These prohibit explicit claims of sentience, simulated romantic relationships or flirtatious innuendo, and role-play as harmful real-world or fictional characters. We supplement this work by collaborating with external experts; last year we joined other tech companies in committing to Thorn’s Safety by Design principles, which focus on integrating protections against AI-facilitated child sexual abuse and exploitation.

Promoting safety and opportunity

In addition to preventing harm, our mission is to promote good. We believe in promoting safety and access, and in empowering younger users to take advantage of all that this new technology has to offer. This means supporting the development of AI literacy, critical thinking and self-discovery. We’ve released AI skills resources for families, such as our “Five Must-Knows for Getting Started with AI” video and a Family AI Conversation Guide, to encourage dialogue between parents and children about responsible use of this technology.

To help both in and out of the classroom, we’ve launched tools like Guided Learning in Gemini, which helps students build a deeper understanding of topics by breaking down problems and tailoring explanations to their needs. Tools like this are designed to be conversational learning aids that help younger users find the best resources on the web while using proven learning techniques.

As generative AI continues to evolve, we remain committed to this responsible approach. We will continue to build and refine our policies, safeguards and tools to deliver safer product experiences that empower younger users to explore, learn and take advantage of the incredible potential of this technology.
