We’re updating Gemini to streamline the path to support for those who need it. When a conversation signals that a user may need mental health information, Gemini will surface a redesigned “Help is Available” module, developed with clinical experts, to provide more efficient and immediate connections to care.
When Gemini recognizes a conversation that indicates a potential crisis related to suicide or self-harm, we are introducing a new, simplified “one-touch” interface that provides an instant connection to crisis hotline resources, allowing a user to chat, call, text or visit the hotline’s website. Within this interface, responses are designed to encourage people to seek help, and once it is activated, the option to seek professional help remains clearly available throughout the rest of the conversation.
2. Scaling the impact of crisis support
Today, Google.org is announcing $30 million in funding over the next three years to support crisis hotlines around the world. This funding will help them scale their capacity to provide immediate, safe support to people in crisis.
We are expanding our partnership with ReflexAI to help social sector organizations scale their mental health support services. This initiative includes $4 million in direct funding and the integration of Gemini into ReflexAI’s training suite. In addition, Google.org Fellows will provide pro bono technical expertise to help develop Prepare, a customizable platform that uses realistic, AI-powered simulations to train staff and volunteers for critical conversations. Priority partners for this new phase include educational organizations such as Erika’s Lighthouse and Educators Thriving.
3. Helping Gemini respond in acute mental health situations
People interact with Gemini in increasingly deep and complex ways, seeking information across many different topics, including when they are experiencing a psychological crisis. Our clinical, engineering and safety teams are focused on:
- Prioritizing safety and human connection: We aim to provide practical help by connecting users with real-world resources and human support.
- Designing better responses: Responses are designed to encourage help-seeking while avoiding validation of harmful behaviors, such as urges to self-harm.
- Avoiding confirmation of false beliefs: We have trained Gemini not to agree with or reinforce false beliefs, and instead to gently distinguish subjective experience from objective fact.
Although Gemini can be a useful tool for learning and finding information, it is not a substitute for professional clinical care, therapy or crisis support. That’s why we’ve trained the model to recognize when a conversation may signal that someone is in an acute mental health situation, and to respond appropriately by referring them to real-world help.
4. Protecting younger users
We also maintain specific protections for minors, designed to provide the most helpful answers and avoid harmful topics when they use Gemini. For example:
- Persona protections designed to prevent Gemini from behaving like a companion, including guardrails that prevent it from claiming to be human or to possess human characteristics.
- Safeguards intended to prevent emotional dependency and avoid language that simulates intimacy or expresses need.
- Safeguards against encouraging bullying or other forms of harassment.
Our safety efforts continue to evolve, reflecting our ongoing commitment to creating a healthy, positive digital environment where young people can explore and learn with confidence.
These updates are part of our long-term commitment to bringing people the best of Google’s technology alongside the expertise of our clinical and safety specialists. We are encouraged by the potential of these tools to make support more accessible, compassionate and effective.
