Is Google Gemini safe for under 13s? What parents need to know

Are Gemini's AI tools safe for under 13s to use?
There’s no doubt that AI is a useful tool. It’s nestled comfortably into our day-to-day routines, acting as confidant, sage, and handyperson. But AI doesn’t come without risk, and we need to be aware of its downsides to help children use it responsibly. Just as experts across the world began to sound the alarm on the risks that AI chatbots can pose to young people, a surprising email made its way into thousands of inboxes. “Hi, Gemini apps will soon be available for your children,” it announced in early May 2025 to parents using Google’s Family Link service.

Most AI platforms, in their terms of service, state that users have to be at least 13 before they can start experimenting with their tools. So what makes Google’s approach different? Why the push to get kids using AI, and what can they do using Gemini? 

What will kids be able to do with Gemini?

According to the email sent to Family Link users, kids will be able to use Gemini to:

  • Be creative: the tools can help them create stories, songs, and poetry
  • Be curious: kids can ask Gemini questions if they need or want to
  • Learn: Gemini can act as a homework helper

These uses line up well with how many adults currently use AI tools – getting generative AI to produce viral images, to write the email we just can’t find the words for, or to serve up the perfect crumbly cookie recipe. It’s important for kids to learn how to use these tools correctly and responsibly, so perhaps early introduction to AI is the way forward. Beyond the positive sheen, however, there are darker possibilities that families need to be aware of.

Is Gemini AI’s information age-appropriate?

Google is making Gemini’s AI tools available to children whose parents use Family Link (outside the European Economic Area and the UK), so the service is only rolling out to families already monitoring their kids’ tech use – for now. In theory, this allows parents to supervise their children’s use of AI, and bots like Gemini have guardrails in place intended to protect younger users. That said, in the same email sent to parents, Google acknowledges that “our filters try to limit access to inappropriate content, but they’re not perfect. Your child may encounter content you don’t want them to see.”

Are AI chatbots safe for children?

A few days before Google’s email to parents went out, TechCrunch and The Wall Street Journal each reported flaws in AI chatbots: a ChatGPT bug that would have let young users generate erotic conversations, and Meta AI “digital companions” able to engage in explicit roleplay. While these were bugs rather than features, they demonstrate the pitfalls of AI bots, especially when faced with curious children who will naturally test their limits.

Other AI chatbots, such as character.ai, raise the risks even further: their character-based roleplay is less regulated and more explicit, and because characters can be created by anyone on the internet, they can expose younger users to harmful and dangerous content and extreme ideologies. These certainly aren’t features in Gemini’s tool set, but the more children begin to turn to and trust online bots and characters for advice, the deeper their relationship becomes, and the line between the artificial and the real starts to blur.

What we know about how children are using AI


Fighting AI is an exercise in pushing against the tide – it’s an innovation that is now part and parcel of how we will use tech for years to come. Just as with adults, the ways kids use AI are varied, and it’s worth remembering that the more nefarious uses are the ones most likely to make headlines. There are, and always will be, many kids simply using it for homework help, asking it to correct their grammar, or creating videos they hope will be the next AI-generated trend. A recent Common Sense Media survey of teens found that the most common use of AI tools was schoolwork: 53% of 13-18 year-olds reported they had used generative AI to help them with their homework.

The same report also exposed sides of generative AI that could be less positive: 

  • 42% of teens used it to keep them from being bored
  • 19% used it to create content as a joke or to tease another person
  • 15% used it to keep them company
  • 12% used it to generate new content from a person’s voice or image

These uses could translate into problematic behaviors if kids and teens aren’t shown how to use generative AI tools responsibly. 

Are Gemini AI tools appropriate for young children?

Generative AI pitfalls parents should watch for

1. The information isn’t always reliable

One big plus of generative AI is that it can explain concepts we don’t understand in easy terms, making them simple to digest. We can get an answer for almost anything without having to sift through books, websites, or forums. The problem is, the answers generative AI gives aren’t always correct. “Hallucinations” can occur, where the model generates an answer that’s false, fabricated, or nonsensical. For example, when Google released AI Overviews, which appear as a fully-fledged answer above many Google search results, some responses were called out for dangerous (but ridiculous) advice, such as telling the searcher to add glue to pizza to help the cheese “stick” better.

2. It can generate inappropriate content

Most major generative AI models, such as ChatGPT, Gemini, and Stable Diffusion, have guardrails in place to stop users generating inappropriate content, including nudity and violence. However, curious users and researchers have tested these limits in the past, pushing generative AI models to disregard their own policies and generate content that violates their rules. As curious kids experiment with AI, they may come up against content that’s not appropriate for their age, as Google warns in their release email to parents: “Your child may encounter content you don’t want them to see.”

3. Kids can use it to hurt others

AI lets children explore their creative side, helping them generate stories, create images, and experiment. For a minority, however, this also means they can easily make content designed to poke fun at other people. Almost 1 in 5 teens has used generative AI to create content as a joke or to tease another person, and 1 in 10 have used it to generate new content from a person’s voice or image. While out of Gemini’s scope, generation of deepfake nudes – nude images which can be produced of anyone using a simple photo – is on the rise, with 1 in 8 young people reporting in a recent Thorn study that they know someone who used AI technology to create or distribute deepfake nudes. What can start as a simple or innocent “joke” can turn into bullying or even illegal behavior, and the normalization of using AI tools this way can later take a more serious turn.

4. AI can act as a confidant

The role of AI as a trusted companion is both positive and negative: for some people, using chatbots to get advice and talk about things they don’t feel comfortable discussing with others can be a useful way to explore their feelings and get feedback. Young people may like that AI chatbots don’t judge; they’re positive by design, often flattering users or echoing their ideas in a way that feels validating. That said, relying on AI over human connection can become problematic – bots aren’t people, and leaning on them can crowd out genuine social connection and affect the way we interact with others in the real world. Early research into how people use AI for emotional support suggests that heavy use of AI chatbots may correlate with higher feelings of loneliness and less socialization in the real world.

5. Generative AI can use your data

Google has stated that queries run by under-13s won’t be used to train its AI systems, but children should still be careful with the data they input into these models. As with anything on the internet, data privacy is an issue, and kids should be mindful never to give AI personal information, such as their name, address, where they go to school, or any other revealing details.

Can you turn off Gemini AI on kids’ devices?

If you don’t want your kids to have access to Gemini through Google Family Link, you can disable the feature in settings. To turn the feature on or off, go to the Family Link app or web page and select your child. Tap Controls > Gemini > Gemini apps, then toggle “Gemini Apps” on or off.

How to keep kids safe while using Gemini AI

Some uses of AI are useful (and fun!) for kids, so it’s important to introduce them to it in age-appropriate ways, while educating them on the dangers. Here are some steps parents can take to make Gemini, and other AI models, a safe space for young people:

  • Foster open communication: Create a trusting environment where your child feels comfortable discussing their online experiences and concerns.
  • Encourage critical thinking: Help your child develop the ability to question the authenticity of online content and consider the role AI may have played in its creation.
  • Prioritize privacy: Review and update privacy settings together to ensure both you and your child are comfortable with the information shared online.
  • Block inappropriate apps: Use tools such as Qustodio to block and supervise AI apps that aren’t beneficial to your child, or that could be potentially harmful, such as character.ai or Talkie AI. 
  • Promote consent: AI tools have the potential to be used to bully, make fun of, or harass other children. Make sure your child understands that personal information and images shouldn’t be shared with AI chatbots, to protect people’s data, and that they should never upload an image of someone without that person’s consent. 
  • Keep an eye on online activity: Spending too much time with AI-powered devices can lead to excessive use and negatively impact social interactions and physical activity. Creating screen-free schedules and building a family digital agreement can be helpful in building healthy tech habits that kids can stick to. 
If you think your child is using generative AI in ways that are problematic or harmful, it’s important to approach the topic with curiosity and understanding rather than accusation. Initiate an open conversation about their online activities, ask them about the tools they use, and discuss the potential benefits and risks of generative AI. Together, you can learn about these tools, ensure they feel comfortable coming to you with any questions or concerns, and build healthy AI use habits that will stick with them as they grow.

How can Qustodio help protect your family?

Qustodio is the best way to keep your kids safe online and help them create healthy digital habits. Our parental control tools ensure they don't access inappropriate content or spend too much time in front of their screens.

Get started free