Most AI platforms, in their terms of service, state that users have to be at least 13 before they can start experimenting with their tools. So what makes Google’s approach different? Why the push to get kids using AI, and what can they do using Gemini?
What will kids be able to do with Gemini?
- Be creative: the tools can help them create stories, songs, and poetry
- Be curious: kids can ask Gemini questions if they need or want to
- Learn: Gemini can act as a homework helper
These uses line up well with how many adults are currently using AI tools – asking generative AI to produce viral images, write an email we just can't find the words for, or serve up the perfect crumbly cookie recipe. It's important for kids to learn how to use these tools correctly and responsibly, so perhaps early introduction to AI is the way forward. However, beyond the positive sheen, there are darker possibilities that families need to be aware of.
Is Gemini AI’s information age-appropriate?
Are AI chatbots safe for children?
Other AI chatbots, such as character.ai, raise the risks even further: their character-based roleplay features are less regulated and more explicit, and because characters can be created by anyone on the internet, they can expose younger users to harmful and dangerous content and extreme ideologies. These features certainly aren't part of Gemini's tool set, but the more children turn to and trust online bots and characters for advice, the deeper their relationship becomes, and the blurrier the line between the artificial and the real.
What we know about how children are using AI
It’s a good idea to repeat what your kids say throughout the conversation, to reflect their thoughts back, show interest in the different points they might raise, and try to avoid interrupting them or criticizing the way they think. On the other hand, as parents, we shouldn’t be afraid to share our thoughts with our kids, to create an environment where everyone feels they can share and their views are worth listening to. We have to be able to exchange opinions freely, even when they’re different.
Fighting AI is an exercise in pushing against the tide – it's another innovation that is now part and parcel of how we will use tech for years to come. Just as with adults, the way kids use AI is varied, and it's important to remember that the more nefarious uses are the ones most likely to make the headlines. There are, and always will be, many kids who are simply using it for homework help, asking it to correct their grammar, and creating videos they hope will spark the next AI-generated trend. A recent Common Sense Media survey of teens found that the most common use of AI tools was schoolwork: 53% of 13- to 18-year-olds reported they had used generative AI to help them with their homework.
The same report also exposed sides of generative AI that could be less positive:
- 42% of teens used it to keep from getting bored
- 19% used it to create content as a joke or to tease another person
- 15% used it to keep them company
- 12% used it to generate new content from a person’s voice or image.
These uses could translate into problematic behaviors if kids and teens aren’t shown how to use generative AI tools responsibly.
![Are Gemini AI tools appropriate for young children?](https://static.qustodio.com/public-site/uploads/2025/05/27135510/2025-05-Blog-Gemini-AI-for-under-13s_InsideImage.png)
Generative AI pitfalls parents should watch for
One big plus of generative AI is that it can explain concepts we don't understand in simple, easy-to-digest terms. We can get an answer to almost anything without having to sift through books, websites, or forums. The problem is that the answer generative AI gives isn't always correct. "Hallucinations" can occur, where the model generates an answer that's false, fabricated, or nonsensical. For example, when Google released AI Overviews, which appear as a fully-fledged answer above most Google search results, some responses were flagged for dangerous (if ridiculous) advice, such as telling the searcher to add glue to pizza to help the cheese "stick" better.
Most major generative AI models, such as ChatGPT, Gemini, and Stable Diffusion, have guardrails in place to stop users from generating inappropriate content, including nudity and violence. However, curious users and researchers have tested these limits in the past, pushing generative AI models to disregard their own policies and generate content that violates their rules. As curious kids experiment with AI, they may come across content that's not appropriate for their age, as Google warns in its release email to parents: "Your child may encounter content you don't want them to see."
AI lets children explore their creative side, helping them generate stories, create images, and experiment. For a minority, however, this also means they can easily make content designed to poke fun at other people. Almost 1 in 5 teens has used generative AI to create content as a joke or to tease another person, and 1 in 10 have used it to generate new content from a person's voice or image. While out of Gemini's scope, the generation of deepfake nudes – fabricated nude images that can be produced of anyone from a single ordinary photo – is on the rise, with 1 in 8 young people reporting in a recent Thorn study that they know someone who used AI technology to create or distribute deepfake nudes. What starts as a simple or innocent "joke" can turn into bullying or even illegal behavior, and the normalization of using AI tools this way can later take a more serious turn.
The role of AI as a trusted companion is both positive and negative: for some people, using chatbots to get advice and talk about things they don't feel comfortable discussing with others can be a useful avenue to explore their feelings and get feedback. Young people may like the fact that AI chatbots don't judge, and they're positive by design, often flattering or repeating users' ideas in a way that makes them feel validated. That said, relying on AI over human connection can become problematic – bots aren't people, and depending on them can push aside genuine social connection and affect the way we interact with others in the real world. Early research into how people use AI for emotional support suggests that extensive use of AI chatbots may correlate with higher feelings of loneliness and less socialization in the real world.
Google has stated that queries run by under-13s won't be used to train its AI systems, but children should still be careful with the data they input into these models. As with anything on the internet, data privacy is an issue, and kids should be mindful never to supply AI with personal information, such as their name, address, school, or any other revealing details.
Can you turn off Gemini AI on kids’ devices?
How to keep kids safe while using Gemini AI
- Foster open communication: Create a trusting environment where your child feels comfortable discussing their online experiences and concerns.
- Encourage critical thinking: Help your child develop the ability to question the authenticity of online content and consider the role AI may have played in its creation.
- Prioritize privacy: Review and update privacy settings together to ensure both you and your child are comfortable with the information shared online.
- Block inappropriate apps: Use tools such as Qustodio to block and supervise AI apps that aren’t beneficial to your child, or that could be potentially harmful, such as character.ai or Talkie AI.
- Promote consent: AI tools have the potential to be used to bully, make fun of, or harass other children. Make sure your child understands that personal information and images shouldn’t be shared with AI chatbots to protect people’s data, and that they shouldn’t upload images without the person’s consent.
- Keep an eye on online activity: Spending too much time with AI-powered devices can lead to excessive use and negatively impact social interactions and physical activity. Creating screen-free schedules and building a family digital agreement can be helpful in building healthy tech habits that kids can stick to.