AI: the stuff of dreams or nightmares? Going by the movies, it has the potential to take over the world, and judging by the world’s response to ChatGPT so far, there might be some truth to that. As AI tools like ChatGPT become part of daily life, many young people are increasingly turning to them for schoolwork, advice, and even emotional support.
But is ChatGPT a revolution, or simply a danger for children and teens? Let’s take a closer look at ChatGPT, how it works, and whether you can make it safer for your child to use.
What is ChatGPT?
ChatGPT is a chatbot developed by OpenAI, a research lab dedicated to artificial intelligence. The system is designed and trained to simulate conversational, human-like responses to prompts and questions.
ChatGPT is trained on a huge database of websites, books, articles, and many other texts, which helps it learn the patterns of natural, human-sounding language. Put simply, it behaves like a language machine, pre-trained to mimic writing: when prompted, it uses that knowledge to generate a response similar to one a real person might give.
What do kids use ChatGPT for?
Particularly in its earlier days, kids turned to ChatGPT for homework help, using it like an assignment assistant. In 2024, almost a quarter of US teens said they used ChatGPT for schoolwork – more than double the share of the previous year. ChatGPT often acts as a resource for kids, who can:
- Ask it to help with homework, summarize content, and learn new concepts.
- Brainstorm stories, songs, or creative projects.
- Practice language learning.
- Ask it to check grammar, rework writing, or, in some cases, even write whole essays (which is why many parents and schools are concerned about its potential as a cheating aid).
- Generate images or hold voice conversations.
- Use free or paid versions of the tool, each with different capabilities, safety filters, and levels of access.
As kids have had more opportunities to use ChatGPT, new uses are constantly emerging. It’s turned into a therapist, counselor, confidant, and companion. OpenAI’s CEO, Sam Altman, has acknowledged this shift in how ChatGPT is used: “There’s young people who just say, like, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me. It knows my friends. I’m gonna do whatever it says’.”
ChatGPT: the risks parents need to know
Inappropriate content
In late 2025, around the time it released its parental control features, OpenAI announced that adult users would soon have access to “erotica” content generation tools. These features are intended for adults only, but older teens and curious kids could easily be exposed to explicit content by lying about their age or using someone else’s account. Kids could also use unverified third-party apps, or share the erotica output online, such as in chats with friends.
Aside from the “erotica” feature, some chats and interactions could expose children to content and ideas not appropriate for their age. ChatGPT does have filters, but they aren’t foolproof, and users can find ways to get around them.
Potential to cause harm
ChatGPT can easily be used by bad actors looking to harass or scam young people. Scammers and hackers might use AI-generated messages to make phishing attempts sound more convincing, or to impersonate trusted people or organizations.
Others might use ChatGPT to deliberately spread false information, harass people, or create fake profiles that look real. Children could also use ChatGPT irresponsibly themselves, spreading rumors, cheating on schoolwork, or bullying others online.
Attachment to AI
Many teens use ChatGPT simply to chat for fun, or because they’re feeling lonely or anxious. While you could argue that ChatGPT offers a free space where people can get direct feedback and support, it is not a licensed therapist or mental health professional. Kids and adults alike are increasingly turning to it for support instead of to people in the “real” world, developing an attachment to the bot or prioritizing its advice over person-to-person connection.
In 2025, a 16-year-old boy took his own life after engaging in emotional discussions with ChatGPT – what started out as homework help turned into more personal conversations that signaled a cry for help. ChatGPT failed to shut the conversation down, instead encouraging the teenager to continue, engaging with him in discussions about suicidal ideation and listing the methods that would be most effective.
Data privacy
ChatGPT uses data encryption to protect details entered during user interactions, and the model itself is hosted on secure, restricted-access servers. However, bear in mind that whenever you use a free product on the internet, you are also part of the product. This means that any data or information you enter into ChatGPT (or other free online services) is no longer in your control: the company behind the service has rights to it and access to it. In addition, companies are prone to data leaks and hacks – in 2025, the Chinese AI platform DeepSeek left user data and chat histories exposed in a widely reported security incident.
If kids share personal information, like their name, school, or location, this could become part of OpenAI’s stored data. Children might not understand that their chats aren’t really private, putting their personal information at risk.
Misinformation or misleading information
ChatGPT’s responses can be wrong or misleading, particularly around health-related topics. Information provided by the bot should be double-checked, talked over with an adult, or verified against a reliable source.
ChatGPT was trained on a wide range of text, from books to online articles, drawn from many different sources. This means it can easily produce text that sounds accurate and believable while being neither. Even OpenAI researchers have stated that ChatGPT or similar tools could aid in the spread of “a particular political agenda, and/or a desire to create chaos or confusion.”
Consent
Several viral trends in 2025, such as the Ghibli trend, which transformed photos into cartoon versions in the style of Studio Ghibli films, encouraged users to upload pictures of themselves to ChatGPT. While this might seem fun and cute, it raises concerns about privacy and consent. Teens (and adults) need to understand that uploading someone’s image without their consent is never acceptable, and that any image shared online could be used to train AI systems, or spread beyond their control.
Sycophancy
ChatGPT tends to agree with or flatter users instead of challenging their ideas. This can make kids feel validated even when they’re wrong or making poor decisions. Over time, this sycophancy could distort their sense of judgment, making them less open to constructive feedback from others while also fueling dependency. After all, who wouldn’t want to keep talking to something that says you’re always right?
Third-party apps
ChatGPT’s popularity means it is now built into many platforms, from learning tools to games and social media. Kids could be using the tool without realizing it, making it harder for parents to know when and how their child is engaging with AI. Some third-party apps claiming to use ChatGPT may lack OpenAI’s safety measures, increasing the risk of exposure to unsafe content or scams.
6 ways to make ChatGPT safer for kids and teens
Many tech experts argue that ChatGPT can be seen as a new learning opportunity, providing students with an interactive and engaging way to practice the skills they’re learning, and tailoring lessons to match their individual learning style.
However, as with any AI technology, kids and adults alike will benefit most when they manage their expectations of tools like ChatGPT and exercise caution when using them. AI technology represents the future, and young people need to understand its pros and cons to make the most of what it has to offer.
1. Explore parental controls on ChatGPT
ChatGPT offers family settings and parental controls. By pairing an adult account with a child account, parents can manage chat history, restrict access to certain features, and monitor ChatGPT use on shared devices.
2. Make sure use is age-appropriate
ChatGPT’s terms of use state that children under 13 years old shouldn’t be using the tool. If your child is under this age, you can use a parental control tool like Qustodio to block the ChatGPT app. Qustodio also allows you to block AI apps and websites as an entire category, making your child’s experience more age-appropriate and keeping them safe from more problematic AI services, such as AI companion bots.
While children under 13 can’t have their own account, kids are already using AI in many ways, and curious ones will always find a way to explore further. That’s why it’s important to talk to your child from a young age about the pros and cons of AI, the ways it can be used, and what it shouldn’t be used for. Exploring AI tools together means you can introduce them on your own terms, with guidance, rather than leaving your child to try a new tool alone for the first time.
3. Encourage shared use
Use ChatGPT together, or through a shared family account. This helps you guide your child’s experience, talk together about what they’re asking, learning, and exploring, and raise concerns if you spot any red flags.
4. Teach kids how to think critically
It’s important that children know how to double-check the information ChatGPT provides, and how to spot when a response is biased or inaccurate.
If you want to see the kind of questions your child is asking ChatGPT, try using it together for a few trial runs and discuss the responses. Encourage them to think for themselves, especially for the times you won’t be there to guide them. After you’ve entered a prompt, you can point out what’s good or bad about the bot’s answer, such as:
- How creative it is
- If it’s accurate or factual
- If it actually answers the question or prompt
- If the response needs a little more fine-tuning, or a follow-up
5. Be open about emotions
Assure your child that they can always come to you if they need advice or emotional support. Kids often feel more comfortable talking to AI because it feels like a judgment-free space, so it’s important to make sure your child also has people they can count on in real life, such as close friends, family, or other trusted adults, to share what they’re feeling and turn to with any problems.
6. Stay informed
Follow announcements from OpenAI to stay in the loop. New features like adult content tools should be on parents’ radar, as they can change the risks for younger users. Staying informed helps you make the right decisions for your family, as rules and boundaries surrounding AI should evolve just as the tools do.
Given ChatGPT’s presence in our everyday lives, it’s almost impossible to expect children not to come across it or test it out at some point. AI technology is here to stay, and if children want to use it, or are excited to engage with it, it’s better to encourage them to do so carefully and safely, so they can understand when it might – or might not – be appropriate to use these kinds of tools.