Even in internet days gone by, these scenarios didn’t come without risk: connecting with strangers and being exposed to mature or violent themes were very real dangers. In the age of AI, newer ways to explore roleplay and connection, through companion bots and character platforms like PolyBuzz, might remove the risk of connecting with strangers, but these machine-based chats pose their own risks to children and teens. Here’s what parents need to know.
What can children do on PolyBuzz?
Despite the 18+ restriction, PolyBuzz’s own guide to features and “benefits” indicates it’s “tailored for young users eager to explore virtual character interaction”, revealing who the platform’s target audience really is.
For teens who enter an incorrect date of birth, or who aren’t prompted at all, PolyBuzz features include:
- A vast selection of character bots, with themes, relationships, and defined back stories for text-based and audio roleplay.
- AI character creation tools.
- AI-generated photo and video sharing, allowing users to create character avatars and scenes.
- A “freemium” subscription model: free users can message and chat, while premium subscribers can save chats, unlock exclusive characters, browse ad-free, and have limits lifted on chats and voice playbacks.
PolyBuzz: The risks
NSFW and inappropriate content
PolyBuzz prohibits “public display” of NSFW (not safe for work) content, and claims that content recommended while browsing the app (e.g. characters, roleplay descriptions) is screened and moderated. However, in private chats, this screening doesn’t apply. Even where NSFW filters are in place, they’re not foolproof, and violent or sexual content still shows up as users browse. Images and “profile pictures” of characters are often highly sexualized and inappropriate for children.
Even if not directly explicit in nature, some character descriptions and scenarios don’t sound like individuals or groups you’d want your child talking to: “School psycho: Has anger issues, is protective, slow-burning, loves you.” “Clingy boyfriend: He cries if you leave him alone.” “MHA pervert sub: There’s a substitute teacher, and he’s a pervert.”
Limited parental controls
Even though the platform states in its terms of use that it’s for 18+ users only, PolyBuzz offers parental controls that allow families to limit time spent on the app, and with good reason: some users on Reddit boast time logs of over 11 hours a day. Parents and children can view usage patterns, but there’s no insight into the content children access or the characters they might be engaging with.
Hidden content
PolyBuzz allows users to create their own chats with multiple characters, meaning the conversation is private, even with parental controls activated. With AI, you have no way of knowing or predicting what output your child will come across, and they could be guided towards conversations that are inappropriate for their age, or which expose them to more adult topics.
Emotional attachment
With conversational AI bots – even ChatGPT – there’s a risk that children (and adults) could develop strong emotional attachments to characters that feel real to them, even if they’re not. Engaging conversations, available 24/7 without judgment, and which validate feelings or opinions, mean chatbots like this can be very appealing and difficult to turn away from.
In-app purchases
While there’s a free version of PolyBuzz, the platform offers three tiers of paid access: basic, premium, and ultimate. Features vary between them, but paying for an account allows users to browse ad-free, engage in unlimited chats with longer memory (meaning bots are less likely to “forget” conversations, forging a stronger “connection”), and get faster responses. The most expensive tier gives users access to what PolyBuzz calls the “Passion model”, a more intense, unrestricted, and emotional layer, mostly designed for sexual roleplay.
Incentives and streak-based rewards
Disabling in-app purchases helps reduce the risk of unwanted or spontaneous spending, but PolyBuzz also allows users to collect coins through engagement, watching ads, and building streaks, which they can then choose to spend on specific features not available in “free” mode. This can encourage users to return and engage for longer, making it more difficult to disconnect.
Screen time
In our 2025 annual report, we discovered that while fewer children access applications like PolyBuzz than more mainstream AI applications like ChatGPT, for the children that do use them, engagement time is much higher overall than with other AI tools. Unlike messages from friends, a response from a bot always arrives immediately. Character bots, and conversational AI in general, are designed to keep the user engaged, making it hard to disconnect.
Talking to your child about AI character bots
Children and teens are curious, and they may engage with character bots or apps like PolyBuzz without even seeking them out: ads for character bots and AI “partners” are all over the internet, appearing in their favorite games and in social media feeds, and they’re likely discussion topics in the group chat.
That means you need to be prepared to have conversations and set guardrails surrounding your child’s AI use and character bots. Some ways you can help make their experience with AI safer include:
1. Having open conversations
For kids, chatbots and characters might just seem like a harmless way to interact and roleplay, testing their creative and storytelling skills. But they need to know the risks these bots pose, and the impact they can have on wellbeing and emotions. If your child lets you know they’ve used AI chatbots, or asks you about them, speak to them calmly and without judgment, highlighting concrete risks and explaining what they can do if they ever feel unsafe or worried by something they see online.
2. Blocking, restricting, and filtering AI chatbot use
One of the most important things we can do as parents when allowing our children to have access to their own device is to make it age-appropriate: this means blocking and limiting exposure to content and apps we know are unsuitable for their age and development. Qustodio automatically filters inappropriate websites, and notifies parents whenever a new app is downloaded and used. Blocking features also allow you to restrict either individual apps like PolyBuzz or entire categories such as AI, protecting your child from these apps while keeping you in the loop about dangerous content.
3. Creating a digital agreement
Boundaries are important for families to help children understand what’s expected of them, and tech use is no exception. A digital agreement can help you lay out rules beyond just “screen time”, promote discussion about how, when, where, and why your child can use devices, and let your child know what they should do when they feel uncomfortable.
4. Understanding the platforms that are out there
You don’t have to be an AI expert by any means, but understanding what children are already accessing online, or what friends and schoolmates could be using, will help you connect with your child’s digital world and react appropriately.
To keep kids safe as they explore and inevitably encounter AI tools, whether through ads, conversations with friends, or their own curious nature, it’s essential that we have open, honest conversations with them about what makes these tools inappropriate, and often dangerous. By combining guardrails with information, knowledge, and discussion, you’ll set your child up to navigate AI with confidence and understanding.