It seems you can’t scroll very far without learning about new ways that artificial intelligence (AI) is changing the internet forever. But some of the ways people are using AI, and the specific kinds of AI tools they use, are far from positive. One of the latest and most worrying (but perhaps inevitable) trends is something called “undress AI”. Let’s take a look at what parents need to know about this type of technology, how it’s actually being used – even among children – and how to keep kids safe.
What is “undress AI”?
Undress AI is a category of generative AI tools that alter digital images of people to make it appear as though they are naked. Digital image manipulation is nothing new – similar edits were already possible with tools like Photoshop – but AI has made them much easier, faster, and more realistic. It’s becoming increasingly hard to tell what’s real and what’s fake, especially at a glance.
These tools aren’t just found by people actively searching for them – they’re often promoted in everyday online spaces. Along with websites that offer these manipulation services, “nudify apps” are popular and easily accessible, frequently through ads on major social media platforms.
Despite these apps and tools clearly violating platform policies, their makers can be very creative in finding ways around restrictions, so your child could come across the tools without deliberately seeking them out. It’s important to keep this in mind if you do discover that your child has seen or even downloaded this kind of generative AI tool – give them the benefit of the doubt and have an open conversation about where and how they first saw it online.
Is undress AI legal?
No. Creating fake intimate images of someone without their consent is illegal in most parts of the world, whatever the tool used – and the law is especially strict when minors are involved. Creating intimate images of a minor using AI (or any other technology) is classified as creating child sexual abuse material (CSAM). The regulations differ somewhat around the world, but in most regions, restrictions and laws are in place to help keep children safe:
In the US
The Take It Down Act prohibits publishing intimate images or videos of minors or non-consenting adults, including realistic AI-generated “digital forgeries” meant to harm the person depicted. It also requires platforms to take this kind of content down within 48 hours of a victim’s report.
In Australia
A combination of federal and state laws addresses the non-consensual creation and sharing of sexually explicit deepfakes. The eSafety Commissioner can act on behalf of victims and order the removal of harmful content online. As of September 2025, the Australian Government has started discussions on banning nudify apps completely.
In South Korea
South Korea has developed a series of particularly strict laws that cover the viewing, saving, creation, and possession of deepfake CSAM, with penalties including years of jail time and significant fines.
In the UK
The UK has a number of laws in place that criminalize the creation of AI-generated CSAM. After an urgent call to action from the Children’s Commissioner in April 2025, the UK government is also considering a complete ban on nudify apps.
In the EU
The Digital Services Act (DSA) and the Artificial Intelligence Act both include rules to protect minors from AI-generated CSAM, and set specific requirements that online platforms must meet to keep young people safe in digital spaces.
How are deepfake tools being used?
Deepfake and undress AI tools are most often used to create content intended to harm others emotionally or reputationally. Among adults, they’re commonly used to create “revenge porn” and, in some cases, AI-generated CSAM. Among teens, these tools are increasingly used to create non-consensual images of classmates, or even teachers and school staff. Since 2023, over 30 incidents like this have been reported in the US alone. So far, there has been only one publicized case of a parent using this kind of technology to try to harm their child’s peers.
While incidents like these can have devastating consequences, it’s important to bear in mind that not all children use these tools with harmful intent. Some may experiment out of curiosity, or think it’s a prank, without actually realizing the serious legal and emotional impact their actions can have on them – and others.
What parents need to know
Even though undress AI and deepfake tools break the rules of nearly all social platforms, ads for them can still appear while browsing through harmless material. Children may also encounter them on platforms like Discord, where closed-off spaces like private servers make it easier for inappropriate or harmful content – and apps – to be shared.
The incidents that make the news are typically the most dramatic cases, in which these tools are used to create nude images of classmates. The images are then spread via group chats among students at the same school, or posted on social media for a wider audience.
According to some recent surveys, about 40–50% of students are aware of this kind of content being shared at school. That means your child has likely heard of these tools, even if they haven’t used them themselves.
![What is undress AI](https://static.qustodio.com/public-site/uploads/2025/10/30150053/Blog-What-is-undress-AI_InsideImage.png)
The consequences of sharing deepfakes – what your child needs to know
Sharing or creating deepfake content can have serious emotional, academic, reputational, and legal consequences. In some countries, existing laws are evolving quickly; in others, new ones are being put in place to keep up with fast-paced advances in AI technology. Creating, viewing, storing, and distributing explicit deepfakes of children is treated as criminal behavior in most parts of the world.
Once something is online, it can spread quickly and may be impossible to ever fully erase. Images or videos can rack up a huge number of views within minutes, and the impact on a child’s reputation can be both immediate and lasting. Even if platforms or authorities remove the images, there’s no way of knowing how many times they may have been screenshotted, saved, or shared further.
In the aftermath, victims of AI-generated image-based abuse often report long-term emotional trauma, including symptoms of post-traumatic stress disorder (PTSD), along with academic challenges stemming from anxiety and school avoidance.
How to report deepfakes and AI-generated content
- Report the content to the platform it was shared on immediately (don’t take screenshots – saving copies of explicit images of minors can itself be illegal)
- If necessary, contact your local police
- If the content involves, or seems to have been created by, students at the same school, report it to the school as well
It’s also important to talk to your child before taking any serious action, especially if the people involved in creating and spreading the deepfake material are close friends, classmates, or peers. Ask how they feel, and what outcome they would like. Seek out help from your school – and a mental health professional, if needed – to get some guidance around speaking with the other children involved and their parents or caregivers.
How can I protect my child against deepfakes?
Start by having open, age-appropriate conversations about technology use and what’s safe, respectful, and ethical. Talk about the importance of consent and privacy in digital spaces.
You can also:
- Ask your child’s school about its cyberbullying and AI policies. Find out how you can support the school’s efforts to address the potential use of this kind of technology and help educate other families about this online safety risk as well.
- Learn how to report deepfakes on the platforms your child uses.
- Familiarize yourself with local laws that can help protect your child if they become a victim: for example, the Take It Down Act in the US.
- Use parental control tools like Qustodio to monitor app usage and help you stay aware of what your child might encounter online – you can use it to block AI websites, get notifications about new app downloads, and block AI apps altogether.
How to talk to children about consent online
Conversations about consent are really about respect, privacy, and the right to control personal information.
With younger children, these conversations can focus on if and when it’s OK to share other people’s photos, or information like their address, phone number, or something shared with them in confidence. It’s equally important to help children distinguish, from an early age, between the kinds of personal information they should and shouldn’t share about themselves and their family with others.
As children get older, prepare them to set boundaries in situations where others might ask them for information they are not comfortable sharing – having a simple, standard response can be a good way to start, something like: “I’m sorry, but I don’t share this kind of information online.”
A few ideas for conversation starters with your child about what they should and shouldn’t share online include:
- “Is it ever OK to share passwords with other people?”
- “Are there situations when it’s OK to screenshot someone else’s message or photo?”
- “What can you say if a good friend asks you for some personal information you really don’t want to share?”
- “How can you let your friend know that the information you’ve shared is only for them?”
- “How do you think a stranger might try to convince you to share some personal information, or an inappropriate photo?”
Undress AI and other deepfake tools are part of a worrying trend where advancing technology meets serious risks to privacy, consent, and wellbeing. But awareness, education, wellbeing tools, and open communication together form a powerful antidote that can help keep children safe. By staying informed and connected, you can help your kids navigate the complexities of the digital world safely and responsibly, no matter how advanced technology gets.