Life | Jun 21, 2023

Child-safety investigators fear AI-generated images hinder cases


The growing use of artificial intelligence has given rise to a troubling development: the creation of disturbingly lifelike images depicting child sexual exploitation.

Child-safety investigators are alarmed by the proliferation of these images, fearing they will hinder efforts to locate victims and combat real-world abuse. Generative AI tools known as diffusion models have sparked what experts describe as a “predatory arms race” on paedophile forums.

These tools allow individuals to generate realistic images of children engaged in explicit acts within seconds. Thousands of AI-generated child-sex images have surfaced on the dark web, which is only accessible through specialised browsers. Paedophiles share detailed guides on these forums, teaching others how to create their own explicit content using AI.

Rebecca Portnoff, the director of data science at Thorn, a nonprofit organisation focused on child safety, has witnessed a marked increase in the prevalence of these images since late 2022. She explains that the tools’ ease of use and the realism of the images pose serious challenges for law enforcement agencies already struggling with victim identification.

The flood of AI-generated images also strains the central tracking system designed to block such material, which works by matching files against known instances of abuse; newly generated images have no prior record to match. Moreover, law enforcement officials face the daunting task of differentiating real images from fake ones, further overwhelming their efforts to protect children.

The legal implications surrounding these AI-generated images remain uncertain. Justice Department officials argue that such images are illegal under federal child-protection laws, even when the children depicted are AI-generated. However, no suspect has yet been charged specifically for creating such images.

Diffusion models like DALL-E, Midjourney, and Stable Diffusion have garnered attention for their visual creativity, winning fine-arts competitions and illustrating children’s books. However, they have also accelerated the production of explicit content. These tools require less technical expertise than previous methods like deepfakes, enabling paedophiles to rapidly generate numerous images from a single text prompt.

Child-safety experts note that many of the AI-generated images on paedophile forums appear to be created using open-source tools such as Stable Diffusion. Stability AI, the company behind the tool, says it prohibits the creation of child sex-abuse images and cooperates with law enforcement investigations into illicit uses. However, the tool’s open-source nature allows users to download and manipulate it outside the confines of company rules and oversight.

On dark-web forums, users openly discuss strategies for creating explicit AI-generated photos while evading anti-porn filters, including writing prompts in non-English languages they believe are less likely to be detected and suppressed.
