The AI-Generated Child Abuse Nightmare Is Here

A horrific new era of ultrarealistic, AI-generated child sexual abuse images is now underway, experts warn. Offenders are using downloadable, open source generative AI image models to devastating effect. The technology is being used to create hundreds of new images of children who have previously been abused. Offenders are sharing datasets of abuse images that can be used to customize AI models, and they’re starting to sell monthly subscriptions to AI-generated child sexual abuse material (CSAM).

The details of how the technology is being abused are included in a new, wide-ranging report from the Internet Watch Foundation (IWF), a UK-based nonprofit that scours the web for abuse content and removes it. In June, the IWF said it had found seven URLs on the open web containing suspected AI-made material. Now its investigation into a single dark web CSAM forum, offering a snapshot of how AI is being used, has found almost 3,000 AI-generated images that the IWF considers illegal under UK law.

According to the IWF’s research, the AI-generated images include depictions of the rape of babies and toddlers, the abuse of famous preteen children, and BDSM content featuring teenagers. “We’ve seen demands, discussions, and actual examples of child sex abuse material featuring celebrities,” says Dan Sexton, the chief technology officer at the IWF. Sometimes, Sexton says, celebrities are de-aged to look like children. In other instances, adult celebrities are portrayed as the ones abusing children.

While reports of AI-generated CSAM are still dwarfed by the number of real abuse images and videos found online, Sexton says he is alarmed by the speed of development and the potential it creates for new kinds of abusive images. The findings are consistent with those of other groups investigating the spread of CSAM online. In one shared database, investigators around the world have flagged 13,500 AI-generated images of child sexual abuse and exploitation, Lloyd Richardson, the director of information technology at the Canadian Centre for Child Protection, tells WIRED. “That’s just the tip of the iceberg,” Richardson says.

A Realistic Nightmare

The current crop of AI image generators, capable of producing compelling art, realistic photographs, and outlandish designs, offers a new kind of creativity and promises to change art forever. These systems have also been used to create convincing fakes, like the Balenciaga Pope and faked images of Donald Trump’s arrest. They are trained on huge volumes of existing images, often scraped from the web without permission, and allow new images to be created from simple text prompts: asking for an “elephant wearing a hat” will produce just that.

It’s no surprise that offenders creating CSAM have adopted image-generation tools. “The way that these images are being generated is, typically, they are using openly available software,” Sexton says. Offenders the IWF has observed frequently reference Stable Diffusion, an AI model made available by UK-based firm Stability AI. The company did not respond to WIRED’s request for comment. In the second version of its software, released at the end of last year, the company changed its model to make it harder for people to create CSAM and other nude images.

Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material featuring children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to generate images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of victims’ faces, the research says, and one thread was titled “Photo Resources for AI and Deepfaking Specific Girls.”
