Image generator scandal: Artificial intelligence was inadvertently trained on child sexual abuse material

Research conducted by the Stanford Internet Observatory (SIO) has identified more than a thousand child sexual abuse images in the publicly released LAION-5B dataset, which has been used to train popular text-to-image AI models such as Stable Diffusion.

The SIO report, produced in collaboration with the non-profit child online safety group Thorn, found that rapid advances in generative machine learning are also making it possible to create highly realistic images of child sexual exploitation with open-source AI image generation models.

The study demonstrated that such images were present in the public LAION-5B dataset. Like most other large datasets used to train AI, it was compiled from many online sources, including social media and popular adult websites.

The dataset's custodian is currently removing the identified material and has temporarily withdrawn the dataset from public use, while the researchers have reported the sources of the images to the US National Center for Missing and Exploited Children.

But this is not the first time LAION's image data has drawn criticism, VentureBeat writes. As early as October 2021, cognitive scientist Abeba Birhane published the paper "Multimodal datasets: misogyny, pornography and malignant stereotypes", which examined the earlier LAION-400M image dataset. The researcher found that this dataset also contained image-text pairs depicting rape, pornography, malicious stereotypes, racial and ethnic slurs, and other content that AI models could learn from.

“Because users were less satisfied with the later, more filtered versions, the older Stable Diffusion 1.5 is still the most popular model.”

