Human Rights Watch (HRW) has denounced the use of personal photos of Brazilian children, without their knowledge or consent, to train artificial intelligence (AI) tools. These images are scraped from the web and used to create malicious deepfakes, exposing minors to risks of exploitation and harm.
HRW's analysis revealed that LAION-5B, a dataset used to train AI models, includes identifiable photos of Brazilian children. These images carry specific details such as names and locations, making the children easy to trace. In one case, a photo shows a girl touching her newborn sister's fingers at a hospital in Santa Catarina, revealing the children's names and the hospital's exact location.
HRW found 170 photos of children from at least 10 states: Alagoas, Bahia, Ceará, Mato Grosso do Sul, Minas Gerais, Paraná, Rio de Janeiro, Rio Grande do Sul, Santa Catarina and São Paulo. These photos, which capture intimate moments and school events, were posted on blogs and social networks years ago. When used to train AI systems, these images put children's privacy at risk, because AI models can reproduce the private data they were trained on.
Realistic deepfakes created from such photos have already been used to harass girls in several states, producing sexually explicit images that were then distributed online.
Hye Jung Han, children's rights and technology researcher and advocate at HRW, said children should not have to live in fear that their photos will be stolen and used to harm them. HRW stresses that the government must urgently adopt policies to protect children's data from AI-driven misuse.
The Brazilian government should strengthen data protection laws and create policies to protect children's digital rights. Implementing effective policies can help prevent further harm and safeguard the integrity of minors in the digital environment. Congress should also include specific protections for minors in AI regulations.
HRW emphasizes that Brazil's General Law on Personal Data Protection does not provide sufficient safeguards for children. A national policy should prevent the incorporation of children's personal data into artificial intelligence systems, given the risks to privacy and the potential for new forms of misuse as technology evolves.
The creation of deepfakes has been facilitated by easy access to AI tools that can generate realistic images in a matter of seconds. At least 85 girls in several states have reported harassment by classmates who used AI tools to create sexually explicit deepfake images of them from photos on their social media profiles.
In response, LAION confirmed that the dataset contained the personal photos of children found by HRW and pledged to remove them. However, LAION also stated that children and their guardians were responsible for removing such personal photos from the internet, which they consider the most effective protection against misuse.
HRW underscores the urgency of these measures, stressing the need to protect children's privacy and rights in the face of advancing AI technology.