The White House has secured voluntary commitments from several major companies in the artificial intelligence sector to curb the creation of non-consensual sexual deepfakes and child sexual abuse material. The initiative is part of a broader effort by the U.S. government to address the risks AI poses in generating harmful content.
Companies including Adobe, Cohere, Microsoft, Anthropic, OpenAI, and the data provider Common Crawl have agreed to responsibly source and use datasets free of image-based sexual abuse. Although Common Crawl is not participating in all of the commitments, the other companies have pledged to build safeguards into their AI development processes so that their models do not generate images of sexual abuse. They have also committed to removing nude images from AI training datasets when appropriate, depending on a model's purpose.
A self-managed commitment
It is important to note that these commitments are self-managed: the companies themselves are responsible for adhering to what they have promised, with no external enforcement mechanism. Moreover, not all AI companies have joined the initiative. Midjourney and Stability AI, which also develop image-generation technologies, opted not to participate.
OpenAI's commitment has been questioned in light of previous statements by its CEO, Sam Altman, who in May 2024 indicated that the company would explore how to "responsibly" generate AI pornography. This has raised doubts about how far the company's measures in this area will actually go.
A step forward in the fight against deepfakes
Despite these concerns, the White House has hailed the commitments as an important step in the fight against non-consensual sexual deepfakes and other AI-derived abuses. The administration is focused on identifying and reducing the potential harms of these technologies, particularly protecting people from the misuse of their likeness without consent.
This effort represents an early step toward governing AI-generated content, a challenge that will only grow as artificial intelligence tools become more powerful and accessible.