President Joe Biden’s administration is taking a stand against the growing market of abusive sexual images created with artificial intelligence, calling on the tech industry and financial institutions to commit to new measures to curb the creation and spread of AI-generated nonconsensual sexual imagery.
Generative AI tools have made it easy to create realistic, sexually explicit deepfake images of individuals, including celebrities and children, and to share them widely online. Victims of this abuse have little recourse to stop the spread of the images, which can have devastating consequences.
With federal legislation currently lacking, the White House is seeking voluntary cooperation from companies to address the problem. By committing to specific measures, officials hope to prevent the creation, spread, and monetization of nonconsensual AI images, particularly explicit images of minors.
Arati Prabhakar, director of the White House’s Office of Science and Technology Policy, emphasized the urgent need for action as AI technology continues to advance rapidly. The administration is calling on technology companies, financial institutions, and other key players to disrupt the monetization of image-based sexual abuse and take responsibility for preventing its proliferation.
The administration’s efforts come in response to a troubling rise in AI-generated deepfake images targeting women and girls, with notable cases involving celebrities like Taylor Swift and students at schools in the U.S. and elsewhere. Major technology companies have made voluntary commitments in the past, but there is growing recognition that legislative action is needed to address the issue.