Meta, the owner of Facebook, has announced significant changes to its policies on digitally created and altered media ahead of the upcoming U.S. elections. Starting in May, the social media giant will apply “Made with AI” labels to AI-generated videos, images, and audio. The move responds to the increasing use of artificial intelligence to create deceptive content.
Monika Bickert, Meta’s Vice President of Content Policy, said in a blog post that the company will also introduce separate, more prominent labels for digitally altered media that poses a high risk of deceiving the public on important matters. The change shifts Meta’s approach from simply removing manipulated content to giving viewers information about how that content was created.
Meta had previously announced a scheme to detect images created with other companies’ generative AI tools but had not given a start date until now. The new labels will apply to content posted on Facebook, Instagram, and Threads; other services, such as WhatsApp and the Quest virtual reality headsets, are covered by different rules.
The new labeling approach arrives ahead of the U.S. presidential election in November, in which tech researchers expect generative AI technologies to play a significant role. Meta’s oversight board had previously criticized the company’s rules on manipulated media as “incoherent,” calling for a more comprehensive policy that covers all forms of misleading content, whether generated by AI or not.