Meta updates regulations on deepfakes and other manipulated media | Tech News


Meta, the parent company of Facebook, has announced significant changes to its policies regarding digitally created and altered media in preparation for the upcoming U.S. elections. The move comes as the platform faces increasing challenges in policing deceptive content generated by new artificial intelligence technologies.

Starting in May, Meta will introduce “Made with AI” labels for AI-generated videos, images, and audio shared on its platforms. This expands on its previous policy, which addressed only a narrow set of doctored videos. Additionally, the company will apply separate, more prominent labels to digitally altered media that poses a high risk of deceiving the public.

The shift signals a move toward giving users transparency about the origins of manipulated content, rather than simply removing it. Meta had previously announced plans to detect images created with other companies’ AI tools via invisible markers embedded in the files.

These changes come ahead of the U.S. presidential election in November, amid concerns that political campaigns’ use of generative AI could accelerate the spread of misinformation. Meta’s oversight board had previously criticized the company’s rules on manipulated media as “incoherent,” highlighting the need for a more comprehensive policy covering all forms of deceptive content.

The new labeling approach will apply to content on Meta’s Facebook, Instagram, and Threads services, with immediate implementation of the “high-risk” labels. As the digital landscape continues to evolve, Meta’s efforts to combat deceptive content through transparency and accountability are crucial in maintaining the integrity of its platforms.
