OpenAI, a leading artificial intelligence company, has revealed that it disrupted five covert influence operations over the past three months, run by networks in Russia, China, Iran, and Israel. These networks used OpenAI’s models to manipulate public opinion and shape political outcomes while concealing their true identities.
In a recent report, OpenAI described how these influence networks used its AI tools to generate text and images in greater volume, and with fewer language errors, than human operators could have produced on their own. Even so, OpenAI stated that the campaigns ultimately failed to significantly increase their reach.
Ben Nimmo, principal investigator at OpenAI’s Intelligence and Investigations team, emphasized the importance of addressing the potential risks associated with AI-powered influence operations. He stated, “Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI. With this report, we really want to start filling in some of the blanks.”
The identified networks, including established groups like “Doppelganger” and “Spamouflage,” mixed AI-generated material with more traditional content to spread their messages. OpenAI also uncovered previously unknown networks originating in Russia and Israel, such as the Russian group “Bad Grammar,” which used AI models to automate the posting of content on messaging platforms like Telegram.
Despite the limited impact of these operations, OpenAI says it remains vigilant in detecting and defending against such threats. The company is sharing threat indicators with industry peers and plans to release further reports to raise awareness of the risks posed by AI-powered influence campaigns.