
OpenAI uses its tools to shut down influence networks in Russia and China


OpenAI, a leading artificial intelligence company, has revealed that it has thwarted five covert influence operations in the past three months, involving networks in Russia, China, Iran, and Israel. These networks were using OpenAI’s AI products to manipulate public opinion and shape political outcomes while concealing their true identities.

In a recent report, OpenAI described how these influence networks used its AI tools to generate text and images in greater volume, and with fewer errors, than the operators could have produced manually. Despite these efforts, OpenAI stated that the campaigns ultimately failed to significantly increase their reach.

Ben Nimmo, principal investigator at OpenAI’s Intelligence and Investigations team, emphasized the importance of addressing the potential risks associated with AI-powered influence operations. He stated, “Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI. With this report, we really want to start filling in some of the blanks.”

The identified networks, including known groups such as "Doppelganger" and "Spamouflage," used a combination of AI-generated material and traditional formats to spread their messages. OpenAI also uncovered previously unreported networks from Russia and Israel, including a Russian group dubbed "Bad Grammar," which used AI models to automate content posting on messaging platforms such as Telegram.

Despite the limited impact of these operations, OpenAI remains vigilant in detecting and defending against such threats. The company is collaborating with industry peers to share threat indicators and plans to release more reports in the future to raise awareness about the risks associated with AI-powered influence campaigns.
