Generative artificial intelligence poses a significant threat to election security, according to a recent federal bulletin. The bulletin, compiled by the Department of Homeland Security and distributed to law enforcement partners nationwide, warns that both foreign and domestic actors could exploit generative AI technology to disrupt the upcoming 2024 election cycle.
Generative AI is capable of creating "deepfake" videos, in which individuals appear to say things they never actually said. This technology could be used to spread misinformation and sow discord, potentially influencing the outcome of the election. Director of National Intelligence Avril Haines testified before Congress about the dangers of generative AI, emphasizing its potential to let foreign influence actors produce convincing, tailored messaging at scale.
One example cited in the bulletin was a fake robocall impersonating President Joe Biden, urging recipients to delay their vote until the general election. The bulletin also highlighted a case in southern India where an AI-generated video influenced voters to support a specific candidate on election day, leaving officials with no time to counter the false information.
The bulletin also raised concerns about the use of AI against election infrastructure, including the possibility that violent extremists could leverage generative AI to plan attacks. Although the Department of Homeland Security has not yet observed violent extremists using AI chatbots for election-related purposes, it judged the threat to be a continuing concern.
As the 2024 election cycle approaches, the bulletin serves as a stark warning about the evolving landscape of election security and the need for heightened vigilance against the misuse of artificial intelligence technologies.