New Delhi: A recent report by civil rights groups has shed light on an alarming trend of political advertisements containing misinformation, hate speech, and incendiary content on Meta-owned social media platforms such as WhatsApp and Instagram during the ongoing general election. The report, released by the US-based Eko and India Civil Watch International, raises concerns about the potential for electoral manipulation by malicious actors.
According to the report, 22 politically incendiary advertisements were identified across Meta’s advertising platforms, 14 of which passed Meta’s quality filters. Eko said, however, that the ads in question were taken down before they could go live on the platforms.
The report alleged that the ads called for violent uprisings targeting Muslim minorities, spread disinformation exploiting communal and religious conspiracy theories, and incited violence through Hindu supremacist narratives. One approved ad even contained messaging similar to a doctored video of Union home minister Amit Shah.
This is not an isolated incident: another report, by Global Witness, found that 48 advertisements portraying violence and voter suppression passed YouTube’s electoral quality-check filters.
Despite responses from Meta and YouTube defending their content-moderation efforts, industry stakeholders and policy experts are calling for third-party audits of Big Tech’s policy enforcement, especially during election periods. They argue that malicious parties are exploiting clear gaps, and that independent scrutiny may help clarify which filters are working and which are not.