Amid considerable fears about the impact that AI-generated content may have on the 2024 Presidential Election in the United States, Meta claims that less than one percent of election misinformation was created by AI, at least on its platforms, which include Facebook, Instagram, and Threads.
2024 was a big year for elections across the world. While the United States understandably sucked up a lot of the air in the room across the Western world, people in India, Indonesia, Mexico, and nations within the European Union all cast ballots this year.
In a new post, “What We Saw on Our Platforms During 2024’s Global Elections,” Meta’s president of Global Affairs, Nick Clegg, breaks down how people shared information and communicated across Meta’s platforms, including how misinformation spread, a particular area of concern for many.
“Since 2016 we have been evolving our approach to elections to incorporate the lessons we learn and stay ahead of emerging threats. We have a dedicated team responsible for Meta’s cross-company election integrity efforts, which includes experts from our intelligence, data science, product and engineering, research, operations, content and public policy, and legal teams,” Clegg writes. “In 2024, we ran a number of election operations centers around the world to monitor and react swiftly to issues that arose, including in relation to the major elections in the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico and Brazil.”
Clegg notes that no platform will ever strike the perfect balance between free speech and safety all the time, but admits that Meta’s error rates have historically been too high. However, Clegg focuses on cases in which Meta’s platforms removed harmless content, rather than instances in which harmful content remained available.
During the U.S. general election cycle, “top of feed reminders on Facebook and Instagram received more than one billion impressions,” per Clegg. These reminders included information about registering to vote, the methods by which Americans can cast a ballot, and, of course, prompts to actually vote on election day.
As for the content users shared, there were significant concerns about how AI-generated content would circulate and what impact it could have on misinformation. While some AI-generated content got a lot of attention online, Meta says that such content, including deepfakes, did not spread at the levels many feared it would.
“From what we’ve monitored across our services, it seems these risks did not materialize in a significant way and that any such impact was modest and limited in scope,” Clegg says.
“While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content. During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than one percent of all fact-checked misinformation,” he continues. Clegg doesn’t specifically discuss how many impressions AI-generated, fact-checked misinformation may have received. Still, as a share of all identified misinformation, AI content was a tiny piece of the pie.
Meta, which has its own generative AI tool, Imagine, rejected nearly 600,000 requests to generate images of the candidates and current President Joe Biden. Clegg adds that Meta signed the AI Elections Accord earlier this year, pledging to “help prevent deceptive AI content from interfering with this year’s global elections.”
Clegg also touches on foreign interference, saying that in 2024, Meta’s dedicated teams took down “around 20 new covert influence operations around the world, including in the Middle East, Asia, Europe, and the U.S.”
“With every major election, we want to make sure we are learning the right lessons and staying ahead of potential threats,” Clegg says. “Striking the balance between free expression and security is a constant and evolving challenge.”
Image credits: Header photo licensed via Depositphotos.