With the first presidential primary just weeks away, Alex Stamos already is bracing for problems at the ballot box.
Stamos, the former chief security officer at Facebook who is now an adjunct professor at Stanford University’s Center for International Security and Cooperation, fears generative AI will turbocharge the spread of disinformation.
“What once took a team of 20 to 40 people working out of [Russia or Iran] to produce 100,000 pieces can now be done by one person using open-source gen AI,” he said in a recent interview.
Stamos has cause for worry. A confluence of factors could add up to a fraught election year in 2024, security experts say.
More than 2 billion people around the world, including in the U.S., the EU and India, are expected to vote in a record number of elections next year. At the same time, technology and social media have accelerated the spread of misinformation, while Big Tech layoffs have gutted the trust and safety teams responsible for election protection.
Also see: 2024 elections are threatened by AI abuse, experts say. Literacy is key.
The topic of online election interference was front and center at a series of AI forums on Capitol Hill this year convened by Sen. Chuck Schumer, D-N.Y., with lawmakers, executives and privacy advocates. “Our world is already changing in dramatic ways because of artificial intelligence, but we’re likely just at the start,” Schumer said in September, adding that he hoped the meetings could help “supercharge” the formation of AI regulations.
Election interference is a threat at every stage of the process — from misinformation that can sway votes, to breaches of voting systems and manipulation of voting results, Megan Shahi, director of technology policy at the Center for American Progress, said in an interview.
“The threat is everywhere in the election cycle, and ChatGPT makes a lot of existing risks even worse,” added Shahi, who spent four years on the policy team at Meta Platforms Inc. and a little over a year at Twitter/X.
“Meta did a good job in 2018” after the 2016 debacle surrounding Cambridge Analytica, but 2024 presents a new challenge, she said.
Meta has steadfastly downplayed “what-if” election scenarios. “Protecting the U.S. 2024 elections is one of our top priorities and our integrity efforts continue to lead the industry,” the company said in a statement to MarketWatch.
Also see: Meta updates policy to combat deceptive AI-generated political ads in 2024
Shahi, Stamos and others suggest several solutions, some of which are already being addressed by Meta, Microsoft Corp. and others: Platforms should publish at least two reports detailing how AI is being used in content moderation and election risk-mitigation efforts, along with any AI-generated content issues encountered, and platforms should offer transparency into political advertisements.
Earlier this year, Alphabet Inc.’s Google launched an ad transparency center and said it blocked or removed more than 5 billion problematic ads in 2022.
More: Google to political advertisers using AI: Be ‘clear’ about any digitally created content
Starting next year, Meta will require advertisers to disclose any digital creation or alteration of images, videos, or audio within ads on Facebook and Instagram.
But with a virtually unlimited supply of AI algorithms at their disposal, nation-state cybercriminals gain a “lot more velocity” in attempts to distribute targeted disinformation or infect voting machines and voter-registration databases, said Casey Ellis, chief technology officer at cybersecurity firm Bugcrowd.
Ellis cited a famous quip, often attributed to Winston Churchill, about how quickly misinformation spreads: “A lie gets halfway around the world before the truth has a chance to get its pants on.”
“We won’t know until it happens,” lamented Jessica Furst Johnson, a lawyer specializing in election integrity online. “The great unknown is what gen AI can do.”