Hack the Vote: Is AI on the Ballot in 2020?
Though the November U.S. elections are still months away, artificial intelligence has already emerged as a potent technology that can sway voters right up to the moment the polls close that fateful Tuesday. Yet just how AI is used in the election’s contest between truth and disinformation will shape the narrative of this emerging technology’s impact on American democracy for years to come.
Experts on technology and democracy have been sounding the alarm since Russian hackers exploited social media algorithms, many of them powered by AI software, during the 2016 election. In 2018, Elaine Kamarck wrote in a Brookings Institution report, “Malevolent Soft Power, AI, and the Threat to Democracy,” that “by 2016, social media had become a weapon against democracy as opposed to a tool for democracy. Unless we are vigilant, the new world of artificial intelligence (AI) has the potential to be an even more dangerous weapon in the years ahead.”
U.S. officials blamed Russia for its influence operations in 2016, and nations such as China are investing heavily in AI. Meanwhile, private companies, individuals and activist groups are developing ever more sophisticated commercial AI software that can be used to undermine the democratic process in the U.S. and around the world through two powerful tactics: targeted disinformation and deepfakes.
Disinformation on social media has proven to be one of the more difficult challenges, particularly for self-policing platforms whose AI-enabled, ultra-precise targeting of users underpins their economic models. Even in the final months of campaigning in the 2020 primary cycle, Facebook continues to run political advertisements without fact-checking them, even when those candidate-oriented messages amount to disinformation aimed at users where they are most susceptible. “While Twitter has chosen to block political ads and Google has chosen to limit the targeting of political ads, we are choosing to expand transparency and give more controls to people when it comes to political ads,” a Facebook executive wrote in a Jan. 9, 2020, blog post explaining the company’s decision.
Many of these messages are spread by fake users. In November, a team of researchers from the Digital Forensic Research Lab (DFRLab) and Graphika announced the results of “Operation FFS” (Operation Fake Face Swarm), in which investigators uncovered an AI-enabled ring of fake accounts affiliated with “The Beauty of Life,” or “The BL,” spreading disinformation on Facebook and other social media platforms. In a twist, many of those accounts contributed to closed user groups that were themselves run by fake users. Content varied, but notably for the upcoming U.S. election, it included more than 80 pro-Trump pages and accounts. A common trait of these accounts was AI-generated fake profile pictures, according to the DFRLab and Graphika report; many of the accounts were also set to auto-post content so their activity would stay algorithmically relevant. “The BL network is the first time the authors have seen AI-generated pictures deployed at scale to generate a mass collection of fake profile pictures deployed in a social media campaign,” according to the report.
Beyond watchdogs like DFRLab and Graphika, companies such as Factmata use AI to hunt down fake accounts and disinformation and to label them in a manner the company likens to a “nutrition label” for content. What distinguishes the 2020 election cycle from 2016, however, is an emerging front in the disinformation battle: deepfakes, which rely on generative adversarial networks (GANs) to modify videos into realistic and usually provocative facsimiles. Just as the first televised presidential debate, between John F. Kennedy and Richard Nixon in 1960, showed the power of a new visual medium to shape voter perceptions of a candidate, a recent spate of AI-altered videos portends another before-and-after moment for U.S. politics; the clips depict leading politicians such as House Speaker Nancy Pelosi bumbling and slurring through a speech, or President Donald Trump walking his son-in-law through the basics of money laundering.
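The adversarial idea behind a GAN can be illustrated without any video at all. The following is a minimal, hypothetical sketch (not production deepfake code, and far simpler than the deep networks real deepfakes use): on one-dimensional toy data, a one-parameter “generator” learns to shift random noise until a logistic-regression “discriminator” can no longer tell its output from the “real” samples. All names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: samples near 3.0 (a stand-in for genuine footage).
def real_batch(n=64):
    return rng.normal(3.0, 0.5, n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = 0.0        # generator: shifts noise by a learned offset
w, b = 0.1, 0.0    # discriminator: D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(2000):
    x_real = real_batch()
    x_fake = rng.normal(0.0, 0.5, 64) + theta

    # Discriminator ascent on log D(real) + log(1 - D(fake)):
    # learn to score real samples high and generated ones low.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on log D(fake) (the "non-saturating" loss):
    # push fakes toward whatever the discriminator currently calls real.
    d_fake = sigmoid(w * x_fake + b)
    theta += lr * np.mean((1 - d_fake) * w)

# The generator's offset drifts toward the real-data mean as the two
# models compete, which is the core GAN dynamic.
print(theta)
```

In a real deepfake pipeline the generator is a deep network producing video frames and the discriminator is a convolutional classifier, but the push-pull objective is the same; that is also why detection is hard, since the generator is trained precisely to defeat a detector.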
The threat is serious enough that the House Intelligence Committee held hearings in June 2019 to explore how to detect deepfakes and what can be done to thwart them. At the hearing, Clint Watts, a former FBI agent and expert on disinformation and counterintelligence with the Alliance for Securing Democracy, told lawmakers that deepfake tactics carry risks for society that go beyond embarrassing politicians. “Regardless of whether the purveyor of deepfakes is international or domestic, the net effect will be the same: degradation of democratic institutions and elected officials, lowered faith in electoral processes, weakened trust in social media platforms, and potentially sporadic violence by individuals and groups mobilized under false pretenses,” he said.
Elected officials clearly see the threat posed by deepfakes. Last year, California and Texas passed legislation outlawing their use in cases such as revenge pornography and political disinformation. Within the tech industry, collaborations between companies and academics are at the vanguard of combating the threat. A consortium of tech giants, including Facebook, Amazon and Microsoft, rolled out a “Deepfake Detection Challenge” last year that runs through March 31 and will award prizes valued at up to $1 million. Twitter moved to ban deepfake videos in February, stating plainly, “You may not deceptively share synthetic or manipulated media that are likely to cause harm.”
Yet as with other forms of asymmetric conflict or competition, the edge increasingly seems to go to adversaries able to nimbly exploit gaps and loopholes. With disinformation and deepfakes, there are plenty, which will keep AI in the spotlight—for better or worse—in the 2020 election.