Donald Trump understands minority communities. Just ask Pepe Luis Lopez, Francisco Palma, and Alberto Contreras. These guys are among the candidate’s 7 million Twitter followers, and each tweeted in support of Trump after his victory in the Nevada caucuses earlier this year. The problem is, Pepe, Francisco, and Alberto aren’t people. They’re bots—spam accounts that post autonomously using programmed scripts.
Trump’s rhetoric has alienated much of the Latino electorate, a fast-growing voting community. And while it’s unclear who’s behind the accounts of Pepe and his digital pals, their tweets succeed in impersonating Latino voters at a time when the real estate mogul needs them most.
Bots tend to have few followers and disappear quickly, dropping propaganda bombs as they go. Or they just sit around and do nothing. According to the site TwitterAudit, one in four of Trump’s followers is fake, and similar ratios run through the accounts of the other presidential hopefuls. Even if most of these bots are inactive, they still exaggerate a candidate’s popularity. Our team of researchers at the University of Washington and the University of Oxford tracks bot activity in politics all over the world, and what we see is disturbing. In past elections, politicians, government agencies, and advocacy groups have used bots to engage voters and spread messages. We’ve caught bots disseminating lies, attacking people, and poisoning conversations.
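TwitterAudit's exact methodology is proprietary, but audits like it generally lean on simple profile heuristics. A minimal sketch of that kind of scoring — the field names come from Twitter's public user object, while the thresholds and two-flag cutoff are illustrative assumptions, not TwitterAudit's actual rules:

```python
# Illustrative fake-account heuristic. The profile fields (statuses_count,
# followers_count, friends_count, default_profile_image) are real Twitter
# API user-object fields; the thresholds are assumptions for this sketch.
def looks_fake(profile):
    score = 0
    if profile.get("statuses_count", 0) < 10:        # barely ever tweets
        score += 1
    if profile.get("followers_count", 0) < 5:        # almost no audience
        score += 1
    if profile.get("default_profile_image", False):  # never set an avatar
        score += 1
    following = profile.get("friends_count", 0)
    followers = profile.get("followers_count", 0)
    if following > 1000 and followers < following / 10:  # follow-spam ratio
        score += 1
    return score >= 2  # two or more red flags => likely fake

bot = {"statuses_count": 3, "followers_count": 2,
       "friends_count": 2000, "default_profile_image": True}
human = {"statuses_count": 4200, "followers_count": 350,
         "friends_count": 400, "default_profile_image": False}
print(looks_fake(bot), looks_fake(human))  # → True False
```

Run over a candidate's whole follower list, a scorer like this yields the kind of "one in four is fake" ratio the audit sites report.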
Automated campaign communications are a very real threat to our democracy. We need more transparency about where bots are coming from, and we need it now, or bots could unduly influence the 2016 election.
Plenty of smart, funny, good bots exist online. Built by everyone from digital artists to data wonks, they vary widely, and often hilariously, in sophistication. @twoheadlines randomly splices together two of the day's headlines (“rocket league sees no path forward; says he’ll skip next republican debate”); when you tweet the phrase “sneak peak,” @stealthmountain surfaces to correct your spelling to “peek.”
Lately, Silicon Valley has been touting bots as a new tool for social engagement. Policy makers, journalists, and civic leaders often use them transparently: @congressedits uncovers political interference on Wikipedia, @staywokebot critiques racial injustice, and The New York Times’ new election bot promotes political participation.
But as the power of bots grows, so does the capacity for misuse. Bots now pollute conversations around topics like #blacklivesmatter and #guncontrol, interrupting productive debate with outpourings of automated hate. We’ve seen antivaccination bots reach out to parents in a campaign to discourage child inoculations.
So it’s no surprise that bots are creeping into election politics. Researchers at Wellesley College found evidence that when Scott Brown successfully ran for senator in 2010, a conservative group used bots to attack his opponent, Martha Coakley. Gawker reported in 2011 that Newt Gingrich’s campaign bought more than a million fake followers. Outside the US, Mexico’s Institutional Revolutionary Party was caught using thousands of bots to spread campaign messages.
This is only the start. For years, robocalling and push polling have been used to manipulate voters—but not everyone is reachable by landline anymore. We believe bots could become the go-to mode for negative campaigning in the age of social media. Say the race is close in your state. If an army of bots can seed the web with negative information about the opposing candidate, why not unleash them? If you’re an activist hoping to get your message out to millions, why not have bots do it?
Don’t underestimate bots: There are tens of millions of them on Twitter alone, and automated scripts generate 60 percent of traffic on the web at large. The worst bots erode voters’ ability to stay informed by flooding the very networks people rely on for news and information.
The Federal Election Commission has issued very few advisory statements on how campaigns may use social media, and there’s no evidence it has even begun thinking about bots. Blocking speech certainly wouldn’t serve democracy, but we need to make it easier for everyone to recognize political bots.
Studies at Indiana University suggest that obvious bot accounts are much less effective at spreading political lies. Facebook and Twitter currently rely on passive, somewhat arbitrary methods for combating automated speech: they tend to wait for users to report suspicious activity, and their record of stopping harmful propaganda is patchy. Yet the platforms can readily distinguish messages posted through their own apps from those pushed out by scripts via the platform API. Just as Wikipedia alerts readers to flawed articles, social media sites should clearly identify fake users, big red flags and all. For their part, candidates need to be more vigilant in policing their accounts and should vow to fight computational propaganda.
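Twitter already attaches a source field to every tweet recording which client posted it, which is what makes that kind of labeling feasible. A hedged sketch of how a platform could use that metadata to flag likely-automated posts — the whitelist of official clients below is an assumption for illustration, not Twitter's actual policy:

```python
# Each tweet carries a "source" field naming the posting client (a real
# field in the Twitter API). Flag anything outside the official apps as
# possibly automated; this whitelist is illustrative, not Twitter's.
OFFICIAL_CLIENTS = {"Twitter for iPhone", "Twitter for Android",
                    "Twitter Web Client"}

def flag_if_automated(tweet):
    client = tweet.get("source", "unknown")
    tweet["possibly_automated"] = client not in OFFICIAL_CLIENTS
    return tweet

scripted = flag_if_automated({"text": "Vote!", "source": "my-campaign-script"})
phone = flag_if_automated({"text": "Vote!", "source": "Twitter for iPhone"})
print(scripted["possibly_automated"], phone["possibly_automated"])  # → True False
```

A label derived this way wouldn't block any speech; it would simply surface, Wikipedia-style, that a given message came from a script rather than a person's phone.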
American political discourse is ugly enough; we already endure so many dirty tricks. Demanding bot transparency would at least help clean up social media—which, for better or worse, is increasingly where presidents get elected.
Samuel Woolley (@samuelwoolley) is project manager of PoliticalBots.org. Phil Howard (@pnhoward) is a professor at the University of Oxford and the University of Washington.