Crazy things happen when armies of chatbots execute algorithms designed to re-program your mind. Truth and reason lose their importance, internet trolls take on epic proportions, people get sucked into mindless discourse. Public opinion gets skewed, elections get rigged, and you wonder who’s really programming who.

Consider this: AI has already beaten the very best human players of chess and Go. These two games are said to be so sophisticated that winning against masters takes extraordinary creativity and intuition refined over a lifetime of practice. Our species was recently and decisively trounced by artificial ingenuity and sheer computing power.

Artificial intelligence can also manipulate how we form opinions, views, and expectations in our personal, political, and purchasing decisions. With names like IBM’s Deep Blue and Google’s DeepMind, you can easily imagine how AI can worm deep into our psyche to reconstruct our worldviews.

In the 2016 US election battle between Hillary Clinton and Donald Trump, both presidential campaigns deployed legions of social media bots to help sway public opinion. These bots are software-automated accounts — on Twitter, Facebook, and other social media sites — that algorithmically like, post, comment, tweet, retweet, reply, and follow other accounts.

A study featured in the MIT Technology Review revealed that almost 20% of all election-related posts on Twitter came from these opinion-bending robots. That’s hardly surprising since a chatbot can post more than a thousand tweets per hour. Even the most rabid internet trolls can only envy that mind-boggling rate.

Depending on the candidate you were rooting for, you likely agreed and argued with robots during the campaign period. What’s even more alarming is that in many social media conversations, you might have made a rational and emphatic case for your candidate without realizing you were talking to a chatbot.

Don’t feel bad if you’ve been duped. One study showed that chatbots deceive people into thinking they’re talking to another human 30% of the time. Even during the early stages of chatbot technology in the mid-1960s, a conversational program named ELIZA fooled a lot of people into thinking it was a person. Many scholars consider ELIZA to be the first computer program to pass the Turing Test, a test that checks whether a machine can exhibit intelligent behavior that rivals or is indistinguishable from that of a human being. Five decades have passed since then, and chatbot chatter has undergone quite a few upgrades.


The Mathematics of Using Bots

Even if users realize a particular social media account is not human, the sheer number of automated accounts can overwhelm detection and shift public perception about different issues. Since bots can like, retweet, and follow accounts, the landscape of online conversations can easily be skewed when millions of tireless bots join the chatter.

Who wouldn’t want to have thousands of followers, even if half of them are not human? Who wouldn’t want their posts to get thousands of likes and retweets? After all, even democracy is a numbers game.
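The numbers game can be made concrete with a toy back-of-the-envelope sketch. The figures below are purely hypothetical assumptions (100,000 humans, 1,000 bots, a 50x posting rate), chosen only to show how a tiny automated minority can dominate the apparent sentiment of a conversation:

```python
# Toy simulation with hypothetical numbers: how a small minority of bots
# can skew the apparent sentiment of an online conversation.

def apparent_support(humans, human_support_rate, bots, bot_rate_multiple=50):
    """Fraction of all posts favoring one side, assuming every bot backs
    that side and posts bot_rate_multiple times as often as a human."""
    human_posts_for = humans * human_support_rate
    human_posts_against = humans * (1 - human_support_rate)
    bot_posts = bots * bot_rate_multiple  # all bots favor one side
    total = human_posts_for + human_posts_against + bot_posts
    return (human_posts_for + bot_posts) / total

# 100,000 humans split 50/50, plus just 1,000 bots (1% of the crowd)
# posting 50x as often: the conversation now *looks* two-thirds one-sided.
print(round(apparent_support(100_000, 0.5, 1_000), 2))  # -> 0.67
```

Under these assumed numbers, bots making up one percent of accounts turn an evenly split public into what reads as a two-to-one landslide — which is the practical advantage of controlling the most bots.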

When things boil down to numbers, whoever controls the most bots has a real, practical advantage. President-elect Donald Trump, for example, cited his 30-million strong following on Twitter and Facebook as evidence of mainstream popularity not reflected in traditional polls. Trump claimed in a 60 Minutes interview that “The fact that I have such power in terms of numbers with Facebook, Twitter, Instagram, etc. I think it helped me win all of these races.”

A paper published by Oxford University’s Project on Computational Propaganda found that the vast majority of election-related bots were rooting for the real estate mogul. The MIT study cited earlier put this support at around 75% of bot traffic, overwhelmingly positive in tone. In contrast, bots working for Hillary Clinton’s campaign made up only a quarter of the total and delivered their message in a tone more neutral than positive.

“I think that social media has more power than the money [Clinton] spent,” Trump explains. His social media following, whether human or bot, overpowered the much higher digital and traditional advertising spend incurred by the Clinton campaign.


Negative Implications of Mind-Bending Bots

The political bot is not unique to America. Campaigns around the world have been integrating bots and other social media assets to skew online polls, inflate follower counts, pad social media traffic, and inject brands into trending topics.

Bloomberg published an insightful article on how young Colombian IT specialist Andrés Sepúlveda manipulated election results across South America for more than 10 years. On top of stealing campaign strategies and spying on opposition parties, Sepúlveda used $600,000 and an army of chatbots to sway public opinion on social media and get Enrique Peña Nieto elected as President of Mexico. He’s also hacked elections in Nicaragua, Panama, Honduras, El Salvador, Colombia, Mexico, Costa Rica, Guatemala, and Venezuela and laundered his technical operations through a slew of middlemen and shell companies. 

“My job was to do actions of dirty war and psychological operations, black propaganda, rumors—the whole dark side of politics that nobody knows exists but everyone can see,” admits Sepúlveda. The political hacker is currently serving 10 years in a Colombian prison for digital crimes, espionage, data theft, and hacking related to the 2014 Colombian presidential election. Many of this hacker’s techniques — such as the extensive use of chatbots — figure prominently in the recently concluded US national elections.

Social media operators in the Philippines, the UK, and China have also been very busy trying to influence public opinion with automated technologies. After the Davao terrorist bombing in the Philippines, fake Facebook accounts and pages deliberately spread the false claim that the perpetrator had been caught, lending justification to the government’s draconian measures after the attack. Before the controversial Brexit vote, researchers at Oxford’s Computational Propaganda project discovered that over 314,000 accounts that tweeted about the subject were automated, including the two most active accounts on both sides.

The Washington Post accuses the Chinese government of faking over 450 million social media comments every year that “praise and distract.” Instead of stirring up controversy or breeding negativity about foreign countries, these bots promote relentlessly positive sentiment about the Chinese government and overwhelm smaller numbers of dissenting opinions and “dangerous” protests.

While bot effectiveness varies depending on AI quality, campaign strategy, and audience demographics, the increasing use of chatbots as propaganda tools has serious implications:

  1. Bots can blur the distinction between facts and fabrications. During the presidential elections, humans as well as bots retweeted links to fake news sites that sprang up like mushrooms. These sites leveraged sensational headlines and false content to monetize people’s interest in the campaigns. Buzzfeed discovered that many of the pro-Trump sites were built in the Balkans by young, unscrupulous developers out for some click-generated cash.  
  2. Bot usage can degenerate into a new version of spam. Bots are extensively used in marketing, but if deployed unprofessionally, they can become annoyingly intrusive on increasingly private channels like SMS, Facebook, and other social networks and messaging platforms. 
  3. Bots can be used to twist socio-political discourse into a purely numbers game. The weight, soundness, and factuality of an argument won’t matter when people merely see strength in numbers. This is especially true if a large number of people are unaware that bots have infiltrated their ranks. Malicious misuse of bots can have dangerous consequences in the realm of public opinion. 


Keeping Bots Safe & Sensible

If bots were intrinsically evil, doing away with them would be a no-brainer, but the opposite is true. Bots offer a net benefit whose potential is only beginning to be explored. There are bots that actively strive for good, and many support important human activities such as managing your finances, keeping fit, and learning a new language. Virtually any business in any industry has valid use cases for conversational technology. 

To curb the negative impact of bots, technology companies are improving their methods for tracking, regulating, and neutralizing malevolent chatbots. Social media users can also raise their awareness of how to detect and report abusive bots. Chatbots are an inevitable evolution of technology that will impact nearly every aspect of our lives.
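Detection often starts with simple behavioral signals. The sketch below is purely illustrative — the thresholds are assumptions of this article, not any platform’s actual policy, and real systems use far more sophisticated machine-learning signals:

```python
# Toy heuristic bot screen (illustrative only). Flags accounts whose
# behavior looks implausible for a human. All thresholds are assumptions.

def looks_automated(posts_per_hour, duplicate_ratio, followers, following):
    """Return True if simple behavioral heuristics suggest automation."""
    if posts_per_hour > 60:
        # Humans rarely sustain more than a post per minute for hours;
        # the article notes bots can exceed a thousand tweets per hour.
        return True
    if duplicate_ratio > 0.8:
        # Mostly copy-pasted content is a classic automation signature.
        return True
    if following > 5_000 and followers < following / 100:
        # Mass-following with almost no reciprocity suggests a follow bot.
        return True
    return False

print(looks_automated(1000, 0.95, 50, 8000))  # chatbot-like profile -> True
print(looks_automated(3, 0.10, 400, 350))     # typical human -> False
```

Heuristics like these are easy for bot operators to evade by throttling activity, which is why detection remains an arms race rather than a solved problem.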

Bots only do what they are programmed to do, so we humans need to actively work to keep conversations safe and sensible.