Seeing Double: How I Learned to Stop Worrying and Love the Bot
These days, social media bots are everywhere. They ask deep, pseudo-intellectual questions like a blazed Michel Foucault. They write half-nonsensical jokes like a lobotomized John Mulaney. They even combine real tweets to make questionable poetic couplets as if Shakespeare had gone to Williams. At this point, I wouldn’t be surprised to discover that my own co-columnist is a bot. Surely no human could use the words “democracy,” “unions” and “norms” with such frequency.
Indeed, bots have become an increasingly important part of our political discourse as people of all age groups spend more time on social media. Many analysts see them as a serious threat to truth itself. It’s not a hard case to make. Bots have driven the mass dissemination of disinformation about topics like the 2020 presidential election and the Covid-19 pandemic, either by impersonating real people or by “hacking” social media algorithms through the mass-sharing of false messages.
The most common response to the problem is a push for more extensive regulation of bots and disinformation. But as recent years have shown, regulation is an imperfect solution, difficult to implement on a wide and consistent scale. Scanning the flood of social media posts for bots and disinformation is like trying to drink from a fire hose: there’s too much to filter, even for artificial intelligence, and regulators can only target the most public and prevalent cases. It seems unlikely they will get a handle on the deluge of lies anytime soon. But what people often ignore is that bots can spread facts as well as lies. That’s why I propose a different approach: fighting fire with fire.
Like any tool or technology, social media bots can be used for good or ill. Instead of attacking the legitimacy of an election, bots can post and share public health information or amplify the voices of experts. Governments can do some of this work, but as our two most recent presidential elections have shown, even democratically elected governments cannot always be relied upon to fight disinformation. Therefore, some of the responsibility must also fall to you, the internet-savvy, chronically bored and infuriatingly politically minded students of Amherst College.
Bots allow regular people like you and me to amplify our voices a hundredfold. During the 2016 election, for example, a single internet user, going by the name of MicroChip, created thousands of bots that generated tens of thousands of pro-Donald Trump retweets per day. And these sorts of messages can produce real results. This past year alone, the wave of bot-encouraged Covid-19 disinformation has undoubtedly claimed thousands of lives. To leave such a powerful megaphone to be misused by bad-faith actors would be a disservice to the truth and, quite literally, a public health hazard.
If an individual does decide to fight the good fight by using bots to counter online disinformation, they must follow certain rules. After all, even people with the best intentions can turn to the dark side if they imitate the methods of online disinformers too closely. Below is the botmaster’s oath, the code of conduct for conductors of online coding (with a sketch of what an oath-abiding bot might look like after the list):
- Never hide the identity of your bots. Pretending to be a real person is obviously immoral and manipulative. Besides, a bot doesn’t need to masquerade as a person to be effective. Even if no one reads your bots’ messages, they can still have a profound impact on algorithms, hashtags and the overall direction of an online discussion.
- Double and triple check your facts. The enemy here is disinformation of all kinds, and spreading a lie accidentally is no less harmful than doing it on purpose.
- Choose important issues to discuss. Not all factual stories need to be told. Wasting people’s time with pointless updates is unlikely to inform or persuade them of anything. Just look at the emails AAS sends us. Only use your bot empire to disseminate information that could tangibly improve the well-being of the public.
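For the technically curious, here is a minimal sketch of what an oath-abiding bot might look like. Everything in it (the `publish` stand-in, the `VETTED_SOURCES` list, the topic names) is a hypothetical placeholder of my own invention; a real bot would swap in an actual social media platform’s API.

```python
# botmasters_oath.py: an illustrative sketch, not a deployable bot.
# Every name here (publish, VETTED_SOURCES, APPROVED_TOPICS) is a
# hypothetical placeholder; a real bot would call an actual platform API.

from typing import Optional
from urllib.parse import urlparse

# Rule 1: never hide the identity of your bots.
DISCLOSURE = "[Automated account run by @your_handle]"

# Rule 2: double and triple check your facts; only link sources
# you have personally vetted.
VETTED_SOURCES = {"cdc.gov", "who.int", "apnews.com"}

# Rule 3: choose important issues; skip the pointless updates.
APPROVED_TOPICS = {"public health", "election logistics", "disaster alerts"}


def is_vetted(url: str) -> bool:
    """Return True if the link's host is on the hand-vetted list (Rule 2)."""
    host = urlparse(url).netloc.lower()
    return any(host == s or host.endswith("." + s) for s in VETTED_SOURCES)


def compose_post(topic: str, summary: str, url: str) -> Optional[str]:
    """Build a post that satisfies all three rules, or return None."""
    if topic not in APPROVED_TOPICS:  # Rule 3: not worth anyone's time
        return None
    if not is_vetted(url):  # Rule 2: unvetted source, don't risk it
        return None
    return f"{DISCLOSURE} {summary} {url}"  # Rule 1: disclosure leads


def publish(post: str) -> None:
    """Stand-in for a real platform API call."""
    print("Would post:", post)


if __name__ == "__main__":
    post = compose_post(
        topic="public health",
        summary="Updated vaccine guidance is out:",
        url="https://www.cdc.gov/vaccines",
    )
    if post is not None:
        publish(post)
```

The point of the structure is that the disclosure string leads every message, so even a screenshot taken out of context can’t pass the bot off as a person, and any post that fails the source or topic checks simply never goes out.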
These three rules map out a new role for bots in social media: champions of truth that spread vital information to people who wouldn’t get it otherwise. In the past, the country relied on public broadcasting for a shared foundation of facts. But the days of nonpartisan television are over. I hope to live in a world where a body of people, of varying political affiliations but all devoted to honest discourse, takes it upon itself to “broadcast” its own programs. This new, bot-based broadcasting could take a plethora of forms, from public service announcements to article links to memes. That way, no age group or demographic will be left behind in the pursuit of truth. Active and conscientious use of bots won’t solve online disinformation by itself, but it will mark a milestone in the battle against falsehood.
People who condemn all social media bots as harmful to the public are missing the point entirely. Bots are simply a new and powerful tool for communication, analogous to radio or television. Right now, they’re being used for nefarious purposes. But bots can also help us fight disease, disinformation and despair. Above all, bots are here to stay. Rather than fight the tide, it’s time to ride the wave.