Seeing Double: Should Facebook Ban Political Ads?

In recent years, political advertisements on social media have become a hotly contested issue. Much of the criticism revolves around Facebook, which runs the vast majority of social media political ads. Some say that these ads help spread misinformation and weaken our democracy. Others say they are an important part of the free marketplace of ideas. In light of the controversy, some companies, like Twitter and LinkedIn, have already banned political ads. Should Facebook do the same?

Thomas’ Take 

Social media presents a problematic platform for political ads for three reasons. First, social media is a natural breeding ground for disinformation, which can then spread like a virus. In a speech at Georgetown University last year, Mark Zuckerberg declared that Facebook would never fact-check either posts or advertisements on its platform, citing the platform’s dedication to free speech. If Facebook remains committed to that irresponsible policy, the implications for political advertising are potentially devastating.

For instance, we’ve already seen President Donald Trump’s reelection campaign publish blatantly false attack ads against Democratic nominee Joe Biden, which then received thousands of clicks. In 2016, even Russia got on board, spending thousands of dollars on misleading attacks against Hillary Clinton, and American intelligence agencies have predicted that Russia will do so again in 2020, with other American rivals like Iran and China joining in as well. There’s no reason to think that this epidemic of falsehood will stop unless serious action is taken.

All this disinformation wouldn’t be such a big issue if not for the second unique quality of social media: its ability to target individuals based on their interests and search history. With commercial advertisements, this can be annoying and sometimes alarming. But with political advertisements, it’s a recipe for mass propaganda and extremism.

Brad Parscale, the digital strategist for Trump’s 2016 campaign, estimated that the campaign’s social media ads were “100 [times] to 200 [times]” more effective than those of the opposition because of more precise targeting. Part of the reasoning behind Parscale’s boast was the Trump campaign’s use of illegally obtained user data from Cambridge Analytica. Nor is this a problem unique to the Trump campaign. Many other groups have gotten in trouble for targeting emotionally vulnerable individuals with provocative and misleading ads. With social media, it doesn’t matter who has the more factual or persuasive argument. All that matters is who has more money and better user data.

Social media’s third major problem is the lack of accountability. Recently, the Michael Bloomberg campaign figured out that, because of loopholes in Facebook’s ad policy, it could post political ads anonymously, with no public record of where the posts came from. That means that political organizations can escape blame for misleading ads. If that doesn’t tempt campaigns to use the harmful tactics listed above, I don’t know what would.

But the accountability problems don’t stop with the political campaigns. Facebook is simply not the right organization to control the nation’s political discourse. Despite its stated commitment to free speech, Facebook has repeatedly made huge censorship concessions to repressive governments around the world. As a private company, Facebook has the right to decide whose advertisements it runs. If it chose, Facebook could run exclusively neo-Nazi propaganda.

To make matters worse, unlike radio and TV outlets, Facebook is not classified as a publisher, so it isn’t liable for any false or harmful information contained in its ads. In a country where the vast majority of the adult population uses Facebook, that’s an enormous and unacceptable amount of political power for one company to possess. 

My co-columnist argues that we should trust Facebook to clean up its act and change its ad vetting policy. But the problem isn’t just Facebook’s policies. The problem is Facebook itself. With hundreds of millions of posts per day, Facebook is such a huge organization that effective regulation is almost impossible. Senator Elizabeth Warren (to whom my co-columnist prays every night) has argued many times that it is impossible to reform or regulate Facebook’s “Disinformation-for-Profit” system, either from within or from outside.

Even if Facebook did suddenly decide to fact-check political ads, and had the capability to do so, we’d be just as badly off, because then Facebook would become the final arbiter of truth in politics. In short, Facebook’s political ad system is rotten to the core, and it’s not likely that any half-measures will fix it.

No form of advertising is as important as political advertising. Political ads allow us to learn about politicians’ platforms and help us decide who should run our economy, distribute services and, yes, even regulate social media. But social media turns the tradition of political ads into a dystopian parody of itself, where victory goes to the best liar, all under the watchful eye of Corporate America.

Even Facebook acknowledges the danger its political ads pose to democracy. That’s what spurred Mark Zuckerberg to recently announce a ban on political ads in the week before the November election, saying that ads could cause “an increased risk of civil unrest.” And while that’s all very good, I’d personally prefer a decreased risk of violence and mass misinformation year-round, rather than just for a week.

My co-columnist will argue that if political ads leave the scene, the void will be filled with big business lobbyists trying to push their commercial agendas. While big business advertising is certainly a cause for concern, it’s nowhere near as harmful or prevalent as political advertising. This is because only a tiny portion of social media ads are paid for by big businesses. At Facebook, for example, about three-quarters of ad revenue comes from small businesses. The top 100 companies advertising on Facebook, in contrast, make up only 6% of ad revenue. So if political ads left Facebook, you wouldn’t suddenly see Exxon Mobil or Monsanto all over your feed. More likely, you would see ads for local barbers or bookstores. 

It’s obvious that something is seriously wrong with social media politics in America. A plague of virtual disinformation costs real lives. Political campaigns and organizations target and radicalize vulnerable individuals. Through social media, a single private company can exert almost complete control over online political advertising and, thus, offline politics. Many of these problems are fundamentally linked to Facebook’s ability to run political ads. Of course, banning political ads wouldn’t solve all of social media’s problems by itself. But it needn’t be the last step towards that goal. What we need right now is a decisive first step on the road to a better internet. 

Cole’s Comment

The only social media that I use regularly is Twitter, where I doom-scroll through the news of wildfires, political outrage and the occasional incomprehensible K-pop meme. Nearly every time I see an ad on Twitter, I block the account, which means that I’ve blocked more than 1800 accounts ranging from Spotify to some person named “ALMIGHTY GOAT TREY.” In other words, I despise ads — on the internet, on billboards and in newspapers. 

So when Twitter banned political ads last October, I was excited. Say goodbye to misinformation and irritating political advertisements, at least on my platform of choice! 

But a few weeks later, I was dismayed to see an ad praising natural gas as a “clean fuel” for the future, which it isn’t. The ad was run by none other than Exxon Mobil, the company that knew about the dangers of climate change in 1977 and continued to spread disinformation about climate science anyway.

I was confused. How could Twitter allow such a blatantly political ad? And if big businesses are allowed to run ads hawking their dangerous wares, why can’t climate advocacy groups run counter-ads exposing those lies?

It turns out that blanket bans on “political ads” by social media giants don’t really ban ads with political content but rather restrict which groups can advertise. Twitter’s ban specifically prohibits ads from PACs, super PACs and 501(c)(4)s, regardless of whether or not they reference an election, candidate or ballot issue. And most grassroots organizations — particularly those involved in advocacy — are 501(c)(4)s, a special IRS designation for “social welfare” groups. Under these policies, they aren’t allowed to advertise at all, even in rebuttal to overtly political ads from opponents.

I agree with my co-columnist that Facebook’s decision to ban political ads in the week immediately before the presidential election will probably save lives and improve democracy. But banning ads year-round will only allow misinformation from established business interests to flourish. Fossil fuel companies will still be allowed to play down climate change, and pharmaceutical and insurance companies will still be permitted to advocate against single-payer healthcare.

Instead, we should expect better from platforms like Facebook and Twitter. New and unknown politicians, especially progressive challengers, rely on political ads to increase name recognition and build momentum. Rather than banning all political ads to eliminate the harmful ones, these massive internet companies should ensure that political ads are neither misleading nor dangerous. In fact, they have a responsibility to do so. When social media companies ban all political ads, they abdicate their responsibility to the community just as much as when they engage in no ad moderation at all. 

It’s precisely because of Facebook’s lack of moderation that political ads have become so bad. If the Trump campaign’s microtargeting is actually 100 to 200 times more effective than Democrats’, as my co-columnist discusses above, it’s because of the campaign’s reliance on user data notoriously stolen by Cambridge Analytica right under Facebook’s nose. In fact, Brad Parscale, the strategist who made those claims, is intimately connected with Cambridge Analytica. When the theft was discovered, Facebook did almost nothing to prevent that data from being misused — it didn’t even publicly discuss the breach.

Even now, the Trump campaign regularly runs ads that break Facebook’s policies. Over the summer, Trump’s ads targeted personal characteristics like gender, which all advertisers on the platform are forbidden from doing. Facebook only took action after an independent investigative reporter broke the news about the ads, days after they were first put up.

If Facebook had actually followed its own policies, neither of these techniques would have worked. And if Facebook had more stringent ad policies, we’d all be much better off. As I argued in my last piece, we should protect our privacy by limiting data collection and targeted ads from all sources, not just political groups. Without targeting, political ads — and ads in general — become far less dangerous.

Finally, banning political ads would barely impact misinformation on Facebook. Most dangerous claims on social media spread organically, from person to person, rather than through paid advertising. Coordinated disinformation campaigns create deceptive fake news sites and then push those links through fake accounts and troll armies. As more real people start to see that content, some believe it and share it with their own networks. Since we all trust our family and friends more than random advertisers or accounts, fake news can reach huge audiences this way. Through these organic dissemination tactics, coronavirus misinformation has spread faster than the virus it denies.

By banning political ads, social media companies gain crucial cover to avoid responsibility for the organic spread of dangerous falsehoods on their platforms. Instead, it’s time we hold social media companies responsible for what they allow to spread in addition to what they’re paid to advertise.

No matter what, Facebook will have to be some sort of arbiter of what is true and false on its platform. This has always been the case — false and misleading commercial ads are banned on Facebook, and have been for some time. But instead of handing Facebook the reins of truth, a prospect my co-columnist rightly worries about, we can regulate through law what companies like Facebook must do about misleading and dangerous speech online. Which posts should sites remove? Which posts should they flag? And which posts should they prevent from spreading? These are questions for a democratic society, not individual companies.

At the same time, we should build coalitions of nonprofit, impartial news sources and third-party moderators that can fact-check ads and popular posts. That way, Facebook loses control over both the practice of moderation and the standards we expect from it.

Only a lack of imagination can force us to choose between a blanket ban on political ads and the status quo. By ending microtargeting and outsourcing fact-checking to trusted third parties beholden to federal law and the American people, we can eliminate dangerous political ads while allowing those that enrich our democracy. And those fact-checkers can address organic misinformation at the same time, with the same techniques and standards.

I hate ads enough that I refuse to browse the web without an ad-blocker. However, banning political ads outright would do far more harm than good. We should require social media companies to do what’s best, rather than what’s easiest. In a connected world, it’s the only way to save our democracy.