Facebook vows to block ads on election day; here's why it won't make a difference.

Polina Kroik
Published in Digital Diplomacy
Oct 14, 2020

With the presidential election less than a month away, concerns about Facebook's influence on its outcome are again in the headlines. Following the President's call for his supporters "to go into the polls and watch very carefully," the tech giant vowed to restrict content that uses "militarized language" or implies an aim to intimidate voters — though it stopped short of removing a post by the President's son carrying just such a message. Late last week Facebook also announced that it would ban all political ads right after the election, and would notify users that no candidate had been elected until official results were in. Twitter went a step further, banning all political advertising prior to the election.

Facebook’s self-regulation is unlikely to stop the spread of disinformation
Photo by Marvin Meyer on Unsplash

But will Facebook's proposed self-regulation have any effect on the disinformation that is habitually spread through the network? The experience of the last five years, along with what we know about how Facebook's algorithms work to distribute information and manipulate opinion, makes me doubt that these superficial measures will make much of a difference. These policies seem designed mainly to placate aging officials with a vague understanding of technology, while allowing Big Tech to keep its mind-boggling financial and political power.

As I explain below, previous attempts to regulate Facebook, Twitter, Google and others have been ineffective because they miscategorized these organizations as private entrepreneurial companies or neutral platforms that merely facilitate the sharing of information among users. In fact, as these companies evolved, they became powerful media outlets with outsized influence on public opinion and public health. As such, they shouldn't be allowed to get away with weak self-regulation, but must be monitored and regulated at the federal level by an organization such as the FCC.

Current and previous attempts to rein in Big Tech haven't worked for three main reasons:

1. The few mandated restrictions have almost always followed the model of self-regulation. This might have made sense in the 2000s, when social media was the playground of a few tech-savvy teens, but it's difficult to justify today. The tech industry now likely benefits from the weakest oversight in the U.S. economy. In contrast, broadcast media is heavily regulated by the FCC, which sets the rules for both content and advertising. Violations of FCC rules can result in fines and even the shutting down of offending networks. Social media's effects on public opinion arguably outweigh those of the traditional networks. It also has far-reaching consequences for public health, especially when it comes to young adults' mental health. It isn't unreasonable to suggest that not only the FCC but the FDA should have some say in the addictive and harmful content Big Tech markets to young adults.

Social media has become more powerful than traditional media
Photo by William Iven on Unsplash

2. Much of the existing regulation focuses on advertising. Unfortunately, blocking political ads online does little to block politically biased, persuasive information. Even if you block the ad that keeps popping up in your feed, the rest of the content from your friends and followed accounts will likely be just as one-sided, aimed at "selling" you a product or a point of view.

The idea of regulating paid advertising is likely designed to placate an older generation which imagines Facebook as a kind of TV channel, where supposedly neutral content is interrupted every so often by commercials that have an agenda. In fact, pretty much everything on Facebook has an agenda: its algorithm is designed to capture as much of your attention as possible for as long as possible. For this reason, the posts you’ll see most are the ones that evoke a strong emotional reaction. Those will be posts that your “friends” loved or hated, especially those whose content was powerful enough to start a comments thread. Anything that could parallel easy-viewing TV shows like “The Office” or a slow-paced news program gets pushed down to oblivion.

3. Facebook has successfully sold us on the idea that it's a "social network," or a platform on which users freely share self-created content. In fact, Facebook's algorithm has such an outsized effect on the content we consume that it's no longer a platform, but a broadcaster.* This might seem to contradict my last point, but bear with me, as it may be the key to true regulation. Though Facebook began its life as a social platform, today its core function, the news feed, no longer represents the conscious choices of its users. Instead, the "authors" of the content we consume are Facebook's complex algorithm and that algorithm's designers.

Yes, it may seem like a stretch when we are the ones posting pictures and venting our opinions on the day's news. But "traditional" media offers many examples in which ordinary members of the audience or non-professional participants provide much of the content: in a reality show or a talk show, such participants supply most of the footage or dialogue that audiences watch. And yet neither the shows' producers nor the network are absolved of responsibility for the programs they ultimately broadcast to the world.

Social media grew and developed much faster than its predecessors, making it difficult for users and regulators to adequately assess its power. But more than fifteen years after Facebook's birth, and four years into a political disaster it arguably helped create, it's high time for us to see the media giant for what it really is and advocate for real regulation. As large-scale media outlets, Facebook, Twitter, and others should come under the purview of the FCC, which, in turn, should put together a set of rules for this new kind of broadcaster.

Of course, realistically, that kind of regulation isn’t likely to happen in the near future. For now, it’s up to us to be vigilant about Facebook’s effects on our consciousness and our democracy. As the election approaches, we should expect the social media giants to sow disinformation and facilitate extremism as they have done in the past. To mitigate their effects, we can look for opportunities for real dialogue both on and — especially — outside these networks.

*This line of argument is inspired by an interview I heard a long time ago about “broadcasting” vs. “diffusion” in online media. I couldn’t find the interview, so if you have a reference to this please share it with me.



I write about tech, women, culture and the self. Book: Cultural Production and the Politics of Women’s Work. https://polinakroik.com/