Death of the Gatekeeper
The medium chooses the messenger
Vietnamese children run from a village that has just been bombed by South Vietnamese forces. At the center of the image is a young girl, naked, her body seared by napalm. You’ve seen the photo—“The Terror of War,” more commonly known as “Napalm Girl.” It remains an enduring emblem of the Vietnam conflict.
In 2016, a Norwegian journalist was temporarily suspended from Facebook for posting that image as part of a collection of historic wartime photographs. More than a week after removing the post, Facebook issued a public apology. Vice President Justin Osofsky explained:
In many cases, there’s no clear line between an image of nudity or violence that carries global and historic significance and one that doesn’t. Some images may be offensive in one part of the world and acceptable in another, and even with a clear standard, it’s hard to screen millions of posts on a case-by-case basis every week. Still, we can do better. In this case, we tried to strike a difficult balance between enabling expression and protecting our community and ended up making a mistake.
Most would agree that a “no photos of naked children” rule belongs in any social media company’s Terms of Service. It’s as close to a universal standard as you will find online. Facebook followed that guideline—and got it wrong.
As Tarleton Gillespie argues in Custodians of the Internet:
There is no question that this image is obscenity. The question is whether it is the kind of obscenity of representation that should be kept from view, no matter how relevant, or the kind of obscenity of history that must be shown, no matter how devastating.
Huỳnh Công Út (known professionally as Nick Ut) took the photo on June 8, 1972, after an accidental strike on the village of Trảng Bàng. At first, the AP photo bureau rejected the photo because it showed full frontal nudity. Only after extended—and heated—discussions was the image greenlighted by New York photo editor Hal Buell.
In the age of mainstream media, editors had the final say as to what got published. Television producers decided what would or would not get aired. They might question the boundaries in cases like “The Terror of War.” But they knew those boundaries were there, and for the most part they stayed within them.
The advent of the Internet promised something special: a world without gatekeepers. Anybody with access to a computer and a modem could evade censors and political tyrants and share their thoughts anonymously. Usenet would be a place where free speech reigned supreme, a virtual world where nobody could decide what you read, what you saw, or what you said.
The problems appeared almost immediately.
Some attacks were political. In one case, Turkish nationalists flooded groups with long screeds about the origins of the Armenian genocide; any post containing the word “Turkey” triggered an automated response—which made for some very interesting Thanksgiving discussions.
Another infamous crosspost, “alt.fuck.the.skull.of.jesus,” propagated across thousands of groups. “Killfiles,” user-built filters designed to block unwanted content, struggled to keep up with endless subject-line changes from offended Usenetizens and trolls.
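A killfile worked roughly like this: a list of user-defined patterns matched against an article’s headers before anything was displayed. The sketch below is illustrative only; the rule format and field names are invented for this example, not taken from any real newsreader.

```python
import re

# Hypothetical killfile: each rule is (header field, compiled pattern).
# An article is hidden if any rule matches the named header.
KILLFILE = [
    ("from", re.compile(r"spammer@example\.com")),
    ("subject", re.compile(r"make money fast", re.IGNORECASE)),
]

def visible(article: dict) -> bool:
    """Return False if any killfile rule matches the article's headers."""
    for field, pattern in KILLFILE:
        if pattern.search(article.get(field, "")):
            return False
    return True

articles = [
    {"from": "alice@example.org", "subject": "Meeting notes"},
    {"from": "spammer@example.com", "subject": "MAKE MONEY FAST!!!"},
]
feed = [a for a in articles if visible(a)]
```

The weakness is obvious: a rule keyed to a particular sender or subject line fails the moment a troll registers a new address or changes a single character in the title.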
And then there were shock links—most notoriously Goatse.cx—posted across countless groups to provoke and disgust unsuspecting readers. Alongside them came endless “Make Money Fast” schemes.
It soon became clear that a system without gatekeepers did not produce open and elevated discourse. Instead it produced noise, abuse, and constant attempts to game the system. Users began building their own filters. Communities imposed their own rules. Moderation, formal and informal, re-emerged almost immediately.
Ultimately the chaos of Usenet gave way to walled gardens like Facebook, Twitter, and Reddit. But as those new startups became increasingly popular, governments began turning a nervous eye to their potential for social engineering.
The early and mid-2000s saw a wave of “Colour Revolutions.” Protesters in Georgia, Ukraine, Kyrgyzstan, and other countries used the Internet alongside email and SMS to coordinate protests, mobilize observers, and counter state-controlled media. Western NGOs provided training in election monitoring, communications strategy, and digital literacy.
The Colour Revolutions succeeded in driving out a few autocratic leaders, though sustaining the new democracies has proved more challenging. But they also taught leaders around the world that people could use the Internet not only for sharing cat pictures but also for organizing large-scale political actions.
The Arab Spring of 2010-2012 further showed how social media could be used to coordinate and organize revolutions. These uprisings were primarily fueled by long-standing social, political, and ethnic grievances. But Arab leaders insisted that these protests were staged uprisings led by Western intelligence agencies.
In a 2011 New York Times article, Stephen McInerney of the Project on Middle East Democracy clarified his NGO’s role in the Arab Spring:
We didn’t fund them to start protests, but we did help support their development of skills and networking. That training did play a role in what ultimately happened, but it was their revolution. We didn’t start it.
The Internet was a tool that helped protesters put their grievances into action, not the spark that ignited them. But distinguishing between the two can be difficult. Even if you acknowledge that the NGOs did not directly foment revolution, there’s no denying that they helped organize it. And if small groups with limited funding can organize protests in the Arab world and in former Soviet states, they can also organize them against your government.
To make matters even more pressing, internet organizing has a low entry barrier. You don’t need large groups with significant funding. A small cadre with smartphones can set up an operation; a single computer-savvy revolutionary can spread the message through anonymizers and botnets. These techniques not only increase the likelihood of unrest; they multiply the number of actors who might help organize it.
Law enforcement and intelligence agencies work with limited datasets. They put together fragments of information to create a narrative that explains their opponents’ present behavior and helps predict future actions. But ultimately those explanations are like interpreting a Rorschach blot. Some see a sheep; some see a cloud; some see an island. And when they’re facing a threat to their power, they’re likely to look at those fragments and see danger.
In the 2016 presidential election, the American political establishment was shocked by a political outsider’s victory. Donald Trump used Twitter as a base to rally his troops and promote his causes; Hillary Clinton largely let her staff handle social media as part of the wider campaign. Many political insiders were unwilling to admit flaws in their platform or their candidate. Like the Arab and ex-Soviet leaders who faced political unrest, they blamed their defeat on outside forces.
From 2017 onward, we saw the rise of “fact checkers” who point out “misinformation.” Organizations like PolitiFact and FactCheck.org sorted out the journalistic wheat from the “fake news” chaff. Snopes partnered with Facebook in 2017 to detect and flag bogus data.
Many came to respect these fact checkers as arbiters of truth in a disinfo-clogged landscape. Others came to see them as propaganda outlets: for them, a fact-checker’s condemnation of a claim was proof that the claim was true. And when fact-checkers got a detail or two wrong, that was further proof of an ongoing coverup.
We also saw a greater focus on “hate speech” and various “phobias” against different minority groups. Hostile groups looking to destabilize a country often use ethnic tensions as leverage. For people who feared an American Arab Spring, our longstanding racial tensions were a spark that could easily be fanned into a fire.
Many social media users jumped into the war against racism, fascism, etc. They scanned their feeds in search of questionable content, then sent it along to their circles for mass-reporting. Outlets that tolerated controversial opinions frequently found themselves debanked, DDoSed, or disconnected.
The push against racism and hate speech led to a countercultural upsurge. What Usenet called trolls re-emerged as shitposters. They shared images of Jewish caricatures being stuffed in ovens, Black caricatures being lynched, and liberal “soyboy” caricatures tearfully watching their girlfriends have sex with other men. They pushed tirelessly against the terms of service, and kept multiple accounts in cold storage so they could pop back up soon after they got banned.
Ongoing conflict created an outrage loop. The shitposters made the anti-racists hop up and down as entertainment. But their provocative memes also reinforced the idea that there was an enormous underground organization of White Supremacists. This led to increasing calls for moderation, and a growing demand for tasteless and offensive material. Social media thrives on engagement and conflict, and each side provided that to the other.
As 2016 saw complaints of Russian interference, 2020 brought claims of a Democratic coup. Questions about Biden’s election, valid or otherwise, were dismissed as the “Big Lie”; restrictions on COVID-19 “denialism” shut down honest questions and doubts from medical professionals. All this only reinforced the idea that the federal government was illegitimate. By 2024, anti-government feelings were strong enough to bring Trump back for a second term.
Art teacher Jennifer Bloomer has used Instagram to share activism-themed artwork and announce classes for eight years. Then last fall, while trying to promote a class called “Raising anti-racist kids through art,” her online megaphone stopped working.
It’s not that her account got suspended. Rather, she started to notice her likes dwindled and the number of people seeing her posts dropped by as much as 90 percent, according to her Instagram dashboard.
Geoffrey A. Fowler, “Shadowbanning is real: Here’s how you end up silenced by social media.” Washington Post, December 27, 2022
Editors, producers, and institutional filters were visible, slow, and, to a degree, accountable. You might not be able to change or question their decisions, but you knew where the buck stopped. But as the information flow became a torrent, these old gatekeepers became unable to keep up with an ever-increasing number of users.
Today, most moderation is handled not by people but by systems. Proprietary algorithms determine what phrases and images violate terms of service; engagement metrics and recommendation engines shape what does and does not appear in your feed. They decide what gets seen, what spreads, and what disappears.
It’s no longer necessary to ban or remove controversial ideas. They can be buried, deprioritized, and throttled. They can still be found, but only through deliberate searching. And with an ever-flowing scroll of content rolling across the screen, only a few will make that effort.
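As a toy model—not any platform’s actual code—a feed ranker can keep a post in the database while quietly scaling down its reach. The field names and the visibility multiplier below are invented for illustration:

```python
def rank_feed(posts):
    """Sort posts by engagement score times a hidden visibility factor.

    A throttled post is never deleted; its score is silently scaled
    down so it sinks below everything else in the feed.
    """
    return sorted(
        posts,
        key=lambda p: p["engagement"] * p.get("visibility", 1.0),
        reverse=True,
    )

posts = [
    {"id": "cat-video", "engagement": 120, "visibility": 1.0},
    {"id": "hot-take", "engagement": 500, "visibility": 0.1},  # throttled
]

ordered = rank_feed(posts)
```

The controversial post still exists and can still be found, but from the reader’s side there is no way to distinguish “nobody liked it” from “nobody was shown it.”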
Many social media users today believe that they have been “shadowbanned” for their political or social stances. These complaints can be found across the political spectrum, from Leftist activists like Jennifer Bloomer to Right-leaning commentators like Dan Bongino.
Some wear their alleged shadowban as a badge of honor: their material is so honest and forthright that Elon Musk and Mark Zuckerberg don’t want you to see it. Most of these people are not being throttled. But the suspicion persists because, like Calvin’s Elect, the shadowbanned can neither confirm nor refute their status.
There are websites that claim to test whether your account is being shadowbanned or restricted. These sites often come up with conflicting diagnoses. And since individual posts and comments can be deprioritized while the account itself remains in good standing, it’s hard to tell how far any given statement will spread. Social media no longer needs the hard power of cancellation to shape discourse. The user stays, but their words are kept on a short leash.
This is not to say that these algorithms are part of a sinister plot to control the discourse. As we saw in the opening sections, moderation is both necessary and difficult. It’s also worth noting that Reddit, a community moderated by volunteers, has a worse reputation for censorship, suspension, and arbitrary bans than algorithm-moderated sites like X and Facebook.
Algorithms may restrict the spread of controversial ideas, but the medium promotes controversy in general. Controversial content gets more engagement and more visibility. Feuding groups each promote their opponents even as they mock and insult them. Sites which are geared to a specific political slant (leftists on BlueSky, right-wingers on Truth Social and Gab) find it hard to retain long-term viewers. Endless affirmation doesn’t hold the same attraction as constant conflict.
Successful sites must amplify controversy while simultaneously constraining illegal, dangerous, or disruptive content. This is much harder than it sounds. To hearken back to our first example, pictures of naked children are banned almost everywhere—unless they are historically significant. And “dangerous” and “disruptive” are loaded words. In his oft-cited essay, “Repressive Tolerance,” Herbert Marcuse complained of:
the active, official tolerance granted to the Right as well as to the Left, to movements of aggression as well as to movements of peace, to the party of hate as well as to that of humanity.
Platforms are asked to distinguish between legitimate expression and dangerous, disruptive speech—but those distinctions are rarely clear, and rarely agreed upon. While both Marcuse and his critics might be comfortable silencing “the party of hate,” they would disagree on which party they were talking about.
Moderation by algorithm is a work in progress. In time, we will come to an acceptable if controversial balance between oversight and freedom. Social media continues to provide a voice to minority communities and marginalized groups. It also welcomes people who just came to argue. All these groups will engage with each other as they once did in the town square. And they will help to decide, with their likes and with their presence, the boundaries of acceptable behavior.
The algorithm will shape their beliefs, but those beliefs will also shape the algorithm. Its code will be modified to suit new controversies and social shifts. There will be arguments and disagreements, as there are with every change. But ultimately we will recognize that we have met the gatekeeper, and he is us.