Salvo | 04.26.2022

Why Censorship Fails

[Image: Book on Fire]

Shutting down dissenting speech only serves to strengthen its appeal.

According to a recent study reported in the Washington Times, “tech giants such as Facebook, Twitter, and Google are facing all-time highs in hate speech and misinformation,” with such content on Facebook increasing twentyfold between 2017 and 2021.

Though ostensibly well-intentioned, heavy-handed censorship on these platforms has obviously backfired. Removing hateful posts and hateful users was supposed to clean up these platforms and encourage civil discourse, not produce the opposite.

It doesn’t help that these tech companies define “hate” as what used to be called “dissent,” and “speech” as any degree of association with a dissenting view. Hence, even a sitting president of the United States can be censored on suspicion of sympathizing with “hateful” characters, while dictators around the world can spout real hate without any consequence.

None of this, however, explains why censorship doesn’t work. Just as a kid who covers his ears, closes his eyes, and shouts “I can’t hear you!” fails to protect himself from harmful speech and only calls attention to it, so social media censorship draws attention both to the unwanted speech and to the insecurity of the censoring powers.

As professors Nick Phillips and Sean Stevens explain in an article on this phenomenon, attempts to censor unwanted speech increase people’s desire to hear the censored speaker, especially if the speaker is similar to them; make the speaker’s audience feel threatened; unite that audience into an aggrieved community; and unite the censorship’s supporters into a community devoted to marginalizing the censored one.

Thus, what starts as an unpopular opinion on some trivial issue soon morphs into fiery political and cultural conflict.

This also explains why users increasingly drift to extremes as they use and consume social media. Writer Mary Harrington identifies this pattern as “Flanderisation,” a trope based on The Simpsons character Ned Flanders, “whereby a comic character becomes ever more grotesquely defined by a single trait.” Extremism usually generates more engagement and builds up larger audiences; thoughtful moderation is more often ignored.

But when platforms are threatened by hate and misinformation, isn’t censorship better than nothing? Otherwise, the argument goes, these platforms would degenerate into virtual dens of angry incels, right-wing terrorists, and Russian hackers.

This perspective, however, assumes that many people are extremists from the start, which generally isn’t so. Taking their cue from the world of face-to-face interaction, most users set and meet expectations of civil discourse for themselves and others. Actual extremists are recognized and ignored or sidelined, just as people tend to ignore or shun extremists in real life. When Big Tech steps in to silence extremists, it makes normal people wonder whether the canceled voices had a point after all.

When it comes to the suppression of speech, doing nothing is indeed the better option. If the goal of social media is to provide an online medium for socializing, rather than to edit a publication suitable for all audiences, then doing nothing (aside from removing obscene material, libel, or threats) will do more to discourage hate speech and misinformation than censorship will.

The marketplace of ideas can work if left alone. Indeed, there was a time when the best ideas and arguments would rise to the top while their mediocre counterparts sank to the bottom. In most cases, those bad ideas weren’t canceled but refuted or reformed. As a result, society progressed.

It’s telling that dictators, totalitarian demagogues, criminals, and leftist college professors all rely heavily on silencing the other side.

This is a fact that social media users, both those who support censorship and those who oppose it, should understand. In an open forum where different voices and arguments contend with one another, there will always be something that misleads or offends, particularly on a medium that allows anonymity and rewards the voices that attract the most attention.

Free speech is messy, and free speech online is even messier. No one can really change this reality, but they can adapt to it.

