In Search of Principles to Govern Free Speech, Hate Speech, and Web Censorship

By Bronwyn Howell

AEIdeas

April 25, 2023

My previous blog highlighted the difficulties web platforms face when censoring content intended for global circulation across jurisdictions with vastly different and constantly changing laws and societal standards as to what counts as “speech”—that is, acceptable content to host and circulate. The illustrative case—featuring a Cyprus-based betting firm, a celebrity New Zealander, and New Zealand’s gambling and internet content distribution laws—showed how Google had apparently violated its own rules when withdrawing the content. The censorship occurred because Google’s reading of New Zealand law evidently differed from that of the government agency charged with implementing it.

This raises the question of whether a simple set of internationally applicable principles could guide web platforms when they consider whether to remove content.

In search of such principles, I turned to the work of two prominent free speech scholars: New York Law School’s Nadine Strossen (former president of the American Civil Liberties Union) and Danish lawyer and human rights advocate Jacob Mchangama (founder and director of the Copenhagen-based think tank Justitia). Strossen’s 2018 HATE: Why We Should Resist It With Free Speech, Not Censorship (Oxford University Press) and Mchangama’s 2022 FREE SPEECH: A History from Socrates to Social Media (Basic Books) make a clear and cogent case for less, rather than more, online censorship.

Both hold that the foundation of the debate is the fundamental human right of free speech as expressed in Article 19 of the United Nations’ Universal Declaration of Human Rights. This states that “everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” Both argue that the best way to address hateful or unpleasant speech is with more speech (principled debate) whereby all speakers and listeners can become better informed. If it is feared that a speaker can use a platform to amplify an unpleasant message, then drawing even more attention to that message by banning it will likely be counterproductive.

Mchangama shows how, over time, any regime that has sought to shut down the expression of selected ideas, or speech by selected groups of individuals, has caused significant harm. Even when censorship has been deployed to protect vulnerable groups, it has in most cases ended up harming the very minorities and vulnerable populations it was meant to shield.

For example, laws used in Germany’s Weimar Republic to prevent Nazi propaganda from being distributed were subsequently used by the Nazis when they were in power to censor not just Jewish speech but any messages contrary to Nazi policy. Imprisoning the Nazi authors for “hate speech” created the illusion of them being victims of a repressive government, giving them publicity not even money could buy in subsequent elections. This prompted stalwart efforts by Eleanor Roosevelt to resist exemptions enabling censorship of or punishment for hate speech when the UN drafted the declaration. She rightly feared that such provisions “would only encourage Governments to punish all criticisms in the name of protection against religious or national hostility” and warned that the UN should not, “include … any provision likely to be exploited by totalitarian States for the purpose of rendering the other articles null and void.”

Based on her years of extensive research, Strossen asserts that hateful messages may indeed cause some harm and prompt calls for rules and laws tailored to specific types of speech. But specifying precisely what can and cannot be shared (as the artificial intelligence algorithms used to automatically censor or promote online content require) founders on the imprecision of language and the ease with which words can be wrenched out of one context and into another. Hence, university professors have been dismissed merely for quoting others’ words in class as examples, on the grounds that doing so violated university speech codes.

Importantly, Strossen finds that, after examining laws and cases from just about every country, two US principles seem the most resilient and effective. The viewpoint neutrality principle bars the US government from regulating speech solely because the speech’s message, idea, or viewpoint is disfavored. However, the US government may regulate speech when its message inflicts independent harm “such that there is no realistic possibility that official suppression of ideas is afoot” (hence enabling fraud, perjury, bribery, and pornography to be addressed). Further, under the emergency test, speech can be suppressed or punished only when it “directly, demonstrably, and imminently causes certain specific, objectively ascertainable serious harms that cannot be averted by non-censorial measures”—notably counterspeech. As media platforms increasingly become like town squares, these principles seem like a good foundation on which to build the content moderation debate.
