Hate speech online

Friday, September 13, 2002

The Internet is a revolutionary medium of communication that has amplified
speech of all types: the good, the bad and the ugly. Racists, hatemongers,
cyberbullies and terrorists have used the Web as a haven to spread their
noxious views, harass others and even plan nefarious deeds.

Some Web sites deny that the Holocaust occurred. Others promote the beating of gays and lesbians. Still others rail against Muslims and Islam in the United
States, or against Christians and Christianity. Some sites advocate various strands of racial
superiority and separatism. Many such sites target young people, seeking to
influence them through their favored medium of communication.

“From cyberbullying to terrorists’ use of the Internet to recruit and incite,
Internet hate speech is a serious problem,” said Christopher Wolf, immediate
past chair of the International Network Against Cyber-Hate, in an e-mail
interview. “The most notorious hate crimes of late — such as the shooting at the
Holocaust Museum (in Washington, D.C.) — were committed by individuals who used the Internet to spread
hate and to receive reinforcement from like-minded haters, who made hatred seem
normal and acceptable.”

Some contend that hate speech infringes on the 14th Amendment’s guarantee of
equal protection under the law. Alexander Tsesis, for example, wrote in a 2009
article that “hate speech is a threatening form of communication that is
contrary to democratic principles.” 1

However, the First Amendment provides broad protection to offensive,
repugnant and hateful expression. Political speech receives the greatest
protection under the First Amendment, and viewpoint discrimination runs
counter to free-speech principles. Much hate speech qualifies as political, however
misguided. Regulations of hate speech are often imposed because the
government (at whatever level) disagrees with the views expressed, and such
viewpoint-based restrictions generally do not survive constitutional scrutiny in court.

Furthermore, the U.S. Supreme Court in Reno v. ACLU (1997) noted (albeit in a
non-hate-speech context) that the Internet is entitled to the highest level of
First Amendment protection, akin to the print medium. In other words, online
hate speech receives as much protection as a hate-speech pamphlet distributed
by the Ku Klux Klan.

Given these factors — high protection for political speech, hostility to
viewpoint discrimination and great solicitude for online speech — much hate
speech is protected. However, despite its text — “Congress shall make no law …
abridging the freedom of speech” — the First Amendment does not safeguard all
forms of speech.

Unprotected categories

Unless online hate speech crosses the line into
incitement to imminent lawless action or true threats, the speech receives
protection under the First Amendment.

Incitement to imminent lawless action

In Brandenburg v. Ohio (1969), the Supreme Court said that “the constitutional guarantees
of free speech and free press do not permit a State to forbid or proscribe
advocacy of the use of force or of law violation except where such advocacy is
directed to inciting or producing imminent lawless action and is likely to
incite or produce such action.”

Most online hate speech will not cross into the unprotected category of
incitement to imminent lawless action because it will not meet the imminence
requirement. A message of hate on the Internet may lead to unlawful action at
some indefinite time in the future — but that possibility is not enough to meet
the highly speech-protective test in Brandenburg.

For this reason, some legal commentators have urged that the
Brandenburg standard be modified with respect to online hate speech. One commentator wrote
in 2002: “New standards are needed to address the growing plague of Internet
speech that plants the seeds of hatred, by combining information and incitement
that ultimately enables others to commit violence.” 2

Another agreed, writing: “Although Brandenburg may be suitable for the
traditional media outlets, which were well-established when it was decided,
Internet speech and many unforeseen changes have made such a standard outdated.” 3 Still another called for a revised imminence requirement in Internet
hate-speech cases to update Brandenburg and make it applicable online. 4

True threats

Some online hate speech could fall into the unprotected category
of true threats. The First
Amendment does not protect someone who posts online “I am going to kill
you” about a specific person. The Supreme Court explained the definition of
true threats in Virginia v. Black (2003) — in which it upheld most of a Virginia cross-burning statute — this way:

“‘True threats’ encompass those statements where the speaker means
to communicate a serious expression of an intent to commit an act of unlawful
violence to a particular individual or group of individuals. The speaker need
not actually intend to carry out the threat. Rather, a prohibition on true
threats protect[s] individuals from the fear of violence and from the disruption
that fear engenders, in addition to protecting people from the possibility that
the threatened violence will occur.”

The Court in Virginia v. Black reasoned that cross-burning carried out with an
intent to intimidate could constitutionally be banned, as the Virginia law
provided. (The Court did, however, strike down the part of the law creating a
presumption that every cross-burning was done with an intent to intimidate; one
of the consolidated cases before the Court, for instance, involved a cross
burned with the property owner’s permission.) Thus, online hate speech
meant to communicate a “serious expression of an intent” to commit violence and
intimidate others likely would not receive First Amendment protection.

A few cases have applied the true-threat standard to online speech. In Planned Parenthood v. American Coalition of Life Activists (2002), the 9th U.S.
Circuit Court of Appeals held that some vigorous anti-abortion speech — including a Web site called the Nuremberg Files that listed the names and
addresses of abortion providers who should be tried for “crimes against
humanity” — could qualify as a true threat. The 9th Circuit emphasized that “the
names of abortion providers who have been murdered because of their activities
are lined through in black, while names of those who have been wounded are
highlighted in grey.”

Similarly, the 5th U.S. Circuit Court of Appeals ruled in
United States v. Morales (2001) that an 18-year-old high school student made true threats
when he wrote in an Internet chat room that he planned to kill other students at
his school.

Even in the speech-restrictive world of the military, the U.S. Court of Appeals for the Armed Forces ruled in United States v. Wilcox (2008) that a service
member could not be punished under the Uniform Code of Military Justice
for racially offensive and hateful remarks about white supremacy that he posted
on the Internet. The court wrote that the service member’s “various
communications on the Internet … are not criminal in the civilian world …
[and] did not constitute unprotected ‘dangerous speech’ under the circumstances
of this case. No evidence was admitted that showed the communications either
‘interfere[d] with or prevent[ed] the orderly accomplishment of the mission,’ or
‘present[ed] a clear danger to loyalty, discipline, mission, or morale of the
troops.’”

Conclusion

Thus, even hateful Internet communications receive First Amendment protection
so long as they do not cross the line into incitement to imminent lawless
action or true threats. This level of protection distinguishes the United States from
much of the world. Alan Brownstein and Leslie Gielow Jacobs, in their book Global
Issues in Freedom of Speech and Religion, write that the U.S. is a
“free[-]speech outlier in the arena of hate speech.” Many other countries
criminalize online hate speech.

However, even in the United States, certain forms of hateful speech, such as
cyberbullying in schools and targeted harassment, may face increasing regulation.

Wolf, chair of the Anti-Defamation League’s Internet Task Force, said much
could be done to counter online hate speech besides criminalizing it. “There is
a wide range of things to be done, consistent with the First Amendment,
including shining the light on hate and exposing the lies underlying hate and
teaching tolerance and diversity to young people and future generations,” he
said. “Counter-speech is a potent weapon.”

Wolf’s view brings to mind Justice Louis Brandeis’ famous concurring opinion
in Whitney v. California (1927), in which he wrote: “If there be time to expose
through discussion the falsehood and fallacies, to avert the evil by the
processes of education, the remedy to be applied is more speech, not enforced
silence.”

Updated September 2009


Notes

1 Alexander Tsesis, “Dignity and Speech: The Regulation of
Hate Speech in a Democracy,” 44 Wake Forest L. Rev. 497, 502 (2009).

2 Tiffany Kamasara, “Planting the Seeds of Hatred: Why Imminence Should No Longer Be
Required to Impose Liability on Internet Communications,” 29 Capital University L. Rev. 835, 837 (2002).

3 Jennifer L. Brenner, “True Threats — A More
Appropriate Standard for Analyzing First Amendment Protection and Free Speech
When Violence is Perpetrated over the Internet,” 78 North Dakota L. Rev. 753,
783 (2002).

4 John P. Cronan, “The Next Challenge for the First Amendment: The
Framework for an Internet Incitement Standard,” 51 Catholic University L. Rev. 425 (2002).