A War of the Words: Assessing Liability for Online Incitement of Terrorism

By Zachary Shufro, University of North Carolina School of Law; Tufts University.


Abstract

Social media posts and terrorism have become inextricably linked in public analysis of recent terror incidents. Terrorist organizations rely on the internet to recruit new members, incite violence, and publicize their actions. Despite the significant reliance of terrorist organizations on social media, social media companies have inconsistently and infrequently attempted to address this usage. At present, there is insufficient domestic regulation of terrorist presence on social media, and American law struggles to adapt from the print medium to a flexible and responsive First Amendment jurisprudence of the internet age. By taking inspiration from recent German and French regulation of online terrorist activity, the United States might be able to address some of these shortcomings. The government could limit terrorist organization presence online by (1) re-defining ‘hate speech’ to include language that incites imminent terrorist action and (2) instituting a compliance regime that regulates social media companies as internet platforms. This paper proposes a course of action that could result in an online environment that stifles, rather than fosters, terrorist organization activity moving forward.

Keywords: Terrorism, Internet, Social media, First Amendment, Technology, Internet law, Social media regulation, Communications law, Communications Decency Act.


1. Introduction

On June 12, 2016, Omar Mateen posted on Facebook and other social media networks, and repeatedly checked his posts to see their engagement levels.1 Unlike his peers, however, Mateen was not checking social media as a gauge of popularity online; rather, he was online in order “to verify that his pledge to Abu Bakr al-Baghdadi, the leader of ISIS, had been properly publicized during the five-hour standoff in the [Pulse nightclub] where he killed 49 people[.]”2 The trope has become commonplace: a terror attack, then a flurry of headlines about the attacker’s social media presence in the weeks leading up to the attack. Social media posts and terrorism have become inextricably linked in public analysis of recent terror incidents. Yet the media’s focus on a terrorist’s social media presence is not new; only the internet’s role in that discussion has evolved.

Terrorism has never been confined to an on-the-ground war: it involves not only the kinetic attacks that come to mind at the mention of the word “terrorism,” but also the online propaganda campaigns that draw supporters from all over the world and incite them to action. Social media is uniquely compatible with this sort of propaganda and recruitment campaign. Lone individuals can engage with their ideologically aligned ‘peers’ from behind the veil of anonymity that the internet provides. They can encourage, plan, and memorialize their actions without leaving the comfort of their homes. Terrorist organizations offer a multitude of training materials3 and morale-boosting propaganda4 to mobilize these individuals to action. However, social media companies, as a whole, have been reluctant to remove these materials over fears of violating free speech rights. This paper proposes new methods to deter terrorist-supporting social media usage, modeled on new German and French legislation on the topic.


Analysis proceeds in six parts. Part I outlines a basic introduction to terrorist reliance on the internet, and on social media specifically, for networking, propaganda, recruitment, and incitement of further violence. Part II addresses current difficulties with prosecution of terror-related inchoate offenses under American law. Part III lays out the current German and French systems of assessing liability for dissemination of terrorist incitement online. Parts IV-VI examine the possibilities of assessing liability for individuals posting terrorism-related content on social media platforms (in Part IV); for ISPs in general (in Part V); and for the social media companies whose platforms are used by terrorist organizations (in Part VI). Ultimately, the protections of the First Amendment complicate the creation of a criminal statute regulating individuals who post terrorist-supporting content online, but one feasible way to deter such behavior would be through a re-definition of unprotectable hate speech. Another manner of deterring individual action would be through changes to the “material support” statutes of the Patriot Act, following the French statute’s model. Finally, it might be feasible for the United States to adopt a regulatory regime, modeled on German law, under which social media companies may be held liable for inaction in mitigating terrorist presence online. However, there are significant barriers to broader regulation of the ISPs used by terrorist organizations online.


I. Online Terrorist Activity: A Brief Overview

Over the past two decades, the internet has played an increasingly significant role in the recruitment, incitement, and publicity wings of terrorist organizations.5 While originally used mainly for propaganda and communications, “the internet has developed into an indispensable medium for terrorist planning, organization, and incitement.”6 In the past five years, “terror organizations like ISIS have made heavy use of social media and other digital platforms to recruit, fundraise, and communicate,” often without interference from social media companies or law enforcement authorities.7 A 2012 study focusing on the decade following the September 11, 2001 terror attacks “revealed that ‘90 per cent of organized terrorism on the internet is being carried out through social media.’”8 Estimates on the number of terrorist websites and terrorist social media accounts, in aggregate, vary from “a lower estimate” of “around 5,300” to “an upper estimate” of “as many as 50,000.”9 Furthermore, “[i]nteractive forums on Twitter, Facebook, Google+, and Tumblr host discussions, membership drives, threats, and calls to arms of terrorist organizations such as ISIS.”10


Terrorist organizations rely on the internet to recruit new members, incite violence, and publicize their actions. Recruitment is the means by which these organizations use social media campaigns to gain supporters. One such recruitment campaign was the January 2017 Hamas campaign “commemorating the twenty-first anniversary of the assassination of Yahya Ayyash . . . [who] was the chief bomb maker for Hamas” and whose bombs ultimately “resulted in a total of four hundred thirty-nine casualties.”11 The recruitment process can be quite successful: for example, “ISIS has drawn over 20,000 foreign fighters to Syria from more than 90 countries, mainly through cyber contacts.”12 Incitement involves encouraging others to move from the planning stage of an attack to physical action. Examples range from posts exhorting individuals to commit acts of terror to the use of social media for terrorists who “post personal messages . . . right before they commit suicide bombings (to further inspire the new recruits to use the information they are learning on the chat forum to fight real world battles).”13 Incitement also includes the posting of information that facilitates and encourages kinetic attacks. For example, an “al-Qaeda jihadi Internet forum has uploaded a fifty-one page manual entitled The Art of Recruitment,” which provides instructions on how to use social media to “eventually establish active terrorist cells.”14 Finally, publicity includes posts that disseminate images of ongoing terrorist attacks, posts from groups taking responsibility for attacks, or other actions that promote, celebrate, or glorify ongoing or completed terrorist actions.


Despite the significant reliance of terrorist organizations on social media, social media companies have inconsistently and infrequently attempted to address this usage. In 2015, when asked whether “Facebook should adopt a proactive policy regarding online content” and specifically focusing on terrorist usage, Facebook Director of Policy Simon Milner “explained that Facebook has no intention to be more proactive in inspecting content on its server.”15 Indeed, despite Facebook’s 2015 “implementation of its ‘more aggressive suppression tactics’ of ISIS-related use of its website, about half of ISIS-related arrests in the U.S. involved the use of Facebook.”16 YouTube, a repository of over 68,000 terrorist-related videos, does not appear to have a policy of removing inciting content.17 Twitter takes a slightly more proactive approach: “Sinead McSweeney, Twitter’s vice president of public policy, said that since mid-2015, Twitter has suspended more than 360,000 accounts for violating Twitter’s policy on violent threats and promoting terrorism.”18 The sheer volume of accounts removed within this fourteen-month period (mid-2015 to September 2016) merely emphasizes the scale of terrorist social media usage. Furthermore, Twitter has not “indicate[d] what measures the company used to decide whether an account was sufficiently linked to terror-related crime to warrant termination, how it monitored such accounts, or whether it had any standard practices in place to address these issues.”19


In late 2016, Facebook, Microsoft, Twitter, and YouTube announced the creation of a joint database intended to help identify and remove terror-related content from their platforms: “the four companies pledged to share among themselves ‘the most extreme and egregious terror images and videos [they] have removed from [their] services–content most likely to violate all their companies’ content policies.’”20 However, even such a removal system relies on the presumption that a user from one service flags content as offensive, triggering either a review and deletion process (like on Twitter) or an automatic removal.21 Either way, the content must first be posted online and then objected to by a third party, by which point it is already available for online consumption. Without a way to assess liability for posting this content, there is nothing to deter an individual poster from simply creating a new account to post materials, or from re-posting these materials on another network.




Suggested Citation:

Shufro, Zachary, A War of the Words: Assessing Liability for Online Incitement of Terrorism (February 23, 2019). Available at SSRN: https://ssrn.com/abstract=3372314 or http://dx.doi.org/10.2139/ssrn.3372314


© 2020 by Talking About Terrorism.  
