
The Right to Freedom of Expression in a World Confronted with Terrorism Propaganda Online

Full title: Privatised Enforcement and the Right to Freedom of Expression in a World Confronted with Terrorism Propaganda Online. By Eugénie Coche, Institute for Information Law, Amsterdam.


Abstract

The purpose of this paper is to explore the risks of privatised enforcement in the field of terrorism propaganda, stemming from the EU Code of conduct on countering illegal hate speech online. By shedding light on this Code, the author argues that its implementation may undermine the rule of law and give rise to private censorship. To counterbalance these risks, IT companies should improve their transparency, especially towards users whose content has been affected. Where automated means are used, the companies should always have in place some form of human intervention in order to contextualise posts. At the EU level, the Commission should provide IT companies with clearer guidelines regarding their liability exemption under the e-Commerce Directive. This would help prevent a race to the bottom in which intermediaries interpret and apply the most stringent national laws in order to shield themselves from liability as far as possible. The paper further elaborates on the fine line between ‘terrorist content’ and ‘illegal hate speech’ and the need for more precise definitions.


Introduction

Terrorism is not a new issue (Ansart, 2011), but terrorism propaganda online is. As early as 2008, the EU Council officially recognised the internet as a medium used by terrorist recruiters for the dissemination of propaganda material (EU Council Framework Decision 2008/919/JHA). Several studies have revealed the important role played by social media platforms, predominantly Twitter, in ISIS’ propaganda strategy (Badawy & Ferrara, 2017, p. 2). A 2015 report showed that members of ISIS posted, on average, 38 items of propaganda each day, ranging from videos and photographs to articles, on a diversity of platforms, including Facebook, Tumblr, Twitter and Surespot (Winter, 2015, p. 10). Countering this type of speech has challenged traditional law enforcement in many ways. In 2014, the EU Commission recognised that traditional law enforcement is insufficient to deal with evolving trends in radicalisation and that all of society ought to be involved in countering terrorism online (COM (2013) 941 final, para. 8).


On 31 May 2016, four IT companies (Facebook, Microsoft, Twitter and YouTube, 2016) adopted the EU Code of conduct on countering illegal hate speech online (hereinafter, the Code). This instrument places enforcement responsibilities in the hands of private companies and gives rise to the practice of ‘privatised enforcement’. The dangers stemming from such practice can be illustrated by Twitter’s latest biannual transparency report (2017), which indicates that from July 2017 through December 2017, 274,460 accounts were suspended for terrorism-related activities in violation of the company’s terms of service. The company also specifies, on its webpage concerning removal requests, that ‘out of the 1,661 reports received from trusted reporters and other EU non-governmental organisations (NGOs), 19% resulted in content removal due to terms of service (TOS) violations and 10% in content being withheld in a particular country based on local law(s)’. In other words, more posts seem to have been removed for noncompliance with the companies’ policies than for illegality. Consequently, when private companies are placed at the frontline of law enforcement online, the risk arises that our right to freedom of expression is guided merely by their terms of service, which may not always accord with the level of protection guaranteed under human rights instruments, such as Article 10 of the European Convention on Human Rights (hereinafter, ECHR) or Article 11 of the Charter of Fundamental Rights of the European Union. Moreover, taking into account the primarily profit-making nature of platforms, it is questionable to what extent the delegation of such large-scale public functions, which are fundamental to the proper functioning of our democracy, may be at odds with their business objectives and thereby result in a conflict of interests. As was pointed out in an article discussing Google’s liability for the removal of defamatory content: ‘in order to pursue its profit (emphasis added), Google did not adopt precautionary measures that could have prevented the upload of illegal materials […] Google is profiting from people uploading materials on the internet’ (Sarter et al., p. 372). Taking into account the intermediaries’ data-driven business model, placing them at the frontline of law enforcement may be dangerous not only from a legal point of view but also for democracy in general.


Whereas the privatised enforcement phenomenon has already received considerable academic attention, this paper specifically focuses on the risks stemming from the Code in the field of illegal hate speech and, in particular, terrorism propaganda. Through identifying such risks, and by taking into account subsequently adopted EU instruments, recommendations are made on how to better guarantee respect for fundamental human rights in the online environment. These findings are especially relevant as the EU Commission issued, on 12 September 2018, a proposal for a Regulation on preventing the dissemination of terrorist content online. Besides the proposal’s general requirement that hosting service providers remove or disable access to terrorist content within one hour of receipt of a removal order, it also encourages the use of ‘referrals’, whose content should be assessed against the companies’ own terms and conditions. In that respect, the proposal makes no reference to the law.


In order to draw a conclusion and make recommendations, the content of the Code and its relationship with privatised enforcement is first discussed. This section also delineates to what degree terrorism propaganda falls within the scope of the Code. Doing so is necessary, given that the Code focuses merely on the removal of ‘illegal hate speech’ whereas the countering of terrorism propaganda formed one of the main incentives for its adoption. This was made clear by EU Commissioner Vera Jourová, who declared, when announcing the Code, that recent terror attacks had strengthened the need for it and that ‘social media is unfortunately one of the tools that terrorist groups use to radicalise young people’ (European Commission, 2016). In other words, the section investigates whether and to what extent terrorism propaganda can be countered through hate speech tools. In the second section, different reasons behind privatised enforcement in the field of terrorism propaganda are presented. This is followed by a discussion of the dangers of such practice from a free speech perspective. In the subsequent section, recommendations to mitigate the identified risks are proposed, taking into account subsequently adopted EU instruments building upon the Code, namely the Communication and Recommendation on tackling illegal content online. The final section presents important developments that have taken place since the adoption of the Code.


Privatised Enforcement Through the EU Code of Conduct on Countering Illegal Hate Speech Online

The Code is a self-regulatory initiative under which Twitter, Microsoft, YouTube and Facebook made a commitment to put in place a notice-and-takedown system for the countering of illegal hate speech, the ambit of which is laid down in Framework Decision 2008/913/JHA. This nonbinding instrument encourages companies to assess the legality of a post within 24 hours of being notified and to remove or block access to it in case of unlawfulness. Importantly, it explicitly stipulates that notified posts have to be reviewed primarily against the company’s rules and community guidelines and only ‘where necessary’ (emphasis added) against national laws transposing the Framework Decision. By these means, specifically by encouraging the companies to ‘take the lead’ and take the initiative in tackling illegal hate speech online, the Code stimulates the occurrence of privatised enforcement.


This phenomenon has been defined as a practice in which private companies undertake ‘non-law based “voluntary” enforcement measures’ (Council of Europe, 2014, p. 86). Legal scholars define it as ‘instances where private parties (voluntarily) undertake law-enforcement measures’ (Angelopoulos et al., 2015, p. 6). These two definitions show that privatised enforcement has three key components: enforcement of the law; by a private party; undertaken voluntarily (in the sense that the enforcement measures flow from self-regulatory initiatives and are thus ‘non-law based’). This is sometimes also referred to as ‘intermediarization’ (Farrand, 2013, p. 405) or ‘delegated’ enforcement, in the sense that the regulator’s role is delegated to companies and private sector actors (ADF International, 2016, p. 1). The practice has already been encouraged in other fields, such as copyright law (EDRi, 2014, pp. 2-14) or the countering of ‘fake news’ on social media (OSCE, FOM.GAL/3/17, 2017, section 4(a)).


Whereas terrorism propaganda formed one of the main reasons for adopting the Code, such speech is not explicitly mentioned in it. The companies are merely required to counter ‘illegal hate speech’. In the Commission’s Communication on ‘tackling illegal content online’ (COM (2017) 555 final), a clear distinction is made between ‘incitement to terrorism’ and ‘xenophobic and racist speech that publicly incites hatred and violence’ (p. 2). The latter refers to the type of hate speech that is criminalised under Framework Decision 2008/913/JHA and which serves as the legal basis for content removal under the Code. Concerning incitement to terrorism, the Communication refers to Article 5 of the Terrorism Directive (EU Directive 2017/541), which covers the ‘public provocation to commit a terrorist offence’. Bearing this in mind, how can the Code contribute to the countering of terrorism propaganda?


An important distinction to be drawn between ‘incitement to terrorism’ and ‘illegal hate speech’ is that the former only covers incitement to violence (see Article 3(1), points (a) to (i) of the Terrorism Directive) while the latter also extends to incitement to hatred. The relation between the two was made clear by Vera Jourová, who stated, in the context of terrorism propaganda, that ‘there is growing evidence that online incitement to hatred leads to violence offline’ (European Commission, 2015). In this respect, it is important to highlight that the United Nations General Assembly (2013) has determined that ‘the likelihood for harm to occur’ is a factor that should be taken into account when assessing whether incitement to hatred is present (para. 29). Although ‘incitement’ is by definition an inchoate crime, there is thus an implicit assumption that the speech has a reasonable probability of inciting the intended actions and thereby causing harm. In Sürek v. Turkey, this implicit relation between incitement to hatred, on the one hand, and actions, on the other, was made clear by the European Court of Human Rights (hereinafter, ECtHR), which noted that the speech was ‘capable of inciting to further violence by instilling a deep-seated and irrational hatred’ (§62).


In the context of terrorism, the Commission claimed, in June 2017, that ‘countering illegal hate speech online’ serves to counter radicalisation (COM (2017) 354 final, p. 3). The link between radicalisation through hate speech and terrorist acts was also made explicit by Julian King, Commissioner for the Security Union, who declared that ‘there is a direct link between recent attacks in Europe and the online material used by terrorist groups like Da’esh to radicalise the vulnerable and to sow fear and division in our communities’ (European Commission, 2017). This overlap between incitement to hatred and incitement to terrorism may be explained by the fact that terrorism relies on extremist ideologies. These were identified by Europol (2013) to include religious, ethno-nationalist and separatist ideologies as well as left-wing and anarchistic ones (pp. 16-30).


However, it is relevant to highlight the Leroy v. France case, which illustrates how the Code, and thereby illegal hate speech, would fall short in countering all types of terrorism propaganda. In this case, a cartoonist was accused of glorification of terrorism after having published, in the immediate aftermath of the 9/11 terrorist attacks, a drawing depicting the attack on the American Twin Towers. The drawing was interpreted by the Court (§42) as a call for violence and a glorification of terrorism, rather than as a reflection of the cartoonist’s anti-American ideologies. This type of speech, in which the underlying extremist ideologies are implicit within the speech – and therefore ‘hidden’ – will not easily be caught under the Code. Indeed, for ‘illegal hate speech’ to be present, some kind of discrimination must be expressed (Article 1(a) Framework Decision 2008/913/JHA). Such a discriminatory element is, however, not required for ‘incitement to terrorism’ as defined under the Terrorism Directive.


In light of the above, it can be inferred that incitement to terrorism and illegal hate speech complement each other in the fight against terrorism propaganda online. However, for the removal of less obvious terrorism propaganda, where no discriminatory element or incitement to violence is present, new instruments will have to see the light of day. The recently proposed Regulation (COM (2018) 640 final), which adopts a very broad definition of ‘terrorist content’ extending beyond ‘incitement to terrorism’, may be one of these. Having regard to the complexity of these legal definitions, which carry a risk of misinterpretation by non-legal professionals, it is important to find out what the impetus is for involving internet intermediaries in the countering of this type of speech.


Keep reading and access the full article via the SSRN link in the citation below.


Suggested Citation:

Coche, E. (2018). Privatised enforcement and the right to freedom of expression in a world confronted with terrorism propaganda online. Internet Policy Review, 7(4). Amsterdam Law School Research Paper No. 2018-33; Institute for Information Law Research Paper No. 2018-05. Available at SSRN: https://ssrn.com/abstract=3296217

