Measures to Further Improve the Effectiveness of the Fight Against Illegal Content Online

By Daphne Keller, Stanford Law School Center for Internet and Society. Full title: Inception Impact Assessment: Measures to Further Improve the Effectiveness of the Fight Against Illegal Content Online


Introduction

In its Recommendation on measures to effectively tackle illegal content online, the Commission proposes that Internet platforms should deploy automated content detection technologies to identify terrorist content and block or remove it. Because filters or over-zealous removal efforts may suppress lawful information and expression, the Recommendation proposes human review of algorithmically identified content, and opportunities for affected individuals to challenge (“counter-notice”) removal decisions. Such corrective measures may be suspended, however, “where the illegal character of the content has already been established or where the type of content is such that contextualisation is not essential,” or where content has been identified by law enforcement authorities. The Recommendation also states that, where content appears to evidence “serious criminal offences involving a threat to the life or safety of persons,” platforms are to report it to law enforcement.


This Comment addresses issues unique to potentially terrorist content targeted by Internet platforms’ Countering Violent Extremism (CVE) efforts.2 It focuses in particular on Islamist extremism, though some of the analysis may be generalized to other contexts.


The Comment begins with the recognition of the grave threats posed by terrorist activity, and the acknowledged need to combat those threats, including through regulation of online content. Placing certain responsibilities on online platforms as part of this effort is appropriate. However, experience with existing platform liability regimes tells us that such legal responsibilities must be very carefully calibrated. Poorly defined and structured obligations predictably incentivize platforms to “throw out the baby with the bathwater” – silencing a substantial margin of lawful expression beyond the genuinely unlawful content. As the Comment will explain, the resulting individual and societal harms go well beyond information and expression rights. They include pervasive discriminatory impact on Internet users based on their ethnicity, language, or religion – and they may well include real-world harms to safety and security in the face of terrorist threats.


In Section I, the Comment will review unique attributes of potentially terrorist content, as they affect the Commission’s recommended courses of action. These include the particularly serious dangers associated with terrorist content; the complex relationship between terrorist content and lawful, important public discourse; and the role of context in distinguishing the two. It will also discuss the likely effectiveness of both filters and measures intended to correct for filtering errors, including counter-notice and human review.


The second Section will consider discriminatory impact. Errors in platforms’ CVE content removal and police reporting will foreseeably, systematically, and unfairly burden a particular group of Internet users: those speaking Arabic, discussing Middle Eastern politics, or talking about Islam. State-mandated monitoring will, in this way, exacerbate existing inequities in notice and takedown operations. Stories of discriminatory removal impact are already all too common. In 2017, over 70 social justice organizations wrote to Facebook identifying a pattern of disparate enforcement, saying that the platform applies its rules unfairly to remove more posts from minority speakers.3 This pattern will likely grow worse in the face of pressures such as those proposed in the Recommendation.


The third Section will focus on security. Improved public safety is the ultimate goal of CVE measures. It is the metric by which their success should be measured, both as a general policy matter and in balancing the interests of Internet users whose fundamental rights are curtailed. A sober assessment of the Recommendation’s likely security benefits and costs is therefore imperative. This Comment cannot undertake to map out the entire security picture, which the Commission will presumably develop in consultation with experts in that field. It can, however, identify specific security costs that foreseeably arise from aggressive platform CVE enforcement. These include driving extremists into echo chambers in darker corners of the Internet; chilling important public conversations; and silencing moderate voices. Over-zealous platform removals and law enforcement reports can also build mistrust and anger among entire communities, adding fuel to existing frustrations with governments that promote such efforts, or with platforms that appear to act as state proxies. These security considerations should inform discussions of both platform monitoring and allocation of state policing resources.


Finally, Section IV will enumerate fundamental rights concerns. It will not closely analyze particular legal claims, but will instead list rights and foreseeable harms. In addition to the obvious concerns about information and expression rights, the Recommendation raises important concerns relating to equality and non-discrimination, data protection and privacy, and fair legal process. EU lawmakers should examine all affected rights carefully, and weigh them against the demonstrated security benefits of CVE campaigns, in determining recommendations to platforms and Member State governments.


I. Attributes of Potentially Terrorist Content and Review Mechanisms Proposed in the Recommendation

A. Attributes of Potentially Terrorist Content

The first important attribute of potentially terrorist content is the degree of harm associated with it. This attribute tends to support aggressive state enforcement measures. Terrorist attacks pose extreme danger to individual safety and public order. The state’s interest in preventing attacks is accordingly of the highest order. Because of the gravity of this threat, the filtering measures proposed in the Recommendation may be more likely to be necessary and proportionate, despite the burden they place on fundamental rights, than the same measures would be if used to target other kinds of unlawful content.


The second key attribute of potentially terrorist content, and one that weighs against parts of the Recommendation, is its link to discourse on topics of public importance. Both the causes and consequences of terrorism – including disputes over religion, immigration, regional self-determination, and more – are matters of considerable newsworthiness and legitimate public discussion. This means that true terrorist content may be difficult to distinguish from controversial or confusing, but lawful and important, expression. Platform or law enforcement errors can easily lead to suppression of important voices and public participation.




The third key attribute of potentially terrorist content is its context-dependency. This, too, weighs against depending on automation to suppress content, and in favor of robust error-correction processes. In practice, context will often be essential in determining whether a particular online communication is legal – even when a communication duplicates material previously identified as unlawful in another context. Images, video, or text concerning politically motivated violence can be illegal in one situation but important and legal in another. A standout example comes from videos posted by human rights activists to document war crimes in Syria, honor the victims, and enable future prosecution of perpetrators. YouTube has all too often taken these down, presumably because identical footage was used elsewhere by extremists.4 Other important online information that may incorporate such content includes citizens’ and civil society organizations’ responses to recruitment or propaganda materials; educators’ and anti-radicalization experts’ critiques of those materials; and academic researchers’ and news reporters’ analysis. This context-dependency is a key point of difference between terrorist content and child sexual abuse material (CSAM). Because the latter is illegal in every context, reliance on blunt instruments like filters poses markedly less risk of systematic error. In the terrorism context, by contrast, the risk of error is high.


B. Review Mechanisms Proposed in the Recommendation

The Recommendation’s overall mechanism – automated filtering and police reporting for terrorist content, paired with human review and counter-notice in some but not all cases – is poorly calibrated to protect against removal of lawful and important online information. As will be discussed in Sections II-IV, this poses significant risks for social equality, safety and security, and for fundamental rights.




© 2020 by Talking About Terrorism.
