
Internet Platforms: Observations on Speech, Danger, and Money

Radicalization and online content: What can we do?

By Daphne Keller, Director of Intermediary Liability at the Stanford Center for Internet and Society.


I. Introduction

Public demands for internet platforms to intervene more aggressively in online content are steadily mounting. Calls for companies like YouTube and Facebook to fight problems ranging from “fake news” to virulent misogyny to online radicalization seem to make daily headlines. Some of the most emphatic and politically ascendant messages concern countering violent extremism (CVE).(1) As British prime minister Theresa May put it, “Industry needs to go further and faster” in removing prohibited content,(2) including by developing automated filters to detect and suppress it.


The public push for more content removal coincides with growing suspicion that platforms are, in fact, taking down too much. Speakers across the political spectrum charge that platforms silence their speech for the wrong reasons. Over seventy social justice organizations wrote to Facebook in 2017, saying that the platform enforces its rules unfairly and disproportionately removes speech from minority speakers.(3) Conservative video educator Dennis Prager says that YouTube suppressed his videos in order to “restrict nonleft political thought,”(4) and pro-Trump video bloggers Diamond and Silk told the House Judiciary Committee that Facebook had censored them.(5) Prager is suing YouTube and demanding reinstatement. As he points out, speech that disappears from the most important platforms loses much of its power because many potential listeners simply don’t find it. In extreme cases—as with Cloudflare’s banishment of the Daily Stormer(6)—disfavored voices may disappear from the internet completely.


One thing these opposing public pressures tell us is that platforms really are making both kinds of mistakes. By almost anyone’s standards, they are sometimes removing too much speech, and sometimes too little. Well-publicized hiring sprees(7) on content moderation teams might help with this problem. Increased public transparency(8) into those teams’ rules and processes almost certainly will as well.


The other thing the conflicting public sentiments about platforms and speech illuminate, though, is a set of fundamental problems with delegating complex decisions about free expression and the law to private companies. As a society, we are far from consensus about legal or social speech rules. There are still enough novel and disputed questions surrounding even long-standing legal doctrines, like copyright and defamation, to keep law firms in business. If democratic processes and court rulings leave us with such unclear guidance, we cannot reasonably expect private platforms to do much better. However they interpret the law, and whatever other ethical rules they set, the outcome will be wrong by many people’s standards.


The study of intermediary liability tells us more about what to expect when we delegate interpretation and enforcement of speech laws to private companies. Intermediary liability laws establish platforms’ legal responsibilities for content posted by users. Twenty years of experience with these laws in the United States and elsewhere tells us that when platforms face legal risk for user speech, they routinely err on the side of caution and take it down. This pattern of over-removal becomes more consequential as private platforms increasingly constitute the “public square” for important speech. Intermediary liability law also tells us something about the kinds of rules that can help avoid over-removal.


In this essay, I will describe the lessons learned from existing intermediary liability laws and the foreseeable downsides of requiring platforms to go “further and faster” in policing internet users’ speech. Policy makers must decide whether these costs are justified by the benefits of a more regulated and safer internet.


The first cost of strict platform removal obligations is to internet users’ free expression rights. We should expect over-removal to be increasingly common under laws that ratchet up platforms’ incentives to err on the side of taking things down. Germany’s new NetzDG law, for example, threatens platforms with fines of up to €50 million for failure to remove “obviously” unlawful content within twenty-four hours of receiving notice.(9) This has already led to embarrassing mistakes. Twitter suspended a German satirical magazine for mocking a politician,(10) and Facebook took down a photo of a bikini top artfully draped over a double speed bump sign.(11) We cannot know what other unnecessary deletions have passed unnoticed.


Such a burden on individual speech raises constitutional questions. Does the First Amendment limit laws that incentivize private platforms to silence legal speech? If so, what obligations can the government impose on platforms before encountering a constitutional barrier? In this essay’s first analytical section, I discuss precedent on this question. Courts in the United States have spent little time considering it because our speech-protective intermediary liability statutes largely render constitutional analysis unnecessary. But Supreme Court cases about “analog intermediaries” like bookstores provide important guidance. In addition, courts outside the United States have wrestled with these questions in the internet context, often drawing on US precedent. Based on the US cases and international experience, I will suggest four considerations that would make any new US intermediary liability laws more—or less—likely to survive constitutional review.



Biography

Daphne Keller is the Director of Intermediary Liability at the Stanford Center for Internet and Society.  She was previously Associate General Counsel for Intermediary Liability and Free Speech issues at Google.  In that role she focused primarily on legal and policy issues outside the U.S., including the E.U.’s evolving “Right to Be Forgotten.” Her earlier roles at Google included leading the core legal teams for Web Search, Copyright, and Open Source Software. Daphne has taught Internet law as a Lecturer at U.C. Berkeley’s School of Law, and has also taught courses at Berkeley’s School of Information and at Duke Law School.  She has done extensive public speaking in her field, including testifying before the UK’s Leveson Inquiry. Daphne practiced in the Litigation group at Munger, Tolles & Olson.  She is a graduate of Yale Law School and Brown University, and mother to some awesome kids in San Francisco.
