Intermediary Liability in the United States

Photo: Axel Taferner, CC BY-NC-SA 2.0

Authors: Adam Holland, Chris Bavitz, Jeff Hermes, Andy Sellars, Ryan Budish, Michael Lambert, and Nick Decoster
Berkman Center for Internet & Society at Harvard University

Abstract: This paper describes and assesses the intermediary liability landscape in the United States. It provides an overview of major US legal regimes that protect online intermediaries in cases where third parties seek to hold them liable for the conduct of their users, addressing both the Digital Millennium Copyright Act safe harbor enshrined in Section 512 of the United States Copyright Act and Section 230(c) of the Communications Decency Act. It then offers a series of case studies describing ways in which US-based companies and other organizations have structured their operations in compliance with and in response to US law. The paper describes Craigslist’s response to efforts to hold it responsible for sex trafficking that occurred on the site; the Content ID copyright and VeRO trademark programs implemented by YouTube and eBay, respectively; and the reactions of intermediaries to allegations of wrongdoing by Wikileaks. It provides an assessment of the importance of transparency reporting for online intermediaries as they seek to address tensions between requirements of legal compliance and the need to secure users’ trust. And, it concludes with a detailed and thematically-organized literature review that summarizes the state of scholarship in this space.

Table of Contents

I. Introduction
II. Legal Landscape Primer
A. General Content Liability
1. Traditional Defamation Liability for Intermediaries
2. Traditional Privacy Liability for Intermediaries
3. Section 230 of the Communications Decency Act
B. Copyright
1. A General Overview of Secondary Liability for Copyright Infringement
2. The DMCA’s Safe Harbor
C. Other Intellectual Property Laws
1. Trademark
2. Misappropriation and Right of Publicity Laws
3. The Espionage Act
4. Surveillance Law
III. Case Studies
A. Sex Trafficking in Online Classified Advertising – Craigslist and Backpage.com
1. Introduction
2. “Erotic” and “Adult” Advertisements on Craigslist – Negotiation Leads to Concession
3. “Adult Content” on Backpage.com – State Legislation and Defiance
4. Attention Turns to Section 230 Itself – The Current Legislative Debate
5. Conclusion
B. Private Ordering to Respond to Copyright Concerns: YouTube’s Content ID Program
1. YouTube Is Created
2. What Is Content ID?
3. What Can An Examination Of YouTube And Content ID Tell Us About Online Intermediaries And Private Ordering?
4. What Has Content ID Made Possible?
5. Negative Outcomes
6. Conclusion
C. Private Ordering to Respond to Trademark Concerns – eBay’s VeRO Program
1. Tiffany v. eBay
2. Moving Forward
3. The VeRO Program
4. History of VeRO
5. Outcomes
D. The State as Soft Power – The Intermediaries Around Wikileaks
1. Introduction
2. Background
3. Legal Liability
4. Online Intermediaries React
5. Analysis
E. Online Intermediaries and Transparency Reporting
1. Introduction
2. Legal Background
3. Transparency Reporting: Resolving the Tension Between Compliance and Trust?
4. National Security Data is Complicated
5. Transparency Reports Describe a Passive Event
6. Companies Are Competing With Transparency Reports
7. Conclusion
F. Appendix A: Literature Review
G. Appendix B: YouTube and Content ID Timeline
H. Appendix C: Business Strategies Mind-Map

I. Introduction

The United States offers a unique and interesting case, from both a legal and policy perspective, for study of the governance landscape for online intermediaries. This is true for at least two major reasons.

First, the US is the birthplace of, and home to, many major global Internet platforms that host content and make this content available to users. It is thus unsurprising that US law incorporates significant protections for such online intermediaries in cases where third parties seek to hold them liable for the conduct of their users. At the same time, the US is also home to a significant and robust content industry that has played a major role in shaping its intellectual property – particularly copyright – regimes. The tension between content owners (who place a premium on preventing infringement of the content that drives their traditional business models) and intermediaries (which require immunity from third-party claims in order to avoid crippling financial liability) raises fundamental questions about the role of government and the prioritization of business interests.

Second, US law provides robust protections for speech, rooted in the First Amendment to the United States Constitution. Government-sanctioned restraints on speech – particularly prior restraints imposed without significant consideration to due process – are very strongly disfavored under US law. A court order requiring that a piece of content – e.g., a blog post or image or video – be removed from an online platform implicates the free speech rights of the person who created that content. State and federal legislatures crafting laws (and courts applying and interpreting them) must consider the rights of that speaker, along with the rights of the subject of the speech in question and the role of the intermediary, in crafting appropriate remedies.

This paper offers a short legal primer describing the two major provisions of federal law that govern liability and immunity of online intermediaries in the United States – the safe harbor of the “Digital Millennium Copyright Act” or “DMCA,” embodied in Section 512 of the United States Copyright Act, and Section 230(c) of the “Communications Decency Act” or “CDA” – along with the common law provisions that fill gaps not addressed by these two statutory regimes. After mapping the landscape for intermediary liability in the US, the paper turns to a series of case studies that highlight how a range of actors in various sectors of the Internet ecosystem have grappled with intermediary liability concerns in addressing their business and related needs. These case studies demonstrate both the importance and the limitations of existing intermediary liability regimes and the creative ways in which companies and others have worked within (and around) existing law to allocate liability in ways that work for them. Finally, the paper turns to a discussion of the role of transparency for intermediaries attempting to balance the competing interests described above and the need to maintain positive relationships with both the public and their user base.

II. Legal Landscape Primer

A. General Content Liability

1. Traditional Defamation Liability for Intermediaries

Publishing a false factual statement about a person that harms their reputation can lead to a civil (and, extremely rarely, criminal[1]) claim of defamation.[2] Defamation has a complicated structure; the tort evolved from the common law of the individual states, with a series of United States Supreme Court cases adding some specific, nationwide carve-outs and requirements deemed to be necessary in light of the First Amendment.[3] The law still varies considerably across each state, but to make out a claim of defamation today a plaintiff generally needs to show, among other things, (1) that a defendant published a statement; (2) that the statement was a false statement of fact (as opposed to true facts or an opinion); and (3) that the defendant acted with a certain level of fault (depending on the person involved, either negligence or “actual malice,” a term of art roughly meaning the defendant knew the statement was false at the time it was published).[4]

Claims against content intermediaries must satisfy these elements as well, and any party against whom all of the elements of a defamation claim can be established is potentially liable.[5] Prior to the advent of the Internet, courts limited the universe of possible defendants by holding an intermediary liable only if they “know[] or ha[ve] reason to know” of the statement’s defamatory character.[6]

This standard leads to different results for different types of intermediaries in the offline world. Newspapers and magazines tend to be held responsible for their content, even when the content clearly owes its origin to a third party – e.g., a letter to the editor.[7] The opposite result is usually reached for contract printing shops or “vanity presses.”[8] Those who distribute or host physical copies of defamatory publications are usually protected on similar grounds, and scholars openly question whether a library or bookseller could ever be held liable for distributing defamatory books, even if they had reason to know of the book’s character.[9] Telegraph and telephone companies have generally been protected against claims for transmitting defamatory statements, though often with a stated exception for when the company knew of the message’s defamatory nature.[10]

Radio and television stations are generally held responsible for pre-recorded content, but live broadcasting presents a curious analytical challenge, as the station may not have the time to harbor any knowledge of a statement’s defamatory and false nature between when it is spoken and when it is aired.[11] At least one court has held that open solicitation of content without a broadcast delay system could lead to liability under a recklessness standard,[12] but most other courts take the opposite approach.[13]

Even when an intermediary publisher or conduit is held responsible for the content it is disseminating, other doctrines in defamation law provide protection to avoid inappropriate results. States adopt variations on a “fair report privilege,” which allows for the fair and accurate republication of statements made in official public documents or proceedings.[14] Many states also provide a “wire service defense,” which allows for the republication of defamatory content from a reputable news agency, provided the re-publisher did not know or have reason to know the information was defamatory and did not substantially alter the content.[15] Some states have also adopted a “neutral reportage” defense, to protect the republication of statements that are worthy of public discussion because they were made, even if the re-publisher believes them to be false – e.g., a wild allegation made by one politician against another during an election.[16] Such defenses, in particular cases, could extend to intermediaries hosting or republishing the content of others.

In the early days of the Internet’s widespread adoption, commentators and courts sought to analogize bulletin boards and other online content platforms to traditional re-publishers and distributors.[17] After one court assigned liability to the Internet service provider Prodigy Services Co. for content on one of its bulletin boards, based on the fact that Prodigy exercised general editorial control over the platform, Congress opted to define a different standard for online intermediary liability.[18]

[1] See David Pritchard, Rethinking Criminal Libel: An Empirical Study, 14 Comm. L. & Policy 303, 313 (2009) (finding 2-9 prosecutions a year in the state of Wisconsin, but noting this to be a significantly higher rate than commonly thought). The Media Law Resource Center reported no criminal defamation cases in 2013. See New Developments 2013, Media L. Resource Ctr. Bulletin 90 (December 2013).
[2] See generally
[3] Robert C. Post, The Social Foundation of Defamation Law: Reputation and the Constitution, 74 Cal. L. Rev. 691 (1986).
[4] Parties must also show that the statement was about the plaintiff and that the statement harmed the plaintiff’s reputation. Most states also require a plaintiff to show that they suffered “actual damages” based on the statement, or that the statement falls into one of several categories where damages are presumed. See Defamation, Digital Media Law Project (last updated Aug. 12, 2008). When discussing public officials and figures, the First Amendment case law requires a plaintiff to show that the defendant acted with “actual malice,” a term of art meaning that the defendant knew the statement was false when they published it, or acted with reckless disregard of the truth. For more on private and public figures, see Proving Fault: Actual Malice and Negligence, Digital Media Law Project (last updated Aug. 7, 2008). There are other overlapping claims that may be asserted in conjunction with defamation, but they are usually confined to the same general requirements as to falsity and fault. See Other Falsity-Based Legal Claims, Digital Media Law Project (last updated Aug. 15, 2008).
[5] Rodney Smolla, Law of Defamation § 4:87.
[6] Restatement (Second) Torts § 581. This scienter requirement has now spread to all claims of defamation through Supreme Court precedent, but nevertheless serves as a useful heuristic for separating parties traditionally liable for defamation from those who were not. See Smolla, supra note [[x]], at § 4:92.
[7] Sack on Defamation § 7.1; Marc A. Franklin, Libel and Letters to the Editor: Toward an Open Forum, 57 U. Colo. L. Rev. 651 (1986).
[8] Sack on Defamation § 7.3.4.
[9] Sack on Defamation § 7.3.4 (“Suppose a person were to inform public libraries and news vendors that a book, newspaper, or newsmagazine they are distributing contains false and defamatory statements . . . . May the libraries or vendors then be held liable for continuing to sell or circulate the offending material? That is possible, although the potential for use of that tactic to turn financially vulnerable distributors into censors . . . argues strongly for a complete distributors’ immunity from suit.”); Prosser and Keeton on Torts § 113 (1984) (“It would be rather ridiculous, under most circumstances, to expect a bookseller or a library to withhold distribution of a good book because of a belief that a derogatory statement contained in the book was both false and defamatory . . . .”); Loftus E. Becker, Jr., The Liability of Computer Bulletin Board Operators for Defamation Posted by Others, 22 Conn. L. Rev. 203, 227 (1989) (“[N]o one seems to have sued a library for defamation in this century.”). For an example of a case that held a bookseller liable based on this theory, see Janklow v. Viking Press, 378 N.W.2d 875 (S.D. 1988); Restatement (Second) Torts § 581 cmt. e (acknowledging possible liability for libraries and bookstores in exceptional cases).
[10] See Liability of Telegraph or Telephone Company for Transmitting or Permitting Transmission of Libelous or Slanderous Messages, 91 A.L.R.3d 1015 (1979) (citing numerous cases where courts applied the Restatement’s knowledge requirement or found categorical immunity for telegraph and telephone companies). Courts acknowledge the policy reasons for giving telegraph companies leniency in deciding whether they should have known that a dispatch was defamatory. Gray v. W. Union Tel. Co., 13 S.E. 562 (Ga. 1891); but see Paton v. Great N.W. Tel. Co., 170 N.W. 511 (Minn. 1919) (finding potential liability for a telegraph company for transmission).
[11] See Sack on Defamation § 7.3.5.A.2.
[12] Snowden v. Pearl River Broad. Corp., 251 So. 2d 405 (La. Ct. App. 1971).
[13] Sack on Defamation § 7.3.5.A.2 n. 66 (gathering cases).
[14] See Fair Report Privilege, Digital Media Law Project (last updated July 22, 2008).
[15] Wire Service Defense, Digital Media Law Project (last updated July 22, 2008).
[16] See Neutral Report Privilege, Digital Media Law Project (last updated July 22, 2008); Sack on Defamation § 7.3.5.D.
[17] See, e.g., Becker, supra note [[x]].
[18] See David Ardia, Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity Under Section 230 of the Communications Decency Act, 43 Loy. L.A. L. Rev. 373, 407-11 (2010) (chronicling the history of the lead-up to Section 230, including Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995)).

2. Traditional Privacy Liability for Intermediaries

Privacy laws in the United States consist of a patchwork of common law torts and specific statutory enactments, overlaid with nationwide exceptions made in light of the First Amendment.[19] Intermediaries primarily concern themselves with privacy law to the extent it impacts their own business operations and practices – for example, how they represent their data handling practices to the public, and how they handle their own data security.

A second form of privacy liability for intermediaries stems instead from actions taken by others, and turns on whether the intermediary can ever be held liable for contributing (willingly or not) to those actions. The laws around such invasions of privacy can be generally clustered into two categories: those that address the unlawful gathering of information (e.g., intruding into one’s private spaces or unlawfully recording conversations), and those that address publishing private information (e.g., the “public disclosure of private facts” tort or publishing specific information proscribed by statute[20]). The First Amendment plays a role in this space by both limiting the universe of defendants for intrusion claims[21] and by substantially limiting the types of claims that can be brought regarding the disclosure of private information.[22]

With respect to information gathering, many states recognize a tort called “intrusion upon seclusion,” which punishes one who intrudes into the solitude or seclusion of another in a way that is highly offensive to a reasonable person.[23] Because the defendant’s conduct usually must be intentional for liability to attach, it is rare to see liability extend to disinterested intermediaries.[24] At least one court has found secondary liability could attach to a newspaper for running a classified ad that facilitated intrusion of another, though in that case the plaintiff pleaded that the newspaper published the ad with the intent to invade the plaintiff’s privacy.[25]

Some intrusion laws attempt to indirectly target intrusion by punishing those who later disclose or receive the information that was unlawfully acquired. But First Amendment doctrine prevents the application of such laws to those who did not actively participate in the unlawful acquisition, at least when the information is true and a matter of public concern.[26] This would seem to preclude most information intermediaries from liability for transmitting content that was unlawfully acquired by others.

Laws concerning the disclosure of private information directly can vary considerably, but most states have some form of the tort called “public disclosure of private facts,” which concerns the intentional disclosure to the public[27] of non-newsworthy information about an individual that is highly offensive to a reasonable person.[28]

Unlike defamation or intrusion, the specific mental state required of defendants varies considerably between states, so the mens rea does not generally limit liability for disinterested intermediaries in the same way as other torts.[29] That said, the few cases that consider a distributor’s liability tend to import the same requirement from defamation cases that the distributor know the information to be tortious in order to be held liable.[30] Also, information obtained from public sources is considered protected under the First Amendment,[31] and republishing content originally published widely by others does not lead to liability in most cases, as the fact that the content was published previously means that the information is no longer considered private.[32]

The traditional standards for intermediary liability in privacy are applied in a radically different manner online, in large part due to Section 230 of the Communications Decency Act, which is discussed in the following section.

[19] Daniel J. Solove & Paul M. Schwartz, Information Privacy Law 77 (3d ed. 2009).
[20] For an example of this, see 18 U.S.C. § 2710 (governing when and how a customer’s video rental history may be disclosed).
[21] See notes x–y, infra, and accompanying text.
[22] While the states that recognize a public disclosure tort include a definitional balance that precludes claims against newsworthy information, the Supreme Court has yet to directly consider a challenge to public disclosure torts in other cases. See Geoffrey R. Stone, Privacy, the First Amendment, and the Internet, in The Offensive Internet (Saul Levmore & Martha C. Nussbaum eds. 2010). The balancing between free speech and privacy has had a complicated history. See Anthony Lewis, Freedom for the Thought That We Hate 59-80 (2009).
[23] Restatement (Second) Torts § 652B.
[24] See, e.g., Marich v. MGM/UA Telecomm., Inc., 113 Cal. App. 4th 415 (2003) (defining intent for California’s intrusion tort). For examples of cases where parties were liable as aiders or abettors of another’s intrusion, see David A. Elder, Privacy Torts § 2:9.
[25] Vescovo v. New Way Enters., Ltd., 60 Cal. App. 3d 582 (1976).
[26] See, e.g., Bartnicki v. Vopper, 532 U.S. 514, 526 (2001) (holding that the First Amendment prevents a radio broadcaster from being punished for disclosing the contents of an unlawfully intercepted communication); Smith v. Daily Mail Publ'g Co., 443 U.S. 97, 104 (1979); Food Lion, Inc. v. Capital Cities/ABC, Inc., 194 F.3d 505 (4th Cir. 1999) (refusing to escalate damages for breach of duty of loyalty based on subsequent disclosure of information); Doe v. Mills, 536 N.W.2d 824 (Mich. 1995) (knowing receipt of information unlawfully obtained does not lead to intrusion claim for the recipient). Scholars have been mindful to point out that the exact meaning and scope of the “Daily Mail principle” is not entirely clear. Janelle Allen, Assessing the First Amendment as a Defense for Wikileaks and Other Publishers of Previously Undisclosed Government Information, 46 U.S.F. L. Rev. 783, 798 (2012).
[27] This is deliberately made a wider audience than defamation, for which liability attaches when a statement is “published” to a single person. Restatement (Second) Torts § 652D cmt. a.
[28] Restatement (Second) Torts § 652D.
[29] David A. Elder, Privacy Torts § 3:7.
[30] See, e.g., Steinbuch v. Hachette Book Grp., 2009 WL 963588, at *3 (E.D. Ark. April 8, 2009); Lee v. Penthouse Int’l Ltd., 1997 WL 33384309, at *8 (C.D. Cal. March 19, 1997).
[31] See, e.g., The Florida Star v. B.J.F., 491 U.S. 524 (1989).
[32] See, e.g., Ritzmann v. Weekly World News, 614 F. Supp. 1336 (N.D. Tex. 1985); Heath v. Playboy Enters., Inc., 732 F. Supp. 1145 (S.D. Fla. 1990); but see Michaels v. Internet Ent. Grp., Inc., 5 F. Supp. 2d 823 (C.D. Cal. 1998) (disclosure of more than the ways originally revealed in first publication can give rise to claim for republication).

3. Section 230 of the Communications Decency Act

As noted in the preceding sections, liability for offline content distributors or hosts largely turns on whether the host knows or has reason to know that they are hosting tortious content. In the earliest days of the Internet, courts used these standards to assess liability of online intermediaries, but found that the law created a perverse result. Online intermediaries possessed the technical ability to filter or screen content in a way an offline intermediary never could, but under existing standards this meant that the intermediary would assume liability for all the content over which they had supervisory control. In the most famous case on point, this included a service that was trying specifically to curate a family-friendly environment, at a time when the public was greatly concerned about adult content on the Internet.[33] In order “to promote the continued development of the Internet and other interactive computer services and other interactive media [and] to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services,” Congress enacted Section 230 of the Communications Decency Act.[34]

Section 230 prevents online intermediaries from being treated as the publisher of content provided by their users. By the terms of the statute, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[35] An “interactive computer service” under Section 230 is defined as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server . . . ”[36] Online intermediaries of all sorts meet this definition, including Internet service providers, social media websites, blogging platforms, message boards, and search engines.[37] An “information content provider” in turn is defined as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”[38]

Section 230 covers claims of defamation, invasion of privacy, tortious interference, civil liability for criminal law violations, and general negligence claims based on third-party content,[39] but it expressly excludes federal criminal law, intellectual property law, and the federal Electronic Communications Privacy Act or any state analogues.[40] Its terms also specify that the coverage is for “another’s” content, thus not protecting statements published by the interactive computer service directly.[41] Thus, to apply Section 230’s protection, a defendant must show (1) that it is a provider or user of an interactive computer service; (2) that it is being treated as the publisher of content (though not with respect to federal criminal, intellectual property, or communications privacy law); and (3) that the content is provided by another information content provider.

The law was designed in part to foster curation of online content, and courts have found that a wide array of actions taken by “interactive computer services” with respect to third-party content are covered by Section 230. These include basic editorial functions, such as deciding whether to publish, remove, or edit content;[42] soliciting users to submit legal content;[43] paying a third party to create or submit content;[44] allowing users to respond to forms or drop-downs to submit content;[45] and keeping content online even after being notified the material is unlawful.[46] This applies to both claims rooted in defamation and those rooted in invasion of privacy.[47]

On the other hand, if the intermediary creates actionable content itself, it will be liable for that content.[48] Courts are also unlikely to find that Section 230 applies when an interactive computer service edits the content of a third party in a way that materially alters its meaning and makes it actionable;[49] requires users to submit unlawful content;[50] or promises to remove material and then fails to do so.[51] When an intermediary takes these actions, it is deemed to have “developed” the content by “materially contributing to the alleged illegality of the conduct.”[52]

Though stated very simply, the law upsets decades of precedent in the area of content liability and radically alters the burdens on online services for claims based on user content.[53] By limiting any assumed liability for a wide range of content-based claims (and given the other content areas discussed below), Section 230 effectively removes any duty for an interactive computer service to monitor content on its platforms, a tremendous boon for the development of new intermediaries and services.[54] Virtually all liability for content-based torts is pushed from the service to others, often the user. In practical terms, however, this has not proven an unqualified windfall for online services; many claims are still brought against online intermediaries, and the question of immunity is often litigated extensively and at great expense before courts find that claims are invalid.[55]

As noted above, Section 230 does not cover intellectual property laws, and thus different rules apply in those cases. Those regimes are addressed in the sections that follow.

[33] Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995). See also Lawrence Lessig, Code 2.0 249-52 (2006) (discussing the Internet anti-pornography efforts happening around the time of the Communications Decency Act debate).
[34] 47 U.S.C. § 230. The section was part of a greater law that sought to regulate the transmission of offensive content to minors, the majority of which was later struck down by the Supreme Court. See Reno v. ACLU, 521 U.S. 844 (1997).
[35] 47 U.S.C. § 230(c)(1).
[36] § 230(f)(2).
[37] See Ardia, supra note [[x]], at 387-89.
[38] § 230(f)(3).
[39] See Ardia, supra note [[x]], at 452.
[40] § 230(e)(1)–(4). The Electronic Communications Privacy Act governs the voluntary and compelled disclosure of electronic communications by electronic communications services.
[41] See § 230(c)(1).
[42] See Donato v. Moldow, 865 A.2d 711 (N.J. Super. Ct. 2005).
[43] See Corbis Corp. v. Amazon.com, Inc., 351 F. Supp. 2d 1090 (W.D. Wash. 2004); see also Global Royalties, Ltd. v. Xcentric Ventures, LLC, 544 F. Supp. 2d 929, 933 (D. Ariz. 2008) (holding that even though a website “encourages the publication of defamatory content,” the website is not responsible for the “creation or development” of the posts on the site).
[44] See Blumenthal v. Drudge, 992 F. Supp. 44 (D.D.C. 1998).
[45] See Carafano v. Metrosplash.com, Inc., 339 F.3d 1119 (9th Cir. 2003).
[46] See Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997). Promising to remove content and then declining to do so, however, can expose an interactive computer service to liability. See Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009). For more examples of actions likely to be covered under Section 230, see Online Activities Covered by Section 230, Digital Media Law Project (last updated Nov. 10, 2011).
[47] See, e.g., Jones v. Dirty World Entertainment Recordings, LLC, 2014 WL 2694184 (6th Cir. 2014) (defamation claim preempted by Section 230); Doe v. Friendfinder Network, 540 F. Supp. 2d 288, 302–303 (D.N.H. 2008) (intrusion upon seclusion and public disclosure of private facts claims preempted).
[48] See MCW, Inc. v. Badbusinessbureau.com, L.L.C., 2004 WL 833595, No. 3:02-CV-2727-G, at *9 (N.D. Tex. April 19, 2004) (the operator of a website may be liable when it is alleged that “the defendants themselves create, develop, and post original, defamatory information concerning” the plaintiff).
[49] See Online Activities Not Covered by Section 230, Digital Media Law Project (last updated Nov. 10, 2011).
[50] See Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1175 (9th Cir. 2008) (en banc).
[51] See Barnes v. Yahoo!, Inc, 570 F.3d 1096 (9th Cir. 2009).
[52] See Jones v. Dirty World Entertainment Recordings, LLC, 2014 WL 2694184 (6th Cir. 2014).
[53] See Ardia, supra note [[x]], at 411.
[54] See, e.g., Jack M. Balkin, Old-School/New-School Speech Regulation, 127 Harv. L. Rev. 1, 17 (2014) (“Section 230 immunity . . . ha[s] been among the most important protections for free expression in the United States in the digital age. [It] has made possible the development of a wide range of telecommunications systems, search engines, platforms, and cloud services without fear of crippling liability.”).
[55] See Ardia, supra note [[x]], at 493.

B. Copyright

1. A General Overview of Secondary Liability for Copyright Infringement

In U.S. law, copyright liability comes in two main forms: “primary” or “direct” liability, and “secondary” liability.[56] The first, direct liability, attaches to the actual infringer of the copyright(s) in question, whether by copying without authorization or by violating any of the other rights that copyright owners possess under 17 U.S.C. § 106. Although it can become more complex depending on the facts surrounding an alleged infringement, direct liability is generally quite straightforward: either the copyright was infringed or it was not.

The second type, secondary liability, is more nuanced, in large part because nothing in the U.S. copyright statute expressly provides for such liability. Secondary liability in the United States is therefore what is known as “judge-made” law: a set of rules and guidelines, arising out of other areas of liability law,[57] that have accumulated over time on a case-by-case basis and now exist as binding precedent. This makes secondary liability more fact-specific and also potentially more prone to evolve with changes in technology and normative behaviors.[58]

Within this framework, secondary liability takes one of two forms[59]: “vicarious infringement” and “contributory infringement.” Each requires that there first be a direct infringement. The remaining differences are subtle but critical, especially with respect to the implicit incentives for potential secondary infringers: they concern a potential secondary infringer’s “knowledge” of the direct infringement, the degree to which the infringer has the ability to control the direct infringement, and the infringer’s financial benefit, if any. Each of these facets is critical to understanding the competing imperatives that online intermediaries (“OIs”) face, and it is with respect to OIs that the remainder of this section proceeds.

[56] There are mentions in the literature and case law of a concept of “tertiary liability,” i.e., liability for “those who help the helpers.” See, e.g., Mark A. Lemley & R. Anthony Reese, Reducing Digital Copyright Infringement Without Restricting Innovation, 56 Stan. L. Rev. 1345, 1345-54, 1373-1426 (2004); Benjamin H. Glatstein, Tertiary Copyright Liability, 71 U. Chi. L. Rev. 1605 (2004); see also Eric Goldman, Offering P2P File-Sharing Software for Downloading May Be Copyright Inducement–David v. CBS Interactive (discussing how courts may view P2P filesharing as a special case). This theory of liability has typically been dismissed as representing too diffuse a chain of causality and as unsupported by case law.

[57] See, e.g., Metro-Goldwyn-Mayer Studios, Inc. v. Grokster, Ltd., 545 U.S. 913, 930 (2005) (“[T]hese doctrines of secondary liability emerged from common law principles and are well established in the law.”) (citing Justice Blackmun’s dissent in Sony).
[58] “[T]he lines between direct infringement, contributory infringement, and vicarious liability are not clearly drawn.” Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 435 (1984).
[59] Pamela Samuelson has hypothesized that the “active inducement” theory laid out in the MGM Studios, Inc. v. Grokster, Ltd., 545 U.S. 913 (2005), case may amount to a new form of secondary liability. See Pamela Samuelson, Three Reactions to MGM v. Grokster, 13 Mich. Telecomm. Tech. L. Rev. (2006).

i. Contributory Infringement

For an OI to be liable for “contributory infringement,” the OI must have actual or constructive knowledge of the direct infringement[60] and must also make a “material contribution” to it.[61] As can easily be imagined, cases on this point turn on the nature of “knowledge” and on what sort of contribution is “material.” For example, in Perfect 10 v. Visa International,[62] the majority found that the role of credit card companies in processing payment transactions for infringing material was too attenuated from the infringing activity to be considered a “material contribution.”[63] With respect to knowledge, ignorance of the direct infringement does not necessarily immunize an OI against a claim of secondary liability: courts have also introduced the idea of “willful blindness”[64] for situations in which a defendant “should have” known about the direct infringement but deliberately chose not to know about it, or at least chose not to take notice of or act upon facts or circumstances that pointed in the direction of infringement.

Important cases addressing contributory infringement, especially with respect to online intermediaries, are Sony Corp. of America v. Universal City Studios, Inc., Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., and the recently settled Viacom International, Inc. v. YouTube, Inc.[65] Critically for OIs whose business model or technology may involve copyright infringement but may also be used in non-infringing ways, the Sony case gave rise to the “substantial non-infringing uses” test, borrowed from patent law’s “staple article” doctrine, with respect to intermediary technologies that merely make direct infringement possible rather than certain.[66]

The Court in Sony held that where a vendor sells a technology that makes infringement (here, through copying) possible, the vendor cannot be found liable if a substantial non-infringing use for the technology exists,[67] because constructive knowledge of the (potential) direct infringement cannot and should not be imputed to the vendor. The Grokster case, however, expanded on and modified this theory, holding that the mere fact that an OI’s technology was capable of substantial non-infringing uses did not categorically immunize the OI from liability, and that contributory liability may still be found where there is clear evidence of an OI’s intent to induce and facilitate infringement.[68] This has become known as the Grokster “inducement rule.”[69]
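As a purely illustrative aid, and emphatically not a statement of law, the doctrinal sequence described above can be sketched as a rough decision procedure. The function name and boolean factors below are hypothetical simplifications; real cases turn on nuanced, fact-specific findings that no boolean can capture.

```python
def contributory_liability(
    direct_infringement: bool,
    actual_knowledge: bool,
    willful_blindness: bool,
    material_contribution: bool,
    substantial_noninfringing_uses: bool,
    evidence_of_inducement: bool,
) -> bool:
    """Rough, hypothetical sketch of the contributory-infringement analysis."""
    # Secondary liability always requires a direct infringement.
    if not direct_infringement:
        return False
    # Grokster's inducement rule: clear evidence of intent to induce and
    # facilitate infringement supports liability even for dual-use technology.
    if evidence_of_inducement:
        return True
    # Sony's staple-article rule: constructive knowledge is not imputed
    # merely because a technology makes infringement possible, if the
    # technology is capable of substantial non-infringing uses.
    if substantial_noninfringing_uses and not actual_knowledge:
        return False
    # Otherwise: knowledge (actual, or imputed via willful blindness)
    # plus a material contribution to the direct infringement.
    return (actual_knowledge or willful_blindness) and material_contribution
```

On this sketch, a Sony-like vendor of a dual-use technology with no actual knowledge escapes liability, while a Grokster-like inducer does not.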

[60] Compare the DMCA’s “actual knowledge” requirement, 17 U.S.C. § 512(c)(1)(A)(i).
[61] The classic case on this topic is Fonovisa, Inc. v. Cherry Auction, Inc., 76 F.3d 259, 264 (9th Cir. 1996), although it does not involve OIs. The lodestar case for OIs is now Metro-Goldwyn-Mayer Studios, Inc. v. Grokster, Ltd., 545 U.S. 913 (2005), which also adopted the doctrine of “inducement” for copyright liability.
[62] 494 F.3d 788 (9th Cir. 2007).
[63] Copyright: Infringement Issues, Internet Law Treatise (accessed June 18, 2014).
[64] In re Aimster Copyright Litigation, 334 F.3d 643, 650 (7th Cir. 2003) (“Willful blindness is knowledge, in copyright law (where indeed it may be enough that the defendant should have known of the direct infringement) . . . .”)
[65] See: <>
[66] Sony Corp. of America v. Universal City Studios, Inc. 464 U.S. 417, 442 (1984)
[67] The so-called “Sony safe harbor.” See Sony, 464 U.S. at 442 (“the sale of copying equipment, like the sale of other articles of commerce, does not constitute contributory infringement if the product is widely used for legitimate, unobjectionable purposes. Indeed, it need merely be capable of substantial noninfringing uses.”)
[68] Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913, 934-935 (2005) (“Thus, where evidence goes beyond a product's characteristics or the knowledge that it may be put to infringing uses, and shows statements or actions directed to promoting infringement, Sony's staple-article rule will not preclude liability.”); See also Columbia Pictures Industries, Inc. v. Fung, 710 F.3d 1020 (9th Cir. 2013)
[69] Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913, 936-937 (2005) (“[O]ne who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties.”)

ii. Vicarious Infringement

For an OI to be liable for vicarious infringement, it must benefit financially from the direct infringement and have both the right and the ability to supervise the direct infringer,[70] a concept rooted in the “respondeat superior” doctrine of agency law. Critically for OIs, especially those so large that they cannot monitor all the content they host or that otherwise falls under their purview, actual knowledge of the infringing conduct is not a requirement.[71] It is the OI’s ability to supervise the direct infringer that becomes dispositive.

Whether or not an OI has benefitted financially from another’s direct infringement may seem like a clear dichotomy: there must be a “causal relationship between the infringing activity and any financial benefit [the] defendant reaps.”[72] In practice, however, this question has become quite nuanced given the many disparate revenue streams that attach to an OI. As just one example, if an OI hosts third-party content and typically serves advertisements next to that content, for which the OI receives payments, and the content in question proves to infringe copyright, the revenue from that advertising may well be enough to render the OI liable,[73] whether those advertisements appear automatically or are curated.

Whether an OI has the ability to supervise the direct infringer is a fact-specific question, focusing on the relationship between the direct infringer and the would-be secondary infringer. Key cases here are Fonovisa v. Cherry Auction,[74] where a flea market was held liable for a vendor’s infringing sales, and A&M Records, Inc. v. Napster, Inc.[75] So far, most definitions of “supervision” have been imported from non-Internet fact patterns,[76] and no online-specific variation of what it means to be able to “supervise” that might be uniquely applicable to OIs has emerged from the case law. Note, though, that the U.S. Supreme Court in Grokster described an OI’s failure to deploy “filtering tools or other mechanisms to diminish the infringing activity using their software” as giving added significance to other evidence of unlawful objectives and as “underscor[ing] Grokster's and StreamCast's intentional facilitation of their users' infringement.”[77]

A final note on one of the most basic features of the modern Internet: linking.[78] Whether an OI, such as a search engine or link aggregator, can be held secondarily liable for merely linking to directly infringing material is typically described as “unsettled” law.[79] Certainly rights holders, especially large institutional ones, would like to be able to sue wealthy OIs rather than individuals for damages, and OIs who link to content would prefer to be shielded from liability if that content turns out to infringe. Courts, however, have described a “general principle that linking does not amount to copying,” while also stating that “[a]lthough hyper-linking per se does not constitute direct copyright infringement because there is no copying, in some instances there may be a tenable claim of contributory infringement or vicarious liability.”[80] Courts have also, in longer discussions of “inducement,” unfavorably mentioned providing links to known infringing content.[81] Compare the 2014 European Court of Justice ruling that linking to publicly available material is not infringement, but that linking to restricted or unauthorized material may well be.[82]

[70] Compare 47 U.S.C. § 230(f)(3)’s “responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service,” as well as 47 U.S.C. § 230(f)(4); see Grokster, 545 U.S. at 930, for a variant of the definition (“One ... infringes vicariously by profiting from direct infringement while declining to exercise a right to stop or limit it.”).
[71] 3 Nimmer § 12.04[A][1].
[72] Arista Records LLC v. Lime Grp. LLC, 784 F. Supp. 2d 398, 435 (S.D.N.Y. 2011) (“It may also be established by evidence showing that users are attracted to a defendant's product because it enables infringement, and that use of the product for infringement financially benefits the defendant.”).
[73] Columbia Pictures Indus. v. Gary Fung, 710 F.3d 1020 (9th Cir. 2013) (“Under these circumstances, we hold the connection between the infringing activity and Fung's income stream derived from advertising is sufficiently direct to meet the direct ‘financial benefit’ prong of § 512(c)(1)(B).”); but see Perfect 10, Inc. v., Inc., 487 F.3d 701, 730 (9th Cir. 2007) (Google’s ability to terminate an AdSense partnership did not amount to a right or ability to control an infringing AdSense participant).
[74] Fonovisa v. Cherry Auction, 76 F. 3d 259 (9th Cir. 1996).
[75] “Fonovisa essentially viewed ‘supervision’ in this context in terms of the swap meet operator's ability to control the activities of the vendors, 76 F.3d at 262, and Napster essentially viewed it in terms of Napster's ability to police activities of its users, 239 F.3d at 1023.” Perfect 10, Inc. v. Visa Intern. Service Ass'n, 494 F.3d 788, 802 (9th Cir. 2007).
[76] Metro-Goldwyn-Mayer Studios, Inc. v. Grokster Ltd., 380 F.3d 1154, 1164-65 (9th Cir. 2004) (“A salient characteristic of that relationship often, though not always, is a formal licensing agreement between the defendant and the direct infringer.”) (internal citations omitted).
[77] Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913, 939, 125 S. Ct. 2764, 2781, 162 L. Ed. 2d 781 (2005)
[78] Cf. 17 U.S.C. § 512(d)’s “information location tools.”
[79] “Copyright: Infringement Issues - Internet Law Treatise.”
[80] Online Policy Grp. v. Diebold, Inc., 337 F. Supp. 2d 1195, 1202 n.12 (N.D. Cal. 2004) (referencing as notable the DMCA’s § 512(d)).
[81] Columbia Pictures Indus., Inc. v. Fung, 710 F.3d 1020, 1036-38 (9th Cir. 2013), cert. dismissed, 134 S. Ct. 624 (2013).
[82] See Case C-466/12, Svensson v. Retriever Sverige AB (E.C.J. Feb. 13, 2014) (“On the other hand, where a clickable link makes it possible for users of the site on which that link appears to circumvent restrictions put in place by the site on which the protected work appears in order to restrict public access to that work to the latter site’s subscribers only, and the link accordingly constitutes an intervention without which those users would not be able to access the works transmitted, all those users must be deemed to be a new public, which was not taken into account by the copyright holders when they authorised the initial communication, and accordingly the holders’ authorisation is required for such a communication to the public. This is the case, in particular, where the work is no longer available to the public on the site on which it was initially communicated or where it is henceforth available on that site only to a restricted public, while being accessible on another Internet site without the copyright holders’ authorisation.”).

2. The DMCA’s Safe Harbor

Section 512 of the U.S. Copyright Act, “Limitations on liability relating to material online,” added by the Digital Millennium Copyright Act, provides for four separate sets of circumstances in which a “service provider”[83] “shall not be liable for monetary relief.” This shield from liability has come to be known as the DMCA’s “safe harbor,” and the four circumstances are: transitory digital communications (512(a)), system caching (512(b)), information residing on systems or networks at the direction of users (512(c)), and information location tools (512(d)). Of these, the latter two are most germane to a discussion of online intermediaries. It is the “user” explicitly referenced in “direction of users” that renders the service provider an intermediary, and “information location tools” involve a provider “referring or linking users to an online location.”

In each case, the protection from liability that an OI can enjoy is predicated on meeting certain conditions. To enjoy 512(c) immunity regarding infringing “information residing on an OI’s system or network at the direction of a user”, it must be true that the OI:

  • (A)(i) does not have actual knowledge that the material or an activity using the material on the system or network is infringing;
  • (ii) in the absence of such actual knowledge, is not aware of facts or circumstances from which infringing activity is apparent; or
  • (iii) upon obtaining such knowledge or awareness, acts expeditiously to remove, or disable access to, the material;
  • (B) does not receive a financial benefit directly attributable to the infringing activity, in a case in which the service provider has the right and ability to control such activity; and
  • (C) upon notification of claimed infringement as described in paragraph (3), responds expeditiously to remove, or disable access to, the material that is claimed to be infringing or to be the subject of infringing activity.

Note the phrases that echo the requirements of the two forms of secondary liability. To summarize: an OI is not liable for monetary damages, or for injunctive relief other than the specific types outlined in 512(j), for any (allegedly) infringing material on its systems or networks unless it knows or has been told the material is there and has failed to remove it. It is important to note that if the material in question is not removed, that does not itself render the OI liable; it simply means the OI could be found liable. If the material is removed, there can be no liability regardless of the outcome of a suit against the user.
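The three conditions enumerated above can be restated as a hedged, illustrative sketch rather than a statement of the statute. All names below are hypothetical, and Section 512(i)'s threshold requirements, discussed later in this section, are assumed satisfied.

```python
def safe_harbor_512c(
    actual_knowledge: bool,
    red_flag_awareness: bool,
    expeditious_removal: bool,
    financial_benefit: bool,
    right_and_ability_to_control: bool,
    received_valid_notice: bool,
) -> bool:
    """Rough sketch of the 512(c) eligibility conditions listed above.

    512(i)'s threshold requirements (repeat-infringer policy and
    accommodation of standard technical measures) are assumed satisfied.
    """
    # (A): no actual knowledge or "red flag" awareness, or else
    # expeditious removal upon obtaining such knowledge or awareness.
    if (actual_knowledge or red_flag_awareness) and not expeditious_removal:
        return False
    # (B): no financial benefit directly attributable to the infringing
    # activity where the OI has the right and ability to control it.
    if financial_benefit and right_and_ability_to_control:
        return False
    # (C): expeditious removal upon receiving a compliant notification.
    if received_valid_notice and not expeditious_removal:
        return False
    return True
```

The sketch makes the structure visible: expeditious removal cures both the knowledge condition and the notification condition, while the financial-benefit-plus-control condition cannot be cured by removal alone.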

The language describing the conditions for Section 512(d)’s safe harbor is virtually identical to that of 512(c); indeed, it uses identical language regarding notifications, simply adapted to the new variety of information to which the notification refers.[84] It is a DMCA notice submitted under 512(d) that leads to results being removed from Google Search. There are also a few further requirements, described in 512(i), that apply to all of Section 512’s safe harbors: an OI must have a “repeat infringer policy” that provides for terminating users of the service who repeatedly infringe, and an OI must accommodate and not interfere with “standard technical measures.” In short, an online intermediary can enjoy the Section 512(c) and (d) “safe harbor” and avoid liability for copyright infringement committed by its users as long as it expeditiously removes allegedly infringing material once notified of that material’s presence and fulfills the Section 512 requirements that apply to all safe harbors. Even then, the OI may still be subject to the injunctions described in 512(j).

The system’s general weighting is therefore toward easy and unquestioned removal. Section 512(f)’s penalties for a sender’s misrepresentation in a notice apply only when the misrepresentation is material and knowing, and even then the remedy is limited to damages, including costs and attorneys’ fees.[85] Section 512(g) absolves the OI of any liability for mistakenly removing material so long as it acted in good faith; under 512(g)(3) a counter-notice sender must swear on penalty of perjury that the material was removed in error; and even in the event of a counter-notice, material that has been removed can be restored only after a waiting period of at least ten business days.
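The notice, takedown, counter-notice, and restoration sequence described above can be sketched as a simple state machine. The state and event names below are invented for illustration; among other simplifications, the statute's business-day counting and its outer time bound on restoration are not modeled.

```python
# Hypothetical state machine for the Section 512 notice-and-takedown
# flow described above. States and event names are illustrative only.
TRANSITIONS = {
    ("posted", "takedown_notice"): "removed",          # OI removes expeditiously
    ("removed", "counter_notice"): "waiting_period",   # user swears removal was in error
    ("waiting_period", "period_elapsed"): "restored",  # no suit filed within the window
    ("waiting_period", "suit_filed"): "removed",       # material stays down pending litigation
}

def step(state: str, event: str) -> str:
    """Advance the takedown flow; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

The asymmetry the text describes is visible in the transitions: removal happens in a single step upon notice, while restoration requires a counter-notice, a waiting period, and the absence of a lawsuit.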

[83] 17 U.S.C. § 512(k) (“(1) Service provider. — (A) As used in subsection (a), the term “service provider” means an entity offering the transmission, routing, or providing of connections for digital online communications, between or among points specified by a user, of material of the user's choosing, without modification to the content of the material as sent or received. (B) As used in this section, other than subsection (a), the term “service provider” means a provider of online services or network access, or the operator of facilities therefor, and includes an entity described in subparagraph (A).”).
[84] 17 U.S.C. § 512(d)(3) (“upon notification of claimed infringement as described in subsection (c)(3), responds expeditiously to remove, or disable access to, the material that is claimed to be infringing or to be the subject of infringing activity, except that, for purposes of this paragraph, the information described in subsection (c)(3)(A)(iii) shall be identification of the reference or link, to material or activity claimed to be infringing, that is to be removed or access to which is to be disabled, and information reasonably sufficient to permit the service provider to locate that reference or link.”).
[85] The standard for misrepresentation is quite high, as it requires “actual knowledge” of the misrepresentation on the part of the copyright owner. Rossi v. Motion Picture Association of America, Inc., 391 F.3d 1000, 1005 (9th Cir. 2004) (“A copyright owner cannot be liable simply because an unknowing mistake is made, even if the copyright owner acted unreasonably in making the mistake.”).

C. Other Intellectual Property Laws

1. Trademark

Trademarks are words, phrases, symbols, and other indicia used to identify the source or sponsorship of goods or services. The law allows trademark owners to prevent commercial uses by others that would be likely to cause consumer confusion. Trademark law is recognized at the federal level in the Lanham Act, and every state has an analogous trademark or “unfair competition” law.[86] To establish ownership of a mark, an aspiring trademark owner must use the mark in commerce in connection with goods or services.[87]

After ownership is established, the Lanham Act authorizes an owner to bring lawsuits to prevent others from using the mark in a manner that would confuse consumers or, with respect to more famous marks, that would “dilute” the mark’s distinctiveness across all goods and services.[88] Defenses to a claim of trademark infringement or dilution include that the defendant was selling the plaintiff’s genuine goods,[89] that the defendant was using the words that make up the plaintiff’s trade name in their ordinary sense,[90] and that the defendant was using the plaintiff’s mark to refer to the plaintiff directly.[91]

Trademark law is unique in this study in that it has no equivalent of general content liability’s Section 230 or copyright’s Section 512 “safe harbor” to address online intermediary liability. Section 230 of the Communications Decency Act does not protect online intermediaries from trademark liability under the Lanham Act,[92] and courts are split as to whether it protects against claims under state trademark laws.[93] As a result, much of recent trademark law reflects a judicial attempt to reinterpret existing tests in light of online activity, which has led to less legal certainty. Because trademark law draws from both state and federal sources, precedent in this area is especially complex.

Existing Supreme Court precedent recognizes secondary trademark liability for those who intentionally induce another to infringe a trademark, as well as for those who manufacture or distribute supplies to another while knowing that person is engaging in trademark infringement.[94] Lower courts have extended this to cases in which the defendant supplies a platform for the sale of trademark-infringing goods, such as the operator of a flea market, when a plaintiff can show that the platform operator knew about the infringing activity. These courts, however, have not imposed an affirmative duty to take precautions against counterfeits.[95]

Applying these principles to the online context, courts generally agree that online intermediaries can be held liable for infringement, but they have been more divided on clear standards for that liability.[96] In one early case, a court stated that an Internet company could be liable under a theory of contributory trademark infringement if it possessed “direct control and monitoring” over the infringing activity of third parties on the site, though it declined to extend that theory to the defendant, a domain name resolution service.[97] In a prominent 2010 case, Tiffany v. eBay (discussed in the “Private Ordering to Respond to Trademark Concerns – eBay’s VERO Program” case study below), a federal appellate court upheld the infringement-management practices of the online auction website eBay, which took down infringing listings upon receipt of specific rights-holder complaints.[98] Critically, the court held that general knowledge that the defendant’s platform was being used for infringing activity was not sufficient; plaintiffs must show that a defendant had knowledge of specific infringing conduct.[99] For online auction sites, the holding in Tiffany likely means increased industry homogeneity as competitors attempt to craft their businesses in the mold of eBay’s judicially accepted model. For other online intermediaries, the lack of a clear legal standard means increased risk and wary innovation. For an enterprising online intermediary with a service susceptible to a claim of contributory trademark infringement, looking to the policies and standards underlying the CDA and the DMCA is likely the best source of legal guidance.[100]
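The Tiffany standard described above can be sketched, hypothetically and without any claim to doctrinal completeness, as follows. The function and parameter names are invented for illustration.

```python
def contributory_trademark_liability(
    supplies_platform: bool,
    general_knowledge_of_infringement: bool,  # deliberately ignored below
    specific_knowledge_of_listings: bool,
    continues_service_after_knowledge: bool,
) -> bool:
    """Rough, hypothetical sketch of the Tiffany v. eBay standard."""
    if not supplies_platform:
        return False
    # Tiffany: generalized awareness that some listings on the platform
    # infringe is not sufficient, so the general-knowledge factor never
    # affects the outcome here.
    if not specific_knowledge_of_listings:
        return False
    # Liability attaches only if service continues for the known infringer.
    return continues_service_after_knowledge
```

The deliberately unused parameter mirrors the holding: however much general knowledge a platform has, only specific knowledge followed by continued service matters on this sketch.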

[86] See State Trademark Information and Links, U.S.P.T.O., <> (last updated July 24, 2012).
[87] See Restatement (Third) Unfair Competition § 18.
[88] See What Trademark Covers, Digital Media Law Project, <> (last updated April 30, 2008). A trademark owner can also bring a claim of dilution by “tarnishment,” or the use of a trade name that harms the reputation of a famous mark. 15 U.S.C. § 1125(c)(2)(C).
[89] See, e.g., Prestonettes, Inc. v. Coty, 264 U.S. 359 (1924).
[90] This is sometimes called a “descriptive fair use.” See, e.g., KP Permanent Make-Up, Inc. v. Lasting Impression I, Inc., 543 U.S. 111 (2004).
[91] This is called a “nominative fair use,” and tends also to include the requirements that the product or service at issue not be readily identifiable without use of the mark, that the use of the mark be limited to as much as is necessary to identify the product or service, and that the user do nothing that would suggest sponsorship or endorsement by the trademark owner. The New Kids on the Block v. News America Publ’g, Inc., 971 F.2d 302 (9th Cir. 1992). Critics note that this is, in effect, the same test as the general likelihood-of-confusion test. See William McGeveran, Rethinking Trademark Fair Use, 94 Iowa L. Rev. 49, 90-97 (2008).
[92] See, e.g., Parker v. Google, Inc. 442 F. Supp. 2d 492, 502 n.8 (E.D. Pa. 2006).
[93] Compare Perfect 10, Inc. v. CCBill, LLC 488 F.3d 1102, 1118-19 (9th Cir. 2007) (Section 230’s exception for “intellectual property” only covers federal intellectual property laws); with Doe v. Friendfinder Network, Inc., 540 F. Supp. 2d 288, 298-302 (D.N.H. 2008) (extensively analyzing Perfect 10 and deciding that Section 230 does extend to state intellectual property laws).
[94] Inwood Labs., Inc. v. Ives Labs., Inc., 456 U.S. 844, 853-54 (1982).
[95] Hard Rock Cafe Licensing Corp. v. Concession Services, Inc., 955 F.2d 1143, 1149 (7th Cir. 1992); Fonovisa Inc. v. Cherry Auction Inc., 76 F.3d 259, 265 (9th Cir. 1996).
[96] See, e.g., Lockheed Martin Corp. v. Network Solutions, Inc., 194 F.3d 980, 984 (9th Cir. 1999); Rescuecom Corp. v. Google Inc., 562 F.3d 123 (2d Cir. 2009); Playboy Ent., Inc. v. Netscape Comm’ns Corp., 354 F.3d 1020, 1024 (9th Cir. 2004); Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93 (2d Cir. 2010); Rosetta Stone Ltd. v. Google, Inc., 676 F.3d 144, 149 (4th Cir. 2012).
[97] Lockheed Martin Corp. v. Network Solutions, Inc., 194 F.3d 980, 984 (9th Cir. 1999).
[98] Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93 (2d Cir. 2010).
[99] Id. at 107.
[100] At present, the most considerable legal attention to intermediaries has come not for actions they take with respect to user content, but to their own direct liability. This is in contrast to earlier times, where direct liability was rarely found with online service providers. Emily Favre, Online Auction Houses: How Trademark Owners Protect Brand Integrity Against Counterfeiting, 15 J.L. & Pol'y 165, 179 (2007). Two recent federal appellate cases have taken issue with Google’s AdWords program, which allows companies to buy advertisement to display alongside searches for certain words, including the names of competing companies. See Rescuecom Corp. v. Google, Inc., 562 F.3d 123 (2d Cir. 2009); Rosetta Stone Ltd. v. Google, Inc., 676 F.3d 144, 149 (4th Cir. 2012). Both cases subsequently settled.

2. Misappropriation and Right of Publicity Laws

Two overlapping types of laws govern the use of a person’s name or likeness for commercial or exploitative purposes without the person’s consent: right of publicity laws and laws against misappropriation of a person’s name or likeness.[101] While the two types of laws cover the same conduct, they are meant to remedy different harms: misappropriation is meant to remedy the damage to human dignity for unauthorized commercialization, while right of publicity is meant to compensate for commercial damage for lost licensing revenue.[102] Like the privacy torts discussed above, knowing participation in another’s violation could lead to intermediary liability, though there are very few cases on point.[103]

Courts unanimously agree that federal intellectual property claims are not covered by the CDA, but there is ongoing disagreement over whether the statute’s intellectual property exception also extends to state intellectual property claims, particularly claims involving states’ right of publicity laws.[104] Some courts have taken a middle path, noting the difficulty of the issue and declining to consider whether state intellectual property rights are exempted by the CDA when other means of settling the claim exist.[105] This echoes a concern articulated in the discussion of CDA 230 above: while Section 230 by its terms provides a clear and direct means of foreclosing intermediary liability, courts have allowed extensive and costly litigation on the question, undercutting its positive effects for intermediaries.[106]

[101] See generally Using The Name or Likeness of Another, Digital Media Law Project, <> (last updated July 30, 2008).
[102] J. Thomas McCarthy, McCarthy on Trademarks and Unfair Competition § 28:6 (4th ed. 2014).
[103] Perfect 10, Inc. v. Cybernet Ventures, Inc., 213 F. Supp. 2d 1146, 1183 (C.D. Cal. 2002) (finding a likelihood of success on a claim for aiding another’s right of publicity violation); but see Perfect 10, Inc. v. Visa Int’l Serv. Ass’n, 494 F.3d 788, 809 (9th Cir. 2007) (declining to find authority for holding a credit card processor liable for aiding and abetting a right of publicity violation, “[e]ven if such liability is possible under California law – a proposition for which [plaintiff] has provided no clear authority”); Keller v. Electronic Arts, Inc., No. 09-cv-1967, 2010 WL 530108, at *2 (N.D. Cal. 2010), aff’d on other grounds sub nom. In re NCAA Student-Athlete Name & Likeness Licensing Litigation, 724 F.3d 1268 (9th Cir. 2013) (finding no theory of liability for those who enable another’s right of publicity violation).
[104] Compare Perfect 10, Inc. v. CCBill, LLC 488 F.3d 1102, 1118-19 (9th Cir. 2007) (Section 230’s exception for “intellectual property” only covers federal intellectual property laws); with Doe v. Friendfinder Network, Inc., 540 F. Supp. 2d 288, 298-302 (D.N.H. 2008) (extensively analyzing Perfect 10 and deciding that Section 230 does extend to state intellectual property laws).
[105] Almeida v., Inc., 456 F.3d 1316 (11th Cir. 2006).
[106] See generally Ardia, supra note [[x]].

3. The Espionage Act

Because of the considerable attention given to the dissemination of classified government information through the documents released by Chelsea Manning and Edward Snowden, and the profound policy implications of both the information conveyed and the treatment of those who handle and disseminate such documents to the public, special attention should be given to a particular federal crime that implicates the disclosure of classified information. The Espionage Act of 1917 contains many provisions intended to prohibit interference with military operations and protect national security.[107] These include provisions that criminalize obtaining, collecting, or communicating information that would harm the national defense of the United States.[108] These provisions were used by the United States government to go after the New York Times and the Washington Post for their publication of “The Pentagon Papers,” a classified and damning assessment of United States involvement in the Vietnam War.[109] Most recently, they were used to convict former U.S. Army intelligence analyst Chelsea Manning for leaking classified documents to the organization WikiLeaks.[110]

While all federal criminal law includes the possibility for a charge of aiding and abetting another’s violation of the law,[111] the United States has never successfully prosecuted an information intermediary for disseminating classified information under the Espionage Act.[112] Such a theory would present profound First Amendment issues, and ultimately an intermediary may only be found liable if the intermediary bribed, coerced, or defrauded a government employee to disclose classified information.[113]

[107] See 18 U.S.C. §§ 793–798.
[108] 18 U.S.C. § 793(e).
[109] New York Times Co. v. United States, 403 U.S. 713 (1971).
[110] Cora Currier, Charting Obama’s Crackdown on National Security Leaks, ProPublica, July 30, 2013, <>. Many others have been charged with, but not ultimately convicted of, violating the Espionage Act or conspiring to violate it.
[111] 18 U.S.C. § 2; see also § 793(g) (“If two or more persons conspire to violate any of the foregoing provisions of this section, and one or more of such persons do any act to effect the object of the conspiracy, each of the parties to such conspiracy shall be subject to the punishment provided for the offense which is the object of such conspiracy.”).
[112] See Emily Peterson, WikiLeaks and the Espionage Act of 1917: Can Congress Make It a Crime for Journalists to Publish Classified Information?, The News Media & The Law, Vol. 35, No. 3, Summer 2011, available at <>.
[113] See Geoffrey R. Stone, Government Secrecy vs. Freedom of the Press, 1 Harv. L. & Pol’y Rev. 185, 217. For more on the general First Amendment right to disclose true matters of public concern, see supra notes x-y and accompanying text.

4. Surveillance Law

A patchwork of federal law enables both law enforcement and intelligence agencies to compel online intermediaries (as well as others) to disclose data about their users, sometimes including the content of their communications. The federal requirements for the disclosure of user data are found mainly in two places. The primary authority enabling the federal government to compel companies to surrender customer data in criminal investigations is found in the Stored Communications Act (SCA). The authority for intelligence investigations is found primarily in the Foreign Intelligence Surveillance Act (FISA) and related amendments to the SCA. The authority used to compel the data disclosure is important for several reasons: it determines the legal standard that must be met, the kind of data that can be collected, and even how companies can write their transparency reports.

The SCA is an outdated law, enacted in 1986, well before high-speed Internet or gigabytes of free cloud storage were the norm. The SCA gives law enforcement agencies the ability to collect substantial personal data, often with minimal court supervision. Under the framework of the SCA, there are three primary methods for compelling data collection: warrants, court orders, and subpoenas.

The easiest form of legal process to obtain is a subpoena. Instead of going before a court or a judge, a law enforcement agent can directly issue a subpoena to a company if there is any reasonable possibility that the materials will produce information relevant to the general subject of the investigation. Because it is so easy to obtain a subpoena, the types of information that law enforcement can obtain subject to a subpoena are fairly circumscribed. Using a subpoena, law enforcement can obtain what is known as “basic subscriber information.” This includes the user’s name, address, connection records (including session times and durations), the date the user began using services, the types of services used, the IP address or other instrument number, and payment information (including credit card and bank account numbers).

The next type of legal process, slightly more difficult to obtain, is a 2703(d) order, called that because it is described in section 2703(d) of the SCA. A “d order” is a court order, meaning that unlike a subpoena it requires a law enforcement agent to go before a court and show that there are “specific and articulable facts showing that there are reasonable grounds to believe” that the requested data is “relevant and material to an on-going criminal investigation.”[114] The d order allows law enforcement to collect non-content information, which includes data such as e-mail headers, recipient e-mail addresses, and any other account logs that the provider may maintain.

As described above, both subpoenas and d orders can be used to get data other than content. However, the data that law enforcement is most likely to be interested in would be classified as “content,” and includes things such as e-mail subject lines, e-mail content, and instant message text. Under the letter of the law, both subpoenas and d orders may be used in certain limited circumstances to also get content information. For instance, the law allows law enforcement to obtain opened e-mails or other stored files, or unopened e-mail in storage for more than 180 days, using just a subpoena or a d order, as long as law enforcement provides notice to the user.[115]

Although the text of the law enables law enforcement to obtain content information, in limited circumstances, with only a d order or a subpoena, in actuality, law enforcement generally needs to use a third type of process to get content information: a warrant. Despite the text of the SCA, the U.S. Court of Appeals for the 6th Circuit, with jurisdiction over the states of Ohio, Michigan, Kentucky, and Tennessee, held in United States v. Warshak that the government needs a warrant to obtain e-mail content.[116] Although that holding is technically limited to the geographic region of the 6th Circuit, almost all the major Internet companies rely upon the Warshak decision to require a warrant before providing any content information, despite the fact that such a conclusion is seemingly inconsistent with the SCA itself.[117]

Because a search warrant allows for the collection of content, and is therefore more invasive than subpoenas and d orders, it is also harder to obtain. To obtain a warrant, a law enforcement agent must demonstrate to a court that there is “probable cause” to believe that information related to a crime is in the specific place to be searched. In addition to content information, a warrant can be used to obtain all the non-content data that a d order or subpoena can collect (and a d order can be used to collect all the subscriber information that a subpoena can collect).
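The tiered relationship among these three instruments can be summarized in a short sketch. This is an illustrative shorthand only, not legal advice: the category labels and the `obtainable_with` helper are our own, and the content rule reflects the post-Warshak practice described above rather than the SCA's 180-day text.

```python
# Illustrative summary of the SCA's tiered framework of legal process.
# Category labels are this paper's shorthand, not statutory terms of art.

SCA_PROCESS = {
    "subpoena": {
        "standard": "reasonable possibility of relevance to the investigation",
        "data": {"basic subscriber information"},
    },
    "2703(d) order": {
        "standard": ("specific and articulable facts; relevant and material "
                     "to an on-going criminal investigation"),
        "data": {"basic subscriber information", "non-content records"},
    },
    "warrant": {
        "standard": "probable cause",
        "data": {"basic subscriber information", "non-content records",
                 "content"},
    },
}

def obtainable_with(process: str) -> set:
    """Return the categories of user data the named process can reach."""
    return SCA_PROCESS[process]["data"]

# Each stronger instrument reaches everything the weaker ones reach.
assert (obtainable_with("subpoena")
        <= obtainable_with("2703(d) order")
        <= obtainable_with("warrant"))
```

The nesting captures the trade-off the statute strikes: the easier a process is to obtain, the narrower the categories of data it reaches.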

For investigations related to terrorism or national security, the government has three additional levers for the collection of data from online intermediaries. National Security Letters (NSLs) allow the FBI to obtain telephone and e-mail records (and associated billing records) “relevant to an authorized investigation to protect against international terrorism or clandestine intelligence activities,” but not the content of the messages themselves.[118] Section 215 of the USA PATRIOT Act amended FISA to enable secret court orders, approved by the Foreign Intelligence Surveillance Court (FISC), that require third parties, such as ISPs or telephone providers, to provide business records deemed relevant to terrorism or intelligence investigations. The government used the Section 215 authority, for example, to compel Verizon to provide all of its customers’ telephony metadata.[119] The third lever is Section 702 of the FISA Amendments Act, which allows the government to collect both the content and non-content information of targeted non-U.S. persons reasonably believed to be outside of the United States.

Subpoenas, d orders, warrants, and Section 215 and 702 orders represent just some of the wide array of legal tools at the disposal of American law enforcement and intelligence agencies. Additional tools include wiretaps and pen registers, which enable law enforcement to obtain prospective, rather than retrospective, data. Given this array of tools and the trove of personal information that online intermediaries may store, once an intermediary reaches a sufficiently large size, it is only a matter of time before law enforcement or intelligence agencies serve it with legal process.

[114] 18 U.S.C. § 2703(d).
[115] See 18 U.S.C. § 2703(a), (b).
[116] See 631 F.3d 266 (6th Cir. 2010).
[117] See, e.g., Twitter Transparency Report at <> (“A properly executed warrant is required for the disclosure of the contents of communications (e.g., Tweets, DMs).”).
[118] 18 U.S.C. § 2709(b)(2).
[119] Full text of the Section 215 order the government served on Verizon, <>.

III. Case Studies

A. Sex Trafficking in Online Classified Advertising – Craigslist and Backpage.com

1. Introduction

As discussed in the Legal Landscape Primer of this report, Section 230 of the Communications Decency Act enables a wide array of online intermediaries to operate within the United States without the burdens of either monitoring user-generated content or (except in the case of certain intellectual property claims) implementing a system for removal of such content.

While this has facilitated the creation of many platforms for user-generated content, Section 230’s protections are controversial. Many believe that the rule protects what is worst about the Internet and social media rather than what is best about it. Plaintiffs who legitimately claim to be harmed, as well as law enforcement officials attempting to protect the public, are often frustrated by their inability to stem unlawful online content at the obvious source, the intermediary. This frustration is particularly acute when the websites that provide access to such content seem to revel in (and profit from) their users posting content that is tawdry or mean-spirited, or even illegal under state laws.

This case study examines a six-year effort by officials of state (rather than federal) government to hold intermediaries accountable for a specific activity: namely, the hosting of online advertisements alleged to facilitate prostitution and sex trafficking. A recurring theme throughout this case study is the barrier that Section 230 poses to efforts by state governments to shut down these advertisements, and the ways that these governments have attempted to circumvent Section 230 through public pressure, judicial action, and legislation.

This case study focuses on two websites in particular, Craigslist and Backpage.com:

  • Craigslist is a classified advertisements service that has been available via the Internet since 1996, and is currently the largest such online service in the United States. Craigslist hosts separate sub-domains for separate geographic regions; more than 700 regions in seventy countries currently have Craigslist sites, with content available in multiple languages. Listings on the site include advertisements and solicitations for jobs, housing, the sale of personal items, and various services. The listings for services originally included a section for “erotic services.” Craigslist’s terms of service expressly prohibit the use of the site to advertise illegal activities.
  • Backpage.com, launched in 2004, is the second largest online classified advertisements service in the United States after Craigslist. Like Craigslist, it offers listings for a wide range of proposed transactions and is available in multiple countries and languages. Backpage.com was originally owned by Village Voice Media. The site contains a section for “adult entertainment services,” but, like Craigslist, prohibits the use of the site to advertise illegal activities.

2. “Erotic” and “Adult” Advertisements on Craigslist – Negotiation Leads to Concession

As a general classified advertising service, Craigslist had hosted a section of “erotic services” content on its service, created by its users and over which Craigslist could plausibly claim immunity from intermediary liability under Section 230. While Craigslist’s protection under Section 230 was never pierced and adult content had been on the site for years, a series of events taking place from March 2008 to September 2010 led to the rapid shutdown of these listings on the site.

The “erotic services” section on Craigslist attracted the attention of state and local law enforcement in the United States, after it was perceived that some users were using the section to advertise services that were illegal under state law. In March 2008, the attorney general of Connecticut, Richard Blumenthal, sent a letter to Craigslist on behalf of the attorneys general of 40 states, demanding that Craigslist purge the site of ads for prostitution and illegal sex-oriented businesses and more effectively enforce its own terms of service, which prohibit illegal activity.[120]

Craigslist first opted to respond to these demands through negotiation. In November 2008, Craigslist reached an agreement with these state attorneys general to take steps to curb – but not remove – its “erotic services” listings. These steps included requiring posters to provide valid telephone numbers and pay a small fee per ad using a credit card, in order to make posters easier for law enforcement to track.[121] Jim Buckmaster, chief executive of Craigslist, stated that the attorneys general had “identified ads that were crossing the line,” and that the company “saw their point, and . . . resolved to see what [it] could do to get that stuff off the site.”[122] Craigslist subsequently reported a 90% drop in erotic services listings.[123]

Four months later, Thomas Dart, the sheriff of Cook County, Illinois (the county that includes Chicago), sued Craigslist in federal court. Dart claimed that the site created a “public nuisance” under Illinois law, because its “conduct in creating erotic services, developing twenty-one categories, and providing a word search function causes a significant interference with the public's health, safety, peace, and welfare.”[124] Craigslist moved for judgment on the pleadings on the basis of Section 230, asserting that Dart was attempting to hold Craigslist liable as the “publisher or speaker” of content created by third-party users.[125] Craigslist would ultimately win that case on Section 230 grounds in October 2009.[126]

While that litigation was pending, in April 2009, Philip Markoff (later dubbed the “Craigslist killer”) murdered one woman whose services he located through Craigslist and robbed two others; the case received national attention.[127] The following month, the attorneys general of Illinois, Connecticut, and Missouri met with Craigslist executives again, seeking an end to ads alleged to be advertisements for illegal sexual activities.[128] That same month the attorney general of South Carolina, Henry McMaster, sent Craigslist a letter accusing it of violating its November 2008 agreement and threatening the company’s management with criminal investigation and prosecution; the letter stated that “[i]t appears that the management of craigslist has knowingly allowed the site to be used for illegal and unlawful activity after warnings from law enforcement officials and after an agreement with forty state attorneys general.”[129]

While never found civilly or criminally liable, Craigslist subsequently removed its “erotic services” section and replaced it with an “adult services” section, in which employees would take an active role in reviewing postings for indications of activity that was illegal or otherwise violated the site’s guidelines.[130] Jim Buckmaster, CEO of Craigslist, denied that this change was the result of legal pressure, instead stating that the change was “strictly voluntary,” that the site’s activities were always protected by Section 230, and that “[i]n striking this new balance we have sought to incorporate important feedback from all the groups that have expressed strongly held views on this subject, including some of the state A.G.’s, free speech advocates and legal businesses who are accustomed to being entitled to advertise.”[131] New York Attorney General Andrew M. Cuomo criticized the move, stating that rather than work with his office “to prevent further abuses, in the middle of the night, Craigslist took unilateral action which we suspect will prove to be half-baked.”[132]

At the same time, in an attempt to forestall the threat from the South Carolina attorney general, Craigslist filed a declaratory judgment action against McMaster in federal district court in South Carolina, asserting that McMaster’s threats violated the First Amendment by chilling Craigslist’s speech and that the threatened prosecution would be blocked by the First Amendment and Section 230.[133] McMaster consented to a preliminary injunction against prosecution of Craigslist while this lawsuit was pending.[134] The court ultimately dismissed Craigslist’s complaint without reaching the Section 230 issue, holding that there was no actual case or controversy ripe for adjudication because no prosecution had been initiated.[135]

In May 2010, approximately one year after Craigslist’s “erotic services” section was closed and the new “adult services” section was launched, Connecticut and 38 additional states sent subpoenas to Craigslist asking for information about the site’s revenue from sex-related advertisements and its implementation of measures to stop the use of the site for prostitution. This move was believed to have resulted from the widespread perception that Craigslist’s “adult services” section had not reduced the use of the site for prostitution, but simply driven it into other sections of the site using coded terminology for the services offered. Craigslist accused Connecticut’s attorney general of engaging in blatant political grandstanding.[136]

Public pressure on Craigslist came from a different direction two months later, when two teenage girls published an open letter to Craig Newmark, the founder of Craigslist, stating that they had been the victims of sex trafficking through the site.[137] By August 2010, there were public calls for the “adult services” section to be shut down, both in the press[138] and from state law enforcement.[139] Buckmaster responded to these demands, saying:

“[f]ortunately, most concerned parties seem to realize that declassifying adult services ads back into Craigslist personals, services, and other categories, and offsite to venues that have no interest in combating trafficking and exploitation or in assisting law enforcement, would simply undo all the progress we have made, undermine our primary mission of evolving Craigslist community sites according to user feedback, set back the efforts of our partners in law enforcement and exacerbate the very societal epidemic we all seek to end.”[140]

Less than a month later, however, Craigslist shuttered the “adult services” section in the United States. As of September 4, 2010, the link to the section on Craigslist was replaced with a black label reading “censored.”[141] This label (and the dead link to the defunct section) was removed a few days later.[142] Craigslist later removed the section from all of its sites worldwide.[143]

Later that month Craigslist representatives appeared at a hearing of the House Judiciary Committee and testified that while the “adult services” section had been removed permanently from the United States, it was unrealistic to believe that this would end sex crimes. By pressuring Craigslist to close the section, they claimed, state governments had ended their ability to contain the illegal activity in one location and work with Craigslist to pursue offenders; now, this traffic would simply migrate to other sites. Craigslist’s representatives specifically pointed to a spike in traffic to Backpage.com following the shutdown of Craigslist’s section.[144]

[126] Dart v. Craigslist, Inc., 665 F. Supp. 2d 961 (N.D. Ill. 2009). Dart did not appeal the decision.
[138] Id.

3. “Adult Content” on Backpage.com – State Legislation and Defiance

Six days after Craigslist testified, twenty-one state attorneys general sent a public letter to Backpage.com demanding that it close its “adult entertainment services” section, stating that the “volume of these ads will grow in light of Craigslist’s recent decision to eliminate the adult services section of its site. In our view, it is time for the company to follow craigslist’s lead and take immediate action to end the misery of the women and children who may be exploited and victimized by these ads.”[145] Backpage.com publicly rejected the states’ demand that same day, writing:

“Backpage.com respectfully declines the recent demand by a group of 21 state attorneys general that it close its adult classifieds website . . . Backpage.com is a legal business and operates its website in accordance with all applicable laws . . . Censorship will not create public safety nor will it rid the world of exploitation.”[146]

Nevertheless, on October 18, 2010, Backpage.com announced that it would temporarily suspend certain aspects of its adult sections while implementing improved screening procedures for advertisements for illegal services.[147]

The next several months saw relatively little government activity or public outcry against Backpage.com itself. There were, however, numerous media reports of arrests for illegal prostitution and human trafficking in various states, which were attributed to law enforcement’s identification of offenders via Backpage.com.[148]

Beginning in July 2011, there were renewed demands from both local officials and private actors for Backpage.com to reform or remove its adult services section.[149] That summer, forty-six state attorneys general sent a public letter to Backpage.com calling for information about how the site attempts to remove advertising for sex trafficking, especially ads that could involve minors. The letter pointed to more than fifty cases involving the trafficking or attempted trafficking of minors through Backpage.com.[150] A petition signed by 80,000 people and spearheaded by John Buffalo Mailer, the son of Village Voice co-founder Norman Mailer, later demanded that the Village Voice shut down the adult services section.[151] The Village Voice would subsequently divest itself of Backpage.com, which continued to operate independently.[152]

At the Spring 2012 meeting of the National Association of Attorneys General (NAAG), Washington State Attorney General Rob McKenna gave a speech to attendees in which he made clear that the fundamental problem in dealing with Backpage.com was Section 230:

“[M]embers of Congress may want to review section 230 of the Communications Decency Act in order to make sure that when Backpage goes away, another operation based on exploitation doesn’t fill the void…Backpage executives see the CDA as a license to make money from prostitution ads without any accountability. I disagree with their assessment. The CDA does not immunize Web sites from criminal prosecutions under federal law, though the states are currently hampered in their ability to take enforcement action. However, given that sites such as Backpage see this federal statute as an invitation to promote human trafficking without consequence, Congress should hold hearings about carefully revising the law to ensure that the knowing promotion of prostitution, for example, is more easily pursued by state authorities, in addition to their federal counterparts.”[153]

That same month, the State of Washington passed Senate Bill 6251, a state law that criminalized commercial advertising for sexual abuse of a minor.[154] The bill made it a felony to knowingly publish, disseminate, or display, or to “directly or indirectly” cause content to be published, disseminated, or displayed, if it contains a depiction of a minor and any “explicit or implicit offer” of sex for something of value. Under the law, it was not a valid defense that the defendant did not know the age of the person depicted.

The State of Tennessee followed suit shortly thereafter by enacting Tennessee Public Chapter No. 1075, which criminalized selling advertisements involving commercial sex with anyone appearing to be a minor. As with the Washington law, the seller’s ignorance of the fact that a person depicted was a minor was not a defense to criminal liability; the only recognized defense was that the seller had individually verified the age of anyone appearing in an advertisement via government-issued identification. Implementing such a system on a website would, in all likelihood, be prohibitively expensive.

These statutes were expressly targeted at Backpage.com’s advertising, notwithstanding the fact that Section 230 barred the imposition of such liability under state law. In June 2012, Backpage.com filed two separate lawsuits in federal courts in Washington and Tennessee to prevent the enforcement of these laws, arguing that they were preempted by Section 230 and violated the First Amendment by chilling a substantial amount of legal advertising to adults.[155]

The cases were swiftly resolved in Backpage.com’s favor. In each case, the court granted a temporary restraining order against enforcement of the law on the basis of Section 230 and the First Amendment.[156] Washington State settled with Backpage.com in December 2012, agreeing to pay $200,000 in attorneys’ fees and to work to repeal SB 6251.[157] Meanwhile, the State of Tennessee did not oppose Backpage.com’s motion to convert the restraining order to a permanent injunction, ending the Tennessee case in March 2013.[158]

[148] See, e.g., <>; <>; <>.
[149] See, e.g., <>; <>.

4. Attention Turns to Section 230 Itself – The Current Legislative Debate

The failure of these laws fueled a legislative attack on Section 230. On July 23, 2013, forty-nine state and territory attorneys general sent an open letter to four members of Congress citing the activities of Backpage.com and calling upon Congress to amend Section 230. The letter cited the Washington and Tennessee cases, among others, as evidence that Section 230 was frustrating attempts by state law enforcement to suppress sex trafficking, and accordingly asked that Congress amend Section 230 to include an exception for state criminal law, as it currently does for federal law.[159]

This proposal was widely criticized by academics and advocates of online freedom, because it would effectively eviscerate Section 230; states could avoid federal preemption simply by criminalizing any conduct by intermediaries of which they disapproved. The Electronic Frontier Foundation noted that the proposed amendment would grant states legislative authority over the Internet that was much broader than the sex trafficking issue that allegedly motivated the proposal, and would be dangerous to freedom of expression online.[160] Professor Eric Goldman of Santa Clara University School of Law called the NAAG’s proposal “a terrible idea” and “one of the most serious threats to Section 230’s integrity that we’ve ever faced,” arguing that the amendment would subject Internet communication and commerce to the whims of vague, conflicting, and provincial state legislation.[161]

The demand by the state attorneys general has not yet resulted in a movement within the U.S. Congress to amend Section 230; Congress has instead looked to expand federal sex trafficking law to cover advertising. On March 13, 2014, Rep. Ann Wagner introduced H.R. 4225, the “Stop Advertising Victims of Exploitation (SAVE) Act of 2014” in the U.S. House of Representatives.[162] In its final form, H.R. 4225 seeks to amend the current federal law against sex trafficking. As currently enacted, the law punishes (among other things) anyone who “knowingly . . . recruits, entices, harbors, transports, provides, obtains, or maintains by any means a person” knowing or in reckless disregard of the fact that either (1) the person is a minor who will be engaged in a commercial sex act; or (2) the person is of any age, but will be so engaged through means of force, fraud, or coercion. A separate offense exists for someone who benefits financially from these activities, provided they also satisfy the same knowledge requirement.[163]

The bill would add “advertises” to the list of prohibited behavior. It would require that those who financially benefit from the advertising of sex trafficking have actual knowledge of the trafficking, but would allow those doing the advertising to be held liable if they are merely in “reckless disregard of the fact” that such person is a victim of sex trafficking. The bill does not clarify whether a platform, like Backpage.com or Craigslist, would be considered the advertiser or the financial beneficiary. If it is considered the advertiser, a platform could be held liable without a showing of specific knowledge of the activity, in stark contrast to most other forms of online intermediary liability. Because it would be a federal criminal law, Section 230 would offer no defense.

Some members of the media and civil liberties organizations have expressed concerns with this legislation. The Association of Alternative Newsmedia published an editorial in April attacking H.R. 4225, raising First Amendment concerns similar to those previously raised by Backpage.com with respect to state statutes, and asserting that the statute would subject intermediaries to impossible monitoring and verification requirements of the sort that Section 230 was intended to prevent.[164] The American Civil Liberties Union and the Center for Democracy & Technology have also come out in opposition to this bill.[165]

Despite this, the bill passed the House of Representatives by a vote of 392-19, with twenty members not voting.[166] Several related bills are pending in the Senate.[167] Senate Bill 2536 – also called the “SAVE Act” but apparently not the Senate-introduced version of H.R. 4225 – is radically broader than the House bill, enacting strict record-keeping requirements around all adult advertising and expanding criminal liability to anyone hosting, selling, or promoting any ad that facilitates a violation of any state or federal statute on sex trafficking, child sexual abuse, or assault on children.[168] The bill excludes Internet access service providers, Internet browsers, “external” information location tools, and telecommunications carriers. This works to exclude some online intermediaries, but critically – and in all likelihood, intentionally – not websites like Backpage.com or Craigslist.[169]

[159] The proposed legislative amendment would add the words “or State” to 47 U.S.C. § 230(e)(1), so it would read “[n]othing in this section shall be construed to impair the enforcement of . . . any . . . Federal or State criminal statute.”
[163] 18 U.S.C. § 1591.
[166] H.R. 4225: SAVE Act of 2014, GovTrack, (last viewed July 19, 2014).
[167] See, e.g., Stop Exploitation Through Trafficking Act, S. 2599; End Trafficking Act of 2014, S. 2564; SAVE Act, S. 2536.
[168] See S. 2536 § 3.
[169] S. 2536 § 3.

5. Conclusion

As the circumstances of Craigslist and Backpage.com illustrate, the presence of Section 230 concentrates criminal power over online activity in Congress, and leaves states with little ability to proscribe online behavior on their own. For all the public pressure that state authorities can bring to bear, Section 230 ultimately blocks their ability to suppress activity by using online intermediaries as a choke point. Calls by these intermediaries to instead cooperate to combat sex trafficking at the source, like those made by Craigslist during 2009 and 2010, have been rejected by state law enforcement. Accordingly, while image-conscious organizations such as Craigslist might decide to abandon such services, states have few alternatives for taking action against organizations like Backpage.com that refuse to succumb to that pressure.

For issues outside of sex trafficking, this situation is likely to continue. There appears to be little interest in Congress in granting state authorities broad discretion to impose criminal penalties on intermediaries for the conduct of their users, making a substantial amendment to Section 230 unlikely. Case-by-case solutions might, however, be reached at the federal level, as in the case of the pending SAVE Act. Federal statutory solutions are nevertheless more difficult to enact than state laws, not least because of the far greater public scrutiny that federal bills receive. Many online media organizations are likely to challenge the SAVE Act given the law’s harsh criminal penalties and unclear boundaries, but as of yet only a few organizations have voiced opposition to it.

B. Private Ordering to Respond to Copyright Concerns: YouTube’s ContentID Program

As discussed at greater length in this document’s Legal Landscape Primer, Section 512 of the Digital Millennium Copyright Act (“DMCA”) makes it possible for online intermediaries (“OIs”) to host user-generated content (“UGC”) on their platforms or networks that potentially infringes the copyrights of third parties. Unlike Section 230 of the Communications Decency Act, which, with minor exceptions, completely shields OIs from liability for defamatory UGC and therefore eliminates any burden on OIs either to monitor UGC or to implement a system for removing defamatory content,[170] Section 512 of the DMCA implements a regime in which online intermediaries can shield themselves from liability only if they adhere to certain practices. Section 512’s criteria, in the aggregate, have become known as a “notice-and-takedown” regime, and the insulation from liability that the regime provides to intermediaries is the DMCA’s “safe harbor.” Online intermediaries who present or allow access to user-generated content that infringes copyright cannot be held liable for that infringement as long as they comply with the requirements of Section 512.

Whether Section 512 “works” or not is a matter of much debate,[171] with some arguing that recent developments have proven that 512’s mechanisms are wholly inadequate for protecting the interests of copyright holders,[172] and others arguing that the balance Section 512 has struck errs too far on the side of protecting those same rights holders, at the expense of individuals and the public interest.[173] OIs themselves are also affected: large-scale rights holders argue that OIs are not doing enough to prevent infringement,[174] individual users argue that OIs treat those rights holders preferentially, and a recent explosion in the number of notices sent and acted on[175] has increased the costs of compliance.[176] The resolution of these arguments notwithstanding, some online intermediaries have taken it upon themselves to go beyond the requirements of the DMCA and provide other mechanisms with which to manage and control content. It bears mentioning at the outset that these extra-legal mechanisms, while often modeled after the structures of the DMCA, are not part of it,[177] are not required in any way by law or regulation, and at least in theory have no effect on the true legal liability of the online intermediaries using them; those liabilities remain external to the private orderings in question.[178] The question, therefore, is what external pressures (legal, regulatory, social, and economic) have led to the creation and use of these extra-legal mechanisms.

This case study provides a short history of YouTube and then examines what is unquestionably the most elaborate, well-known, and (arguably) successful such private ordering mechanism for addressing copyright infringement: YouTube’s “Content ID” system. Content ID continually monitors the majority of the videos on YouTube and, upon finding a match, allows rights holders to decide whether to take the video down, place advertisements next to it, or simply monitor traffic to it. A key thread running throughout YouTube’s history[179] is the tension among YouTube’s reliance on arguably infringing copyrighted content to drive its success, its obvious need to avoid liability related to that same infringing content, and its need to maintain an adequately positive relationship both with its users, who upload the content that makes YouTube what it is, and with institutional copyright holders, whose intellectual property is interwoven with much of the content those users generate.

[170] Eric Goldman, “Want To Scrub Google Search Results In The US? Tough–O’Kroley v. Fastcase | Technology & Marketing Law Blog,” Technology & Marketing Law Blog, May 30, 2014,
[171] Michael P. Murtagh, The FCC, the DMCA, and Why Takedown Notices Are Not Enough, SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, November 15, 2009), <>; Mike Masnick, “MPAA: Millions Of DMCA Takedowns Proves That Google Needs To Stop Piracy | Techdirt,” Techdirt., December 17, 2012, <>; Mark Schultz, “Time to Revise the DMCA: The Most Antiquated Part of the Copyright May Be One of the Newest-CICTP,” Tech Policy Daily, accessed June 2, 2014, <>.
[172] “RIAA Boss Says That The DMCA ‘Isn’t Working’ Any More | Techdirt,” Techdirt., accessed June 2, 2014, <>.
[173] Niva Elkin-Koren, “Making Room for Consumers under the DMCA,” Berkeley Technology Law Journal 22, no. 3 Summer (February 2014); Matt Schruers, “5 Misconceptions We’re Likely to Hear at Tomorrow’s DMCA Hearing,” 2014, <>; Wendy Seltzer, Free Speech Unmoored in Copyright’s Safe Harbor: Chilling Effects of the DMCA on the First Amendment, SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, March 1, 2010), <>.
[174] Schruers, “5 Misconceptions We’re Likely to Hear at Tomorrow’s DMCA Hearing.”
[175] <>
[176] <>
[177] “Latest Content ID Tool for YouTube,” Official Google Blog, October 15, 2007, <>. (“Like many of these other policies and tools, Video Identification goes above and beyond our legal responsibilities.”)
[178] See Viacom Int'l, Inc. v. YouTube, Inc., 676 F.3d 19, 40-41 (2d Cir. 2012) (“In other words, the safe harbor expressly disclaims any affirmative monitoring requirement—except to the extent that such monitoring comprises a “standard technical measure” within the meaning of § 512(i).”)
[179] See the YouTube and Content ID Timeline, appendix B. p. 64.

1. YouTube Is Created

YouTube was created in 2005 by several former employees of PayPal. Within less than a year, it was popular enough to have 65,000 videos uploaded a day and to receive $12 million in venture capital funding from Sequoia.[180] Google purchased the company only months later for $1.65 billion.[181] Despite the confidence in the long-term viability of the YouTube business model that an infusion of venture capital and the subsequent purchase of the company clearly represented, the possibility of being held liable as a secondary[182] copyright infringer loomed over the fledgling company from the first.[183] Negotiations with institutional content holders, who held the copyright in much of the content being uploaded to YouTube, began almost immediately. In 2006, YouTube was able to strike licensing deals with Warner Music, ABC, and NBC, three of the largest entities in the video media space,[184] despite derision of the licensing fees that could be demanded for digital distribution of copyrighted works as “digital pennies” taking the place of “analog dollars.”[185] [186] [187] Viacom, another enormous player in the content industry,[188] also initially participated in negotiations, but ultimately refused to enter into any deal, and shortly thereafter asked YouTube to remove from the site approximately 100,000 videos that allegedly infringed its content.[189]

Notably, according to Viacom, YouTube’s business model at the time was predicated on providing access to copyrighted content. “They are saying we will only protect your content if you do a deal with us – if not, we will steal it.”[190] Statements from Chad Hurley, one of YouTube’s founders, seemed to confirm this, at least in part, [191] although the statements were arguably taken out of context.[192]

Unsurprisingly, YouTube officially took the opposite stance: that it was both interested in licensing and willing to remove any infringing material upon being notified of its presence on the site pursuant to the DMCA’s Section 512.[193] In 2007 the DMCA was almost ten years old, and courts had already tested Section 512’s provisions.[194] However, experts did not see existing law as clearly establishing Google/YouTube’s immunity from liability,[195] identifying serious potential risks, at least with respect to the damages YouTube might have to pay if found to have contributed to infringement.[196] On the other hand, Viacom’s course of action was seen as having its own dangers, including alienating[197] its customer base and missing an opportunity to be part of the burgeoning YouTube phenomenon. Both sides faced the burden of substantial legal fees,[198] potentially with nothing to show for them. Alongside all of this, the various media companies, including Viacom, were experimenting with their own competing distribution architectures and media platforms,[199] even as they licensed some or all of their material to YouTube and used the DMCA to take down other instances of it.[200] YouTube complied with the original set of takedown requests from Viacom,[201] but this was not enough to resolve the conflict and, in early 2007, Viacom sued YouTube for $1 billion, alleging copyright infringement[202] and describing YouTube’s activities as affecting “not just plaintiffs but the economic underpinnings of one of the most important sectors of the United States economy.”[203] The suit came close on the heels of the United States Supreme Court’s Grokster decision,[204] and the potential implications of a win for Viacom[205] were immediately apparent.[206]

It was against this backdrop, and with an eye toward heading off any future suits,[207] that YouTube began to develop its internal content monitoring system, as early as the beginning of 2006.[208] From the start, this system ran alongside and complemented the mechanisms of Section 512,[209] rather than taking their place. While the internal system had other names at the beginning,[210] it quickly became known as Content ID. It is important to note that, with respect to whether user-uploaded videos infringed copyright and whether YouTube could be held secondarily liable for any infringement, YouTube could have relied, and still can rely, solely on the safe harbors of the DMCA. Although early in its history there may have been pressure on YouTube to create a monitoring system in order to show its willingness to cooperate with rights holders, at this point YouTube is under no obligation to run the Content ID system. But it does, presumably because it has decided that doing so is better business practice.

[180] “A Brief History of YouTube - YouTube5Year,” accessed May 11, 2014; Megan Rose Dickey, “The 22 Key Turning Points In The History Of YouTube,” Business Insider, February 15, 2013. See also the attached timeline, courtesy of Professor Terry Fisher’s CopyrightX class.
[181] “Google Buys YouTube for $1.65 Billion,” October 10, 2006.
[182] See this paper’s Legal Landscape Primer, p. 9, for a discussion of secondary liability.
[183] “Google hopes to strike deals that will give it the rights to mainstream programming and also wipe away its potential liability for any violations of copyright law by YouTube so far.” Geraldine Fabrikant and Saul Hansell, “Viacom Asks YouTube to Remove Clips,” The New York Times, February 2, 2007, sec. Technology; Anne Broache and Greg Sandoval, “Viacom Sues Google over YouTube Clips - CNET News,” March 17, 2007.
[184] Candace Lombardi et al., “YouTube Cuts Three Content Deals - CNET News,” CNET, accessed May 13, 2014,
[185] “Analog Dollars vs. Digital Pennies,” Edictive On Filmmaking, accessed May 12, 2014,
[186] [187] While this criticism was perhaps true at one point, digital licensing fees continued to gain value and importance until they were an established part of the economics of the copyright ecosystem. “All3Media has hailed the end of the era of “digital pennies” as it forecasts that its digital activity will account for 11% of group profits this year.” Alex Farber, “All3Media: Era of ‘digital Pennies’ Is Finally over,” June 21, 2012,
[188] A short list of the copyrighted properties Viacom owned at the time includes MTV and its subsidiaries, Logo, Nickelodeon, Nick at Nite, Comedy Central, Spike TV, BET, TV Land, and the Paramount film library, which included Titanic, Forrest Gump, and the Indiana Jones and Godfather trilogies.
[189] Geraldine Fabrikant and Saul Hansell, “Viacom Asks YouTube to Remove Clips,” The New York Times, February 2, 2007, sec. Technology,
[190] Ibid.
[191] “YouTube Founder Pushed for Growth ‘through Whatever Tactics, However Evil,’” VentureBeat, March 18, 2010,
[192] Jason Kincaid, “Viacom Seems To Be Misrepresenting YouTube Founder’s Call To ‘Steal It!,’” TechCrunch, March 18, 2010,
[193] Fabrikant and Hansell, “Viacom Asks YouTube to Remove Clips.”
[195] Fabrikant and Hansell, “Viacom Asks YouTube to Remove Clips.” (“John G. Palfrey Jr. , the executive director of the Berkman Center for Internet and Society at Harvard Law School, said Google may well be able to use this defense, but ‘I don't think the law is entirely clear.’ And if Google loses, ‘the damages could get astronomically high,’ he said.")
[196] As just one example calculation, if maximum statutory damages of $150,000 per willfully infringed work were awarded for a single day’s worth of uploaded YouTube videos, the damages award would be in the billions.
[197] “Viacom Won’t Soon Shed Image as Corporate Bully,” CNET, July 8, 2008,
[198] Liz Shannon Miller, “Google’s Viacom Suit Legal Fees: $100 Million,” Gigaom, July 15, 2010,
[199] Fabrikant and Hansell, “Viacom Asks YouTube to Remove Clips.” (“Just a few months ago, Viacom and Google were cozying up so successfully that Viacom struck a deal to have Google distribute clips from its shows on its Google Video service. The deal included an arrangement where the two companies would share revenue from adjacent advertising. Mr. Dauman yesterday characterized that deal as an "experiment."”)
[200] “Official Blog: Broadcast Yourself,” accessed July 15, 2014,
[201] Louis Hau, “Viacom Demands YouTube Remove Videos,” Forbes, accessed June 2, 2014,
[202] Anne Broache and Greg Sandoval, “Viacom Sues Google over YouTube Clips - CNET News,” March 17, 2007; Initial Complaint, “Viacom vs. YouTube,” n.d., accessed May 12, 2014.
[203] Broache and Sandoval, “Viacom Sues Google over YouTube Clips - CNET News.” This sweeping proclamation of impending doom has strong echoes of then-President of the MPAA Jack Valenti’s 1982 testimony to Congress on the putative negative effects of the VCR on the movie industry: “I say to you that the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone.”
[204] Notable for its framing of the “substantial non-infringing uses” test as to whether a particular technology could be banned or enjoined because of facilitating copyright infringement.
[205] Ars Staff, “Viacom v. YouTube Ruling Is a Bummer for Google and the UGC Community,” Ars Technica, April 6, 2012,
[206] The Viacom suit itself only settled in 2014, after several appeals, and just prior to the next appearance by the parties in court. Of course during those years, YouTube only continued to grow and become more ubiquitous.
[207] Kevin J. Delaney, “YouTube to Test Software To Ease Licensing Fights,” Wall Street Journal, June 13, 2007, sec. News,
[208] Id; Kenneth Li and Eric Auchard, “YouTube to Test Video ID with Time Warner, Disney,” Reuters, June 12, 2007,; “Latest Content ID Tool for YouTube.”
[209] “YouTube’s Content ID Disputes Are Judged by the Accuser -,” accessed May 9, 2014, (“[The DMCA] wasn't perfect, by any means, but it was fair. Disputes could always be appealed, and both parties were given equal power. And if a claimant lied about owning the copyright to the material in question, they could face perjury charges.”)
[210] “Latest Content ID Tool for YouTube.”

2. What Is Content ID?[211]

A complete examination of how Content ID has evolved over time is beyond the scope of this case study, but at its most fundamental level, it is an automatic[212] system with minimal human involvement,[213] in which:

  • Content rights holders who qualify[214] may upload to YouTube’s internal network copies of the material that they own and over which they wish to assert control. Rights holders indicate what they want done with any content that matches their uploaded reference files. Options include: “block,” in which the video is removed automatically; “track,” in which the content owner can see how many views the video gets and from where; and, critically, “monetize,” in which YouTube serves ads next to the user’s video and the content owner splits the resulting revenue with YouTube on a 55-45 basis[215];
  • Any new content uploaded to YouTube is matched against the rights holder-uploaded reference database. If a match is found, the system presumes that the uploading user acted without permission from the rights holder, and that the upload is therefore a potential copyright infringement.[216] Based on the rights holder’s choice of block,[217] track, or monetize, YouTube sends the uploading user a notification that an upload of theirs has triggered the system and what the consequences are. Repeat violators have their accounts terminated.[218] At no point is a human being involved – to determine fair use, for example – although human reviewers may watch videos as part of YouTube’s other review processes, for example when users “flag” videos.[219]
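Because YouTube’s actual implementation is proprietary, the match-and-policy flow described above can only be sketched schematically. The following Python sketch is purely illustrative: every name, structure, and the exact-match comparison are hypothetical stand-ins (real Content ID uses fuzzy audio/video fingerprinting, not exact hashes).

```python
# Illustrative sketch of the Content ID match-and-policy flow described above.
# All names are hypothetical; YouTube's real system is proprietary and uses
# fuzzy audio/video fingerprinting rather than the exact match shown here.

from dataclasses import dataclass

@dataclass
class ReferenceFile:
    owner: str
    fingerprint: str   # stand-in for an audio/video fingerprint
    policy: str        # "block", "track", or "monetize"

def handle_upload(upload_fingerprint, references):
    """Match a new upload against rights holders' reference files and
    return the action dictated by the matching owner's chosen policy."""
    for ref in references:
        if ref.fingerprint == upload_fingerprint:  # real matching is fuzzy
            if ref.policy == "block":
                return (ref.owner, "video removed automatically")
            elif ref.policy == "track":
                return (ref.owner, "owner sees viewing statistics")
            elif ref.policy == "monetize":
                return (ref.owner, "ads served; revenue shared with YouTube")
    return (None, "no claim; video published normally")

refs = [ReferenceFile("LabelCo", "abc123", "monetize")]
print(handle_upload("abc123", refs))  # matched: claim applied per policy
print(handle_upload("zzz999", refs))  # no match: no claim
```

The key structural point the sketch captures is that the rights holder’s policy choice is recorded in advance and applied automatically at upload time, with no human review in the loop.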

The Content ID process therefore owes much to the DMCA’s mechanism of notice-and-takedown followed by counter-notification. Rights holders (or their uploaded reference files) “notify” YouTube of a possible infringement, and YouTube acts on the material in question. The key differences between the two processes are that, with Content ID, content owners do not have to proactively police YouTube for their content, because the scanning for matches takes place automatically; that rights holders have more choices available to them than just a takedown; and that, at least in theory, the consequences for the content-posting users in question are less serious.[220] Further, the DMCA and its mechanisms remain available, either during or after the Content ID process. At any point in the Content ID process, a copyright holder has the opportunity to file a DMCA notice to take the material down. Additionally, if a user challenges the Content ID outcome, a DMCA notice may be a rights holder’s only remaining option. The possibility of invoking federal copyright law therefore always hangs over any Content ID dispute, but it is a blunt instrument, with none of the nuances or possible beneficial outcomes that Content ID offers.

Initially, a YouTube user who received a Content ID notification had only one response: to “dispute” the claim.[221] A dispute from a user originally resulted in a removed video being replaced or monetization being restored to the user, and the content owner being notified of the dispute. The owner would then have the binary option of allowing the video to remain up or filing a DMCA notice to take it down. Later, the owner was given the ability to “reject” the dispute, which left the video down and the user with no further recourse for some claims.[222] [223] In 2012, YouTube introduced the current – theoretically more user-friendly – appeals process, to mixed reaction.[224]

Currently, a user whose content triggers a Content ID warning may first “dispute” it.[225] The relevant copyright owner may then release the claim, uphold the claim, or take the video down by submitting a DMCA notice. If the owner releases the claim, the video goes back up and the process ends. If the owner upholds the claim, the user’s dispute has been “rejected,” and the user may then “appeal” that decision, placing the ball back in the copyright owner’s court.[226] However, a user can appeal only three rejections at once, and doing so requires that the user’s account be in good standing.[227] A user with even a moderate number of videos on YouTube, to say nothing of hundreds, could easily and quickly receive more Content ID claims than they could appeal. Complaining rights holders therefore have a clear incentive to use Content ID rather than the DMCA, since an un-appealed notification essentially ends the process in a way that favors the rights holder, while a DMCA notice can be met with a counter-notice.

After an appeal, the owner has thirty days to respond, by either releasing the video as above or issuing a formal DMCA request, thereby taking the alleged infringement out of YouTube’s private ordering and into the actual tenets of federal copyright law. However, Content ID may remain involved, albeit for other user content. Notably, if a user receives a DMCA notice, they receive a “strike” on their account.[228] Having a strike means that the user cannot appeal a Content ID rejection, and three strikes can result in the loss of an account, with no way to regain it or its content.[229] Strikes can be removed by waiting six months, attending YouTube’s somewhat ridiculous[230] “copyright school,”[231] or successfully submitting a counter-notice. Notably, and apropos of the balancing of interests that copyright law and the DMCA are meant to accomplish, there does not appear to be any corresponding set of accumulating penalties for owners whose Content ID claims are eventually dropped.[232] However, YouTube does assert that it will remove owners from the Content ID partner system for systematic misuse or abuse.[233]
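The dispute-and-appeal sequence described over the preceding paragraphs can be summarized as a rough state machine. The state and action labels below are descriptive paraphrases of the text, not YouTube’s own terminology, and the sketch deliberately omits the strike-counting and good-standing preconditions discussed above.

```python
# Rough state machine for the Content ID dispute/appeal flow described above.
# States and transitions paraphrase this case study's text; they are not
# YouTube's official terminology.

TRANSITIONS = {
    "claimed":   {"dispute": "disputed"},
    "disputed":  {"release": "claim_ended",     # owner releases the claim
                  "uphold": "rejected",         # owner upholds; user may appeal
                  "dmca": "dmca_takedown"},     # owner escalates to federal law
    "rejected":  {"appeal": "appealed"},        # limited to three at once; the
                                                # account must be in good standing
    "appealed":  {"release": "claim_ended",     # owner has 30 days to respond
                  "dmca": "dmca_takedown",
                  "timeout": "claim_ended"},    # no response: claim expires
    "dmca_takedown": {},                        # exits Content ID; a strike issues
                                                # and DMCA counter-notice governs
    "claim_ended": {},
}

def step(state, action):
    """Advance the claim through one transition; KeyError on an invalid move."""
    return TRANSITIONS[state][action]

# A claim that is disputed, upheld, appealed, and then left unanswered expires.
s = "claimed"
for action in ("dispute", "uphold", "appeal", "timeout"):
    s = step(s, action)
print(s)  # → claim_ended
```

The asymmetry the text notes is visible in the structure: every path that favors the user requires an affirmative action (dispute, appeal), while owner inaction after a rejection simply leaves the claim standing.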

It is quite easy to make a list of high-profile failures of the Content ID system, failures that have serious consequences for culture,[234] [235] [236] civic participation,[237] [238] [239] an educated public,[240] and more. Some false positives are simply ridiculous,[241] but some threaten the public domain.[242] It can be argued that failures like these make the news precisely because they are proportionately rare, although hard data on Content ID’s true error rate is lacking, perhaps because what counts as an “error” is not universally agreed upon. On the other hand, the seemingly low rate of error may be because the majority of users whose legitimate content is adversely affected by Content ID simply allow it to remain down, whether because they are reluctant to engage with the process or because they do not know that processes for redress exist at all. It is equally simple to make a list of Content ID-related successes,[243] even without including the astonishing economic success of YouTube itself.[244] But whether positive, negative, or more complex, the implications of, and outcomes associated with, a vast automatic private ordering system like Content ID are both far-reaching and multi-faceted. This case study will examine some of them through a variety of different interpretive lenses.

[211] For a comprehensive internal document explaining the entire Content ID process, see Carlos Pacheco, “YouTube Content ID Handbook - Google.”
[212] The lack of human involvement is a critical piece, as it is this which not only makes it possible for the system to keep up with the flood of material being uploaded to YouTube but also which means that edge cases and false positive results are more common, and difficult to subject to human review. YouTube does have a parallel human review process whereby users can flag videos as objectionable, and they will then be tracked for review by a human being, as well as a “super-flagger” program within the larger crowdsourced version.
[213] Human beings could never review all of YouTube’s material, but depending on the nature of the Content ID flag, a particular video may get pushed to a “manual review” queue. See
[214] “Qualifying for Content ID - YouTube Help,” accessed May 9, 2014,
[215] “The Hidden Costs of YouTube’s Controversial Revenue Split,” The Daily Dot, accessed July 2, 2014,
[216] “How Content ID Works - YouTube Help,” accessed May 9, 2014,
[217] Blocking is “not the DMCA” and does not result in a strike.
[218] Note the similarities to Section 512(i)(1)(A)’s statement about the service provider’s needing to have a “policy that provides for the termination in appropriate circumstances of subscribers and account holders of the service provider's system or network who are repeat infringers;”
[219] “Flagging Content - YouTube Help,” accessed July 13, 2014,
[220] “What Is a Content ID Claim? - YouTube Help,” accessed May 9, 2014, (“In most cases, getting a Content ID claim isn’t a bad thing for your YouTube channel. It just means, ‘Hey, we found some material in your video that’s owned by someone else.’”)
[221] Patrick McKay, “Victory! YouTube Reforms Content ID Dispute Process,” October 4, 2012,
[222] “Official Blog: Improving Content ID,” October 3, 2012,
[223] “YouTube Refuses to Honor DMCA Counter-Notices,” accessed June 17, 2014,
[224] McKay, “Victory! YouTube Reforms Content ID Dispute Process”; “Official Blog: Improving Content ID.”
[225] “Dispute a Content ID Claim - YouTube Help,” accessed May 9, 2014, (“After you appeal a rejected dispute, the copyright owner has 30 days to respond. If they don’t respond within 30 days, their claim on your video will expire, and you don’t need to do anything.”)
[226] Carlos Pacheco, “YouTube Content ID Handbook - Google,” slide 79; “Dispute a Content ID Claim - YouTube Help.”
[227] Which requires that a user have no Community Guidelines strikes, no copyright strikes and no more than one video blocked worldwide by Content ID; See “Dispute a Content ID Claim”
[228] “A Guide to YouTube Removals,” Electronic Frontier Foundation, accessed May 9, 2014,
[229] Strikes can be erased either by the passage of time or by attending YouTube’s bizarre “copyright school.”
[230] elisa, “Help Fix YouTube’s Copyright School Fail,” Political Remix Video, April 22, 2011,
[231] “YouTube Copyright School,” accessed July 15, 2014,
[232] Carlos Pacheco, “YouTube Content ID Handbook - Google.”
[233] Id., slides 43, 48, 59.
[234] “Major Labels Claim Copyright Over Public Domain Songs; YouTube Punishes Musician | Techdirt,” Techdirt., accessed July 1, 2014,
[235] Brian Kamerer, “An Open Letter to Jay Leno About Stealing My Video and Then Getting It Removed From YouTube,” Splitsider, May 24, 2012, accessed July 1, 2014.
[236] “How I End up with YouTube Copyright Claims on My Own Songs | Chris Zabriskie | Composer,” accessed July 1, 2014,
[237] “Telemundo & Univision Copyright Claim On YouTube Takes Down US Congressional Appropriations Hearing | Techdirt,” Techdirt., accessed July 1, 2014,
[238] Ryan Singel, “YouTube Flags Democrats’ Convention Video on Copyright Grounds | Threat Level,” WIRED, September 5, 2012,
[239] Timothy B. Lee, “Music Publisher Uses DMCA to Take down Romney Ad of Obama Crooning,” Ars Technica, July 16, 2012,
[240] Alex Pasternack, “NASA’s Mars Rover Crashed Into a DMCA Takedown,” Motherboard, accessed July 1, 2014,
[241] “YouTube Content ID Under Fire As False Copyright Claims Abound,” SocialTimes, accessed July 1, 2014, The birdsong on a video of a nature walk triggered a match to content owned by Rumblefish, who when first notified of the error, nevertheless rejected the dispute!
[242] “Major Labels Claim Copyright Over Public Domain Songs; YouTube Punishes Musician | Techdirt”; “YouTube Taking Down Public Domain Works? | Techdirt,” Techdirt., accessed July 10, 2014,; “How Google’s ContentID System Fails At Fair Use & The Public Domain | Techdirt,” Techdirt., accessed May 9, 2014,
[243] Christopher Zoia, “This Guy Makes Millions Playing Video Games on YouTube,” The Atlantic, March 14, 2014,; Amanda Holpuch, “Harlem Shake: Baauer Cashes in on Viral Video’s Massive YouTube Success,” The Guardian, February 19, 2013, sec. Technology,; “How to Make Money on YouTube: 101 Monetization Tips | MonetizePros,” accessed May 9, 2014,; “Musician Alex Day Explains How He Beat Justin Timberlake In The Charts Basically Just Via YouTube | Techdirt,” Techdirt., March 25, 2013,
[244] Ryan Lawler, “YouTube Has Found Its Business Model, And Is Paying Out Hundreds Of Millions Of Dollars To Partners,” TechCrunch, accessed May 14, 2014,

3. What Can An Examination Of YouTube And ContentID Tell Us About Online Intermediaries And Private Ordering?

YouTube is, at its root, constructed by the content of its users, and therefore has an almost Protean[245] nature. YouTube is an extremely powerful platform and tool, in part because of its audio-visual nature,[246] and has arguably evolved to become what its users need it to be, though of course in some tension with what YouTube itself is willing and able to allow itself to be.[247] However, because it actively occupies the space between content owners and users, YouTube is arguably much more than a simple UGC platform. The overwhelming market share,[248] ubiquity, and ease of use of the YouTube platform have made it an essential tool not only for private or recreational communication, but for public uses as well.[249] YouTube is a paradigmatic example of a “social media” OI.[250] Videos on YouTube can be breaking news,[251] and also provide the raw material underlying many articles and broadcasts, but clearly YouTube is not a traditional journalistic medium.[252] Is YouTube a search engine? As a “simple” database of videos, it may not appear to be one at first, but it is unquestionably used and thought of as one, and an enormous one at that.[253] Although YouTube is not a traditional “blogging” site by most meanings of that word, “vlogging” is a burgeoning trend,[254] and more and more of the popular users and channels on YouTube consist simply of users sharing their thoughts and ideas, rather than “constructed” entertainment.[255] It has even become possible to purchase content on YouTube.[256] The platform’s identity as an intermediary is therefore one that blurs category lines, making the way in which it negotiates potential liability for its content all the more illuminating.

The presence of Content ID means that YouTube’s liability for, and handling of, the user-generated content that gives the site its unique qualities is subject to more pressures than just the largely ex post law of the DMCA. Other influences include the market, in the form of YouTube’s need to succeed as a business; the normative pressures of its users;[257] and the algorithmic decisions that underlie Content ID’s computer code and produce its outcomes.

From a liability perspective, YouTube is subject only to the DMCA and, if appropriate, CDA 230. YouTube could choose to rely solely on the DMCA’s mechanisms to police its content.[258] The DMCA is for the most part an “enabling” ex post regime. In contrast, Content ID is an ex ante regime that, at first glance, places additional net restrictions and costs on YouTube. But Content ID is a voluntary addition. Why then has YouTube chosen to invest substantial resources in Content ID if it is under no obligation to do so?

Content ID has been part of YouTube since nearly the beginning. Arguably, YouTube started Content ID as a direct response to the threat of the then-ongoing Viacom litigation,[259] and it seems reasonable to suggest that if there had been no lawsuit and no looming copyright liability (for example, if the DMCA somehow completely immunized OSPs for all user postings under all circumstances), YouTube would have had little incentive to innovate or investigate new ways to monitor and police its content. Professor Terry Fisher[260] has described Content ID as a way for YouTube to show both the court and the public that it was trying to do the right thing regarding its legal obligations, as part of a larger strategy that would enable it to survive.[261] But Content ID quickly became much more than just reputation management, especially as YouTube continued to grow and to gain an audience.[262] In contrast to the blunt (but arguably fairer) instrument that was the DMCA, Content ID’s “block/track/monetize” gave rights holders more nuanced choices than “up,” “down,” or “lawsuit,” which in turn made it possible for users and rights holders to innovate into the new spaces those choices opened up. Although in some ways Content ID may seem more restrictive, and from a given individual user’s perspective may well be, it is, broadly, a more enabling regulatory regime than the DMCA. Other UGC platforms, such as SoundCloud, have recognized Content ID’s success and have emulated it, sometimes for exactly the same reasons[263] and, unsurprisingly, with many of the same controversies.[264] However, many of Content ID’s affordances also have a negative side, one that almost always stems from the difficulty of scaling the individualized, fact-specific inquiry that a user needs to the exigencies of YouTube’s immense size and volume.

[246] Combined bandwidth into the brain for eyes and ears exceeds 10 MBps
[247] “The street finds its own uses for things.” William Gibson, “Burning Chrome,” Omni, July 1982; See also Ann Balsamo, arguing that when it comes to designing new technologies, we, the designers, need to leave the potential of those technologies as open as possible. Video available at ; Last viewed Jan. 12, 2009
[248] “YouTube Leads US Online Video Market with 28% Market Share,” MarketingCharts, accessed July 11, 2014,
[249] Katharine Q. Seelye, “New Presidential Debate Site? Clearly, YouTube - New York Times,” June 13, 2007,; Kristal Leah Curry, “YouTube’s Potential as a Model for Democracy: Exploring Citizentube for ‘Thick’ Democratic Content,” Journal of Curriculum Theorizing 28, no. 1 (April 18, 2012),; “Facebook, Twitter, YouTube—and Democracy,” 2010,; “Campaign Takedown Troubles: How Meritless Copyright Claims Threaten Online Political Speech | Center for Democracy & Technology,” accessed July 8, 2014,; Singel, “YouTube Flags Democrats’ Convention Video on Copyright Grounds | Threat Level”; Curry, “YouTube’s Potential as a Model for Democracy”; “CitizenTube: What Is Democracy? The State Department and YouTube Put It to a Vote,” accessed May 14, 2014,
[250] “Top 10 Social Networking Sites by Market Share of Visits [January 2013],” DreamGrow Social Media, accessed July 11, 2014,
[252] Pew Research Center’s Journalism Project Staff, “YouTube & News,” Pew Research Center’s Journalism Project, accessed July 15, 2014,
[253] “YouTube: The 2nd Largest Search Engine (Infographic),” accessed July 11, 2014,
[254] “In-Depth Statistics on Online Video Sharing and Engagement - Part I,” accessed July 11, 2014, (“YouTube is the most popular video-sharing service used by bloggers attracting 81.9% of all embedded videos”)
[255] See, e.g. Note though that as “average” YouTube users become more popular, the production values of their videos tend to increase.
[256] Jennifer Van Grove, “YouTube Expands Click-to-Buy, Takes Over Your Videos,” Mashable, January 21, 2009,; “I Clicked to Buy and I Liked It,” Official Google Blog, accessed June 3, 2014,
[257] Liz Shannon Miller, “Should YouTubers Launch New Platforms to Compete with YouTube?,” June 9, 2013,; “Union For Gamers,” Union For Gamers, accessed June 9, 2014,
[258] “Viacom v. YouTube: How a District Court Saved Free Speech on the Internet,” American Civil Liberties Union of Washington, July 6, 2010,;
[259] Verne Kopytoff, Chronicle Staff Writer, “Copyright Questions Dog YouTube / Deals with Entertainment Industry Limit Site’s Liability,” SFGate, October 27, 2006,
[260] WilmerHale Professor of Intellectual Property Law, Harvard Law School; Director, Berkman Center for Internet & Society
[261] See right-hand side of Fisher & Oberholzer-Gee: Business Strategies Mind-Map at appendix C, p 65; CopyX lecture 11.2 at
[262] A 2009 survey found that users found online video on YouTube more than any other site, by a substantial percentage.
[263] “SoundCloud » Q&A: Our New Content Identification System,” accessed July 10, 2014,; “After Heavy Threats, SoundCloud Agrees to Label Licensing Talks...,” Digital Music News, accessed July 10, 2014,; “Soundcloud Doing a Deal With Record Labels Not to Get Sued | TorrentFreak,” accessed July 11, 2014,
[264] “Universal Music Can Delete Any SoundCloud Track Without Oversight | TorrentFreak,” accessed July 11, 2014,

4. What Has Content ID Made Possible?

i. Social and Cultural Impacts

Remix culture thrives on YouTube, although there is a great deal of “original” content as well. Content ID gives rights holders the ability to curate which remixes of their material they are willing to tolerate, a new form of (indirect) brand management.[265] [266] Some rights holders don’t attempt to curate at all, seeing each reuse of their material as free publicity that builds the popularity of the material in question.[267] In parallel, users have access to a much wider range of copyrighted materials with which to remix and create new content, materials whose use would previously have caused their videos to be removed under the DMCA. When user-generated content that arguably infringes copyright remains available to digital bricoleurs, there is more freedom to use the raw materials of popular culture to make commentary, have fun, or simply participate, and works that incorporate those materials can remain public and reach a much wider audience. On the other hand, many uses of content would, if challenged in court, ultimately be deemed fair use and therefore not infringement. Relying on Content ID and its automatic processes means that this fair use analysis never takes place, and that a great deal of content that should actually remain online is blocked under Content ID.[268]

However, the democratization of access that the YouTube platform and medium represent – the lack of traditional obstacles and gatekeepers – has been a boon to those who might otherwise have struggled to get their voices heard.[269] In addition, it has facilitated the formation of new bonds of community and organization, both social[270] [271] and commercial,[272] groups whose fortunes may in part rise and fall with YouTube’s.[273] YouTube is increasingly a space in which political discourse takes place, albeit still in parallel to more traditional channels.[274] [275] [276] [277]

Conversely, the same size and breadth that make YouTube such a powerful platform mean that as it deploys Content ID and responds to the DMCA, it must balance the interests of a much wider spectrum of users, interests that may often inadvertently come into conflict. Speech on YouTube may be censored[278] deliberately[279] for personal,[280] commercial,[281] and political[282] reasons. Perhaps even more importantly, accidental censorship may occur as a result of poorly targeted Content ID matching, or of collisions with unknowing and likely indifferent commercial interests.[283] As an example of how the application of Content ID can have far-reaching and substantial effects on an entire subculture, business model, and economic ecosystem, see the extensive coverage of the December 2013 “multichannel network” controversy,[284] in which thousands of users simultaneously received numerous Content ID notices virtually overnight, many of them from seemingly unrelated third-party content holders.[285]

[265] Stuart Dredge, “Disney’s YouTube Deal Is a Real Game Changer,” The Guardian, March 29, 2014, sec. Technology,
[266] “Copyright And The Harlem Shake: Selective Enforcement | Techdirt,” Techdirt., accessed July 10, 2014,
[267] “Number Ones - Psy ‘Gangnam Style,’”, accessed July 10, 2014,
[268] “How Google’s ContentID System Fails At Fair Use & The Public Domain | Techdirt.”
[269] Hayley Tsukayama, “In Online Video, Minorities Find an Audience,” Washington Post, April 20, 2012,
[270] Clement Chau, “YouTube as a Participatory Culture,” New Directions for Youth Development 2010, no. 128 (December 1, 2010): 65–74, doi:10.1002/yd.376; Bryan Mueller, “Participatory Culture on YouTube: A Case Study of the Multichannel Network Machinima,” August 2013; “Union For Gamers.”
[271] Erik Kain, “YouTube Responds To Content ID Crackdown, Plot Thickens,” Forbes, December 17, 2013,
[272] Michael Carney, “AdRev Launches, Brings Music Rights Management to the YouTube Masses,” PandoDaily, August 15, 2013, (“use the illegally uploaded uses of their client’s IP as distribution and drive audience to that content to increase ad-based monetization. Secondly, AdRev and aid their clients in “commercializing” their IP through micro synchronization partnerships with YouTube MCNs including Maker Studios, FullScreen, Big Frame, Bent Pixels, MiTu, and others, and also through uploading this content to iTunes and Amazon.”)
[273] Staff, “Viacom v. YouTube Ruling Is a Bummer for Google and the UGC Community.”
[274] Seelye, “New Presidential Debate Site?”
[275] Erin F. Dietel-McLaughlin, “Remediating Democracy: YouTube and the Vernacular Rhetorics of Web 2.0” (Bowling Green State University, 2010),
[276] “CitizenTube,” accessed May 14, 2014,
[277] “Campaign Takedown Troubles.”
[279] “Why Yes, Copyright Can Be Used To Censor, And ‘Fair Use Creep’ Is Also Called ‘Free Speech’ | Techdirt,” Techdirt., accessed July 13, 2014,; Cory Doctorow, “AIDS Deniers Use Bogus Copyright Claims to Censor Critical YouTube Videos,” BoingBoing, February 15, 2014, accessed July 13, 2014,; “Hollywood Studios Censor Pirate Bay Documentary | TorrentFreak,” accessed July 13, 2014,
[280] “Copyright As Censorship: Using The DMCA To Take Down Websites For Accurately Calling Out Racist Comments | Techdirt,” Techdirt., accessed July 13, 2014,
[281] “Rotolight Uses DMCA To Censor Review They Didn’t Like, Admits To DMCA Abuse For Censorship | Techdirt,” Techdirt., accessed July 13, 2014,
[282] “State Censorship by Copyright? Spanish Firm Abuses DMCA to Silence Critics of Ecuador’s Government,” Electronic Frontier Foundation, accessed July 13, 2014,
[283] “Warner Bros. Censorship of Greenpeace LEGO Video Backfires | TorrentFreak,” accessed July 11, 2014,; My Own Game Has Been Flagged by YouTube!, 2013,; “Record Label Reaches Settlement With Lessig; Promises To Revamp Abusive DMCA Takedown Policies -- Chilling Effects Clearinghouse,” accessed July 13, 2014,; Singel, “YouTube Flags Democrats’ Convention Video on Copyright Grounds | Threat Level”; “How YouTube Lets Content Companies ‘claim’ NASA Mars Videos | Ars Technica,” accessed May 8, 2014,
[284] Paul Tassi, “The Injustice Of The YouTube Content ID Crackdown Reveals Google’s Dark Side,” Forbes, accessed May 14, 2014,; Mike Masnick, “Dan Bull Takes On YouTube’s ContentID Changes, Stolen Revenue, With A Diss Track | Techdirt,” Techdirt., January 3, 2014,
[285] “What People Don’t Get About Content ID,” accessed June 11, 2014,

ii. Legal and Regulatory Innovation

One clear difference between what is possible with a private ordering system like Content ID, as compared to federal legislation like the DMCA, is the potential speed of adaptation. YouTube itself has only been in existence for seven years, but Content ID has already gone through several major iterations.[286] In contrast, the DMCA, the most recent major change to copyright law, is fifteen years old, and the U.S. Congress is only now, under massive pressure from a variety of constituencies, acknowledging that current copyright law – and especially the DMCA – is perhaps not the best fit for the realities of the networked digital age.[287] Being able to change as needed may be more work for YouTube than standing on the floor of the DMCA’s safe harbors, but it makes YouTube more nimble and less reliant on government to protect its existing business model.[288]

With a private schema, OIs like YouTube at least have the opportunity to do a better job of managing the evolving needs of their users, whether individual or institutional. YouTube will obviously never be able to satisfy all of its constituencies all of the time,[289] but even when it gets something wrong, a fix can be implemented[290] far more rapidly than a new law can be passed. YouTube’s success with Content ID is already being emulated by other OIs who must balance their users’ interests against those of the content industry, and who have faced similar lawsuits while engaging in licensing talks.[291] The outstanding questions then become the extent to which the voices of individual users can be heard over those of powerful business interests, and the extent to which YouTube will make its private ordering transparent. Ideally, all of the involved parties should agree to the rules under which they will interact,[292] but, for now, YouTube users seem to be something of an afterthought.[293] It may still prove to be the case that the public interest is best served through law’s public ordering.

Somewhat more speculatively, adding the private layer of Content ID may mean a reduction in the number of infringement-based conflicts that actually make it to court. Why would a rights holder file suit, or even threaten to, if the material in question can be blocked or monetized through Content ID with minimal effort? This should in theory result in a smaller workload for federal courts, at least with respect to copyright lawsuits. Or it may mean less DMCA-related case law, and the stagnation of jurisprudence in that area. Regardless, this is a worthy topic for future research. Compare the massive copyright litigation campaign of Malibu Media, a rights holder that during one year was responsible for filing nearly 40% of all U.S. copyright lawsuits.[294]

[286] “History of Content Management - YouTube5Year,” accessed May 11, 2014,
[287] “U.S. Copyright Office: The Register’s Call for Updates to U.S. Copyright Law,” accessed July 11, 2014,; Schruers, “5 Misconceptions We’re Likely to Hear at Tomorrow’s DMCA Hearing”; “Copyright Hearing Recap: DMCA Notice & Takedown | Future of Music Coalition,” accessed July 11, 2014,
[288] William Patry, How to Fix Copyright (Oxford University Press, 2011); “ISP CEO Slams Copyright Law and Outdated Business Models | TorrentFreak,” accessed July 13, 2014,
[289] “YouTube Fails In Explaining Flood Of Takedowns For Let’s Play Videos | Techdirt,” Techdirt., accessed June 11, 2014,; Mark Sweeney, “YouTube Accused of Trying to Strong-Arm Indie Labels into Poor Deals,” The Guardian, June 3, 2014, sec. Technology,; Stuart Dredge and Dominic Rushe, “YouTube to Block Indie Labels Who Don’t Sign up to New Music Service,” The Guardian, June 17, 2014, sec. Technology,; Amir Efrati, “Reappearing on YouTube: Illegal Movie Uploads,” Wall Street Journal, February 8, 2013, sec. Tech,
[290] McKay, “Victory! YouTube Reforms Content ID Dispute Process”; Borough, Benjamin, “The next Great YouTube: Improving ContentID,” n.d.; “YouTube Announces Improved ContentID Program (Finally),” accessed June 11, 2014,; “Official Blog: Improving Content ID,” accessed May 11, 2014,; “When We Launched Google+ over Three Years Ago, We Had a Lot of Restrictions On…,” accessed July 15, 2014,
[291] “After Heavy Threats, SoundCloud Agrees to Label Licensing Talks...”
[292] “Searching for the Right Balance,” Official Google Blog, accessed July 14, 2014,
[293] “YouTube Finally Admits It Totally Screwed Up Rolling Out ContentID To Multi-Channel Networks; Trying To Improve Tools | Techdirt,” Techdirt., March 27, 2014,
[294] “One Single Porn Copyright Troll, Malibu Media, Accounted For Nearly 40% Of All Copyright Lawsuits This Year | Techdirt,” Techdirt., accessed July 13, 2014,; Matthew Sag, Copyright Trolling, An Empirical Study, SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, July 3, 2014),

iii. Financial and Economic Innovation

The “monetization” choice that Content ID offers to rights holders is perhaps its most noteworthy feature and, aside from errors and false positives, the focus of most of the attention surrounding the program. Diverting a video’s ad revenue stream to the rights holder arguably functions like a compulsory licensing regime, or a sort of private copying levy,[295] but one with zero transaction costs from the user’s perspective. Even notoriously protective rights holders, such as Disney,[296] have realized that there is more to be gained by tolerating, and even profiting from, the public’s “unlicensed” uses of their intellectual property. Nintendo has gone so far as to offer to split its ad revenue with the users who incorporate its content.[297] The “Nintendo Creator Program” debuted recently to mixed reviews.[298] Looking further into the future, some have even speculated not only that YouTube and similar streaming platforms represent a new consumption paradigm[299] that will disrupt existing business models, but also that “views” may actually form the basis of new metrics for success.[300] [301] The distribution of advertising revenue[302] associated with consuming content takes the place of selling a “thing,” digital or real. The flexibility that Content ID provides – or, more cynically, the liminal zones that it creates – means that YouTube and its constituencies have more niches to fill[303] and surplus to exploit.[304] Hollywood may have seen the writing on the wall, and it is taking the YouTube platform very seriously.[305] It is hard to believe that it would have done so without first being convinced by the sheer scale of content, viewers, and dollars on YouTube that Content ID made possible.

As just one example of such a new niche, YouTube is uniquely poised to effectively curate its massive store of content,[306] a role becoming ever more vital as data grows beyond human capacity to make sense of it.[307] As the U.S. Congress holds a series of hearings on the future of copyright law in 2014, it is reasonable to speculate that future iterations of copyright law may mandate a similar content monitoring and revenue sharing system as a way of cutting the current system’s Gordian knot.[308] [309] However, from the perspective of start-up businesses and would-be disrupters and innovators, creating or buying a Content ID-like system costs a great deal of money, likely far more than having a DMCA-notification procedure in place. The costs associated with such a requirement would effectively raise the barrier to market entry, stifling innovation.

The revenue stream associated with Content ID also represents a new business model for performers and a new, or replacement, revenue stream for existing types of artists. Whether the new ways to make money are as lucrative[310] as previous ones is a matter of opinion,[311] but the mere fact that a robust debate exists as to the viability of the YouTube model, and that there are those positioning themselves as guides to the new territory,[312] speaks volumes.

[295] See, e.g., Terry Fisher, Promises to Keep, chapter 9
[296] Andrew Leonard, “How Disney Learned to Stop Worrying and Love Copyright Infringement,” accessed July 10, 2014, how_disney_learned_to_stop_worrying_and_love_copyright_infringement/.
[297] Sam Machkovech, “Nintendo Announces Plan to Share Ad Revenue with YouTube Streamers,” Ars Technica, May 27, 2014,; “Nintendo’s New Affiliate Program Will Split YouTube Ad Revenue with Proactive Users,” Polygon, accessed July 9, 2014,
[298] “Nintendo’s YouTube Plan Is Already Being Panned By YouTubers,” Kotaku, accessed January 29, 2015,
[299] “Statistics - YouTube,” accessed July 11, 2014, (“reaches more adults than any cable network.”)
[300] “Audience as the New Currency: YouTube and Its Impact on Hollywood and Social Media - Brian Solis,” accessed June 11, 2014,
[301] “Spotify Rules,” Lefsetz Letter, accessed July 10, 2014,; Victor Luckerson, “Spotify and YouTube Are Just Killing Digital Music Sales,” Time, accessed July 10, 2014,
[302] Todd Spangler, “YouTube to Gross $5.6 Billion in Ad Revenue in 2013: Report,” Variety, December 11, 2013,; Farber, “All3Media,” 3.
[303] Kaskade (verified account), “Yes, so I Will Move Forward with Constructing My Own Portal Where I Can Share What I like When I Like.,” microblog (June 4, 2014),
[304] “Gamasutra: Colin Sullivan’s Blog - YouTube’s Content ID Is Not About Copyright Law,” accessed July 10, 2014,
[305] “Why Hollywood Is Making It Rain on the YouTube Ecosystem and Why It’s Only Beginning,” PandoDaily, April 7, 2014,; Michael Carney On May 2 and 2013, “An inside Look at the First Major Acquisition of a Premium YouTube Channel,” PandoDaily, May 2, 2013,
[306] “Exclusive: ‘YouTube Music’ Is Launching This Summer...,” Digital Music News, accessed July 11, 2014,; “YouTube Says That 95% of Labels Are Now on Board...,” Digital Music News, accessed July 11, 2014,
[307] David Weinberger, Too Big To Know;
[308] “Google DMCA Takedowns Increase Tenfold, MPAA Still Says Google Not Doing Enough,” Digital Digest, accessed July 11, 2014,; Masnick, “MPAA.”
[309] “Copyright Hearing Recap: DMCA Notice & Takedown | Future of Music Coalition.”
[310] Amanda Holpuch, “Harlem Shake.”
[311] “1.2 Million YouTube Views and Not a Penny Earned for Watertown Shootout Video.,” June 18, 2013,; Zoia, “This Guy Makes Millions Playing Video Games on YouTube”; Lawler, “YouTube Has Found Its Business Model, And Is Paying Out Hundreds Of Millions Of Dollars To Partners”; “I Ain’t Gonna Work on YouTube’s Farm No More - LAUNCH Blog - LAUNCH Blog,” accessed June 9, 2014,
[312] “How to Make Money on YouTube: 101 Monetization Tips | MonetizePros”; “Making Money on YouTube with Content ID,” Official Google Blog, August 27, 2008,; Carney, “AdRev Launches, Brings Music Rights Management to the YouTube Masses.”

5. Negative Outcomes

As will come as no surprise, the most obvious and commonly occurring problem with a vast and impersonal system like Content ID is that it makes mistakes.[313] False positives are probably an unavoidable consequence of any classification system, and they are a problem with DMCA notices[314] as well as with Content ID,[315] but the issue with Content ID is the scale on which it must operate in order to be effective.[316] With one hundred hours of video uploaded to YouTube every minute, even if only one video in a million is incorrectly flagged as infringing, the errors add up rapidly.[317] And while some errors may be relatively minor, others can have far-reaching and lasting consequences.[318] Relying on big data and automation means that when errors need human attention to resolve, or to avoid in the first place, problem solving doesn’t scale. There is simply no way for YouTube to give human attention to every video, even if that attention is outsourced to rights holders.[319] What percentage of errors is “acceptable” is a difficult – if not impossible – question to answer, especially when some errors are so egregious.[320] The nature of the problem necessarily means that the interests of actors who operate at scale, whether by volume or wealth, will always be better served, while an individual’s will not. YouTube has little incentive (or ability) to tailor Content ID to meet the idiosyncratic needs of a single user, but when that user’s videos are affected, the impact on him or her is quite real.[321] The nature of copyright may even mean that the actors the public sees as “responsible” for copyright conflicts may not actually be the ones behind a removal.[322] [323]
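The scale problem can be made concrete with back-of-the-envelope arithmetic. The upload rate (one hundred hours per minute) and the one-in-a-million error rate come from the discussion above; the average video length is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope sketch of the scale problem. The upload rate and
# error rate come from the text; the average video length is an
# illustrative assumption, not a reported figure.
HOURS_UPLOADED_PER_MINUTE = 100   # from the text
AVG_VIDEO_MINUTES = 4             # assumed average video length
FALSE_POSITIVE_RATE = 1e-6        # "one in a million," from the text

# Minutes of footage arriving per day, then an estimated video count.
minutes_uploaded_per_day = HOURS_UPLOADED_PER_MINUTE * 60 * 60 * 24
videos_per_day = minutes_uploaded_per_day / AVG_VIDEO_MINUTES
false_positives_per_day = videos_per_day * FALSE_POSITIVE_RATE

print(f"{videos_per_day:,.0f} videos/day -> "
      f"{false_positives_per_day:.2f} wrongly flagged per day")
```

Under these assumptions, roughly two million videos arrive each day, so even a one-in-a-million error rate produces wrongful flags every single day, and any realistically higher error rate scales the harm proportionally.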

Layered on top of the problem of errors is the fact that when user rights are defined entirely in the Terms of Service, a user has little recourse, either procedural or substantive, when an error occurs.[324] The downside of relying on Content ID instead of the DMCA is that YouTube’s Terms of Service and Content ID’s internal procedures become de facto law.[325] [326] The First Amendment[327] does not apply to YouTube, nor is there any fundamental right to use a private service. Some critics have gone so far as to say that a user’s mere knowledge that any uploaded content will be impersonally reviewed will itself have a chilling effect on public discourse.[328]

There is no obvious solution to these problems, at least not one that will please rights holders as well as those afflicted by erroneous takedowns. Solving the false positive issue also requires addressing “correct” content matches that would nevertheless be determined to be a fair use, the Achilles heel of any automatic content review system.[329] Of course, there is no penalty for YouTube if Content ID fails to consider fair use, the way there theoretically is within the DMCA.[330] Finally, with so much power given to YouTube and Content ID, it could easily be said that YouTube is no longer just an intermediary, but a third, equally powerful participant in the relationship between content and consumer, with its own interests at stake, both separate from and intercalated with those of others.

One possible silver lining in this cloud is that the same inability to avoid false positives and the lack of recourse[331] for clear errors by Content ID is incentivizing users to innovate and create their own solutions, including alternative platforms,[332] new types of business organization[333] and revenue sharing[334], and even suggestions for YouTube-centered organized labor.[335] To perhaps stretch the point, Content ID’s false positives are acting as a kind of selection pressure on the UGC (video) ecosystem, though it remains to be seen what will survive as “fit.”

[313] “Film Distributor, Copyright Enforcement Company Join Forces To Kick Creative Commons-Licensed Film Off YouTube | Techdirt,” Techdirt., accessed July 11, 2014,
[314] “Google Starts Reporting False DMCA Takedown Requests | TorrentFreak,” accessed July 13, 2014,; “Rogues Falsely Claim Copyright on YouTube Videos to Hijack Ad Dollars | Threat Level,” WIRED, November 21, 2011,
[315] “YouTube Content ID Under Fire As False Copyright Claims Abound.”
[316] “What Is a Content ID Claim? - YouTube Help”; “How Content ID Works - YouTube Help”; “Statistics - YouTube.”
[317] “YouTube Copyright Fiasco Get Wilder, But This Time Someone Admits Error,” Kotaku, accessed June 9, 2014,; “YouTube Copyright Chaos Continues. Game Publishers To The Rescue?,” Kotaku, accessed July 10, 2014,
[318] “Campaign Takedown Troubles.”
[319] David Kravets, “Google Says It Won’t ‘Manually’ Review YouTube Vids for Infringement | Threat Level,” WIRED, October 4, 2012,; “YouTube Isn’t Going to Manually Check Videos for Copyright Infringement after All,” The Verge, October 4, 2012,; “YouTube’s Content ID Disputes Are Judged by the Accuser -”
[320] Singel, “YouTube Flags Democrats’ Convention Video on Copyright Grounds | Threat Level.”
[321] “Game Critic Says YouTube Copyright Policy Threatens His Livelihood,” accessed June 4, 2014,; 24th and 2012, “An Open Letter to Jay Leno About Stealing My Video and Then Getting It Removed From YouTube.”
[322] Mike Masnick, “Commander Hadfield’s Amazing Cover Of David Bowie’s Space Oddity Disappears Today, Thanks To Copyright | Techdirt,” Techdirt., May 14, 2014,; “YouTube Copyright Chaos Continues. Game Publishers To The Rescue?”
[323] “Blizzard, Capcom, Ubisoft And More Rally Behind Copyright-Afflicted YouTubers,” Forbes, accessed July 2, 2014,
[324] “YouTube Refuses to Honor DMCA Counter-Notices.”
[326] “Universal Music Can Delete Any SoundCloud Track Without Oversight | TorrentFreak.”
[327] “Online Hitler Parodies Suffer Censorship - FIRST AMENDMENT COALITION,” accessed July 13, 2014,
[328] “The YouTube Gaze: Permission to Create? | Enculturation,” accessed July 10, 2014,
[329] “How Google’s ContentID System Fails At Fair Use & The Public Domain | Techdirt”; “Fair Use Principles for User Generated Video Content,” Electronic Frontier Foundation, accessed May 11, 2014,; “MPAA Freaks Out: Insists That Having To Consider Fair Use Before Filing A DMCA Takedown Would Be Crazy | Techdirt,” Techdirt., accessed June 2, 2014,;
[331] “YouTube Copyright Fiasco Get Wilder, But This Time Someone Admits Error.”
[332] Miller, “Should YouTubers Launch New Platforms to Compete with YouTube?”; account, “Yes, so I Will Move Forward with Constructing My Own Portal Where I Can Share What I like When I Like.”
[333] “Crowdfunding’s Patreon Takes Aim At YouTube’s Business Model,” Huffington Post, February 13, 2014,; “YouTube Multi-Channel Networks & the Great Music Money Debate,” accessed June 11, 2014,; Lawler, “YouTube Has Found Its Business Model, And Is Paying Out Hundreds Of Millions Of Dollars To Partners.”
[334] Machkovech, “Nintendo Announces Plan to Share Ad Revenue with YouTube Streamers.”
[335] “Union For Gamers”; “A YouTube Creators’ Bill of Rights (Or ‘A Roadmap for Building a Better YouTube’) - LAUNCH Blog - LAUNCH Blog,” accessed June 9, 2014,

6. Conclusion

It is likely that as users’ and rights holders’ relationships with YouTube and with each other continue to evolve, so will Content ID, as it did following the MCN controversy. A comprehensive private ordering like Content ID may therefore serve as a “laboratory”[336] for regulation and law with respect to liability and may provide templates or cognitive anchors for future legislation.

However, more avenues for success mean more possible lines along which to make mistakes. Policy makers will need to recognize that a particular OI’s internal schema will, by necessity, suit its own needs, and that a legal or regulatory regime modeled on that of a powerful and successful OI like YouTube will likely favor the existence and survival of similar OIs. Any system will prefer some uses to others, with the inevitable “pruning” and possible chilling effect on innovation along other paths that will result.[337] The dominant players will have again written the rules, but this time indirectly. Ongoing transparency with respect to the way in which private ordering works, as well as paying more than lip service to the public interest, will likely result in both better outcomes and wider acceptance.


C. Private Ordering to Respond to Trademark Concerns – eBay’s VERO Program

In the United States, trademarks are words, phrases, symbols, and other indicia used to identify the source or sponsorship of goods or services.[338] Trademark law serves the dual purpose of protecting brand integrity and preventing customer confusion with regard to a product’s source or affiliation. Federal trademark law is codified in the Lanham Act, a statute that makes it unlawful to use a valid trademark in a manner that would cause confusion as to the source or sponsorship of goods or services.[339] Ownership of a trademark does not vest upon the mark’s creation; rather, an aspiring trademark owner must actually use the mark in commerce in connection with goods or services. The Lanham Act also authorizes trademark owners to bring infringement suits to stop or prevent use of a mark by other parties. Unlike other forms of intellectual property, trademark law is a hybrid of federal and state law, which complicates the development of concrete standards, particularly on issues involving trademark infringement.

Trademark infringement occurs when one party uses another’s trademark in commerce without permission, causing confusion at the point of sale or exploiting a third party’s initial interest. A typical scenario involves the sale of counterfeit goods, where one party uses the trademark of another in the hope of free riding on the goodwill created by the trademark owner’s investment. Another type of infringement involves false sponsorship or affiliation, where an infringer uses another party’s trademark not to mislead consumers as to the source of the product, but rather to attract the goodwill of the borrowed trademark’s brand by associating it with the infringer’s own product. Initial interest confusion occurs where an infringer uses another party’s trademark, often a competitor’s, to draw consumers in, in the hope that they will ultimately purchase the infringer’s own product.

Counterfeit goods have always been problematic for brand owners, but the Internet’s emergence as a market for goods has made it extremely difficult for trademark owners to bring suit against direct infringers. No longer burdened by international boundaries, and aided by anonymity and the lax registration requirements of online marketplaces, counterfeiters can push counterfeit goods manufactured across the globe into domestic markets with little risk of legal consequences.[340] Finding lawsuits against individual counterfeiters for direct infringement to be both time consuming and financially inefficient, trademark owners began to target online intermediaries under a theory of contributory trademark liability.[341]

Contributory trademark liability is a judicially created legal doctrine rooted in the common law of torts.[342] The seminal case on the subject is Inwood Labs., Inc. v. Ives Labs., Inc., in which the Supreme Court held that a third party is legally accountable to a trademark owner if it “intentionally induces another to infringe a trademark, or if it continues to supply its product to one whom it knows or has reason to know is engaging in trademark infringement.”[343] The Inwood test initially applied exclusively to manufacturers and distributors of infringing goods, but courts eventually expanded the scope of the doctrine to include Internet service providers (ISPs), which were analogized to flea markets based on their ability to control and monitor the activity of infringing users.[344] In cases featuring claims against ISPs, judicial analysis has focused on the second part of the Inwood test – the quantum of knowledge necessary to trigger liability.[345]

[337] Mueller, “Participatory Culture on YouTube: A Case Study of the Multichannel Network Machinima.”
[339] Lanham Act, 15 U.S.C. § 1114(1)(a)
[340] National White Collar Crime Center, 2007 Internet Crime Report 5 (2007), http:// (“During 2007, Internet auction fraud was by far the most reported offense, comprising 35.7% of referred crime complaints.”)
[341] See Lockheed Martin Corp. v. Network Solutions, Inc., 194 F.3d 980, 984 (9th Cir. 1999); Rescuecom Corp. v. Google Inc., 562 F.3d 123, 131 (2d Cir. 2009); Playboy Enterprises, Inc. v. Netscape Communications Corp., 354 F.3d 1020, 1024 (9th Cir. 2004); Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93 (2d Cir. 2010); Rosetta Stone Ltd. v. Google, Inc., 676 F.3d 144, 149 (4th Cir. 2012)
[342] Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93, 103 (2d Cir. 2010)
[343] Inwood Labs., Inc. v. Ives Labs., Inc., 456 U.S. 844, 102 S. Ct. 2182, 72 L. Ed. 2d 606 (1982)
[344] See Lockheed Martin Corp. v. Network Solutions, Inc., 194 F.3d 980, 984 (9th Cir. 1999)
[345] Tiffany (NJ) Inc. v. eBay, Inc., 576 F. Supp. 2d 463, 469 (S.D.N.Y. 2008) aff'd in part, rev'd in part, 600 F.3d 93 (2d Cir. 2010)

1. Tiffany v. eBay

In Tiffany v. eBay, the Second Circuit attempted to answer the question of whether an online marketplace could be held liable for facilitating the infringing conduct of its users.[346] Tiffany, a purveyor of fine jewelry, brought suit against eBay, the leading online auction site, in part for failing to police the site for counterfeit Tiffany products. After determining that eBay exercised sufficient control over and monitoring of its marketplace to potentially face contributory liability, the court nevertheless held that generalized knowledge of infringing conduct was not enough to assign liability to eBay based on the infringing actions of its users.[347] The court reasoned that in the absence of specific knowledge of infringing activity, eBay could not be expected to seek out and remove counterfeit listings, and that rights holders were better situated to identify infringing items and bring them to eBay’s attention through its Verified Rights Owner (VeRO) program, discussed below.[348]

The court also considered whether eBay could be liable under a theory of willful blindness, based on its general knowledge of infringing activity. Stating that a service provider may be liable if it has “reason to suspect” that its users are engaging in infringing conduct and “looks the other way,” the court noted that eBay removed every specific listing brought to its attention and had considerable anti-counterfeiting measures in place to combat infringing use.[349] Unfortunately, the court neglected to specify what types of user actions or information would trigger a “reason to suspect” infringing activity. On its face, the language would seem to include general knowledge, which could be a problem for companies with many employees under an agency theory of liability. For example, it is not clear whether liability would attach if an eBay employee received notice of a specific infringing auction and failed to take action, and the court’s willful blindness standard may become a battleground in future litigation until some clarification is provided.

[346] Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93, 103 (2d Cir. 2010)
[347] Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93, 107 (2d Cir. 2010)
[348] See Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93, 107 (2d Cir. 2010)

2. Moving Forward

The Tiffany holding seems to balance the parties’ competing interests while distributing burdens according to relative expertise and resources. Encumbering eBay with a legal responsibility to police its site for infringing auctions would have forced it to completely change its operating model, while relieving it of all responsibility would have encouraged it to facilitate even more counterfeit auctions. By placing the initial burden of notice on rights holders, the court endorsed eBay’s existing model and supported VeRO as a self-policing tool allowing rights holders to combat counterfeit sales. A rights holder possesses expertise in identifying its own products and trademarks, and is thus better situated than a market intermediary to determine whether an auction contains infringing items. But as the administrator and facilitator of the auction platform, eBay is uniquely situated to remove identified infringement, and thus assumes the burden of action after sufficient notice of infringing conduct.

While the court anchored its decision in the distinction between general and specific knowledge, the opinion was distinctly flavored by eBay’s heavy investment in anti-counterfeiting measures. During the relevant period, eBay was spending around $20 million per year on counterfeit prevention initiatives, including a buyer protection program and a fraud engine that automatically searched for counterfeit auctions.[350] Additionally, the court was satisfied that eBay had removed every listing flagged by Tiffany as potentially infringing.

[349] Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93, 110 (2d Cir. 2010)
[350] Tiffany (NJ) Inc. v. eBay, Inc., 576 F. Supp. 2d 463, 476 (S.D.N.Y. 2008) aff'd in part, rev'd in part, 600 F.3d 93 (2d Cir. 2010)

3. The VeRO Program

For online intermediaries facilitating user-to-user sales, the Tiffany court’s acceptance of eBay’s VeRO program is perhaps more instructive than its decision to absolve eBay of any legal obligation to actively monitor its site for infringing content. Generally, VeRO is a self-policing mechanism that places the initial burden of identifying infringing auctions on the holders of intellectual property rights.[351] Under the VeRO program, a rights holder alleging infringement must download and submit a Notice of Claimed Infringement (NoCI) to one of eBay’s designated agents. In addition to swearing ownership and a good faith belief that the identified listing actually infringes its rights, the owner must associate the alleged infringement with one of twelve reason codes, which correspond to different types of intellectual property claims.[352] After receipt of a NoCI, eBay removes the identified listings within 24 hours, and often much sooner.[353] EBay then provides the seller with the e-mail address of the accusing rights holder, and the burden shifts to the seller to prove that its auction was legitimate. To reinstate the item flagged for trademark infringement, eBay must receive permission from the filer of the NoCI.

[352] “How to report a listing to eBay”
[353] Tiffany (NJ) Inc. v. eBay, Inc., 576 F. Supp. 2d 463, 478 (S.D.N.Y. 2008) aff'd in part, rev'd in part, 600 F.3d 93 (2d Cir. 2010)

4. History of VeRO

EBay designed VeRO in the wake of the Digital Millennium Copyright Act (DMCA), which established a safe harbor for Internet service providers with copyright-infringing users. Under the DMCA, ISPs can avoid liability by removing infringing content after being notified of its existence.[354] Similar to the DMCA, VeRO places the burden of policing eBay’s site for trademark infringement on rights holders, who must submit a NoCI to eBay each time an infringing auction is identified. Again mirroring the DMCA, the burden of action shifts to eBay only after notice of specific instances of user infringement. But unlike the DMCA, there is no legally supported recourse for sellers whose auctions are taken down at the request of rights holders, and eBay conducts no independent investigation into the validity of the ownership claimed in a NoCI. Accused sellers are simply provided with the information of the accusing rights holder and asked to contact them directly to resolve any disputes. Consequently, rights holders have every incentive to send NoCIs overzealously, and as a result many auctions for authentic goods are removed and the accounts of individual sellers are wrongly suspended or terminated.[355]

While serving as eBay’s shield, the VeRO program functions as a sword for brand owners interested in curbing legitimate sales protected by the first sale doctrine and nominative fair use. Companies like Tiffany and Louis Vuitton would love the ability to regulate or eliminate legitimate secondary markets for their products, and part of Tiffany’s motivation for bringing claims was eBay’s refusal to prohibit the sale of all Tiffany items on its site. But the law gives them no right to regulate these markets, and in many ways the VeRO program sacrifices the rights of eBay’s users so that eBay can escape liability. Ultimately, judicial acceptance of VeRO provides no new legal authority to curb legitimate sales, but it does act as a powerful extralegal tool for rights holders with the desire and wherewithal to police a vast secondary market for their products.

[355] Tiffany (NJ) Inc. v. eBay, Inc., 576 F. Supp. 2d 463, 479 (S.D.N.Y. 2008) aff'd in part, rev'd in part, 600 F.3d 93 (2d Cir. 2010)

5. Outcomes

The Second Circuit’s opinion was favorable to online auction sites, but may be too fact-specific for general application beyond eBay’s particular business model. Ultimately, the opinion failed to delineate a clear standard for secondary liability claims against online intermediaries generally, and other intermediaries wondering whether their own practices are legally sufficient must proceed without clearly demarcated boundaries. Regardless, a few facts seemed particularly persuasive to the court, and similarly situated intermediaries hoping to avoid trademark infringement liability can look to the case for at least some direction.

First, in light of Tiffany, it is reasonable to assume that a notice-and-takedown system similar to the VeRO program will weigh heavily in an intermediary’s favor, so long as care is taken to actually remove identified listings after receipt of notice. The court made repeated references to eBay’s prompt compliance with infringement notices, and similar diligence would seem to greatly increase the likelihood of avoiding trademark liability. Indeed, other online marketplaces have adapted in the wake of Tiffany; Amazon currently utilizes a notice-and-takedown mechanism very similar to the VeRO program, and its familiar-looking “rights holder notification” even requires the same good faith assurances as to rights holders’ identities and the alleged infringing activity.[356]

Uncertainty remains, however, because the court cited several of eBay’s anti-counterfeiting initiatives, making it difficult to determine whether a VeRO-like program is sufficient, necessary, or simply persuasive. For example, it is unclear whether an online marketplace must also utilize an internal infringement filter akin to eBay’s fraud engine, whether users accused of infringement must be suspended or removed in certain circumstances, or whether simply removing the listing is sufficient. The court also highlighted eBay’s consistent steps to “improve its technology and develop anti-fraudulent measures as such measures became technologically feasible and reasonably available,” which may suggest that online marketplaces are expected to continually upgrade their protective measures as new technology becomes available.[357] And while the sum of eBay’s practices was deemed sufficient, the court gave no indication whether those practices represent the bare minimum or exceed the legal requirements for an online auction site with trademark-infringing users.

Since the decision, Tiffany has been cited in over 100 cases, but rarely in cases concerning the liability of online intermediaries. In Rosetta Stone v. Google, a district court found Google’s anti-infringement efforts sufficiently similar to eBay’s and absolved Google of contributory liability on the basis of Tiffany.[358] But the Fourth Circuit overturned the decision, holding that Tiffany did not apply to Rosetta Stone’s claims at the summary judgment stage.[359] The case subsequently settled out of court, leaving open the question of whether Google’s AdWords policy amounted to trademark infringement. The decision would also seem to preclude reliance on Tiffany in a motion for summary judgment, limiting application of its holding to fact-specific inquiries at trial.
In 1-800 Contacts, Inc. v., Inc., the Tenth Circuit advocated a stricter standard for online intermediaries providing services to trademark-infringing users.[360] Specifically, the court held that nothing in Tiffany prevents contributory liability from attaching where the service provider did not need specific knowledge of the infringing user’s identity to prevent the illegal conduct. The court reasoned that “when modern technology enables one to communicate easily and effectively with an infringer without knowing the infringer’s specific identity, there is no reason for a rigid line requiring knowledge of that identity…”[361] This logic tracks the implicit understanding in Tiffany that online marketplaces are expected to update their anti-infringement initiatives alongside technology, which effectively creates a fluid and unknowable standard for contributory trademark liability. Moreover, whether a service provider with general knowledge could have utilized technology to prevent counterfeit infringement would appear to be a question of fact, and widespread adoption of the Tenth Circuit’s interpretation could lead to considerable litigation as identity-screening mechanisms become more sophisticated.

[357] Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93, 100 (2d Cir. 2010)
[358] Rosetta Stone Ltd. v. Google Inc., 732 F. Supp. 2d 628 (E.D. Va. 2010) aff'd, 676 F.3d 144 (4th Cir. 2012)
[359] Rosetta Stone Ltd. v. Google, Inc., 676 F.3d 144, 165 (4th Cir. 2012)
[360] 1-800 Contacts, Inc. v., Inc., 722 F.3d 1229, 1254 (10th Cir. 2013)
[361] Id.

D. The State as Soft Power – The Intermediaries Around Wikileaks

1. Introduction

The mission of, which launched on October 4, 2006, is to anonymously publish otherwise private or censored documents in order to promote government and corporate transparency across the world.[362] Led by its editor-in-chief Julian Assange, an Australian computer programmer, publisher, and journalist, and largely relying on anonymous sources, WikiLeaks has been responsible for publicizing several very large leaks of confidential government information.[363] These leaks made WikiLeaks, its employees, and its sources targets of possible criminal liability.[364] But the online intermediaries that hosted, supported, or provided services to WikiLeaks also incurred risks. Although not faced with direct criminal charges, intermediary supporters of WikiLeaks were forced to confront government pressure and the potential that legal action could be taken against them. Without much guidance from courts or prior business experience, online intermediaries responded to these pressures in various ways. This analysis of the WikiLeaks case will examine how online intermediaries responded in the wake of WikiLeaks’ dissemination of controversial documents, the United States government’s effect on those responses, and what this case means for the future of online intermediaries.

[362] About: What is Wikileaks?, (June 27, 2014, 12:45 PM),
[363] Yochai Benkler, A Free Irresponsible Press: Wikileaks and the Battle over the Soul of the Networked Fourth Estate, 46 Harv. C.R.-C.L. L. Rev. 311 (2011).
[364] Id. at 313.

2. Background

Beginning in 2007, WikiLeaks made headlines in the United States by independently releasing numerous confidential documents. These leaks included the Standard Operating Procedures of the Guantanamo Bay prison, reports on Scientology, U.S. military rules of engagement in Iraq, emails from then-Governor of Alaska Sarah Palin, and, most controversially, a video showing two Apache attack helicopters killing two Reuters employees in Iraq.[365] After WikiLeaks released the Iraq video, the United States arrested U.S. Army intelligence analyst Chelsea Manning and charged her under the Uniform Code of Military Justice, including charges under the Espionage Act and the Computer Fraud and Abuse Act, for obtaining and leaking confidential national security information to WikiLeaks.[366] The United States later convicted Manning of 20 offenses and sentenced her to 35 years in prison.[367]

After Manning’s arrest, WikiLeaks worked with more established media outlets, such as The New York Times, The Guardian, and Der Spiegel, to release the Afghanistan War Diaries and Iraq War Logs in 2010.[368] Then, on November 28, 2010, WikiLeaks and its media partners began releasing United States embassy cables to the public, starting with an initial set of 220.[369] The leak of thousands of cables, dubbed “Cablegate,” exposed confidential internal communications between the U.S. government and various embassies from 1966 to 2010.[370] Although WikiLeaks’ previous releases had earned worldwide attention, Cablegate set off unprecedented scrutiny from the public and the government.[371]

After WikiLeaks released the Cablegate memos, the White House immediately issued a statement declaring that “[b]y releasing stolen and classified documents, WikiLeaks has put at risk not only the cause of human rights but also the lives and work of these individuals.”[372] Three days later, on December 1, 2010, United States Senator Joe Lieberman, Chairman of the Senate Committee on Homeland Security, released a statement asking the intermediaries supporting WikiLeaks to end their relationships with the site: “I call on any other company or organization that is hosting Wikileaks to immediately terminate its relationship with them. . . No responsible company – whether American or foreign – should assist Wikileaks in its efforts to disseminate these stolen materials.”[373] Lieberman’s staff also called Amazon to inquire about its hosting of WikiLeaks and the confidential documents.[374]

[365] Id. at 316–26.
[366] WikiLeaks: Bradley Manning Faces 22 New Charges, CBS News, (June 27, 2014, 12:58 PM),
[367] Charlie Savage & Emmarie Huetteman, Manning Sentenced to 35 Years for a Pivotal Leak of U.S. Files, The New York Times, Aug. 21, 2013, available at
[368] See Benkler, supra note 2, at 323–325.
[369] Id. at 326–329.
[370] Id.
[371] Id.
[372] Jennifer K. Elsea, Criminal Prohibitions on the Publication of Classified Defense Information, Congressional Research Service, Sept. 9, 2013, available at
[373] See Benkler, supra note 2, at 339.
[374] Julie Adler, The Public’s Burden in a Digital Age: Pressures on Intermediaries and the Privatization of Internet Censorship, 20 J.L. & Pol’y 231, 239 (2011).

3. Legal Liability

At the time of the Cablegate releases, WikiLeaks used various intermediary companies to maintain its online presence and financial viability. Amazon hosted on its cloud hosting services, while EveryDNS provided the site’s domain name service. WikiLeaks solicited donations through its website using payment processing services such as PayPal, MasterCard, Visa, and Bank of America. Users could also access WikiLeaks content through its many social media platforms and through other websites and applications that linked to WikiLeaks material.

In general, these online intermediaries would have legal immunity from most liability under Section 230 of the Communications Decency Act (CDA),[375] but Section 230 of the CDA does not apply to federal criminal law.[376] Therefore, online intermediaries such as Amazon, EveryDNS, Twitter, and PayPal could have potentially been liable under federal statutes, including the Espionage Act[377] and laws against material support for terrorism[378] or treason.[379]

Although the United States convened a grand jury to consider possible charges against WikiLeaks and Assange,[380] the United States Department of Justice has not taken any formal action against WikiLeaks, Assange, or any third party or business associated with the website.[381] Indeed, the United States has never prosecuted a journalist or an online intermediary for publishing classified information.[382] In the WikiLeaks case, the United States brought charges under the Espionage Act only against Manning, the source of the illegally obtained documents.[383] But the vague language of the Espionage Act leaves open the possibility of charging non-government employees such as journalists, media outlets, and intermediaries.[384] It is difficult to determine exactly who could be found liable under the Espionage Act.[385] Even though the threat looms, the United States continues to suggest it does not plan to charge a publisher or intermediary in connection with WikiLeaks. A legislative attorney wrote that “There may be First Amendment implications that would make such a prosecution difficult, not to mention political ramifications based on concerns about government censorship.”[386]

Those First Amendment implications stem from extensive United States Supreme Court jurisprudence, most notably New York Times Co. v. United States,[387] also known as the “Pentagon Papers” case, decided in 1971, and Bartnicki v. Vopper,[388] decided in 2001. In the “Pentagon Papers” case, the Supreme Court held that under the First Amendment, government actions to prevent publication, known as prior restraints, receive the most stringent judicial scrutiny and are permissible only in extremely rare situations.[389] In Bartnicki, the Court extended a principle from the 1979 case of Smith v. Daily Mail Publishing Co.[390] and established that publishing truthful information about a matter of public concern, even if obtained through the illegal activity of a third party, is constitutionally protected unless the government’s restriction on the speech satisfies a “state interest of the highest order.”[391]

Because the leaked documents are truthful and newsworthy, and the intermediaries were not connected to their illegal acquisition, applying the “Pentagon Papers” case and Bartnicki to WikiLeaks suggests that an online intermediary could be held liable, notwithstanding the First Amendment, only if a court determined there was a high likelihood that the content released through WikiLeaks would bring immediate and grave harm to the country.[392]

[375] See 47 U.S.C. §§ 230(c)(1) (1996). “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
[376] See 47 U.S.C. §§ 230(e)(1) (1996).
[377] See 18 U.S.C. ch. 37 (§§ 792–799).
[378] See 18 U.S.C. §§ 2339A, 2339B. See also Charles Doyle, Terrorist Material Support: An Overview of 18 U.S.C. 2339A and 2339B, Congressional Research Service, July 19, 2010, available at
[379] See 18 U.S.C. § 2381.
[380] See Ed Pilkington, WikiLeaks: US Opens Grand Jury Hearing, The Guardian, (May 11, 2011),
[381] See Elsea, supra note 11, at 16.
[382] See Geoffrey R. Stone, Government Secrecy vs. Freedom of the Press, 1 Harv. L. & Pol'y Rev. 185, 197, 204 (2007).
[383] Among other charges, the United States convicted Manning of violating 18 U.S.C. § 793(e) of the Espionage Act, which states that: “[w]hoever having unauthorized possession of, access to, or control over any document . . . or information relating to the national defense which information the possessor has reason to believe could be used to the injury of the United States or to the advantage of any foreign nation, willfully communicates, delivers, transmits . . . to any person not entitled to receive it . . . Shall be fined under this title or imprisoned not more than ten years, or both.”
[384] See Stone, supra note 21.
[385] See Emily Peterson, WikiLeaks and the Espionage Act of 1917: Can Congress Make It a Crime for Journalists to Publish Classified Information?, The New Media and the Law Vol. 35 No. 3, Summer 2011, available at Steven Aftergood, director of the Project on Government Secrecy for the Federation of American Scientists, said “The Espionage Act is so vague and poorly defined in its terms, that it’s hard to say exactly what it does and does not cover.” Id.
[386] See Elsea, supra note 11, at 16.
[387] New York Times Co. v. United States, 403 U.S. 713 (1971). The United States filed an injunction against The New York Times, demanding the newspaper stop publishing the Pentagon Papers that detailed military operations and secret diplomatic negotiations of the Vietnam War obtained through an employee of the Defense Department.
[388] Bartnicki v. Vopper, 532 U.S. 514 (2001). Bartnicki involved punishment of a radio station after it published an audio recording in violation of the Electronic Communications Privacy Act.
[389] New York Times Co., 403 U.S. at 714.
[390] Smith v. Daily Mail Publ'g Co., 443 U.S. 97, 104 (1979).
[391] Bartnicki, 532 U.S. at 534.
[392] See Stone, supra note 21, at 202. Historical examples of content that would likely bring immediate and grave danger to the nation were “the sailing dates of transports” or “locations of troops” in wartime. Id. Stone points out that the content would likely have to instantly endanger American lives and not meaningfully contribute to public debate. Id. at 203. “[T]he reason for protecting the publication of the Pentagon Papers was not only that the disclosure would not ‘surely result in direct, immediate, and irreparable damage’ to the nation, but also that the Pentagon Papers made a meaningful contribution to informed public debate.” Id.

4. Online Intermediaries React

It was easy for WikiLeaks to initiate relationships with online intermediaries while the website was still developing and relatively uncontroversial, but as soon as governmental attention and pressure began to mount, the intermediaries quickly began disassociating themselves from WikiLeaks. Many ended their relationships with WikiLeaks even though they had clear First Amendment protection.

On December 1, 2010, three days after WikiLeaks published the embassy cables, Amazon removed from its cloud hosting services, citing violations of its terms of service and the potentially damaging nature of the content on WikiLeaks.[393] After Amazon’s decision, WikiLeaks began using servers in Sweden and France. Two days later, the WikiLeaks site hosted by the French company OVH went offline after pressure from French Industry Minister Eric Besson.[394] The Pirate Party in Sweden then became WikiLeaks’ sole hosting service.[395]

EveryDNS, which provided domain name service to WikiLeaks, also terminated service, claiming that WikiLeaks was the target of distributed denial-of-service (DDoS) attacks that affected other EveryDNS clients.[396] For a period of time, Internet users who typed “” into their browsers would not be directed to the website; some resorted to typing WikiLeaks’ IP address in order to connect to the site directly.[397] WikiLeaks quickly switched to a domain name service in Switzerland and could temporarily be found at “”[398]

PayPal, an online payment service through which the public could financially support WikiLeaks, suspended its service to WikiLeaks on December 4, 2010.[399] The decision came after U.S. State Department legal adviser Harold Koh wrote a letter to WikiLeaks stating that the website was engaging in illegal activity.[400] In a statement, PayPal said that it suspended the WikiLeaks account because “our payment service cannot be used for any activities that encourage, promote, facilitate or instruct others to engage in illegal activity.”[401] Soon after, MasterCard, Visa, and Bank of America announced they would no longer allow WikiLeaks to process payments using their products.[402] The result was a 95 percent decrease in donations to WikiLeaks, though the website found some limited funding through other third parties.[403]

Later, in December 2010, Apple removed an iPhone application that allowed users to access WikiLeaks documents.[404] Even though the developer had no direct ties to WikiLeaks, Apple said it removed the app because it did not comply with local laws and could put people in harm’s way.[405]

Although Amazon, EveryDNS, PayPal, and Apple seemed to make their decisions in response to soft, indirect government pressure, Twitter, another online intermediary, felt direct pressure from United States courts. On December 14, 2010, the U.S. Department of Justice subpoenaed Twitter for WikiLeaks’ account information.[406] The subpoena, which came with a gag order, requested the user names, addresses, telephone numbers, bank account details, and credit card numbers of five WikiLeaks leaders associated with WikiLeaks’ Twitter account.[407] The subpoena also sought the email addresses and IP addresses for any communications stored on those accounts, which included identifying information for some of the more than 600,000 followers of WikiLeaks’ Twitter page.[408] Twitter successfully challenged the gag order, allowing it to disclose the subpoena to its users, but on November 11, 2011, a U.S. federal judge upheld the subpoena under the Stored Communications Act.[409] Although Twitter was the only social media outlet to publicly contest the subpoenas and gag orders, WikiLeaks claims that similar subpoenas were issued to Google and Facebook.[410]

[393] See Benkler, supra note 2, at 339.
[394] Id. at 340.
[395] Id.
[396] Id.
[397] Id.
[398] Id.
[399] Id. at 341.
[400] Id. at 340.
[401] Jonathan Haynes, PayPal Freezes WikiLeaks Account, The Guardian, Dec. 4, 2010,
[402] See Benkler, supra note 2, at 340.
[403] Mia Shanley, WikiLeaks Claims Victory as Credit Card Donations Flow Again, Reuters, July 3, 2013,
[404] Miguel Helft, Why Apple Removed a WikiLeaks App from Its Store, The New York Times, (Dec. 21, 2010 12:29 PM),
[405] Id.
[406] Scott Shane & John F. Burns, U.S. Subpoenas Twitter Over WikiLeaks Supporters, The New York Times, Jan. 8, 2011, available at
[407] Id.
[408] Id.
[409] Zack Whittaker, U.S. Judge Upholds Twitter Subpoena of WikiLeaks’ Followers, ZDNET, (Nov. 11, 2011, 1:42 PM),
[410] Shane & Burns, supra note 45.

5. Analysis

Some of the intermediaries publicly cited violations of terms of use or other contractual breaches as the reason they ended their relationships with WikiLeaks, but pressure from the United States government and threats of criminal liability undoubtedly played a large role.[411] Questions remain as to what these decisions tell us about the relationship between the United States government and online intermediaries, and what they mean for the future of the Internet and free speech.

The WikiLeaks case is an example of how the United States government censored potential Internet content through extralegal means. Although the law did not empower the government to stop the intermediaries from associating with WikiLeaks, the government’s soft power led to the suppression of speech by limiting the means by which the content could reach the public, stemming, at least for a time, the dissemination of WikiLeaks materials. Just as traditional print media relied on common mail carriers to transmit newspapers, modern online media outlets rely on online intermediaries to distribute and spread their content. Increasingly, it is not the government but the private companies that maintain the Internet’s infrastructure that act as gatekeepers of which messages may flow freely online.[412] If the United States government can control online intermediaries through extralegal avenues, skirting the limits of the Constitution, it can stifle online speech without running afoul of the First Amendment. Although practical considerations are of course a major obstacle, truly guaranteeing free speech online will require an Internet free from government censorship in conjunction with a robust private infrastructure that supports free speech.[413]

[411] See Benkler, supra note 2, at 314.
[412] See Adler, supra note 13, at 237.
[413] Id. at 253.

i. What, If Anything, Can Be Done?

Since online intermediaries are private companies not constrained by the limits of the Constitution, they are governed only by the contracts they sign with their customers. As a result, the terms of service controlling online speech end up being stricter than restrictions on public speech. Disseminators of online speech like WikiLeaks have limited options for fighting suppression by intermediaries. WikiLeaks could sue an intermediary for wrongful denial of service, arguing that there is an implied contractual obligation not to withhold service unreasonably or in bad faith.[414] WikiLeaks could also sue the government for tortious interference with contractual relations, but it would be difficult to prove that government intervention caused the intermediary to break its contract with WikiLeaks.[415]

Without the power of law encouraging intermediaries to keep freedom of expression robust on the Internet, one of the only remaining influences over them is the power of the consumer. If public backlash is strong enough, intermediaries may think twice about refusing service to organizations like WikiLeaks. Such backlash is difficult to generate, however, because layers of secrecy between the government and the intermediaries restrict disclosures to the public. For example, it was only after Twitter appealed the gag order that the public learned about the subpoenas Twitter had received from the government, a move that earned praise from many organizations and users of the social networking website.[416] The United States government issues more than 50,000 subpoenas each year, known as national security letters, with gag orders that prevent recipients from revealing to the public what the subpoenas seek or even that they exist.[417] These gag orders stifle public debate on the topic of national security letters. If the public does not know what is going on between the intermediaries and the government, the public cannot put pressure on the intermediaries.

[414] See Benkler, supra note 2, at 367.
[415] Id. at 367–370.
[416] Ryan Singel, Twitter's Response to WikiLeaks Subpoena Should Be the Industry Standard, Wired, Jan. 11, 2011.
[417] Noam Cohen, Twitter Shines a Spotlight on Secret F.B.I. Subpoenas, The New York Times, Jan. 9, 2011.

ii. Why Only WikiLeaks?

The WikiLeaks case study also raises the question of why the intermediaries disassociated themselves from WikiLeaks but not from the other websites distributing the same material. The Cablegate documents that caused the intermediaries to separate themselves from WikiLeaks were not uniquely posted on WikiLeaks; they were also available on the websites of The New York Times, The Guardian, and Der Spiegel.[418] Nevertheless, the intermediaries did not change their policies toward these more established press entities. The intermediaries drew a line between the established press and WikiLeaks, a website that claims to be part of the press but is often cast as “rogue” or anti-American.[419] Although the constitutional protections afforded to WikiLeaks and the other outlets are largely the same,[420] the intermediaries’ decisions showed a clear difference between their treatment of WikiLeaks and their treatment of other media outlets.[421] Whatever the reason for this difference – possibly organizational structure, technology, or the perceived intent of WikiLeaks compared to the established press – it puts online ventures, especially those not conforming to traditional norms or paradigms of “the press,” at greater risk than traditional media outlets.[422] This disparate treatment undermines the quality of our public discourse and weakens the important function of the newly developing fourth estate in the networked information society.[423]

[418] See Benkler, supra note 2, at 326.
[419] Id. at 385–396.
[420] See Branzburg v. Hayes, 408 U.S. 665 (1972); see also Citizens United v. Federal Election Commission, 558 U.S. 310 (2010) (“We have consistently rejected the proposition that the institutional press has any constitutional privilege beyond that of other speakers.”).
[421] See Benkler, supra note 2, at 358.
[422] Id.
[423] Id. at 362.

iii. What Will the Impact be on Economics, Social Progress, and Innovation?

There are several downstream consequences of the WikiLeaks case study. After seeing Amazon, EveryDNS, PayPal, and Apple bow to government pressure, online intermediaries faced with similar dilemmas will find it easier to make the same decision. If and when future online intermediaries confront the question of whether to support organizations publishing questionable material, especially confidential national security material, an example has already been set by some of the most powerful intermediaries in the country. Additionally, the outcome of its efforts with respect to WikiLeaks surely reassures the United States government that pressuring private companies yields results, which will only encourage similar pressure in the future. Finally, the episode may chill other online speakers, who may think twice about voicing their opinions online for fear that their speech will be suppressed by intermediaries.

E. Online Intermediaries and Transparency Reporting

1. Introduction

As online intermediaries move beyond simply delivering content to end users and become persistent cloud storage networks for all of a user’s communications and online interactions, they have become invaluable resources for law enforcement and intelligence agencies. This puts online intermediaries in a difficult situation with respect to their users. On the one hand, user trust is a central part of their business model: if users cannot trust these companies, they will not entrust them with sensitive personal material such as photographs, e-mails, texts, and other documents. On the other hand, companies must comply with the laws of the countries in which they operate, some of which require companies to disclose their users’ sensitive data (ranging from metadata to actual content) when presented with a valid legal request such as a warrant, subpoena, or court order.

Many of the world’s largest online intermediaries are products of California’s Silicon Valley, and are thus US companies bound by US law. When discussing issues such as human rights and online censorship, this location has been considered an asset, often allowing companies to claim immunity from the laws of the countries in which they don’t (yet) operate.[424]

US-based intermediaries, however, have never claimed to be immune from US legal jurisdiction. And the revelations of Edward Snowden regarding the NSA have shown how that jurisdiction subjects these companies to the surveillance demands of US intelligence agencies.[425] While the media focus of the past year and a half has been on the depth and breadth of those intelligence demands, these companies are equally subject to the requests of other US law enforcement agencies from the federal level all the way down to the local level.

Regardless of whether the demands are from intelligence agencies or local sheriff’s offices, they place the companies in a difficult situation. How do they comply with valid requests while maintaining the critical trust of their users? Over the past year there has been an explosion in the use of transparency reports as one way to navigate this difficult tension. One of the audiences for these reports is the users of the service;[426] for these users, the report symbolizes a commitment to openness and offers assurances that the company is not complicit in mass or indiscriminate surveillance. The reports, however, are an incomplete solution. They are subject to misunderstandings and ultimately serve as incomplete proxies for the real issue: the trustworthiness of companies and the extent to which they will go to protect the privacy of their users.

[424] See Eva Galperin, What Does Twitter’s Country-by-Country Takedown System Mean for Freedom of Expression?, EFF (Jan. 27, 2012) (“Like all companies (and all people) Twitter is bound by the laws of the countries in which it operates, which results both in more laws to comply with and also laws that inevitably contradict one another. Twitter could have reduced its need to be the instrument of government censorship by keeping its assets and personnel within the borders of the United States, where legal protections exist like CDA 230 and the DMCA safe harbors (which do require takedowns but also give a path, albeit a lousy one, for republication).”).
[425] See, e.g., Timothy B. Lee, Here’s everything we know about PRISM to date, Washington Post, June 12, 2013.
[426] Interviews conducted with several companies about their transparency reports have revealed that there are several audiences that companies are often trying to reach through their transparency reports. Other audiences for transparency reports include policy makers, investors, law enforcement agencies, and even employees within the company itself. This series of case studies focuses on the legal obligations (and potential liability) placed upon online intermediaries. In the context of government requests for user data, these obligations most directly affect companies who are compelled to disclose data, the users whose data is disclosed, and users whose trust in the company is eroded because of those compelled disclosures. Because of that, this paper focuses most directly on transparency reports as a means of communicating with those users.

2. Legal Background

The legal requirements for the disclosure of user data are found in several areas. At the federal level, the requirements come from two key sources. The primary authority enabling the federal government to compel companies to surrender customer data in criminal investigations is the Stored Communications Act (SCA). The authority for intelligence investigations, by contrast, is found primarily in the Foreign Intelligence Surveillance Act (FISA). The authority used to compel disclosure matters for several reasons: it determines the legal standard that applies, the kind of data that can be collected, and even how companies can write their transparency reports.

Although these authorities are described in greater detail in the legal primer section of this paper,[427] a brief review is useful here. In short, there are three main kinds of legal process for criminal investigations: subpoenas, court orders (often called d orders because their authority is located in Section 2703(d) of the SCA), and warrants. Because subpoenas and d orders are easier to obtain, law enforcement may use them only to collect basic subscriber information and other non-content information. Warrants are more difficult to obtain, requiring law enforcement to convince a court that there is “probable cause” to believe that information related to a crime is in the specific place to be searched. Because they are harder to obtain, warrants can be used to collect content information, such as e-mail subject lines, e-mail content, and instant message text.
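The tiered structure just described can be summarized in a small lookup table. This is an illustrative simplification rather than a statement of the law; the legal standards shown (for example, the “specific and articulable facts” standard for d orders) are drawn from the SCA generally and are not spelled out in the text above:

```python
# Simplified sketch: SCA legal process -> (standard to obtain it, data it reaches).
# The standards and data categories here are an illustrative summary, not a
# complete or authoritative statement of the Stored Communications Act.
SCA_PROCESS = {
    "subpoena": ("relevance to an investigation", "basic subscriber information"),
    "d order":  ("specific and articulable facts", "non-content records and metadata"),
    "warrant":  ("probable cause", "content (e.g. e-mail bodies, instant message text)"),
}

for process, (standard, data) in SCA_PROCESS.items():
    print(f"{process}: requires {standard}; can reach {data}")
```

The ordering reflects the trade-off the paper describes: the harder a process is to obtain, the more sensitive the data it can reach.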

There are also three legal processes for intelligence investigations: National Security Letters (NSLs), Section 215 of the USA PATRIOT Act, and Section 702 of the FISA Amendments Act. NSLs allow the FBI to obtain telephone and e-mail records (and associated billing records) “relevant to an authorized investigation to protect against international terrorism or clandestine intelligence activities,” but not the content of the messages themselves.[428] Section 215 allows secret court orders, approved by the Foreign Intelligence Surveillance Court (FISC), requiring third parties, such as ISPs or telephone providers, to provide business records deemed relevant to terrorism or intelligence investigations. Section 702 of the FISA Amendments Act allows the government to collect both content and non-content information about targeted non-U.S. persons reasonably believed to be outside the United States.

Subpoenas, d orders, warrants, 215 orders, and 702 orders represent just some of the wide array of legal tools at the disposal of American law enforcement and intelligence agencies. It is this collection of legal tools that puts American-based online intermediaries in a difficult position. Companies served with valid legal process have very few options other than compliance. Generally speaking, that is for the best – it would undermine civil society and respect for law if companies could pick and choose which laws to comply with. Unfortunately, the invasiveness of these legal demands risks undermining the relationship between the companies and their users.

[427] See supra pp. 16–17.
[428] 18 U.S.C. § 2709(b)(2).

3. Transparency Reporting: Resolving the Tension Between Compliance and Trust?

One of the key ways that companies have tried to maintain the trust of their users while complying with valid legal process is through the publication of transparency reports. These reports, which document the amount and type of legal process that law enforcement agencies and governments have served on a company, are a relatively new phenomenon. Prior to Edward Snowden’s first NSA leak on June 9, 2013, only seven American Internet or telecommunications companies had published transparency reports (LinkedIn, Google, Sonic, Dropbox, SpiderOak, Twitter, and Microsoft). In the year that followed, 18 additional companies released transparency reports. The revelations about the scope of NSA surveillance – and the attention those news stories garnered – thus served to build momentum for transparency reporting. With this surge in reporting taking place only within the last year, transparency reports remain very much an ongoing experiment. The 25 current transparency reports represent a vast array of preferences, choices, and techniques for presenting this information, and because they are so new, no clear consensus has yet developed around them. That said, there are three important observations we can draw from transparency reports and companies’ attempts to use them to restore and maintain user trust.

4. National Security Data Is Complicated

Although the stories of NSA surveillance may have catalyzed the use of transparency reporting, domestic law enforcement data requests are actually the more commonly reported category of data. Eighteen of the 25 transparency reports include domestic law enforcement requests, while only 15 include data on FISA requests or NSLs. More significant than the number of reports, however, is the fact that companies provide far greater detail about domestic law enforcement requests than about national security requests.

This disparity in detail between reporting on domestic law enforcement requests and reporting on national security surveillance stems from complex legal restraints. Companies are generally free to publish as much detail as they wish regarding domestic law enforcement requests. In fact, one company has taken the maximalist approach of publishing a list of every such request it has received.[429]

By contrast, the government requires companies to be quite circumspect in their disclosures about FISA and NSL requests. These restrictions stem from a January 27, 2014 agreement between the U.S. Department of Justice and the major Internet companies.[430] The agreement leaves companies with two, and only two, approaches to publishing information about national security related requests. The first option allows companies to report the following categories of data:

  • Number of NSLs received
  • Number of customer accounts affected by NSLs
  • Number of FISA orders for content information
  • Number of “customer selectors targeted under FISA content orders”
  • Number of FISA orders for non-content information
  • Number of “customer selectors targeted under FISA non-content orders”

However, all of those categories can only be reported in bands of 1000 starting with 0–999. The second option allows companies to report in bands of 250 starting with 0–249. But companies using this option may only report:

  • Number of national security requests received (FISA and NSL together in one number)
  • Number of “customer selectors targeted under all national security process”
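The banding rules under the two options can be made concrete with a short sketch. The band arithmetic follows the agreement as described above; the count used is hypothetical:

```python
def report_band(count, width):
    """Map an exact request count to the disclosure band permitted under the
    January 2014 DOJ agreement: bands of `width` starting at zero
    (e.g. 0-999, 1000-1999, ... for a width of 1000)."""
    lo = (count // width) * width
    return (lo, lo + width - 1)

# Option 1: separate NSL and FISA categories, reported in bands of 1000.
print(report_band(1342, 1000))  # (1000, 1999)

# Option 2: one combined national security number, reported in bands of 250.
print(report_band(1342, 250))   # (1250, 1499)
```

The sketch illustrates the trade-off embedded in the agreement: a company can report more categories at coarser granularity (option 1) or fewer categories at finer granularity (option 2), but never exact counts.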

Because of these restrictions, it has been difficult to extract valuable information about national security process from transparency reports. While NSA surveillance may have prompted the explosion in transparency reporting, the available reports say far more about domestic law enforcement than about the NSA. That fact, however, does not diminish the value of transparency reports as a way of understanding domestic criminal surveillance. Indeed, one thing we have learned from transparency reports is that online intermediaries receive at least as many requests for domestic criminal surveillance as for intelligence-related surveillance.[431] Thus, although the focus on the NSA may have been a misplaced motivation for transparency reporting, the end result has provided data helpful for understanding the scale and scope of the surveillance burdens placed upon online intermediaries as a whole.

[429] See Credo Mobile, 2013 Transparency Report.
[431] Ryan Budish, Tech firms should be allowed to publish more data on US surveillance, Guardian (July 18, 2013) (“[I]f our estimates are correct, national security surveillance accounted for only about 13% of the total requests Microsoft received and 54% of the total accounts surveilled. That means that non-secret criminal surveillance of Americans is as pervasive, if not more so, than the secret national security surveillance.”).

5. Transparency Reports Describe a Passive Event

The biggest challenge for transparency reports as a tool for reestablishing and maintaining trust between companies and their users is that the data often says little about how companies try to protect user data. This is because transparency reports largely document events that are passive from the companies’ perspective; the reports say more about governments than about companies. If a company’s transparency report shows a large number of government requests for its user data, that could indicate any of three things:

  • The government is aggressively investigating the users of this company
  • The company has a large number of users
  • The users of this service are more likely to be engaged in criminal activity

Importantly, none of those three possibilities relates to the trustworthiness of the company itself, because companies have no control over the number of requests they receive. Companies do, however, have control over how they handle those requests: they can carefully scrutinize requests to ensure that they respond only to valid ones. But, once again, transparency reports are ill-suited to document this. If a company’s transparency report shows that it has responded to every single government request, that may be because it has not scrutinized the requests’ validity – but it may also be because every request was valid, even after careful scrutiny. Thus, transparency reports are often weak proxies for determining company trustworthiness.
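One way to see why raw request counts say little about a company is to normalize them by user base. The figures below are entirely hypothetical, and real transparency reports rarely include comparable user counts – which is itself part of why the reports are weak proxies:

```python
# Hypothetical figures: company A receives more requests in absolute terms,
# but company B's users are targeted far more often per capita.
companies = {
    "A": {"requests": 10000, "users_millions": 500},
    "B": {"requests": 2000,  "users_millions": 10},
}

for name, c in companies.items():
    per_million = c["requests"] / c["users_millions"]
    print(f"Company {name}: {per_million:.0f} requests per million users")
# Company A: 20 requests per million users
# Company B: 200 requests per million users
```

On raw counts alone, company A looks like the bigger surveillance target; per user, the opposite is true.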

6. Companies Are Competing With Transparency Reports

Although there are clearly challenges with transparency reports, many companies are innovating with their reports, both to address some of these weaknesses and to compete with their peers. A good example of innovation comes from the user-notice section of Tumblr’s transparency report.[432] User notice, like the volume of requests received, presents a problem for transparency reports because there may be many reasons why a company does or does not provide notice to a user, making a bare percentage misleading. For example, a company may choose (or be compelled) not to provide notice because the request is sealed or because the company concluded on its own that notice might disrupt an investigation. This concern is particularly salient in child pornography investigations, where notice to the suspected user might prompt them to delete evidence. Transparency reports are often too blunt a tool to express these subtleties in company decision-making.

Tumblr has tried to address this deficiency within existing reports by providing detailed data about the percentage of requests for which it provided user notice in each of eight different kinds of legal investigations. For instance, Tumblr’s data shows that it provides notice in only 1% of “Harm to Minors” investigations and 0% of suicide investigations. Had Tumblr reported the percentage of time it provided user notice cumulatively across all types of investigations, its lack of notice in child pornography cases would have made it appear that Tumblr was providing less notice to users overall. Making the effort to categorize requests by type of investigation is not easy, but it pays dividends by helping users understand more about Tumblr’s approach to user notice in different circumstances. No other company yet provides this level of specificity for user notice in its transparency report.
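The value of a per-category breakdown like Tumblr’s can be illustrated with a brief arithmetic sketch (the request and notice counts below are hypothetical, not Tumblr’s actual figures):

```python
# Hypothetical notice data for two categories of legal investigation.
requests = {
    "harm_to_minors": {"received": 100, "notified": 1},   # ~1% notice
    "fraud":          {"received": 50,  "notified": 40},  # 80% notice
}

# Per-category rates tell a clear story...
for category, r in requests.items():
    rate = 100 * r["notified"] / r["received"]
    print(f"{category}: {rate:.0f}% notice")

# ...while a single cumulative rate blends them into a misleading figure.
total_received = sum(r["received"] for r in requests.values())
total_notified = sum(r["notified"] for r in requests.values())
print(f"cumulative: {100 * total_notified / total_received:.1f}% notice")
# 41 of 150 requests, i.e. about 27.3% - the low-notice category drags down
# a figure that says nothing about the 80% notice rate in fraud cases.
```

The cumulative number understates notice in one category and overstates it in the other, which is exactly the distortion per-category reporting avoids.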

There are other examples of innovation in transparency reporting. For instance, Verizon[433] and AT&T,[434] two of America’s biggest cellular service providers, have reported the number of requests for user location information, as well as the number of law enforcement requests for “cell tower dumps” – lists of every single phone number connected to a particular cellular tower. Although the latter is specific to mobile phone service, location data is something many intermediaries track and (presumably) share with law enforcement and intelligence agencies, but it has yet to make it to many other transparency reports.

In conversations with many companies that have released transparency reports, we’ve learned that companies often look to peer companies’ reports for inspiration when creating their own reports, but also seek to outdo existing reports with new levels of detail or innovative features. Thus, more recent transparency reports tend to make standard the features that were more innovative just a year ago. For instance, separating content from non-content requests, identifying emergency requests, and listing subpoenas, court orders, and warrants separately have all become the norm in more recent reports, when they were rarely done a year ago. Because companies seek to outdo each other with their transparency reports, it would not be a surprise to see these innovations spread to other reports, and to see further innovations in reporting that do even more to help users regain trust in online intermediaries.

[432] Tumblr’s Transparency Report.
[433] Verizon Transparency Report: US Data.
[434] AT&T Transparency Report.

7. Conclusion

Online intermediaries increasingly find themselves in a difficult situation: how do they maintain the trust of their users while complying with valid legal demands to disclose user data to the government? One approach that has gained traction over the past year is transparency reporting. These reports, however, are incomplete proxies for company trustworthiness, largely because companies control neither the number of requests they receive nor the validity of those requests. Despite this limitation, the reports, taken as a whole, help us better understand the often secretive and fragmented law enforcement environment in which intermediaries operate.

Ultimately, law enforcement requests and surveillance are government issues, not corporate ones. Thus, a government that wanted to enhance user trust in the companies operating within its legal boundaries might take it upon itself to publish transparency reports of its own. Better still, it could place significant legal restraints upon its own ability to collect user data in the first place. In the absence of those steps, transparency reports serve a useful role in providing a sense of the scope of law enforcement requests and government surveillance. To the extent that such reports show that only a small percentage of users are affected by law enforcement requests and surveillance, they are indeed helpful for reestablishing and maintaining user trust. However, transparency reports are primarily statements about government activity, and there is little a transparency report can do to directly change government behavior. Additionally, no studies have yet identified any impact of transparency reports on either user behavior or corporate bottom lines.[435] Still, to the extent that they demonstrate the scope of government data collection, the reports may help contribute to the policy discussion that could have the biggest impact on user trust: a change in government data collection and surveillance practices.

[435] We do, however, have evidence that the revelations about NSA surveillance have cost online intermediaries somewhere between $35 billion and $180 billion in lost business. Claire Cain Miller, Revelations of N.S.A. Spying Cost U.S. Tech Companies, NY Times, Mar. 21, 2014.

F. Appendix A: Literature Review

The literature review can be found as a living document here:

G. Appendix B: Youtube and ContentID Timeline



(Fisher & Oberholzer-Gee)