Mass Law Blog

Anderson v. TikTok: A Potential Sea Change for § 230 Immunity


In late August the U.S. Third Circuit Court of Appeals released a far-reaching decision, holding that § 230 of the Communications Decency Act (CDA) did not provide a safe harbor for the social media company TikTok when its algorithms recommended and promoted a video that allegedly led a minor to accidentally kill herself. Anderson v. TikTok (3d Cir. Aug. 27, 2024).

Introduction

First, a brief reminder – § 230, which was enacted in 1996, has been the guardian angel of internet platform owners. The law prohibits courts from treating a provider of an “interactive computer service” (i.e., a website) as the “publisher or speaker” of third-party content posted on its platform. 47 U.S.C. § 230(c)(1). Under § 230 websites have been given broad legal protection. Section 230 has created what is, in effect, a form of legal exceptionalism for Internet publishers. Without it any social media site (such as Facebook or X) or review site (such as Amazon) would be sued into oblivion.

On the whole the courts have given the law liberal application, dismissing cases against Internet providers under many fact scenarios. However, a vocal group argues that the broad immunity courts have found in § 230 rests on overzealous interpretations that go far beyond the statute’s original intent.

Right now § 230 has one particularly prominent critic – Supreme Court Justice Clarence Thomas. Justice Thomas has not held back when expressing disagreement with the broad protection the courts have provided under § 230. 

In Malwarebytes, Inc. v. Enigma Software (2020) a petition for writ of certiorari was denied, but Justice Thomas issued a “statement” – 

Nowhere does [§ 230] protect a company that is itself the information content provider . . . And an information content provider is not just the primary author or creator; it is anyone “responsible, in whole or in part, for the creation or development” of the content.

Again in Doe ex rel. Roe v. Snap, Inc. (2024), Justice Thomas dissented from the denial of certiorari and was critical of the scope of § 230, stating – 

In the platforms’ world, they are fully responsible for their websites when it results in constitutional protections, but the moment that responsibility could lead to liability, they can disclaim any obligations and enjoy greater protections from suit than nearly any other industry. The Court should consider if this state of affairs is what § 230 demands. 

With these judicial headwinds, Anderson v. TikTok sailed into the Third Circuit. Even one Supreme Court justice is enough to create a Category Two storm in the legal world. And boy, did the Third Circuit deliver, joining the § 230 opposition and potentially rewriting the rulebook on internet platform immunity.

Anderson v. TikTok

Nylah Anderson, a 10-year-old girl, died after attempting the “Blackout Challenge” she saw on TikTok. The challenge, which encourages users to choke themselves until losing consciousness, appeared on Nylah’s “For You Page”, a feed of videos curated by TikTok’s algorithm.

Nylah’s mother sued TikTok, alleging the company was aware of the challenge and promoted the videos to minors. TikTok defended itself using § 230, arguing that its algorithm shouldn’t strip away its immunity for content posted by others.

The district court dismissed the complaint, holding that TikTok was immunized by § 230. The Third Circuit reversed.

The Third Circuit Ruling

The Third Circuit took a novel approach to interpreting § 230, concluding that when internet platforms use algorithms to curate and recommend content, they are engaging in “first-party speech,” essentially creating their own expressive content.

The court reached this conclusion largely based on the Supreme Court’s recent decision in Moody v. NetChoice (2024). In that case the Court held that an internet platform’s algorithm that reflects “editorial judgments” about content compilation is the platform’s own “expressive product,” protected by the First Amendment. The Third Circuit reasoned that if algorithms are first-party speech under the First Amendment, they must be first-party speech under § 230 too.

Here is the court’s reasoning:

[Section] 230 immunizes [web sites] only to the extent that they are sued for “information provided by another information content provider.” In other words, [web sites] are immunized only if they are sued for someone else’s expressive activity or content (i.e., third-party speech), but they are not immunized if they are sued for their own expressive activity or content (i.e., first-party speech). . . . Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, it follows that doing so amounts to first-party speech under § 230. . . . TikTok’s algorithm, which recommended the Blackout Challenge to Nylah on her FYP, was TikTok’s own “expressive activity,” and thus its first-party speech.

Accordingly, TikTok was not protected under § 230, and Anderson’s case could proceed.

Whether the Third Circuit’s logic will be adopted by other courts (including the Supreme Court, as I discuss below), is an open question. The court’s reasoning assumes that the definition of “speech” should be consistent across First Amendment and CDA § 230 contexts. However, these are distinct legal frameworks with different purposes. The First Amendment protects freedom of expression from government interference. CDA § 230 provides liability protection for internet platforms regarding third-party content. Treating them as interchangeable may oversimplify the nuanced legal distinctions between them.

Implications for Online Platforms

If this ruling stands, platforms may need to reassess their content curation and targeted recommendation algorithms. The more a platform curates or recommends content, the more likely it is to lose § 230 protection for that activity. For now, this decision has opened the doors for more lawsuits in the Third Circuit against platforms based on their recommendation algorithms. If the holding is adopted by other courts it could lead to a fundamental rethinking of how social media platforms operate.

As a consequence, platforms might become hesitant to use sophisticated algorithms for fear of losing immunity, potentially resulting in a less curated, more chaotic online environment. This could, paradoxically, lead to more harmful content being visible, contrary to the court’s apparent intent.

Where To From Here?

Given the far-reaching consequences of this decision, it’s likely that TikTok will seek en banc review by the full Third Circuit, and there is a strong case for the full court to take it.

If unsuccessful there, this case is a strong candidate for Supreme Court review, since it creates a circuit split with § 230 interpretations in other jurisdictions. The Third Circuit even helpfully cites diverging pre-Moody opinions from the 1st, 2nd, 5th, 6th, 8th, 9th, and DC Circuits, essentially teeing the issue up for Supreme Court review.

In fact, all indications are that the Supreme Court would be receptive to an appeal in this case. The Court recently agreed to hear a case in which it was asked to decide whether algorithm-based recommendations are protected under § 230, but after oral argument it resolved the case on different grounds and never reached the § 230 issue. Gonzalez v. Google (U.S. May 18, 2023). Anderson presents another opportunity for the Supreme Court to weigh in on this issue.

In the meantime, platforms may start experimenting with different forms of content delivery that could potentially fall outside the court’s definition of curated recommendations. This could lead to innovative new approaches to content distribution, or it could result in less personalized, less engaging online experiences.

Conclusion

Anderson v. TikTok represents a potential paradigm shift in § 230 jurisprudence. While motivated by a tragic case, the legal reasoning employed could have sweeping consequences for online platforms, content moderation, and user-generated content. The decision raises fundamental questions about the nature of online platforms and the balance between protecting free expression online and holding platforms accountable for harmful content. As we move further into the age of AI-curated feeds and recommendations, these questions will only become more pressing.

Anderson v. TikTok, Inc. (3d Cir. Aug. 27, 2024)

For two earlier posts on this topic see: Section 230 Supreme Court Argument in Gonzalez v. Google: Keep An Eye on Justice Thomas and Supreme Court Will Decide Whether Google’s Algorithm-Based Recommendations are Protected Under Section 230

Section 230 Supreme Court Argument in Gonzalez v. Google: Keep An Eye on Justice Thomas


When a traditional print publication – a print newspaper or magazine – publishes a defamatory statement it is “strictly liable” for defamation. This is true even if the statement is written by an unaffiliated third party – for example, a “letter to the editor.”

But the law for print publications is not the same for Internet websites. A law enacted in 1996, the Communications Decency Act, prohibits courts from treating a provider of an “interactive computer service” (i.e., a website) as the “publisher or speaker” of third-party content posted on its platform. 47 U.S.C. § 230(c)(1). Under this law, referred to as “Section 230,” websites have been granted broad legal protection. Section 230 has created what is, in effect, a form of legal exceptionalism for Internet publishers. Without it any social media site (such as Facebook, Twitter) or review site (such as Amazon) would have been sued into oblivion.

This law has been criticized and defended vigorously for many years. On the whole, the courts have given the law liberal application, dismissing cases against Internet providers in a wide variety of contexts and under many fact scenarios.

However, as I recently noted, for the first time the Supreme Court has agreed to hear a Section 230 case. Supreme Court Will Decide Whether Google’s Algorithm-Based Recommendations are Protected Under Section 230.

Oral argument in the case is rapidly approaching – the Court will hear argument on February 21, 2023. When that day arrives you can listen to it live here.

Although Section 230 has been most effective in shielding websites from defamation claims, the case before the Supreme Court involves a different law. The plaintiffs are the estate and family of Nohemi Gonzalez, an American citizen who was murdered in a 2015 ISIS attack in Paris. They assert that Google violated the Anti-Terrorism Act (ATA), 18 U.S.C. § 2333, by using its algorithms to make targeted recommendations of Youtube videos posted by ISIS, thereby spreading ISIS’s message on Youtube.

The Ninth Circuit held that Section 230 protected Google from liability.

The plaintiffs appealed, posing the following issue to the Court: “Under what circumstances does the defense created by section 230 apply to recommendations of third-party content?”

While you might think that this is a narrow issue, in the cloistered world of Internet and social media law that is far from true. In that world this is heady stuff. Section 230 has been an almost insurmountable defensive barrier for Internet publishers, and particularly social media companies. Supporters of a broad application of Section 230 are watching the Gonzalez case with apprehension, fearing that the Court will narrow it. Critics of the law are watching with hope that the Court will do just that.

Not surprisingly, the case has attracted an enormous number of amicus briefs. I count a total of 79 briefs. They range from briefs filed by Senators Josh Hawley and Ted Cruz (urging that Section 230 be narrowed) to Meta Platforms (Facebook/Instagram) and Microsoft (urging the Court to apply Section 230 broadly). Pretty much every major social media company has weighed in on this case.

When I look at the docket of a Supreme Court appeal one of the first questions I ask is: has the Solicitor General – who represents the executive branch of the federal government – entered an appearance and filed a brief? And if so, which side has it taken?

The Solicitor General, or “SG,” is sometimes referred to as the “Tenth Justice.” The SG’s views on a case are important, sometimes more important than those of the parties. Sometimes the Court invites the SG to take a position on a case; other times the SG enters the case on its own initiative. In Gonzalez it was the latter – the SG asked for leave to file a brief and to argue the case at oral argument. Both requests were granted.

The SG has filed a lengthy, complex and highly nuanced brief in this case, parsing various claims and theories under Section 230 in detail. The bottom line is that it is urging the Court to support the Gonzalez family and estate and overrule the Ninth Circuit:

Plaintiffs’ allegations regarding YouTube’s use of algorithms and related features to recommend ISIS content require a different analysis. That theory of ATA liability trains on YouTube’s own conduct and its own communications, over and above its failure to block or remove ISIS content from its site. Because that theory does not ask the court to treat YouTube as a publisher or speaker of content created and posted by others, Section 230 protection is not available.

In other words, in the eyes of the SG Gonzalez wins, Google loses. 

The SG is careful to note that this does not mean that Google should be deemed an information content provider with respect to the videos themselves. In other words, the SG argues that Google is not liable for the ISIS postings – only that Section 230 does not shelter it from potential liability based on the fact that its algorithm recommended them. 

All eyes will be on Justice Thomas during oral argument on February 21st. While several justices have expressed concerns over the broad immunity provided by the lower courts’ application of Section 230, Justice Thomas has been the most outspoken Justice on this issue. He expressed his views on Section 230 in Malwarebytes v. Enigma Software Group USA, a case where the Court denied review. 

In Malwarebytes Justice Thomas agreed with the denial, but wrote an almost 3,000-word “Statement” criticizing much of the Section 230 jurisprudence for “adopting the too-common practice of reading extra immunity into statutes where it does not belong.” He criticized cases providing Section 230 immunity to websites that selected, edited and added commentary to third-party content, tailored third-party content to facilitate illegal human trafficking, and published third-party content on sites with design defects lacking safety features. Importantly for this appeal, he criticized websites that utilized recommendations, as Google does on Youtube.

There is little question where his vote will fall.

My prediction: Section 230 will not emerge from this appeal unscathed. The only question is the extent to which the Supreme Court will narrow its scope. Justice Thomas will write the opinion.

Update: The Supreme Court dodged the issue based on its holding in Twitter v. Taamneh. In that case, decided the same day as Gonzalez, the Court declined to impose secondary liability on tech companies for allegedly failing to prevent ISIS from using their platforms for recruiting, fundraising, and organizing. The Court ruled that internet platforms cannot be held secondarily liable under Section 2333 of the Anti-Terrorism Act based solely on broad allegations that they could have taken more aggressive action to prevent terrorists from using their services. This ruling applied to the Gonzalez case as well, and therefore the Court did not address the Section 230 issues.

Supreme Court Will Decide Whether Google’s Algorithm-Based Recommendations are Protected Under Section 230


Have you noticed that when you perform a search on Youtube you start seeing links to similar content? If you search for John Coltrane, Youtube will serve up links to more Coltrane videos and jazz performers from his era and genre. If you search for Stephen Colbert you’ll start seeing links to more Colbert shows and other late night TV shows. The more you watch, the better Youtube becomes at suggesting similar content.

These “targeted recommendations” are performed by behind-the-scenes algorithms that dole out hundreds of millions of recommendations to users daily. I use Youtube a lot, and these recommendations are quite good. I’d miss them if they disappeared. And, based on a case now pending before the Supreme Court, they might.

On October 3, 2022 the Supreme Court accepted a case to consider whether, under Section 230 of the Communications Decency Act of 1996 (“Section 230”), this automated recommendation system deprives Google (Youtube’s owner) of its “non-publisher” status. The case on appeal is Gonzalez v. Google, decided by the 9th Circuit early this year.

In Gonzalez the plaintiffs seek to hold Google liable for recommending inflammatory ISIS videos that radicalized users, encouraging them to join ISIS and commit terrorist acts. The plaintiffs are the relatives and estate of Nohemi Gonzalez, a U.S. citizen who was murdered in 2015 when ISIS terrorists fired into a crowd of diners at a Paris bistro. The plaintiffs allege that Google’s actions provided material support to ISIS, in violation of the federal Anti-Terrorism Act, 18 U.S.C. § 2333.

Google’s first-line defense is that it is immune under Section 230. This law states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

47 U.S.C. § 230(c)(1).

This 1996 law has been central to the growth of the Internet by protecting online publishers from liability for user-generated content. See The Twenty-Six Words That Created the Internet. The most common situation is where someone is defamed by a user on a social media site. The person publishing the defamation may be liable, but the social media company is immune under Section 230.

Do Google’s targeted recommendations cause it to cross over the line and lose its “non-publisher/non-speaker” status under this law? The 9th Circuit held that they do not, and dismissed the case under Section 230: “a website’s use of content neutral algorithms, without more, does not expose it to [liability for] content posted by a third party.”

Here is the issue the Gonzalez plaintiffs submitted to the Court on appeal:

Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?

… 

Petitioners will urge the Court to adopt the [following] interpretation of section 230: the protections of section 230 are limited to a publisher’s traditional editorial functions, such as whether to publish, withdraw, postpone or alter content provided by another, and do not additionally include recommending writings or videos to others.

Wow! If the Supreme Court accepts this argument, at the very least it would likely leave social media companies (Google, Facebook, Twitter, Instagram and many others) with the difficult decision of whether to stop providing content recommendations or risk liability if they continue to do so. Cases based on content recommendations are rare, so these companies (at least the large ones, which can afford the legal risk) might conclude that the benefits outweigh the risk.

However, the case has the potential to do much more, should the Court use Gonzalez to limit the scope of Section 230. Since Section 230 was enacted the courts have interpreted the law broadly and favorably to social media companies. Hundreds of lawsuits have been defended successfully under Section 230. This will be the first time the Supreme Court has decided a case under Section 230, and there is reason to believe that the Court might narrow 230’s protection. In fact, comments by Justice Thomas portend just that. In Malwarebytes v. Enigma Software (2020) the Court denied review of a Section 230 case. However, Justice Thomas filed a lengthy “statement,” observing that “many courts have construed [Section 230] broadly to confer sweeping immunity on some of the largest companies in the world,” criticizing the “nontextual” and “purpose and policy”-based rationales for those decisions, and concluding that “we need not decide today the correct interpretation of Section 230. But in an appropriate case, it behooves us to do so.”

Will the Supreme Court use Gonzalez to narrow the scope of Section 230’s protection? Justice Thomas seems to have staked out his position, and the Court’s conservative bloc may be inclined to follow his lead. Only time will tell, but we can expect that this will be a blockbuster decision for social media companies and the Internet at large.

More to follow as the parties brief the case, amici weigh in and the Court schedules oral argument.

Scotusblog page on Gonzalez v. Google.


Trump v. Facebook, Twitter and Google

I don’t know how much money Trump’s lawsuits against Facebook, Twitter, and YouTube (and their CEOs) will help him raise, or whether it will gain him political support, but I do know one thing about these cases – they have no basis in current law.

Of course it’s not outside the realm of possibility that Republican judges in Florida will see it his way, but it seems very unlikely.

At issue is the infamous Section 230 of the Communications Decency Act (CDA) – 47 U.S.C. § 230. The relevant part of this law states:

No provider or user of an interactive computer service shall be held liable on account of–

(A)  any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

Trump argues that the companies are state actors and are required to host content protected by the First Amendment. 

However, the courts have consistently held that social media companies like Facebook are not state actors subject to the First Amendment, and that their decisions to delete or block access to a user’s account fall squarely within Section 230 immunity.

Not surprisingly, the prolific Prof. Eric Goldman explains why Trump has no case in this interview with Michael Smerconish.

Prof. Goldman has a paper coming shortly that analyzes 61 prior lawsuits by users over having their accounts terminated or content removed. In every one of those cases the Internet service provider won the challenge to its termination or removal decision.

You can’t know for sure, but to say that Trump’s lawsuits are a long shot doesn’t do them justice.

The Messy Legalities of Trump’s Social Media Executive Order


On May 28, 2020 President Trump issued an “Executive Order on Preventing Online Censorship” (the Order). It takes aim at Twitter, Facebook and Google through the lens of 47 U.S.C. § 230 (Section 230), the federal law that allows internet platforms to host and moderate user-created content free of liability under state law. The Order came just days after Twitter, for the first time, added warning labels and fact-checking to several of Trump’s tweets.

A lot has already been written about the politics behind the Order. But what does the Order accomplish as a legal matter? Here’s my take, in brief.

First, the Executive Order directs the Commerce Department to ask the FCC to conduct rulemaking interpreting Section 230. However, Section 230 does not delegate rulemaking authority to the FCC, so I don’t see how the FCC could issue rules under it. If it tries, expect litigation. For in-depth discussion of this issue, see Harold Feld’s analysis here.

Second, the Executive Order instructs all federal agencies to report their online advertising expenditures. The Order doesn’t state what will be done with this information. Perhaps the agencies will be instructed to pull their advertising from these services? However, federal agency spending on Twitter is trivial, as discussed by CNBC here.

Third, it encourages the FTC to bring Section 5 enforcement actions against Internet companies for false marketing statements. The FTC already has enforcement authority over “unfair or deceptive acts or practices.” Whether it will exercise that authority against Twitter remains to be seen. However, it’s hard to believe that anything Twitter has done vis-a-vis Trump (or anyone else) constitutes an unfair or deceptive act or practice. This would have to be proven in court, so if the FTC pursues this, expect litigation.

Fourth, it instructs the U.S. attorney general (William Barr) to form a working group of state attorneys general to investigate how state laws can be used against Internet services, and to develop model state legislation to further the goals of the Order. Section 230 and the First Amendment would likely preempt any state law attempting to regulate Twitter, so this is a non-starter.

Fifth, it instructs the U.S. attorney general to draft legislation that would reform Section 230 and advance the goals of the Executive Order. OK, but this would require that a law reforming Section 230 actually be enacted. Unless the Republicans control both houses of Congress and the executive branch, this seems unlikely.

That’s it. For the most in-depth, line-by-line analysis of the Order I’ve seen, see Section 230 expert Prof. Eric Goldman’s post, Trump’s “Preventing Online Censorship” Executive Order Is Pro-Censorship Political Theater.

It’s Probably Not a Good Idea to Sue Glassdoor If Your Employees Diss You There


Section 230 of the Communications Decency Act has, once again, protected a website from a claim of defamation based on user postings.

Simply put, Section 230 of the CDA provides that a website isn’t liable for defamation (or any other non-intellectual property claim) based on user postings. The poster may be liable (if she can be identified), but the website is not. Typically, Section 230 cases involve defamation or interference with contract by the poster — copyright infringement based on user postings is handled by a separate statute, the DMCA.

Craft Beer Stellar, LLC’s suit against Glassdoor ran into this law head-first in a recent case decided by Massachusetts U.S. District Court Judge Dennis Saylor.

Craft Beer complained to Glassdoor over a critical posting by a Craft Beer franchisee (the fact that the post was by a franchisee rather than an employee is legally irrelevant). Glassdoor removed the posting on the ground that it violated Glassdoor’s community guidelines. The franchisee reposted, this time in compliance with the guidelines, and Glassdoor denied a request by Craft Beer to remove the second posting.

Craft Beer argued that by taking down the first review and allowing the second review to be posted Glassdoor lost its Section 230 immunity. The judge summarized its argument as follows:

[Craft Beer] essentially contends that Glassdoor’s decision to remove a “review” from its website for violating its community guidelines, combined with its subsequent decision to allow the updated, guidelines-compliant version of the “review” to be re-posted, constituted a material revision and change to the post’s content. Such a material revision, it contends, constituted an act of creating or developing the post’s content, and accordingly transformed Glassdoor from an (immunized) interactive computer service into an information-content provider not subject to the protections of §230.

Judge Saylor rejected this argument, noting that Glassdoor wrote neither of the two posts; it just made a decision to publish or withdraw the posts. First Circuit precedent holds that these kinds of “traditional editorial functions” — deciding whether to publish or withdraw content — fall squarely within Section 230’s grant of immunity. See Jane Doe No. 1 v. Backpage.com LLC (1st Cir. March 14, 2016) (“lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred”).

Craft Beer also claimed that Glassdoor had violated the Defend Trade Secrets Act (“DTSA”), 18 U.S.C. § 1836. However, as noted above, Section 230 provides protection for non-intellectual property claims. Although one would ordinarily think of a trade secret claim as an intellectual property claim (and therefore not covered by Section 230), the DTSA expressly states that the DTSA “shall not be construed to be a law pertaining to intellectual property for purposes of any other Act of Congress.” Accordingly, Section 230 provided Glassdoor with protection from the DTSA claim as well. (For an in-depth discussion of this issue see Professor Eric Goldman’s article, The Defend Trade Secrets Act Isn’t an ‘Intellectual Property’ Law.)

The larger problem for Craft Beer may be that not only did the judge dismiss its complaint, but the case probably has added publicity to the bad reviews Craft Beer sought to quash. Indeed, even if it had won the case and forced Glassdoor to take down the offending posts, potential franchisees researching the company online would find the posts quoted in court decisions in the case. As things now stand, Craft Beer is probably suffering to some extent from the Streisand Effect (for another example of Section 230 and the “Streisand Effect” see here). And, if it is considering an appeal to the First Circuit (a bad move, in my opinion), a decision from the First Circuit will only make matters worse.

Craft Beer Stellar, LLC v. Glassdoor, Inc. (D. Mass Oct. 17, 2018)

Attorney’s Attempt to Circumvent CDA Fails Before California Supreme Court


The Communications Decency Act (CDA) is a federal law that protects online publishers from liability for the speech of others. The CDA gives online platforms the right to publish (or decline to publish) the ideas and opinions of users without the threat of being held liable for that content or forced to remove it.

However, people who are defamed online will sometimes go to extreme lengths to try to force online publishers to remove defamatory content posted by users. A notable First Circuit case that I wrote about recently illustrates how a lawyer attempted, unsuccessfully, to obtain copyright ownership of several defamatory posts and then force Ripoff Report to remove the posts. (See: The Copyright Workaround and Reputation Management: Small Justice v. Ripoff Report).

A California attorney tried something similar in Hassell v. Bird, a case decided by the California Supreme Court on July 2, 2018. In that case a lawyer (Dawn Hassell) sued a former client and the author of a Yelp review (Ava Bird) over a review that Hassell claimed was defamatory. Hassell got a default judgment holding that the review was defamatory along with an injunction ordering the review to be removed. She then delivered the injunction to Yelp (which was not a party in the suit) and demanded that it honor the injunction against Bird and remove the review. Yelp refused to do so. The case proceeded through appeals, ending up before the California Supreme Court.

The attorney’s strategy in this case was to purposefully not name Yelp as a defendant, since Yelp would easily have been dismissed from the case under the CDA. Instead, her strategy was to get an injunction against the defendant ordering her to remove the Yelp post, and then attempt to enforce that injunction against Yelp. Ava Bird assisted in the first part of this strategy by defaulting, although it appears she may not have been properly served.

The court addressed Hassell’s strategy, and answered the central issue in the case, as follows:

The question here is whether a different result should obtain because plaintiffs made the tactical decision not to name Yelp as a defendant. Put another way, we must decide whether plaintiffs’ litigation strategy allows them to accomplish indirectly what Congress has clearly forbidden them to achieve directly. We believe the answer is no . . . an order that treats an Internet intermediary ‘as the publisher or speaker of any information provided by another information content provider’ nevertheless falls within the parameters of [the CDA].

The court observed that even an injunction (as opposed to money damages) can impose a substantial burden on an online publisher:

An injunction like the removal order plaintiffs obtained can impose substantial burdens on an Internet intermediary. Even if it would be mechanically simple to implement such an order, compliance still could interfere with and undermine the viability of an online platform . . . furthermore, as this case illustrates, a seemingly straightforward removal order can generate substantial litigation over matters such as its validity or scope, or the manner in which it is implemented. The CDA allows these litigation burdens to be imposed upon the originators of online speech. But the unique position of Internet intermediaries convinced Congress to spare republishers of online content, in a situation such as the one here, from this sort of ongoing entanglement with the courts.

The court criticized Hassell’s strategy:

. . . plaintiffs’ maneuver, if accepted, could subvert a statutory scheme intended to promote online discourse and industry self-regulation. What plaintiffs did in attempting to deprive Yelp of immunity was creative, but it was not difficult. If plaintiffs’ approach were recognized as legitimate, in the future other plaintiffs could be expected to file lawsuits pressing a broad array of demands for injunctive relief against compliant or default-prone original sources of allegedly tortious online content. . . . Congress did not intend this result, any more than it intended that Internet intermediaries be bankrupted by damages imposed through lawsuits attacking what are, at their core, only decisions regarding the publication of third party content.

Yelp itself had the last laugh in this case, and it said as much on its blog:

The Hassell Law Group, which has always been a highly-rated business on Yelp and currently maintains five stars, has spent many years in the court system (and endured the resulting Streisand Effect) in an effort to force Yelp to silence a pair of outlier reviews. As we have observed before, litigation is never a good substitute for customer service and responsiveness, and had the law firm avoided the courtrooms and moved on, it would have saved time and money, and been able to focus more on the cases that truly matter the most — those of its clients.

There is a lot more to this case than I’ve covered here. If you are interested, I recommend Eric Goldman’s analysis of the nuanced concurring and dissenting opinions in his post, The California Supreme Court Didn’t Ruin Section 230 (Today)–Hassell v. Bird.

Hassell v. Bird (Cal. Sup. Ct. July 2, 2018)

Mavrix v. LiveJournal: The Incredible Shrinking DMCA


While many performing artists and record companies complain that the Digital Millennium Copyright Act (the “DMCA”) puts them to the unfair burden of sending endless takedown notices, and argue that the law should require notice and “stay down,” supporters of Internet intermediaries and websites argue that court decisions have unreasonably narrowed the DMCA safe harbor.

A recent decision by the influential Ninth Circuit Court of Appeals (which includes California) adds to the concerns of the latter group.

LiveJournal, the defendant in this case, displayed on its website 20 photographs owned by Mavrix. Mavrix responded, not by sending DMCA “takedown” notices, as you might expect, but by filing suit for copyright infringement. LiveJournal responded that it was protected by the DMCA. However, to successfully invoke the DMCA’s safe harbor, LiveJournal had to satisfy all of the legal requirements of the DMCA.

A key requirement is that infringing content have been posted “at the direction of the user.” In other words, the DMCA is designed to make websites immune from copyright infringement based on postings by users; it doesn’t protect a site from content posted or uploaded by the site itself – that is, by the site’s employees.  The photos at issue were submitted by users and posted at their direction and therefore, LiveJournal argued, it satisfied this DMCA requirement.

However, when it comes to the DMCA, the devil is in the details, and the outcome in any case depends on how the courts interpret those details. In the case of LiveJournal, photos are submitted by users, but they are posted only after they are reviewed and approved by volunteer moderators. For this reason, Mavrix argued, the photographs were not “posted at the direction of the user”; rather, they were posted by moderators who selected them from user submissions. Further, Mavrix argued that the moderators were “agents” of LiveJournal, and therefore their actions were legally attributed to LiveJournal. In other words, as “agents” of LiveJournal their actions were the same as if they were employees.

The district court rejected Mavrix’s arguments and ruled for LiveJournal, but the Ninth Circuit reversed, holding that Mavrix had a point – the moderators might very well be “agents” of LiveJournal, in which case LiveJournal would have failed this requirement of the DMCA and be liable for copyright infringement. In reaching this conclusion the court emphasized that the critical inquiry is not who submitted content, but who posted the content. The court rejected LiveJournal’s position that the words “at the direction of the user” include all user submissions, even when they are reviewed and selected by agents or employees of the service provider.

In the case of LiveJournal, because moderators screened and posted user submissions the issue is whether the moderators are “agents” of LiveJournal whose actions should be attributed to LiveJournal. In effect, the court equated agents with employees.

To make matters worse for websites hoping to use volunteer moderators, the legal “test” to determine whether moderators are agents gets into the arcane subject of agency law, a topic that rightly triggers the limbic system of any lawyer who suffered through agency law in law school. In this case the question is whether the level of control LiveJournal exercised over its volunteer moderators created an agency relationship based on “actual” or “apparent” authority. Trust me when I say that these are complex issues that no website owner would want to have to parse out while running its business, much less have to present and argue to a jury.

This ruling was a blow to LiveJournal, but the Ninth Circuit had more bad news to deliver. Even if LiveJournal was able to establish that the moderators were not agents of LiveJournal, Mavrix might be able to show that LiveJournal had “actual” or “red flag” knowledge that the postings were infringements. While the Ninth Circuit stated that “red flag” knowledge requires that the infringement be “immediately apparent to a non-expert,” the court ruled that the fact that some of the photos contained watermarks could have created red flag knowledge. Whether they did will be up to a jury to decide. If a jury decides that LiveJournal had the requisite knowledge with respect to one or more photos, LiveJournal will lose DMCA protection for those photos.

However, after this ruling the Ninth Circuit was still not done with LiveJournal.  The DMCA requires that LiveJournal not have received a financial benefit from infringements that it had the right and ability to control. The court held that this benefit “need not be a substantial or a large proportion” of the website’s revenue, and added to the confusion around DMCA law by suggesting that such a benefit could be established based on the volume of infringing material on the site, even if this material did not belong to Mavrix and was not the subject of the current litigation.

What lessons can website operators draw from this case?

First, this is a very bad decision for any social media website that would like to moderate content, whether through the use of volunteers or by employees.

Any business based on LiveJournal’s business model – volunteer moderators who review user submissions and decide whether or not to post them – is at serious risk of losing DMCA protection. The degree of planning, administration and ongoing legal supervision necessary to be confident that moderators are not agents would be daunting.

It’s worth noting that this decision will be difficult to evade – the fact that a site may not be in one of the states included in the Ninth Circuit is not likely to provide protection. A site incorporated and operating outside the Ninth Circuit can be sued in the Ninth Circuit if it has minimum contacts there, and this is often easily established in the case of popular websites. The Mavrix case is so favorable for copyright owners seeking to challenge a DMCA safe harbor defense that it is likely to motivate forum shopping in the Ninth Circuit.

Second, the irony of this case is obvious – Mavrix creates an incentive to engage in no moderation or curation and post “all comers.” If there is no moderator (whether an agent or employee) to view a warning watermark, there can be no knowledge of infringement. Unregulated posting of user-generated content is likely to result in more copyright infringement, not less.

Third, to add to the confusion over how the DMCA should be applied to websites that host user-generated content screened by moderators, the Court of Appeals for the Tenth Circuit issued a decision in 2016 that appears to come to the opposite conclusion regarding the use of moderators. BWP Media USA Inc. v. Clarity Digital Group., LLC. This case may give LiveJournal a chance to persuade the Supreme Court to accept an appeal of the Mavrix case (based on a “circuit split”) – assuming, that is, that LiveJournal has the stomach, and the budget, to prolong this case further, rather than settle.

Lastly, it’s important to view this decision in the context of the DMCA as a whole. Any service provider hosting user-generated content has to pass through a punishing legal gauntlet before reaching the DMCA’s safe harbor:

(i) content must be stored at the direction of a user;

(ii) the provider must implement a policy to terminate repeat infringers, communicate it to users and reasonably enforce it;

(iii) the provider must designate an agent to be notified of take down notices, register the agent online by the end of 2017 and post the contact info for the agent online on the site;

(iv) the provider must respond expeditiously to take down notices;

(v) the provider may not have actual or red flag knowledge of infringement, nor may it be willfully blind to infringements;

(vi) the provider may not have the right and ability to control infringing content; and

(vii) the provider may not have a direct financial interest in the infringing content.

This is a challenging list of requirements and, as illustrated by the Mavrix case, each requirement is complex, subject to challenge by a copyright owner and subject to varying interpretations by different courts. If the service provider fails on even one point it loses its DMCA safe harbor protection.

After the Ninth Circuit’s decision in Mavrix the chances that a service provider will be able to successfully navigate this gauntlet are significantly reduced, at least in the Ninth Circuit.

Mavrix Photographs, LLC v. LiveJournal, Inc. (9th Cir. April 7, 2017)

Update: The Ninth Circuit clarified its position on whether the use of moderators deprives a website of DMCA protection in Ventura Content v. Motherless (2018). In Ventura the court held that screening for illegal material is permissible, and distinguished Mavrix on the ground that in that case the moderators screened for content that would appeal to LiveJournal’s readers (“new and exciting celebrity news”). “Because the users, not Motherless, decided what to post — except for [its] exclusion of illegal material . . .  — the material . . . was posted at the direction of users.”

Gesmer Updegrove Client Advisory re New DMCA Agent Registration Requirement

The U.S. Copyright Office has issued a new rule that has important implications for any website that allows “user generated content” (UGC). This includes, for example, videos (think Youtube), user reviews (think Amazon or Tripadvisor), and any site that allows user comments.

In order to avoid possible claims of copyright infringement based on UGC, website owners rely on the Digital Millennium Copyright Act (the “DMCA”). However, the DMCA imposes strict requirements on website owners, and failure to comply with even one of these requirements will result in the loss of protection.

One requirement is that the website register an agent with the Copyright Office. The contact information contained in the registration allows copyright owners to request a “takedown” of their content.

The Copyright Office is revamping its agent registration system, and as part of this process it is requiring website owners to re-register their DMCA agents by the end of 2017, and to re-register every three years thereafter. Gesmer Updegrove LLP’s Client Advisory on this new rule is available as a pdf file on the firm’s website.


Let’s Go Crazy! The Dancing Baby, the DMCA and Copyright Fair Use


It’s not often that a case involving a 29-second video of toddlers cycling around on a kitchen floor goes to a federal court of appeals, much less results in an important, precedent-setting copyright decision. But that is exactly what happened in Lenz v. Universal Music Corp.

The case arises from an issue inherent in the Digital Millennium Copyright Act. The DMCA allows copyright owners to request the “takedown” of a post that uses infringing content.

But what does the copyright owner have to do to determine, before sending a takedown notice, whether fair use applies? Does it need to do anything at all?

This question has finally been decided by the Ninth Circuit in a much-anticipated decision issued on September 14, 2015.

The case had inauspicious beginnings. In 2007 Stephanie Lenz posted to YouTube a 29-second video of her toddler son cycling around the kitchen, with Prince’s song “Let’s Go Crazy” playing in the background. Universal sent a DMCA takedown notice to YouTube, but Ms. Lenz contended her use of the song was fair use, and therefore was non-infringing. Eventually the dispute made its way to federal court in California, with Ms. Lenz asserting that her use of the song was protected by fair use, and that Universal had failed to take fair use into consideration before requesting takedown of her video.

The issue before the court was whether, before sending a DMCA takedown notice, copyright holders must first evaluate whether the offending content qualifies as fair use. The court held that the copyright statute does require such an evaluation, but that the copyright holder need only form a “subjective good faith belief” that fair use does not apply. And, the copyright holder may not engage in “willful blindness” to avoid learning of fair use.

In this case Universal arguably failed to consider fair use at all.

The court does not answer the practical question now faced by Universal and others: what, exactly, must a copyright holder do to show subjective good faith under the DMCA? Noting that it was “mindful of the pressing crush of voluminous infringing content that copyright holders face in a digital age,” the court described what appears to be a low standard for satisfying the “good faith” test. The court opined that a subjective good faith belief does not require investigation of the allegedly infringing content, and, “without passing judgment,” that the use of computer algorithms appeared to be a “valid … middle ground” for processing content. However, the court failed to provide a standard for a computerized, algorithmic test that might apply in the notoriously uncertain legal context of copyright fair use.

It is difficult to avoid the conclusion that this decision will increase the cost burden on content holders who wish to use the DMCA to force the takedown of copyright-infringing content on the Internet. While the court provides little guidance as to what a copyright owner will have to do to show that it exercised “subjective good faith” before sending a takedown notification, it seems likely that the ruling will require increased human involvement, and perhaps even legal consultation in “close cases.”

This case was originally filed by Ms. Lenz in 2007, eight years ago, yet it is far from concluded. The Ninth Circuit’s decision only sends the case back to the trial court for a trial under the legal standard enunciated by the Ninth Circuit. And even that determination can be reached only after the court (or a jury) concludes that the 29-second video was a fair use of the Prince song in the first place, an issue that has yet to be taken up by the court.

What, one might ask, can Ms. Lenz expect to receive in the event she prevails at trial? First, the Ninth Circuit decision explicitly allows her to recover “nominal damages” — in other words, damages as low as $1. However, even if she prevails and recovers only one dollar, she would be entitled to her costs and attorney’s fees, which could be a substantial amount, notwithstanding the fact that Ms. Lenz is represented by counsel pro bono.

Of course, given the economics of this type of case, it’s unlikely we’ll see many similar cases in the future. Clearly, this was a “test case,” in which the principle, not monetary compensation, was the motivation. Not many recipients of DMCA takedown notices will bring suit when at best they can hope to recover nominal damages plus attorney’s fees.

For an earlier post discussing a decision on this issue by Judge Stearns in the District of Massachusetts, see Judge Stearns Weighs in on Legal Standard for Copyright Takedown Notices (Sept. 30, 2013).

Lenz v. Universal Music Corp. (9th Cir. Sept. 14, 2015).

Two Recent Decisions Show the Strengths and Limitations of the CDA


Many observers have commented that if they had to identify one law that has had the greatest impact in encouraging the growth of the Internet, they would choose the Communications Decency Act (“CDA”) (47 U.S.C. § 230).

Under the CDA (also often referred to as “Section 230”) web sites are not liable for user-submitted content. As a practical matter, in most cases this means Internet providers are not liable for defamation posted by users (many of whom are anonymous or judgment-proof).*

*note: The DMCA, not the CDA, provides Internet providers with safe harbors for claims of copyright infringement based on user-submitted content.

Two recent cases illustrate the reach and limitations of this law. In one case the CDA was held to protect the website owner from liability for defamation. In the other, the law did not protect the website from potential liability based on negligence.

Jones v. Dirty World

The CDA provides immunity for information provided by users. However, if a site itself is the “content provider” — for example, the author of the defamation — it is legally responsible for the publication. In other words, the CDA does not give Internet providers or web site owners license to engage in defamation, only immunity when their users do so.

Under the CDA the term “content provider” is defined as a person “that is responsible, in whole or in part, for the creation or development of information ….” Therefore, in many cases, the issue has been who is responsible for the “creation or development” of the defamatory content – the poster or the site owner?

This was the issue before the U.S. Court of Appeals for the Sixth Circuit in Jones v. Dirty World Entertainment Recordings LLC.

Nik Richie owns Dirty World, an online tabloid (www.thedirty.com). Users, not Mr. Richie or his company, create most of the content, which often is unflattering to its subjects. However, Dirty World encourages offensive contributions by its “dirty army,” and it selects the items that are published from user contributions. In addition, Mr. Richie often adds a sentence or two of commentary or encouragement to the user contributions.

Sarah Jones, a teacher and cheerleader for the Cincinnati Bengals, was repeatedly and crudely defamed on the site. However, the defamation was contained in the posts written and contributed by users, not Richie or his company. In fact, it’s easy to see that Richie had been carefully coached as to what he can and cannot say on the site (as distinct from what his contributors say).

Dirty World refused to remove the defamatory posts, and Sarah Jones (who apparently was unaware of the Streisand Effect) sued Richie. Two federal court trials ensued (a mistrial and a $338,000 verdict for Jones).

Before and during the trial proceedings Richie asserted immunity under the CDA. The trial judge, however, refused to apply the law in Dirty World’s favor. The district court held that “a website owner who intentionally encourages illegal or actionable third-party postings to which he adds his own comments ratifying or adopting the posts becomes a ‘creator’ or ‘developer’ of that content and is not entitled to immunity.” Of course, there was a reasonably strong argument that Dirty World and Richie did exactly this – encouraged defamatory postings and added comments that ratified or adopted the posts — and hence the jury verdict in Jones’ favor.

After the second trial Richie appealed to the U.S. Court of Appeals for the Sixth Circuit, which reversed, holding that Dirty World and Richie were immune from liability under the CDA.

The first question before the Sixth Circuit was whether Dirty World “developed” the material that defamed Sarah Jones. In a leading CDA case decided by the Ninth Circuit in 2008 — Fair Housing Council of San Fernando Valley v. Roommates.com, LLC — the Ninth Circuit established the following “material contribution” test: a website helps to develop unlawful content, and therefore is not entitled to immunity under the CDA, if it “contributes materially to the alleged illegality of the conduct.”

The Sixth Circuit adopted this test, and held that a “material contribution” means “being responsible for what makes the displayed content allegedly unlawful.” Dirty World was not responsible for the unlawful content concerning Ms. Jones.

Second, consistent with many other cases applying the CDA, the court held that soliciting defamatory submissions did not cause Dirty World to lose immunity.

Lastly, the Sixth Circuit rejected the district court’s holding that by “ratifying or adopting” third-party content a web site loses CDA immunity: “A website operator cannot be responsible for what makes another party’s statement actionable by commenting on that statement post hoc. To be sure, a website operator’s previous comments on prior postings could encourage subsequent invidious postings, but that loose understanding of responsibility collapses into the encouragement measure of ‘development,’ which we reject.”

The $338,000 verdict was set aside, and the district court instructed to enter judgment in favor of Richie and Dirty World.

The Sixth Circuit’s decision was no surprise. Many people in the legal community believed that the trial court judge was in error in failing to dismiss this case before trial. Nevertheless, it is a reminder of how far the CDA can go in protecting website owners from user postings, and adds to the road map lawyers can use to make sure their clients stay on the “safe” side of the line between legal and illegal conduct under this law.

Jane Doe 14 v. Internet Brands (dba Modelmayhem.com)

Things went the other way for Modelmayhem, in a case decided by the Ninth Circuit on September 17, 2014.

Like Dirty World, this case involved a sympathetic plaintiff. The plaintiff, “Jane Doe,” posted information about herself on the “Model Mayhem” site, a networking site for the modeling industry. Two rapists used the site to lure her to a fake audition, at which they drugged and raped her. She alleged that Internet Brands knew about the rapists, who had engaged in similar behavior before her attack, but failed to warn her and other users of the site. She filed suit, alleging negligence based on “failure to warn.”*

*note: The two men have been convicted of these crimes and sentenced to life in prison.

Here, as in Dirty World, the district court got it wrong and was reversed on appeal. This time, however, the error ran the other way: the district court wrongly held that the site was protected by the CDA.

The Ninth Circuit disagreed, stating –

Jane Doe … does not seek to hold Internet Brands liable as a “publisher or speaker” of content … or for Internet Brands’ failure to remove content posted on the website. [The rapists] are not alleged to have posted anything themselves. … The duty to warn … would not require Internet Brands to remove any user content or otherwise affect how it publishes such content. … In sum, Jane Doe’s negligent failure to warn claim does not seek to hold Internet Brands liable as the “publisher or speaker of any information provided by another information content provider.” As a result, we conclude that the CDA does not bar this claim.

This ruling has raised the hackles of advocates of broad CDA coverage. Their “parade of horribles” resulting from this decision includes questioning how broadly the duty to warn extends, practical questions about how a web site would provide effective warnings, and concerns about various unintended (and as yet hypothetical) consequences that may result from this decision. However, based on the broad interpretation the courts have given the CDA in the last two decades, it seems unlikely that this case will have significant implications for CDA jurisprudence. Nevertheless, like Jones v. Dirty World, it is one more precedent lawyers must take into consideration in advising their clients.

Jones v. Dirty World Entertainment Recordings LLC (6th Cir. 2014)

Doe v. Internet Brands, Inc. (9th Cir. Sept. 17, 2014) 

 

Viacom v. Youtube: Mother of All DMCA Copyright Cases Settles

According to my count, I’ve written seven posts on the Viacom v. Youtube DMCA copyright case. The first time I mentioned Youtube and the DMCA was in October 2006, over 7 years ago. Referencing Mark Cuban’s comment that Youtube would be “sued into oblivion” I stated:

Surprisingly few observers have asked the pertinent question here: do the Supreme Court’s 2005 Grokster decision and the DMCA (the Digital Millennium Copyright Act) protect YouTube from liability for copyright-protected works posted by third parties . . . ?

In fact, Youtube was acquired by Google for $1.65 billion. It was then sued by a group of media companies, resulting in a marathon lawsuit that never went to trial, but yielded two district court decisions and one Second Circuit decision on the issues I identified in 2006. As I described in a two-part post in December 2013/January 2014, the second appeal to the Second Circuit had been fully briefed and was awaiting oral argument. Now the case has settled, on confidential terms of course. However, demonstrating the extent to which the interests of the media companies and Youtube have converged, the joint press release contained the unusual statement that the “settlement reflects the growing collaborative dialogue between our two companies on important opportunities, and we look forward to working more closely together.”

We may never know the terms of the settlement, but rumor has it that the plaintiffs received no money. My guess is they recovered a token amount, if anything. All three decisions favored Youtube, and Viacom’s case had been whittled down to next to nothing, even if it had been able to persuade the Second Circuit to crack the door a bit and remand the case a second time for damages on a limited number of video clips.

However, the settlement leaves some important questions unanswered:

  • Viacom’s argument that web sites don’t have to take any actions to “induce infringement” – that this basis for liability can be found based on the owner’s intent or state of mind alone – remains unresolved. This is the Grokster issue I identified in 2006. While I think Viacom’s argument was weak, it would have been helpful to have the Second Circuit resolve it.
  • Since the Second Circuit’s first ruling in April 2012 the courts have read the decision to reduce protection for web sites. Courts in New York applying the Second Circuit decision have held that a website can lose DMCA protection if it becomes aware of a specific infringement, or if it is aware of facts that would make it “obvious to a reasonable person” that a specific clip is infringing. Because the case has settled, the Second Circuit will have no opportunity to clarify this standard, at least in this case.
  • The Second Circuit will have no opportunity to clarify the  “actual knowledge”/”facts or circumstances” sections of the DMCA. The distinction between these two provisions remains confusing to the lower courts and to lawyers who must advise their clients under this law.
  • The Second Circuit will have no opportunity to clarify its controversial comments (in its first decision) on “willful blindness,” and help the courts reconcile this concept with the DMCA’s notice-and-takedown procedure. As noted above, the settlement leaves in place the Second Circuit’s implication that awareness of specific infringement may result in infringement liability even in the absence of a take-down notice.

It’s likely that other cases presenting these issues will make their way to the Second Circuit (arguably the nation’s most influential copyright court), but it could be years before that happens. The industry could have used additional guidance in the meantime, and one consequence of this settlement is that such guidance will come later rather than sooner, if at all.