Today, oral arguments take place in Gonzalez v. Google LLC. Here’s why this matters for creators and the creator economy…
Section 230 of the Communications Decency Act became law in 1996 and provides a liability shield for providers of "interactive computer services" — a category that includes platforms such as YouTube, Twitter, and Meta — protecting them from being treated as the publisher of user-generated content.
The first of two cases before the Supreme Court is Gonzalez v. Google LLC.
YouTube is considered an "interactive computer service" and is a subsidiary of Google LLC. YouTube, like most (if not all) other interactive computer services, relies on recommendation algorithms to serve content to end users. Courts have routinely held that this use of recommendation systems remains covered by Section 230's liability protections.
The content at issue in Gonzalez involved auto-recommendations of ISIS extremism and recruitment content. Twenty-three-year-old Nohemi Gonzalez, an American studying in Paris, was killed in the November 2015 terrorist attacks carried out by ISIS gunmen, which left 129 people dead. Gonzalez's relatives and estate brought a lawsuit against Google, as explained in lower court filings, alleging "that Google, through YouTube, had provided material assistance to, and had aided and abetted, ISIS, conduct forbidden and made actionable by the AntiTerrorism Act."
The family says Google’s recommendation system helped ISIS grow. They also argue “the protections of Section 230 are limited to a publisher’s traditional editorial functions, such as whether to publish, withdraw, postpone or alter content provided by another, and do not additionally include recommending writings or videos to others.”
The Supreme Court is presented with a narrow question, or sometimes a set of narrow questions, in dispute at the lower courts. This is called the "question presented," and it is what the Court will focus on answering.
For Gonzalez, it's a singular, two-part question: "Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of the information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?"
The question put more simply: “whether section 230 protects recommendations, or is limited to traditional editorial functions.”
⁉️ Why does this matter ⁉️
🚩 Platforms forced to take on more liability for the actions of their users may implement stricter content standards and enforcement mechanisms, potentially limiting creators' ability to create and share content in the manner afforded under existing law.
🚩 The First Amendment still protects speech, but without Section 230's blanket shield, platform operators would face a harder path defending against claims arising from speech taking place on their platforms.
Professor Eric Goldman set forth some pre-oral argument resources that are worth exploring.
Julia Angwin, Pulitzer Prize-winning journalist at ProPublica, argues in an opinion piece for the New York Times that “there is a way to keep internet content freewheeling while revoking tech’s get-out-of-jail-free card: drawing a distinction between speech and conduct.”
Image Credit: Fred Schilling, Collection of the Supreme Court of the United States, with edits by me.