Twitter, Facebook Unite to Aid Google in Upcoming Section 230 Showdown Before SCOTUS

January 26th, 2023 8:40 AM

Facebook and Twitter are coming to Google’s defense in a lawsuit that could have a major impact on content moderation online.

Bloomberg Law reported last week that the companies recently filed their own legal briefs in an upcoming case, Gonzalez v. Google, that “could fundamentally change the way the internet works.”

The lawsuit poses a narrow question: whether Section 230 applies when a website uses algorithms to intentionally display third-party content to users, specifically content that promotes terrorism.

The suit was filed by the family of an American killed in a terrorist attack in Paris in 2015. The plaintiffs claimed that Section 230 does not protect the algorithms the social media platforms use to tailor specific content to their users. A lower court ruled in favor of Google last year, but the plaintiffs appealed the case to the Supreme Court, which granted certiorari in October 2022.

The social media platforms argue that they cannot be held liable for the actions of others, regardless of what their algorithms promote.

Vice President of MRC Free Speech America & MRC Business Dan Schneider ripped Google for its hypocrisy on censorship.

“Google is aggressively censoring conservatives, but it doesn’t seem to care when it publishes content from terrorists,” Schneider said. “Why does Google spend so much time silencing conservatives when it allows this type of content?”

In its brief, Twitter urged the Supreme Court not to make changes to the law beyond what the plaintiffs in the lawsuit seek to litigate and argued that the statute as written already resolves the plaintiffs’ claims:

“Highlighting certain content through placement is all the more important online because billions of people worldwide are constantly generating information. Filtering, sorting, and ranking—the functions that generate what Plaintiffs call ‘targeted recommendations’—are essential to making that information meaningfully accessible.”

The company’s brief added that the protection afforded under Section 230 is “unambiguous.”

“Section 230(c)(1) is unambiguous,” Twitter added. “Under both its ordinary meaning and the meaning derived from the common law, Section 230(c)(1) bars claims seeking to hold service providers or users liable for disseminating third-party content.”

Meta’s brief specifically highlighted the company’s own efforts to prevent users from viewing content related to terrorism:

“Like most social-media companies, Meta has long had strict policies prohibiting terrorists and terrorist groups, as well as posts that praise or support such individuals and groups, on its services. Those policies help to ensure that Meta’s services are places that users want to frequent and advertisers want to advertise. Meta has invested billions of dollars to develop sophisticated safety and security systems that work to identify, block, and remove terrorist content quickly—typically before it is ever seen by any users.”

Meta went on to brag that it censored vast amounts of content in 2022:

“In the third quarter of 2022 alone, Meta blocked or removed nearly 17 million pieces of third-party content for violating its terrorism policies, and it identified 99.1 percent of that content on its own. If terrorism-related content evades Meta’s first-line defenses, Meta has in place measures to mitigate the risk that it will be shown to others.”

Microsoft and Reddit filed their own briefs voicing similar concerns about the lawsuit.

All other pleadings related to the lawsuit can be found here.

Conservatives are under attack. Contact your representatives and demand that Big Tech be held to account to mirror the First Amendment while providing transparency and an equal footing for conservatives. If you have been censored, contact us at the Media Research Center contact form, and help us hold Big Tech accountable.