A Massachusetts professor has filed a lawsuit against Meta using a novel interpretation of Section 230, a law known primarily for shielding social media companies from liability.
Facebook, X, YouTube and other social media platforms rely on a 1996 law to insulate themselves from legal liability for user posts. That protection, granted by Section 230 of the Communications Decency Act, is so significant that it has allowed tech companies to flourish.
But what if the same law could be used to rein in the power of those social media giants?
That idea is at the heart of a lawsuit filed in May against Meta, the owner of Facebook, Instagram and WhatsApp. The plaintiff has asked a federal court to declare that a little-used part of Section 230 allows him to release his own software that lets users automatically unfollow everyone on Facebook.
The lawsuit, filed by Ethan Zuckerman, a public policy professor at the University of Massachusetts Amherst, is the first to use Section 230 against a tech giant in this way, his lawyers said. It is an unusual legal maneuver that could turn a law that typically protects companies like Meta on its head. And if Mr. Zuckerman succeeds, it could mean more power for consumers to control what they see online.
“I see and appreciate the elegance of trying to use a piece of law that has made user generated content possible, to now give users more control over those experiences and services,” he said.
Section 230, introduced in the internet’s early days, protects companies from liability related to posts made by users on their sites, making it nearly impossible to sue tech companies over defamatory speech or extremist content.
Mr. Zuckerman has focused on a part of Section 230 that spells out protection for blocking objectionable material online. In 2021, after a developer released software that purged users’ Facebook feeds of everyone they followed, Facebook threatened to shut it down. But Section 230 also protects those who restrict access to obscene, excessively violent and other problematic content. That language shields companies from liability when they remove disturbing content, but Mr. Zuckerman’s lawyers argue it could equally justify tools that scrub any content users don’t want to see.