Proposed Sec. 230 rewrite could have wide-ranging consequences

ch-ch-ch-changes —

Sec. 230 reform bills are already pouring into this Congress.

Kate Cox

[Illustration: cartoon hands hold out a Band-Aid over the words “Section 230.”]

A trio of Democratic senators has taken this Congress’s first stab at Section 230 reform with a new bill that would make platforms, including giants such as Facebook and Twitter, liable for certain limited categories of dangerous content. Unfortunately, although the bill’s authors try to thread a tricky needle, critics warn that bad-faith actors could easily weaponize the bill as written against both platforms and other users.

The bill (PDF), dubbed the SAFE TECH Act, seeks not to repeal Section 230 (as some Republicans have proposed) but to amend it, adding new definitions of speakers and new exceptions to the law’s infamous liability shield.

“A law meant to encourage service providers to develop tools and policies to support effective moderation has instead conferred sweeping immunity on online providers even when they do nothing to address foreseeable, obvious and repeated misuse of their products and services to cause harm,” said Sen. Mark Warner (D-Va.), who introduced the bill. “This bill doesn’t interfere with free speech—it’s about allowing these platforms to finally be held accountable for harmful, often criminal behavior enabled by their platforms to which they have turned a blind eye for too long.”

Sens. Mazie Hirono (D-Hawaii) and Amy Klobuchar (D-Minn.) also co-sponsored the bill.

What does Section 230 do?

Section 230 of the Communications Decency Act of 1996, as it currently stands, does two major things in two small subsections.

The first, Section 230(c)(1), reads:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This means that Internet services that bring you information (Facebook, Twitter, etc.) are not treated as the entities that “say” something posted or transmitted by a third party. In short: if John Doe defames Jane Smith in a series of posts he makes on Twitter, it is John Doe, not Twitter, who is responsible for that defamation. The same protection applies to third-party content in areas such as comments sections on news websites.

The second, Section 230(c)(2), is a liability shield that grants platforms protection from lawsuits related to content third parties post on those platforms and, critically, from lawsuits related to the moderation choices those platforms make. The law does not require a platform to moderate content, but it does shield the platform from being sued over “any action voluntarily taken in good faith to restrict access to or availability of” problematic material. In practice, Sec. 230 means a platform can’t really be sued for how much moderation it performs or for moderation that happens to miss some posts.

The liability shield gives providers a great deal of discretion in how to handle content they deem to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Contrary to popular political claims, however, there is nothing whatsoever in the law about political neutrality, and it contains no language about “editorial control.”

What would SAFE TECH change?

The proposed bill would drastically alter Section 230(c)(1), quoted above, to read:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any speech provided by another information content provider, except to the extent the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech.

The change, Sen. Warner explained, is squarely aimed at online advertising, which is now omnipresent. If someone on Twitter makes a bad tweet, as happens every day, nothing changes. If someone pays Twitter to promote a bad tweet, however, Twitter itself could be considered the “speaker” of the content in that ad.

The proposal also explicitly creates new carve-outs from the “Good Samaritan” liability shield that protects platforms from lawsuits. Users would be able to file lawsuits for injunctive relief (i.e., a court order requiring someone to stop doing something) for unmoderated material that “is likely to cause irreparable harm.” Basically, if someone is harassing you on Twitter, and every report to Twitter about the offending tweets is returned with a version of “this doesn’t violate our guidelines,” you could in theory go to court to demand Twitter take the harassing posts down.

SAFE TECH also adds a litany of new exceptions to the section of the law that governs how it interacts with other laws, adding civil rights laws; antitrust laws; stalking, harassment, or intimidation laws; international human rights law; and wrongful death actions to the list of laws on which Sec. 230 has no effect.

The “red-lined” version of the bill (PDF) shows where the edits would fit into the current law.

Narrow exceptions, wide consequences

Several legal experts and lawmakers have warned that the changes in this draft bill could, if enacted, have wide-ranging and unintended consequences.

“Unfortunately, as written, [SAFE TECH] would devastate every part of the open Internet, and cause massive collateral damage to online speech,” Sen. Ron Wyden (D-Ore.) told TechCrunch. “Creating liability for all commercial relationships would cause web hosts, cloud storage providers and even paid email services to purge their networks of any controversial speech.”

Law professor Jeff Kosseff, who literally wrote the book on Sec. 230, had similar misgivings.

The bill “is well intentioned, but the drafting lacks precision and courts could read it as removing [Sec.] 230 protections from nearly every platform on the Internet,” Kosseff wrote on Twitter. “A good lawyer could argue that a very wide range of arrangements constitute the acceptance of payment to make the speech available.”

Fight for the Future’s director, Evan Greer, also worried that the definition of “payment” in the bill is much too broad. “As far as I can tell this bill as written would essentially destroy Bandcamp, Patreon, Wikipedia, Craigslist, Etsy, any individual musician or artist or nonprofit online seller with a store on their website, crowdfunding platforms, etc etc,” Greer wrote. “It’s a mess.”

We have some real-world experience with unintended consequences of Sec. 230 amendments: the law has been narrowly altered in the very recent past to exclude certain categories of content, to decidedly mixed effect.

In 2018, a pair of bills known as FOSTA/SESTA became law. (FOSTA was the House version; SESTA the Senate edition.) The bills were designed to limit sexual exploitation and sex trafficking online by adding a carve-out to Sec. 230 to strip liability protections from “websites that unlawfully promote and facilitate prostitution and websites that facilitate traffickers in advertising the sale of unlawful sex acts with sex-trafficking victims.”

After FOSTA/SESTA became law, however, exploitation persisted. What we got instead was the devastation of legal, consensual sex work facilitated by the Internet: adult entertainers lost access to the platforms where they had advertised and worked, which made their jobs less safe and gave sex workers less control over where and how they work.

You gotta have (bad) faith

Kosseff, in his Twitter thread, ended with a warning to those who would alter Sec. 230: “Think not only about the ways that you want people to use the amended 230, but think about how it could be used in ways that you do not intend,” he cautioned. “Because chances are, it will.”

There’s a long list of legitimate concerns and criticisms one might have about the way social media platforms have approached moderation in the past several years. Efforts to revise Sec. 230 abounded during the last years of the Trump administration, ranging from genuine, good-faith attempts at an update to bad-faith attempts at killing it.

Broadly speaking, the fight breaks down along partisan lines. Democrats and progressives tend to argue platforms should moderate more and act more quickly to remove blatant disinformation and harmful speech. Republicans and conservatives, on the other hand, tend to claim that any moderation, especially if applied to them, is tantamount to “censorship” and that platforms demonstrate an inherent “bias” against conservative voices.

Former President Donald Trump launched an all-out assault on Sec. 230 in 2020, urging the Federal Communications Commission, the Department of Justice, and the Federal Trade Commission to adopt new rules that would limit platforms’ ability to moderate content and would allow users who had their content moderated to sue. The FCC considered a proposal to limit Sec. 230, and the DOJ sent Congress a draft bill, but ultimately both efforts went nowhere before the end of the administration.

Recent studies have found that these claims of bias are themselves disinformation and that conservative speakers have, in fact, dominated social media in recent years. Nonetheless, the partisan arguments continue and are likely to do so for quite some time. Rep. Ted Budd (R-N.C.) has already introduced a bill this year that would allow anyone to sue a platform that “breach[es] good-faith user agreements, censor[s] political speech, and suppress[es] content.”
