Lawmakers have spent years investigating how hate speech, misinformation and bullying on social media sites can lead to real-world harm. Increasingly, they have pointed a finger at the algorithms powering sites like Facebook and Twitter, the software that decides what content users will see and when they see it.
Some lawmakers from both parties argue that when social media sites boost the performance of hateful or violent posts, the sites become accomplices. And they have proposed bills to strip the companies of a legal shield that allows them to fend off lawsuits over most content posted by their users, in cases when the platform amplified a harmful post’s reach.
The House Energy and Commerce Committee will hold a hearing Wednesday to discuss several of the proposals. The hearing will also include testimony from Frances Haugen, the former Facebook employee who recently leaked a trove of revealing internal documents from the company.
Removing the legal shield, known as Section 230, would mean a sea change for the internet, because it has long enabled the vast scale of social media websites. Ms. Haugen has said she supports changing Section 230, which is a part of the Communications Decency Act, so that it no longer covers certain decisions made by algorithms at tech platforms.
Frances Haugen, a former Facebook employee, testifying at a Senate hearing in October. Credit: T.J. Kirkpatrick for The New York Times
But what, exactly, counts as algorithmic amplification? And what, exactly, is the definition of harmful? The proposals offer starkly different answers to these crucial questions. And how they answer them may determine whether the courts find the bills constitutional.
Here is how the bills address these thorny issues:
What is algorithmic amplification?
Algorithms are everywhere. At its most basic, an algorithm is a set of instructions telling a computer how to do something. If a platform could be sued anytime an algorithm did anything to a post, products that lawmakers aren’t trying to regulate might be ensnared.
Some of the proposed laws define the behavior they want to regulate in general terms. A bill sponsored by Senator Amy Klobuchar, Democrat of Minnesota, would expose a platform to lawsuits if it “promotes” the reach of public health misinformation.
Ms. Klobuchar’s bill on health misinformation would give platforms a pass if their algorithm promoted content in a “neutral” way. That could mean, for example, that a platform that ranked posts in chronological order wouldn’t have to worry about the law.
Other legislation is more specific. A bill from Representatives Anna G. Eshoo of California and Tom Malinowski of New Jersey, both Democrats, defines dangerous amplification as doing anything to “rank, order, promote, recommend, amplify or similarly alter the delivery or display of information.”
Another bill written by House Democrats specifies that platforms could be sued only when the amplification in question was driven by a user’s personal data.
“These platforms are not passive bystanders — they are knowingly choosing profits over people, and our country is paying the price,” Representative Frank Pallone Jr., the chairman of the Energy and Commerce Committee, said in a statement when he announced the legislation.
Mr. Pallone’s new bill includes an exemption for any business with five million or fewer monthly users. It also excludes posts that show up when a user searches for something, even if an algorithm ranks them, and web hosting and other companies that make up the backbone of the internet.
What content is harmful?
Lawmakers and others have pointed to a wide array of content they consider to be linked to real-world harm. There are conspiracy theories, which could lead some adherents to turn violent. Posts from terrorist groups could push someone to commit an attack, as one man’s relatives argued when they sued Facebook after a member of Hamas fatally stabbed him. Other policymakers have expressed concerns about targeted ads that lead to housing discrimination.
Most of the bills currently in Congress address specific types of content. Ms. Klobuchar’s bill covers “health misinformation.” But the proposal leaves it up to the Department of Health and Human Services to determine what, exactly, that means.
“The coronavirus pandemic has shown us how lethal misinformation can be and it is our responsibility to take action,” Ms. Klobuchar said when she announced the proposal, which was co-written by Senator Ben Ray Luján, a New Mexico Democrat.
The legislation proposed by Ms. Eshoo and Mr. Malinowski takes a different approach. It applies only to the amplification of posts that violate three laws — two that prohibit civil rights violations and a third that addresses international terrorism.
Mr. Pallone’s bill is the newest of the bunch and applies to any post that “materially contributed to a physical or severe emotional injury to any person.” This is a high legal standard: Emotional distress would have to be accompanied by physical symptoms. But it could cover, for example, a teenager who views posts on Instagram that diminish her self-worth so much that she tries to hurt herself.
What do the courts think?
Judges have been skeptical of the idea that platforms should lose their legal immunity when they amplify the reach of content.
In the case involving an attack for which Hamas claimed responsibility, most of the judges who heard the case agreed with Facebook that its algorithms didn’t cost it the protection of the legal shield for user-generated content.
If Congress creates an exemption to the legal shield — and it stands up to legal scrutiny — courts may have to follow its lead.
But if the bills become law, they are likely to attract significant questions about whether they violate the First Amendment’s free-speech protections.
Courts have ruled that the government can’t make benefits to an individual or a company contingent on the restriction of speech that the Constitution would otherwise protect. So the tech industry or its allies could challenge the law with the argument that Congress was finding a backdoor method of limiting free expression.
“The issue becomes: Can the government directly ban algorithmic amplification?” said Jeff Kosseff, an associate professor of cybersecurity law at the United States Naval Academy. “It’s going to be hard, especially if you’re trying to say you can’t amplify certain types of speech.”