Your ‘Simple Solution’ To Section 230 Is Bad: Julia Angwin Edition

That's not how any of this works.

It’s getting to be somewhat exhausting watching people who don’t understand Section 230 insisting they have a simple solution for whatever problems they think (mostly incorrectly) are created by Section 230. And, of course, the NY Times seems willing to publish all of them. This is the same NY Times that had to run a correction that basically overturned the entire premise of an article attacking Section 230. And it did so twice.

An earlier version of this article incorrectly described the law that protects hate speech on the internet. The First Amendment, not Section 230 of the Communications Decency Act, protects it.

But that hasn’t stopped the Times from repeatedly running stories and opinion pieces that simply get Section 230’s basic fundamentals wrong.

And now it’s done so again, with brand new columnist Julia Angwin. I have a ton of respect for the investigative journalism that Angwin has done over the years at the Wall St. Journal, ProPublica, and The Markup (which she co-founded, and only recently left). She’s helped shine some important light on places where technology has gone wrong, especially in the realm of privacy.

But that does not mean she understands Section 230.

Her very first piece for the NY Times, entitled “It’s Time to Tear Up Big Tech’s Get-Out-of-Jail-Free Card,” recommends that we “revoke” Section 230 in a manner that she (falsely) believes will “keep internet content freewheeling.” Even if she didn’t write the headline, it is an unfortunately accurate description of the piece, and it demonstrates just how wrong the piece is.

Let’s start with the “get-out-of-jail-free” part. Section 230 has never been and never will be a “get-out-of-jail-free” card for “big tech.” First, it protects every website, and in doing so, everyone who uses intermediary websites to speak. It’s not a special benefit for “big tech.” It’s a law that protects all of our speech online, making it possible for websites to host our speech.

Second, the whole point of 230 is to put the liability on the proper party: the one who actually violated the law. So, at best, you could claim that 230 is a “keep-the-innocent-party-out-of-jail” card, which makes it seem a lot… more reasonable? On top of that, Section 230 has no impact on federal criminal liability (you don’t go to jail for civil suits), but I guess we can chalk that up to inaccurate rhetorical flourishes.

But just how does Angwin propose to fix 230 without destroying the open internet? Her simple solution is to say that 230 only covers speech, not conduct.

But there is a way to keep internet content freewheeling while revoking tech’s get-out-of-jail-free card: drawing a distinction between speech and conduct.

In this scenario, companies could continue to have immunity for the defamation cases that Congress intended, but they would be liable for illegal conduct that their technology enables.

First of all, let’s be clear: she is not actually drawing a distinction between speech and conduct. As the second paragraph shows, she’s saying that websites should be held liable for conduct by third parties that is enabled by speech that also comes from third parties. It’s very much a “blame the tool” type of argument. And it would open the floodgates for a shitload of frivolous, vexatious litigation from lawyers looking to force basically any website to settle rather than endure the costs and attention drain of their lawsuits.

Here’s where it’s important, yet again, to explain how Section 230 actually works. The fundamental point of Section 230 is to put the blame on the proper party: whoever is imbuing the content with whatever makes that content violate the law. That’s it.

The desire to blame websites because they haven’t managed to stop all humans from using their websites to do something bad is such a weird obsession. Why not just do what 230 does and put the blame on the party violating the law? Why is this so difficult?

Angwin focuses on a somewhat peculiar example that only undermines basically all of her claims: ads on Facebook that she claims violate the Fair Housing Act (there are some questions as to whether many of the ads she describes in the piece actually would violate that law, but we’ll leave that aside). It goes back to a story Angwin wrote years ago at ProPublica, where she discovered that it was possible to abuse Facebook’s ad targeting to post housing ads that discriminated by race. Over the years, Facebook has made many adjustments to try to stop this, but has found that people keep coming up with ways to effectively do the same thing anyway.

In other words: some people are going to do bad stuff. And even if you make social media sites try to stop them from doing bad stuff… people are going to try to figure out ways to continue doing bad stuff. And no one, especially not the folks at Facebook, is smart enough to figure out every possible abuse vector and prevent it from happening. And that’s why Section 230 does exactly the right thing here: it says we don’t blame social media because someone figured out how to game the system to do something illegal; we blame the person who did the actual illegal thing (i.e., posted an ad that violates anti-discrimination laws).

Angwin, somewhat oddly, seems to suggest that the legal change is necessary to put pressure on Facebook to be more responsive, but her own piece details how Facebook has continually responded to public pressure (often from articles Angwin and her colleagues have written) to try to cut off this or that avenue for bad actors to abuse the system. She also notes that Facebook was sued a bunch of times over all this and… still reached multiple settlements in those lawsuits.

In 2019, three years after I purchased that first discriminatory housing ad, Facebook reached a settlement to resolve several legal cases brought by individual job seekers and civil rights groups and agreed to set up a separate portal for housing, employment and credit ads, where the use of race, gender, age and other protected categories would be prohibited. The Equal Employment Opportunity Commission also reached settlements with several advertisers that had targeted employment ads by age.

[….]

Last year, Meta agreed to yet another settlement, this time with the U.S. Department of Justice. The company agreed to pay a fine of more than $115,000 and to build a new algorithm — just for housing ads — that would distribute such ads in a nondiscriminatory manner. But the settlement didn’t fix any inherent bias embedded in credit, insurance or employment ad distribution algorithms.

So, uh, that sounds like the law is actually working? Also, the public pressure? Why do we need to take away 230 again?

Also, highlighting Fair Housing Act claims is doubly weird, as one of the most famous Section 230 cases was the Roommates case, in which the 9th Circuit said Roommates.com did not qualify for Section 230 protections because it had created a pull-down menu that allowed users to express their own preferences for roommates based on race. In that case, the court (correctly) distinguished between the speech of third parties and a situation where the site itself imbued the content with its problematic nature.

And, as our own Cathy Gellis detailed, the long-forgotten part of the Roommates saga was that after the company lost 230 protections, years later, it still won the case. Just because you think something bad has happened does not mean it’s illegal, and it does not mean you should get to throw legal liability on any tool that was used in the process. As Eric Goldman has noted, the only proper way to view Section 230 is as a procedural benefit that helps websites get rid of frivolous lawsuits at an earlier, less expensive stage.

This is important, because Angwin makes a fundamental factual error in her piece, one that many, many people make regarding Section 230: removing it does not automatically create liability for companies. It just means that they no longer have the faster procedural path to get out of cases where no liability should exist. Angwin, and many others, assume that removing 230 would automatically create liability for companies, even though, as we’ve seen in Roommates and lots of other cases, that is just not true.

In fact, Angwin gets this so wrong in her piece that she falsely states the following:

Courts have already been heading in this direction by rejecting the use of Section 230 in a case where Snapchat was held liable for its design of a speed filter that encouraged three teenage boys to drive incredibly fast in the hopes of receiving a virtual reward. They crashed into a tree and died.

This is just flat out false, and the NY Times should append a correction here. In the case she’s referring to, Lemmon v. Snap, the court has (so far) simply held that Snap can’t get out of the case on 230 grounds. No court has said that Snap is liable for the design. It’s possible the case may get there, but looking at the docket (something Angwin or her editors could easily have done, but apparently chose not to?) shows that the case is still going through discovery. No determination has yet been made regarding Snap’s liability. The only thing that’s been decided is that Snap can’t use 230 to get the case dismissed. It is entirely possible (and perhaps likely?) that, like many other cases where a 230 defense is rejected, the platform eventually wins anyway, just after a much longer and more expensive process.

So, what would happen if Angwin got her wish? Would it actually “keep internet content freewheeling”? Of course not. As her own example showed, it’s effectively impossible for Facebook — or any website — to stop individuals from abusing their tools to do something that might be illegal, immoral, or unethical. Assuming that they can is a fool’s errand, and Julia Angwin is no fool.

What Section 230 does is actually give companies like Facebook much more freedom to experiment and to adjust to try to stop those abuses without fear that each change will subject them anew to a set of costly lawsuits.

So if we make the change that Angwin wants, many fewer companies will offer these kinds of useful services, because the risk of being flooded by frivolous, vexatious lawsuits increases. Even worse, it becomes much more difficult to adjust and experiment and try to stop the bad behavior in question, because each change exposes you to new potential liability. A better approach for companies in such a scenario is actually never to try to fix anything, because doing so would suggest they have knowledge of the problem, and any of these lawsuits is dead on arrival if the company cannot be shown to have any knowledge.

We’ve also talked about this before, and it’s a common mistake that those who don’t understand Section 230 make: they assume that if you remove 230 and something bad happens, sites would be automatically liable. Not true. The 1st Amendment would require that the website have actual knowledge of the problem.

So: the end result of this little change would be that many, many websites refuse to host certain types of content at all. Other websites would refuse to do anything to try to stop any bad behavior, because any change subjects them anew to litigation arguing over whether or not that change enabled something bad. And the few remaining websites still willing to host this kind of content would be encouraged to put their heads in the sand, lest they show themselves to have the knowledge necessary for liability.

In other words: it’s a clusterfuck that does nothing to solve the underlying problem that Angwin describes of discriminatory ads.

You know what does help? Leaving 230’s protections in place, allowing companies to constantly adjust and get better, without fear of liability because one jackass abuses the system. On top of that, letting lawsuits and enforcement target the actual bad actors once again does the proper thing: going after the people actually violating the law, rather than the tool they used.

Once again, I will note that Angwin is a fantastic reporter, who has done important work. But I hope that her contributions to the NY Times will involve her getting a better understanding of the underlying issues she’s writing about. Because this first piece is not up to the level I would expect from her, and actually does quite a bit to undermine her previous work.
