By Daphne Mark
Boston University News Service
Amid accusations of enabling misinformation, racism and even insurrection, online companies like Facebook and Twitter have recently come under attack for not holding their users accountable for their posts. Social media may be a slough of misinformation, hate speech and poorly punctuated birthday wishes from grandmas, but according to civil rights advocates, this smorgasbord of content is held together by a single flimsy law.
This law, Section 230 of the Communications Decency Act, was created to protect free speech online by shielding content platforms, including Facebook, Twitter, Reddit, Parler and even wikiFeet, from legal responsibility for what gets posted on their sites. However, bipartisan calls for Section 230’s repeal and replacement, including 24 looming federal bills, may permanently damage how we interact online, legal experts warn.
“It’s the most important law for the internet,” Jess Miers, a law student specializing in internet law at Santa Clara University, said during a Zoom interview on Nov. 19. “It’s the free speech law of the internet.”
Miers called herself “a Section 230 protégé” and flashed her tattoo of a cursor clicking on “§230.”

Miers said Section 230 protects these companies by holding the author, not the platform where the content was posted, legally responsible.
“[Section] 230 doesn’t give any extra special gifts or any extra special privileges for free speech,” Miers said. “It just helps get to the same exact conclusion they would get to if they were launching a First Amendment defense. If I tweet something defamatory about you, you can sue me [because] I wrote the post, you can’t sue Twitter.”
Even for people familiar with libel laws, this can be confusing.
Before the internet, there were two legal categories for determining whether an organization was responsible for communication conducted through its services: distributor liability and common carrier liability, Miers said.
Traditional news outlets fall into a liability framework called distributor liability. News outlets write, edit and fact-check all of their own content before publication and make conscious decisions about what is posted, published and made available to their audience. Even for opinion articles, which are not always written by staff writers, the outlet must confirm the piece’s accuracy, and it remains responsible for any erroneous information included. Because these outlets are in complete control of the content they distribute, both the outlet and the authors are responsible under the law for any harm caused by their negligence or malice.
“The difference is that [the news media] have the control,” Miers said. “They are the gatekeepers. They’re the ones that decide whether they’re going to publish it or not.”
This is in contrast to the second framework, common carrier liability, which is applied to organizations like telephone companies. In this case, a company bears no responsibility for any defamation that happens on its medium.
“If you and I are on a phone call and I defame you or I defame somebody else on our phone call, there is literally nothing the telephone company can do,” Miers said. “They’re not going to know about our conversation. They can’t stop it. They can’t cut the phone line. Once I’ve said it, it’s done. It’s out there.”
Both of these frameworks are based on who has control and knowledge of the message. In 1996, however, a new question complicated the picture: which category does the internet fit into?
“Section 230 said [the internet does not fit into either category]; they’re a third category,” Miers said. “Websites can act as both.”
In 2016, The Washington Post published approximately 1,200 stories per day, while The New York Times published a daily average of 230. But newspapers could not keep up with social media platforms such as Twitter: according to Internet Live Stats, as of 2020 Twitter received 500 million tweets per day, roughly 350,000 tweets per minute.
This combination appears in nuanced ways on websites today. While online news outlets are responsible for what their staff authors post, they are not liable for third-party postings in the comments sections of those same articles. The inverse holds for platforms: a company is liable for any content it creates and posts on its own site, even if most of that site’s content comes from third-party users.
This makes companies responsible for their own posts and content, including how they fact-check information on their sites. Miers said lawsuits sometimes arise disputing a platform’s decision to fact-check or hide posts.
“This is where people get uncomfortable,” Miers said. “You’re suing them for a decision they are making about third-party content. The fact check itself, the literal words that Twitter uses in that fact check, that’s first-party content to Twitter. That’s not protected by Section 230.”
This could be why, until recently, so many platforms were hesitant to institute fact-checking systems. Facebook CEO Mark Zuckerberg told CNBC in May 2020 that the company would not fact-check posts by politicians.
“I don’t think that Facebook, or internet platforms in general, should be arbiters of truth,” Zuckerberg said in a virtual interview with CNBC reporter Andrew Ross Sorkin. “Political speech is one of the most sensitive parts in a democracy, and people should be able to see what politicians say.”
The platform still does not fact-check politicians’ posts for its more than 2.8 billion users.
Despite this, conservative politicians in the U.S. claim that social media fact-checking is designed to silence conservative viewpoints. Social media companies have been called out by many, including Sens. Ted Cruz and Josh Hawley, YouTube stars Diamond and Silk, and former President Donald Trump.
“Something is happening with those groups of folks that are running Facebook and Google and Twitter,” Trump said at a press conference in March 2019. “I do think we have to get to the bottom of it. It’s collusive, and it’s very, very fair to say we have to do something about it.”
Devin Nunes, the U.S. Representative for California’s 22nd congressional district, argued that by fact-checking or hiding posts, platforms are engaging in the same editorial process as online newspapers. As a result, he argued, these platforms should be held to the same standard.
His lawsuit against Twitter for the content posted by two parody accounts, including one impersonating his mother, was dismissed in June 2020.
But social media companies are not news organizations, though the distinction is growing blurry. The number of people using online platforms for news is growing, according to a 2018 Pew Research Center study: a combined 53% of adults in the U.S. said they get their news online (20% from social media, 33% from online news organizations), more than triple the 16% who reported reading print newspapers.
While many individuals are receiving their news on social media, not everyone is consuming accurate information. According to an Ohio State University study published in March 2020, people who use social media to gather news have a hard time verifying sources while scrolling. The issue is slowly being addressed: within the last year, social media companies have become more proactive about labeling misinformation in posts.
In spring 2020, Twitter began adding fact-checking labels to tweets that potentially contained misinformation about the election. Facebook began linking posts about the coronavirus to the CDC website. Instagram put banners with information about registration and polling places on any post including the word “vote.” Stepping up the arms race, Pinterest banned all misinformation related to the coronavirus pandemic from its platform.
This intervention by normally indifferent tech companies has left users wondering: is this censorship? Do these actions infringe on free speech?
These questions were compounded for many after Twitter banned Trump’s account following the Capitol riot on Jan. 6, 2021. Since Twitter was his main mode of communication, many claimed the ban was an act of censorship.
Though social media is often called ‘the modern public square,’ these companies are not required to be unbiased, nonpartisan spaces, said John Villasenor, a member of the University of California’s National Center for Free Speech and Civic Engagement and director of the UCLA Institute for Technology, Law and Policy. Because social media companies are private rather than public entities, he said, they are under no obligation to host the full range of speech protected by the First Amendment.
“Social media companies can and do have policies that prohibit the posting of racist speech and they can and do revoke your account,” Villasenor said. “But that has nothing to do with the First Amendment.”
Miers said that many of these complaints are actually not about Section 230 but about the First Amendment. She lays this out in a quasi-poem, “Your Problem Is Not With Section 230, But The 1st Amendment,” published in Techdirt, a blog covering legal and economic challenges in tech.
People feel more comfortable attacking Section 230 as a stand-in for the First Amendment, Miers said. The piece ends:
And at the end of the day, If you hate editorial discretion and free speech,
You probably just hate the First Amendment… not Section 230.
“Especially these red blooded Americans… they don’t want to be told, ‘oh my god, I’m actually very anti-free speech’,” Miers said. “I would really like more people to understand that Section 230 is literally just a fast lane for the First Amendment. That’s it.”
This piece is part of a series on Section 230. The next installment, covering the 13 Senate bills, 11 House proposals, and executive agency proposals to amend Section 230, will be available in March.