
Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.
Reddit is one of the largest user-generated content platforms on the internet, built around thousands of independent communities known as subreddits. Some subreddits cover everyday interests, while others host discussions about specialized or controversial topics. These communities are created and moderated by volunteers, and the site’s decentralized model means that Reddit hosts a vast range of user speech without relying on centralized editorial control.
Ben Lee is Chief Legal Officer at Reddit, where he oversees the company’s legal strategy and policy work on issues including content moderation and intermediary liability. Before joining Reddit, Lee held senior legal roles at other tech companies including Plaid, Twitter, and Google. At Reddit, he has been closely involved in litigation and policy debates surrounding Section 230, including cases addressing the legal risks faced by platforms and their users and moderators. He was interviewed by Joe Mullin, a policy analyst on EFF’s Activism Team.
Joe Mullin: When we talk about user rights and Section 230, what rights are most at stake on a platform like Reddit?
Ben Lee: Reddit, we often say, is the most human place on the internet. What’s often missing from the debate is that Section 230 protects people—not platforms.
It protects millions of everyday humans and volunteer moderators who participate in online communities. Without it, people could face lawsuits for voting down a post, enforcing community rules, or moderating a discussion. These are foundational activities on Reddit, and frankly, the whole internet.
If you had to describe Section 230 to a regular Reddit user without naming the law, what would you say it does for them?
Section 230 protects your ability to participate in community moderation.
Even if all you are doing is upvoting or downvoting content, that’s participation. On Reddit, everyone is a content moderator through voting, which determines the visibility of content.
We believe strongly that this is one of the only models that allows Reddit to scale. You make the community part of the moderation process. Members are invested in the community and in making it better.
How would user speech be affected if Section 230 were eliminated or weakened?
We would undermine community self-governance—the notion that humans can do content moderation and take that responsibility for themselves, whether you’re a small blog or a big forum. I like to think of Reddit as a federation of communities that range from the tiny to the humongous. That’s what the internet is!
The legal risk would discourage people from moderating, or even speaking at all. The kind of speech we’re trying to protect is often critical of powerful people or entities. If a moderation decision leads to litigation from those powerful entities, that’s an expensive proposition to fight.
Reddit relies on user-run communities and volunteer moderators. Can you walk me through how content moderation and legal complaints actually work in practice, and where Section 230 comes into that?
We have a tiered structure, like our federal system. Each community is like a state: it has its own rules, and enforces them. The vast majority of content moderation decisions are made by the communities, not by Reddit itself.
Reddit is built on self-governing communities that are moderated by volunteers, supported by automated tools. Section 230 gives Reddit the freedom to experiment, and lets users shape healthy, interest-based spaces.
Section 230 is fundamental to protecting moderators from frivolous lawsuits. A screenwriting community might want to protect its members from scammy competitions—and then get sued by one of those competitions.
Or a community wants to keep its conversation civil. For example, it may not allow Star Trek characters to be called “soy boys,” and it enforces that rule. Then a person sues.
I wish these were hypotheticals. But these are actual lawsuits, and we face them routinely.
What are policymakers missing about Section 230?
The [moderation] decisions being criticized in court are decisions meant to make the internet safer. In none of the cases I mentioned is there a moderator saying, “I want to increase harmful content!” These are good-faith decisions about what makes the internet better.
Section 230 is, at its core, protecting the ability for people to make those choices for their own communities.
There’s a price to be paid for not having Section 230. And it will be paid by internet users—not the biggest platforms.
Some see Section 230 as a way to punish Big Tech. But removing it doesn’t punish Big Tech—it makes them more powerful. It’s startups, community-driven platforms, and individual moderators who rely on Section 230 to compete and innovate. Weakening Section 230 will harm the open internet and reduce its choice, diversity, and resilience.
The big guys, they have armies of lawyers. They have the budget to withstand a flood of lawsuits. Weakening Section 230 just entrenches them.
In Reddit’s amicus brief in the Gonzalez v. Google Supreme Court case, you point out that without Section 230, many moderation decisions wouldn’t be protected. The brief states: “A plaintiff might claim emotional distress from a truthful but hurtful post that gained prominence when a moderator highlighted it as a trending topic. Or, a plaintiff might claim interference with economic relations arising from an honest but very critical two-star restaurant review.”
When you have situations where moderators get threats or litigation, what can you do?
We have had cases where our own moderators got sued, along with us. In the “soy boy” case, we worked to help find pro bono counsel for the moderators.
Someone posted “Wesley Crusher is a soy boy,” and it got removed. I’m enough of a Star Trek fan that I understand both the reference, and why the moderator decided—“hey, it’s gone. I don’t want this here.”
This would not violate Reddit’s sitewide rules. But the community took it down under its own rules about civility. The post was not a kind-hearted one, and the community had a right to decide.
But the moderator got sued. We got sued, actually, because the poster disagreed with that moderation choice. Section 230 is what allowed us to win that case.
These are just average people, implicated only because they moderated their own community. They are trying to do the right thing by their community.
In cases where litigation happens, when does Section 230 come into play?
Section 230 is usually one of the first things raised in a case. It’s often the most effective way of saying: if you believe someone has defamed you, please go after the person who defamed you. Looking to the moderator, or to Reddit itself, is not a great way of getting the justice you seek.
Is there a different workflow internationally?
There’s a very different workflow. We had a prominent case in France where a company was trying to sue moderators, and of course, we didn’t have Section 230 to protect them. So we had to do all sorts of other things to protect them. It got much more complicated.
The breadth of content that’s considered illegal in certain jurisdictions can be somewhat breathtaking.
Our goal is always to preserve as much freedom of expression as possible for our community. In the U.S., we look at it through the lens of the First Amendment, and other aspects. Outside the U.S., we rely more on the lens of international human rights.
How would you characterize legal demands around user content, the ones you see most often?
They tend to be: somebody said something mean about me—take it down. Or someone says: you didn’t allow me to say something mean about someone or some entity. It runs the full spectrum.
One law that has already passed that weakens Section 230 is SESTA/FOSTA. From Reddit’s perspective, what changed after that?
There are some communities we had to shut down, in particular support communities. There was a cost. Every time Section 230 is narrowed, there’s a cost: some types of speech and communities have a harder time staying online.
The cost may not seem high to some people, because those communities are not for them. But if they visited them, they’d see that these are actual people, interacting in a positive way. If it wasn’t positive, we have rules for that—but that’s a different question.