Briefly Noted: Substack’s Safety Debate is Really about Something Deeper

A group of feminist and LGBTQ+ writers published an open letter this week accusing Substack of failing to protect them from harassment. They describe an escalating pattern: “Every time we post a note, our inboxes flood with rape threats, slurs and hate from men.” Blocking and reporting, they say, have proven ineffective because “many of these trolls create new accounts to harass women who have blocked them.” Their conclusion is that Substack must strengthen moderation, ban repeat offenders, and introduce safety features that prevent anonymous or brand-new accounts from targeting writers.

On its face, this seems uncontroversial. Threats and harassment are real, and no one wants to open an app anticipating abuse. But the response from other writers has been mixed. Some have warned that expanding platform-level moderation risks changing Substack’s fundamental character, shifting it away from a decentralized publishing model toward something closer to traditional social media. For them, the concern is not whether harassment is acceptable, but whether the solution alters the independence that made the platform attractive in the first place.

Substack began primarily as a publishing platform. Writers sent newsletters, readers subscribed, and interaction was limited. That distance created a sense of control. But Substack Notes changed that. It introduced real-time visibility and conversational dynamics closer to social media. Writers became more accessible and more exposed, and with that came familiar problems. Every expressive platform eventually confronts the same tension: the conditions that allow open expression also allow bad actors to participate.

At that point, the platform faces a choice. It can increase centralized moderation to reduce interpersonal threat, or it can preserve decentralization and accept a higher level of interpersonal risk. Both approaches have costs. More moderation can improve safety but raises concerns about institutional control. Less moderation preserves autonomy but places more responsibility on individual users to manage their own exposure. Substack is now entering that phase of negotiation.

The most likely outcome is neither inaction nor a fundamental philosophical shift. Instead, expect more user-level controls: stronger filters, more granular comment permissions, and better tools for writers to limit unwanted interaction. These changes would address safety concerns while preserving Substack’s underlying model of writer autonomy. In other words, Substack will try to solve the safety problem without abandoning its commitment to writer independence.

Whether that balance holds is an open question. But the conflict itself signals something important. Substack is no longer just a collection of newsletters. It has become a social environment. And when a platform changes in that way, expectations built for the old version no longer fully apply. Social environments inevitably have to decide how they handle safety and freedom when those values begin to pull against each other.

https://open.substack.com/pub/lettersfromafeminist/p/an-open-letter-to-the-substack-team