Notes on: Different Sides of Fairness: Evaluations of Fairness of Nextdoor’s Content Moderation System (2025)
Katsaros et al. (2025, p. 1)
To illustrate the investment platforms make in content moderation:
“More than 40,000 people globally work on trust and safety issues for TikTok. … This year we expect to invest more than two billion dollars in trust and safety efforts…”
— Shou Zi Chew, CEO of TikTok (2024)
Katsaros et al. (2025, p. 1)
Top-Down Governance
Community-Driven Governance
Katsaros et al. (2025, p. 2)
Research highlights how moderation systems often create an adversarial dynamic:
“moderation tends to structure platforms and their users as opposing parties”
This creates opportunities for platforms to better engage with moderated users as stakeholders in the design process.
Katsaros et al. (2025, p. 2)
Katsaros et al. (2025, p. 3)
When evaluating whether a system is fair, people consider:
Katsaros et al. (2025, p. 3)
Katsaros et al. (2025, p. 3)
Participants:
Nextdoor users who recently reported content (N = 2,536) or had content removed (N = 1,004)
Logged Platform Data:
Behavioral data from 6 months prior to and 3 months following the survey
Survey Measures:
Katsaros et al. (2025, p. 4)
What makes Nextdoor unique as a research context:
Katsaros et al. (2025, p. 5)
Content reporters and users with removed content were NOT distinct groups:
Implication: The “target” vs. “perpetrator” framing may be reductive — most users experience moderation from multiple perspectives.
Katsaros et al. (2025, p. 6)
Hypotheses 1b and 2b: Supported
Katsaros et al. (2025, p. 7)
Hypotheses 1a and 2a: NOT Supported
This differs from findings on other platforms like Twitter and Facebook.
Katsaros et al. (2025, pp. 6–7)
Hypotheses 1c and 2c: Supported
Katsaros et al. (2025, p. 7)
| Hypothesis | Hypothesized Relationship | Result |
|---|---|---|
| Fairness → future reporting | Positive | Supported |
| Fairness → future removals | Negative | Not supported (no relationship found) |
| Fairness → future visitation | Positive | Supported |
| Prior behavior → future behavior | Positive | Strongly supported |
Katsaros et al. (2025, pp. 6–7)
The authors note this differs from prior studies on Twitter and Facebook. Possible explanations:
Katsaros et al. (2025, p. 12)
An interesting finding about what matters most on Nextdoor:
“In drawing conclusions about fairness of their moderation experience on Nextdoor, people seem to be equally concerned with the outcome fairness as they are with the fairness of the process.”
This contrasts with Twitter, where procedural elements were much more strongly correlated with overall fairness than distributive elements.
Katsaros et al. (2025, p. 12)
1. Rethink moderation as a user journey
2. Provide broader education about moderation
3. Treat rule violators as potential safety stewards
Katsaros et al. (2025, pp. 10–11)
The authors suggest Large Language Models could transform moderation:
“LLMs can be leveraged to change the nature of moderation away from a purely transactional system … toward a more dialogic system where a platform user can converse directly with an LLM to get more information about the platform’s rules, who was involved in making any decisions, how rules get applied to others, or even to help a platform user better structure their appeal.”
Katsaros et al. (2025, p. 11)
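The dialogic system the authors envision can be sketched as a prompt-assembly layer in front of any chat-capable LLM: the platform grounds the conversation in the specific rule, decision, and decision-maker so the user can ask about the process. Everything below (the function, class, and field names) is a hypothetical illustration, not Nextdoor's or the authors' implementation:

```python
# Hypothetical sketch of a "dialogic" moderation assistant: the platform
# grounds an LLM conversation in the specific rule and decision context
# so a user can ask follow-up questions about a removal.
# All names and fields are illustrative assumptions, not a real API.

from dataclasses import dataclass


@dataclass
class ModerationDecision:
    rule_id: str          # which community guideline was applied
    rule_text: str        # the guideline's wording, shown for transparency
    decided_by: str       # e.g. "volunteer review team" or "automated classifier"
    content_excerpt: str  # the removed post (or an excerpt)


def build_dialogue_prompt(decision: ModerationDecision, user_question: str) -> str:
    """Assemble the grounding context an LLM would receive, so its answers
    can address the rule, who decided, and how rules get applied -- the
    procedural elements the paper highlights."""
    return (
        "You are a moderation-transparency assistant. Answer the user's "
        "question about this decision: explain the rule and the process, "
        "and help them structure an appeal if they ask.\n\n"
        f"Rule {decision.rule_id}: {decision.rule_text}\n"
        f"Decision made by: {decision.decided_by}\n"
        f"Removed content: {decision.content_excerpt}\n\n"
        f"User question: {user_question}"
    )


# Usage: the resulting string would be sent to any chat LLM endpoint.
decision = ModerationDecision(
    rule_id="CG-3",
    rule_text="Do not post personal attacks against neighbors.",
    decided_by="volunteer review team",
    content_excerpt="[removed post text]",
)
prompt = build_dialogue_prompt(decision, "Why was my post taken down?")
```

The design choice here mirrors the quote: instead of a one-shot removal notice (transactional), the decision context travels with every turn of the conversation (dialogic), so follow-up questions about rules, decision-makers, and appeals can be answered in place.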
Methodological concerns:
Katsaros et al. (2025, pp. 11–12)
What the field needs:
Open questions:
Katsaros et al. (2025, p. 11)
Fairness in content moderation influenced not only rule-related behaviors but also overall engagement with the platform.
“An ideal system of conflict management has multiple goals. The first is to lessen the future occurrence of rule breaking. […] Procedural justice is effective in achieving the goal of resolving a conflict in a way that leads both parties to engage more in the platform in the future.”
Katsaros et al. (2025, p. 12)
According to the authors, effective moderation should:
“It is also desirable to manage conflicts about online content in ways that do not drive away those who feel victimized by online posts and those who post content that others find objectionable.”
Katsaros et al. (2025, pp. 12–13)
How might these findings apply to larger platforms like Facebook or Twitter where moderation is centralized?
Should platforms prioritize procedural fairness (how decisions are made) or distributive fairness (the outcomes themselves)?
What are the risks of treating rule violators as “potential safety stewards”?
How might LLM-based dialogic systems change users’ fairness perceptions?