
Big tech platforms often present content moderation as a seamless, technology-driven system. However, human labor, often outsourced to countries such as India and the Philippines, plays a crucial role in making judgments that require an understanding of context, something technology alone cannot do.
Behind closed doors, these hidden human moderators are responsible for filtering out some of the internet’s most harmful material. They often do so with minimal mental health support and under strict non-disclosure agreements.
After receiving often ambiguous training, moderators are expected to make decisions within seconds, maintain at least 95% accuracy, and keep up with the platform’s ever-changing content policies.
Do these working conditions affect moderation decisions? To date, there has been little data on this. Our new study, published in New Media & Society, examined the everyday decision-making processes of Indian commercial content moderators.
Our results reveal how moderators’ employment conditions shape the outcomes of their work. Three important themes emerged from our interviews and observations.
Efficiency over suitability
A 28-year-old audio moderator who works for an Indian social media platform told us that moderators operate under productivity targets and are pushed to prioritize content that can be processed quickly, without attracting attention from supervisors.
She explained that, to keep up the pace, she avoided content and processes that required more time. While observing her work during a screen-sharing session, I noticed that reducing the visibility of content (known as derailing) involved four steps, whereas ending a live stream or deleting a post required fewer.
To save time, she skipped over and re-ranked the flagged content. As a result, content marked for reduced visibility, such as spoofing, remained on the platform until other moderators stepped in.
This shows how productivity pressures in the moderation industry can easily leave problematic content online.
Decontextualized decisions
“Make sure no highlighted yellow words remain in the profile” – instructions received by a text and image moderator.
Moderation workflows often include automated tools that can detect specific words in text, transcribe speech, and scan the contents of photos using image recognition.
These tools are supposed to assist moderators by flagging potential violations for further, context-aware judgment. For example, is a piece of potentially offensive language just a joke, or is it actually a policy violation?
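To illustrate this intended division of labor, here is a minimal, purely hypothetical Python sketch in which an automated tool only highlights candidate terms and defers the context-dependent decision to a human moderator. The watch-list, post format and function name are our own illustrative assumptions, not any platform’s actual tooling.

```python
# Minimal, hypothetical sketch of a keyword-flagging tool that surfaces
# candidate violations for human review instead of removing them automatically.
# The watch-list and post structure are illustrative assumptions.

FLAGGED_TERMS = {"scam", "fake giveaway"}  # hypothetical watch-list


def flag_for_review(post_text: str) -> dict:
    """Highlight matched terms, but leave the context call to a human moderator."""
    hits = [term for term in FLAGGED_TERMS if term in post_text.lower()]
    return {
        "text": post_text,
        "flagged_terms": hits,               # what the tool noticed
        "needs_human_judgment": bool(hits),  # the context decision is not automated
    }


if __name__ == "__main__":
    print(flag_for_review("Win a FAKE GIVEAWAY today!"))
    # -> flagged_terms: ['fake giveaway'], needs_human_judgment: True
```

In this arrangement the tool narrows attention, but the judgment about context still belongs to the person reviewing the item.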
In practice, however, we found that under strict time constraints, moderators frequently follow the tools’ cues mechanically rather than exercising independent judgment.
The moderator quoted above was instructed by her supervisor to simply remove any text detected by the software. During the screen-sharing session, I observed her removing flagged words without evaluating their context.
Often, the automated tools that queue and organize content strip it from its broader conversational context. This makes it even harder for moderators to make context-based judgments about content that was flagged but is actually innocuous, despite this kind of judgment being one of the reasons human moderators are hired in the first place.
The impossibility of thorough judgment
“If you can’t do your job and complete your target, you may leave” – work group message shared by a freelance content moderator.
Unstable employment forces moderators to shape their decision-making processes around job security.
They are pushed to adopt strategies that allow them to make decisions quickly and acceptably, and these strategies in turn shape their future decisions.
For example, I found that over time, moderators would create their own lists of “dos and don’ts”, distilling vast moderation guidelines into an easily remembered set of clear-cut violations they could act on immediately.
These strategies reveal how the very structure of the moderation industry can discourage thoughtful decisions and make thorough judgment impossible.
What should we take away from this?
Our findings show that moderation decisions are not shaped by platform policies alone. Moderators’ unstable working conditions play a key role in how content is moderated.
Unless employment practices in the moderation industry also improve, online platforms cannot implement consistent, thorough moderation policies. We argue that the effectiveness of content moderation is as much a labor issue as a policy challenge.
For truly effective moderation, online platforms need to address the economic pressures on moderators, such as strict performance targets and precarious employment.
We also need greater transparency about how much platforms spend on the human labor of trust and safety, both in-house and outsourced. It is currently unclear whether investment in human resources is proportional to the amount of content flowing through these platforms.
Beyond employment conditions, platforms will also need to redesign their moderation tools. For example, consolidating quick-access rulebooks, implementing violation-specific content queues, and standardizing the number of steps required for different enforcement actions would streamline decisions, so that moderators do not default to whichever action simply saves time.
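As a rough, hypothetical sketch of what such a redesign could look like, the example below keeps a separate queue per violation type and reduces every enforcement action to the same single confirmation step, so that no action is “cheaper” for a time-pressed moderator to choose. All class, queue and action names are illustrative assumptions, not features of any real platform.

```python
# Hypothetical sketch of the redesign described above: one queue per violation
# type, and every enforcement action standardized to a single confirmation step
# so that no option is faster (and therefore more tempting) than another.
from collections import defaultdict, deque

ACTIONS = {"reduce_visibility", "delete_post", "end_live_stream"}  # all one step


class ModerationQueues:
    def __init__(self) -> None:
        self.queues = defaultdict(deque)  # violation type -> queue of post IDs

    def enqueue(self, violation_type: str, post_id: str) -> None:
        self.queues[violation_type].append(post_id)

    def next_item(self, violation_type: str):
        q = self.queues[violation_type]
        return q.popleft() if q else None

    def apply_action(self, post_id: str, action: str) -> str:
        if action not in ACTIONS:
            raise ValueError(f"unknown action: {action}")
        # Every action goes through the same single confirmation step.
        return f"{action} applied to {post_id} in one step"


# Example usage with made-up identifiers
queues = ModerationQueues()
queues.enqueue("spoofing", "post_123")
item = queues.next_item("spoofing")
if item is not None:
    print(queues.apply_action(item, "reduce_visibility"))
```

The point of the sketch is the design choice, not the code itself: when each enforcement action costs the moderator the same effort, the choice between them can rest on the content rather than on the clock.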
More information: Tania Chatterjee et al, Whether to enable content: Understanding content moderators’ decision-making processes on social media platforms, New Media & Society (2025). DOI: 10.1177/14614448251348900
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: The hard labor conditions of online moderators directly affect how well the internet is policed, new study shows (2025, July 23). Retrieved July 23, 2025 from https://techxplore.com/news/2025-07-hard-labor-conditions-online-moderators.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.