Bluesky this week released its first transparency report, documenting the actions taken by its Trust & Safety team and the results of other initiatives such as age assurance compliance, influence operation monitoring, and automated labeling.
The social media startup, a rival to X and Threads, grew nearly 60% in 2025, from 25.9 million users to 41.2 million. That figure includes both accounts hosted on Bluesky’s own infrastructure and accounts hosted on independent infrastructure as part of Bluesky’s decentralized social network built on the AT Protocol.
Last year, users made 1.41 billion posts on the platform. This represents 61% of all posts ever made on Bluesky. Of those, 235 million posts included media, accounting for 62% of all media posts ever shared on Bluesky.
The company also reported a more than sixfold increase in legal requests from law enforcement, government regulators, and legal representatives, from 238 in 2024 to 1,470 in 2025.
The company previously shared moderation reports in 2023 and 2024, but this is the first time it has compiled a comprehensive transparency report. The new report also addresses areas beyond moderation, such as regulatory compliance and account verification information.
54% increase in moderation reports from users
After a 17x jump in moderation reports in 2024, the company reported a more modest 54% increase in user reports in 2025, from 6.48 million to 9.97 million.
Although the numbers have jumped, Bluesky noted that this growth is “roughly on track” with the 57% increase in users over the same period.
Approximately 3% of the user base, or 1.24 million users, filed reports in 2025, with the top categories being “misleading” (including spam) at 43.73%, “harassment” at 19.93%, and sexual content at 13.54%.
A catch-all “Other” category accounted for another 22.14% of reports that did not fit those buckets, while the remaining categories, such as violence, child safety, site rule violations, and self-harm, each made up a much smaller share.
Of the 4.36 million reports in the “Misleading” category, spam accounted for 2.49 million reports.
Meanwhile, within the 1.99 million “harassment” reports, hate speech was the largest tracked subcategory, at around 55,400 reports. Other subcategories included targeted harassment (approximately 42,520 reports), trolling (29,500 reports), and identity theft (approximately 3,170 reports).
However, Bluesky said the majority of “harassment” reports involved behavior in the gray area of anti-social conduct, such as rude remarks, rather than more specific categories like hate speech.
Most reports of sexual content (1.52 million) were related to mislabeling, Bluesky said, meaning adult content wasn’t properly marked with the metadata labels that let users control their own moderation experience using Bluesky’s tools.
A small number of reports focused on non-consensual intimate images (approximately 7,520), abusive content (approximately 6,120), and deepfakes (more than 2,000).
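For context on what proper labeling looks like in practice: on the AT Protocol, a post record can carry self-applied labels that clients then use to hide or blur content according to each viewer’s preferences. Below is a minimal sketch using the @atproto/api TypeScript package; the handle, app password, and post text are placeholders, and the label value shown is one of Bluesky’s adult-content self-labels.

```typescript
import { BskyAgent } from "@atproto/api";

async function postWithSelfLabel() {
  // Sign in to Bluesky's hosted service (credentials are placeholders).
  const agent = new BskyAgent({ service: "https://bsky.social" });
  await agent.login({
    identifier: "example.bsky.social",
    password: "app-password-here",
  });

  // Attach a self-label so clients can hide or blur the post's media
  // based on each viewer's moderation settings.
  await agent.post({
    text: "Artwork (adult content)",
    labels: {
      $type: "com.atproto.label.defs#selfLabels",
      values: [{ val: "porn" }],
    },
  });
}

postWithSelfLabel().catch(console.error);
```

Posts that contain adult material but omit this metadata are what the mislabeling reports above refer to.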
Reports focused on violence (24,670 total) were divided into subcategories such as threats or incitement (approximately 10,170 reports), glorification of violence (6,630 reports), and extremist content (3,230 reports).
In addition to user reports, Bluesky’s automated system flagged 2.54 million potential violations.
One area where Bluesky reported success was a 79% drop in daily reports of anti-social behavior on the site, which came after it introduced a system, similar to one on X, that identifies harmful replies and reduces their visibility by placing them behind an extra click.
Bluesky also saw user reports decline relative to its user base over the course of the year, with reports per 1,000 monthly active users dropping 50.9% from January to December.

Bluesky noted that, outside of its moderation efforts, it removed 3,619 accounts suspected of influence activity, most of which were likely operating from Russia.
Increase in takedowns and legal requests
The company said last fall that it was becoming more aggressive about moderation and enforcement, and that appears to be the case.
Bluesky removed 2.44 million items, including accounts and content, in 2025. The previous year, its moderators removed 66,308 accounts and its automated tools removed another 35,842; moderators also took down 6,334 individual records, while automated systems removed 282.

Bluesky also handed out 3,192 suspensions and 14,659 permanent bans, including for ban evasion, in 2025. Most of the permanent bans targeted accounts involved in fraudulent activity, spam networks, and impersonation.
However, the report suggests that Bluesky prefers labeling content over removing users. Last year, it applied 16.49 million labels to content, an increase of 200% year over year, while account removals rose 104%, from 1.02 million to 2.08 million. Most of the labels were for adult or suggestive content and nudity.
