Twitter’s erratic changes following Musk’s acquisition have led to the rise of several new Twitter-like platforms, including Mastodon, which promises to be a decentralized social network free from the influence of billionaire tech moguls. However, a new study by Stanford University suggests that the platform’s decentralized approach to content moderation has allowed Child Sexual Abuse Material (CSAM) to spread widely on Mastodon, raising significant concerns about user safety.

To detect CSAM images, the researchers used tools such as Google’s SafeSearch API, which identifies explicit images, and PhotoDNA, a specialized tool that matches images against a database of known CSAM. The study uncovered 112 instances of known CSAM across 325,000 posts on the platform, with the first hit appearing within a mere five minutes of searching.
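The study’s exact pipeline is not described in detail, so the sketch below is only an illustration of how Google Cloud Vision’s SafeSearch detection can flag potentially explicit imagery; the file name is hypothetical, and PhotoDNA hash matching is omitted because it requires licensed access from Microsoft.

```python
# Minimal sketch: flagging potentially explicit images with Google Cloud Vision's
# SafeSearch detection. Requires the google-cloud-vision package and application
# credentials (GOOGLE_APPLICATION_CREDENTIALS). The image path below is hypothetical.
from google.cloud import vision


def safe_search_likelihoods(image_path: str) -> dict:
    """Return SafeSearch likelihood labels (VERY_UNLIKELY .. VERY_LIKELY) per category."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    response = client.safe_search_detection(image=image)
    annotation = response.safe_search_annotation
    return {
        category: vision.Likelihood(getattr(annotation, category)).name
        for category in ("adult", "racy", "violence")
    }


if __name__ == "__main__":
    # Example usage with a hypothetical downloaded media file.
    # Matching against known CSAM hashes (PhotoDNA) is not shown here.
    print(safe_search_likelihoods("downloaded_post_media.jpg"))
```

SafeSearch only scores how likely an image is to be explicit; identifying known CSAM still requires hash matching against curated databases such as PhotoDNA’s.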

Additionally, the research identified 554 posts containing hashtags or keywords frequently exploited by bad actors to gain engagement, along with 1,217 text-only posts pointing to “off-site CSAM trading or grooming of minors,” further raising serious concerns about the platform’s moderation practices.

“We got more photoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close,” said researcher David Thiel.

Shortcomings of a decentralized platform

Unlike platforms such as Twitter, which are governed by centralized algorithms and content moderation rules, Mastodon is made up of independently administered instances. Although this offers autonomy and control to end users, it also means there is no central authority that can enforce moderation across servers.
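To illustrate that structure, the minimal sketch below (assuming the standard Mastodon REST API; the hostnames are examples, not necessarily instances examined in the study) queries two instances’ public metadata, showing that each publishes its own rules.

```python
# Minimal sketch: each Mastodon instance exposes its own metadata and server rules
# via the public REST API, illustrating that policies are set per instance.
import requests


def instance_summary(host: str) -> None:
    """Print an instance's title and its locally defined rules, if published."""
    info = requests.get(f"https://{host}/api/v1/instance", timeout=10).json()
    print(f"{host}: {info.get('title')}")
    # The 'rules' field is only present on reasonably recent Mastodon versions.
    for rule in info.get("rules", []):
        print("  -", rule.get("text"))


for host in ("mastodon.social", "fosstodon.org"):
    instance_summary(host)
```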

This shortcoming was evident in the study, which highlighted an incident in which the mastodon.xyz server suffered an outage because of CSAM content. The server’s maintainer stated that they handle moderation in their spare time, which can delay action on such content by several days.


How to fix the moderation issue?

Although the right approach to moderating content on decentralized platforms is still a subject of debate, one potential solution is a network of trusted moderators from various instances who collaborate to address problematic content. For a young platform like Mastodon, however, this could be a costly endeavour.

Another emerging solution could be the development of advanced AI systems capable of detecting and flagging potentially abusive posts or illegal material.
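As a rough illustration of what such a system might look like, the sketch below scores posts with an off-the-shelf text classifier (the publicly available unitary/toxic-bert model, chosen here purely as an example and not mentioned in the study); the sample posts are invented.

```python
# Minimal sketch: scoring posts for potentially abusive language with a
# pre-trained classifier from the Hugging Face transformers library.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

posts = [
    "Have a great day everyone!",
    "You are worthless and everyone hates you.",
]
for post, result in zip(posts, classifier(posts)):
    # Each result contains the top predicted label and its confidence score.
    print(f"{result['label']:>12} ({result['score']:.2f}): {post}")
```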
