Abstract
As the social media landscape undergoes its first broad transformation in over a decade, with alternative platforms like Mastodon, Bluesky, and Threads emerging where X has receded, many users and observers have celebrated the promise of these new services and their visions of alternative governance structures that empower consumers. Drawing on a large-scale textual analysis of platform moderation policies, capabilities, and transparency mechanisms, as well as semi-structured group interviews with developers, administrators, and moderators of federated platforms, we found that federated platforms face considerable obstacles to robust and scalable governance, particularly with regard to persistent threats such as coordinated behavior and spam. Key barriers include underdeveloped moderation technologies and a lack of sustainable financial models for trust and safety work. We offer four solutions to the collective safety and security risks identified: (1) institutionalize shared responses to critical harms, (2) build transparent governance into the system, (3) invest in open-source tooling, and (4) enable data sharing across instances.