https://tsjournal.org/index.php/jots/issue/feed Journal of Online Trust and Safety 2024-02-28T13:13:28-08:00 Shelby Grossman trustandsafetyjournal@stanford.edu Open Journal Systems <p>The Journal of Online Trust and Safety is a cross-disciplinary, open access, fast peer-review journal that publishes research on how consumer internet services are abused to cause harm and how to prevent those harms. </p> https://tsjournal.org/index.php/jots/article/view/168 Burden of Proof: Lessons Learned for Regulators from the Oversight Board’s Implementation Work 2023-12-11T16:52:02-08:00 Naomi Shiffman nshiffman@osbstaff.com Carly Miller cmiller@osbstaff.com Manuel Parra Yagnam mparra@osbstaff.com Claudia Flores-Saviaga cflores-saviaga@osbstaff.com 2024-02-28T00:00:00-08:00 Copyright (c) 2024 Journal of Online Trust and Safety https://tsjournal.org/index.php/jots/article/view/169 Assuming Good Faith Online 2023-12-10T18:44:56-08:00 Eric Goldman egoldman@gmail.com 2024-02-28T00:00:00-08:00 Copyright (c) 2024 Journal of Online Trust and Safety https://tsjournal.org/index.php/jots/article/view/170 Should Politicians Be Exempt from Fact-Checking? 
2023-12-11T19:34:24-08:00 Sarah Fisher sarah.fisher@ucl.ac.uk Beatriz Kira beatriz.kira@ucl.ac.uk Kiran Arabaghatta Basavaraj k.basavaraj@ucl.ac.uk Jeffrey Howard jeffrey.howard@ucl.ac.uk 2024-02-28T00:00:00-08:00 Copyright (c) 2024 Journal of Online Trust and Safety https://tsjournal.org/index.php/jots/article/view/173 Bridging Theory & Practice: Examining the State of Scholarship Using the History of Trust and Safety Archive 2023-12-13T09:41:24-08:00 Megan Knittel knittel2@msu.edu Amanda Menking amanda@tspa.org 2024-02-28T00:00:00-08:00 Copyright (c) 2024 Journal of Online Trust and Safety https://tsjournal.org/index.php/jots/article/view/171 Securing Federated Platforms 2023-12-12T10:48:05-08:00 Yoel Roth yoel@yoyoel.com Samantha Lai samantha.lai@ceip.org <p class="p2">As the social media landscape undergoes broad transformation for the first time in over a decade, with alternative platforms like Mastodon, Bluesky, and Threads emerging where X has receded, many users and observers have celebrated the promise of these new services and their visions of alternative governance structures that empower consumers. Drawing on a large-scale textual analysis of platform moderation policies, capabilities, and transparency mechanisms, as well as semistructured group interviews with developers, administrators, and moderators of federated platforms, we found that federated platforms face considerable obstacles to robust and scalable governance, particularly with regard to persistent threats such as coordinated behavior and spam. Key barriers identified include underdeveloped moderation technologies and a lack of sustainable financial models for trust and safety work. 
We offer four solutions to the collective safety and security risks identified: (1) institutionalize shared responses to critical harms, (2) build transparent governance into the system, (3) invest in open-source tooling, and (4) enable data sharing across instances.</p> 2024-02-28T00:00:00-08:00 Copyright (c) 2024 Journal of Online Trust and Safety https://tsjournal.org/index.php/jots/article/view/136 Content Modeling in Multi-Platform Multilingual Social Media Data 2023-10-09T22:15:06-07:00 Arman Setser arman.setser@graphika.com Libby Lange libby.lange@graphika.com Kyle Weiss kyle.weiss@graphika.com Vlad Barash vlad.barash@graphika.com <p class="p2">An increase in the use of social media as the primary news source for the general population has created an ecosystem in which organic conversation commingles with inorganically seeded and amplified narratives, which can include public relations and marketing activity but also covert and malign influence operations. An efficient and easily understandable analysis of such data is important, as it allows relevant stakeholders to protect online communities and free discussion while better identifying activity and content that may violate social media platform terms of service. To accomplish this, we propose a method of large-scale social media data analysis that allows multilingual conversations to be analyzed in depth across any number of social media platforms simultaneously. Our method uses a text embedding model, i.e., a natural language processing model that holds a semantic and contextual understanding of language. The model uses this understanding to represent posts as coordinates in a high-dimensional space, such that posts with similar meanings are assigned coordinates close together. We then cluster and analyze the posts to identify online topics of conversation existing across multiple social media platforms. 
We explicitly show how our method can be applied to four different datasets, three consisting of Chinese social media posts related to the Belt and Road Initiative and one relating to the Russia-Ukraine war, and we find politically influenced conversations that contain misleading information relating to the Chinese government and the Russia-Ukraine war.</p> 2024-02-28T00:00:00-08:00 Copyright (c) 2024 Journal of Online Trust and Safety https://tsjournal.org/index.php/jots/article/view/152 Fingerprints of Conspiracy Theories: Identifying Signature Information Sources of a Misleading Narrative and Their Roles in Shaping Message Content and Dissemination 2023-10-17T11:01:34-07:00 Soojong Kim sjokim@ucdavis.edu Kwanho Kim kk744@cornell.edu Haoning Xue hnxue@ucdavis.edu <p class="p2">This study investigates the role of information sources in the propagation and reception of misleading narratives on social media, focusing on the case of the Chemtrail conspiracy theory—a false claim that the trails in the sky behind airplanes are chemicals deliberately spread for sinister reasons. We collected data from Facebook Pages and Groups discussing the conspiracy theory. We specifically focused on identifying and analyzing “signature” information sources, which are repeatedly used by online communities engaged in the discussion of a misleading narrative but are not widely used by other communities. The findings indicate that messages referencing signature sources contain more death-, illness-, risk-, and health-related words, convey more negativity, and elicit more negative reactions from users, compared with those without signature sources. 
The insights from this study could contribute to the development of effective strategies to monitor and counter the spread of misleading narratives in digital spaces.</p> 2024-02-28T00:00:00-08:00 Copyright (c) 2024 Journal of Online Trust and Safety https://tsjournal.org/index.php/jots/article/view/144 Effects of Browsing Conditions and Visual Alert Design on Human Susceptibility to Deepfakes 2023-10-11T21:21:16-07:00 Emilie Josephs emilie.josephs.1@gmail.com Camilo Fosco camilolu@mit.edu Aude Oliva oliva@mit.edu <p class="p2">The increasing reach of deepfakes raises practical questions about people’s ability to detect false videos online. How vulnerable are people to deepfake videos? What technologies can help improve detection? Previous experiments measuring human deepfake detection have generally omitted conditions that arise in everyday browsing. Here, we operationalized four such conditions (low prevalence, brief presentation, low video quality, and divided attention), and found in a series of online experiments that all conditions lowered detection relative to baseline, suggesting that the current literature underestimates people’s susceptibility to deepfakes. Next, we examined how AI assistance could be integrated into the human decision process. We found that a model that exposes deepfakes by amplifying artifacts increases detection rates, and also leads to higher rates of incorporating AI feedback and higher final confidence than text-based prompts. Overall, this suggests that visual indicators that cause distortions on fake videos may be effective at mitigating the impact of falsified video.</p> 2024-02-28T00:00:00-08:00 Copyright (c) 2024 Journal of Online Trust and Safety
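The content modeling abstract above (Setser et al.) describes representing posts as coordinates in a high-dimensional space and clustering nearby points into topics. The following is a minimal sketch of that idea only, not the authors' implementation: the hand-made 3-D vectors stand in for real embeddings, and the from-scratch k-means stands in for whatever clusterer the paper uses. A production pipeline would obtain vectors from a multilingual text embedding model and use a library clustering routine.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Toy k-means: assign each embedding to the nearest of k centroids."""
    rng = np.random.default_rng(seed)
    # Initialize centroids at k distinct data points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every point to every centroid, shape (n_points, k).
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels

# Stand-in "embeddings": two well-separated groups of posts in 3-D.
# (Real embeddings would be high-dimensional outputs of a language model.)
posts = np.array([
    [0.1, 0.0, 0.0], [0.2, 0.1, 0.0], [0.0, 0.2, 0.1],   # topic A
    [5.0, 5.1, 5.0], [5.2, 4.9, 5.1], [4.8, 5.0, 5.2],   # topic B
])
labels = kmeans(posts, k=2)
```

Because similar posts receive nearby coordinates, the two groups end up in different clusters regardless of the language or platform the posts came from, which is the property the abstract relies on for cross-platform, multilingual topic discovery.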