Journal of Online Trust and Safety
https://tsjournal.org/index.php/jots/issue/feed
Issue published 2024-02-28. Editor: Shelby Grossman (trustandsafetyjournal@stanford.edu).

The Journal of Online Trust and Safety is a cross-disciplinary, open access, fast peer-review journal that publishes research on how consumer internet services are abused to cause harm and how to prevent those harms.

Burden of Proof: Lessons Learned for Regulators from the Oversight Board’s Implementation Work
Naomi Shiffman, Carly Miller, Manuel Parra Yagnam, Claudia Flores-Saviaga
https://tsjournal.org/index.php/jots/article/view/168

Assuming Good Faith Online
Eric Goldman
https://tsjournal.org/index.php/jots/article/view/169

Should Politicians Be Exempt from Fact-Checking?
Sarah Fisher, Beatriz Kira, Kiran Arabaghatta Basavaraj, Jeffrey Howard
https://tsjournal.org/index.php/jots/article/view/170

Bridging Theory & Practice: Examining the State of Scholarship Using the History of Trust and Safety Archive
Megan Knittel, Amanda Menking
https://tsjournal.org/index.php/jots/article/view/173

Securing Federated Platforms
Yoel Roth, Samantha Lai
https://tsjournal.org/index.php/jots/article/view/171

As the social media landscape undergoes broad transformation for the first time in over a decade, with alternative platforms
like Mastodon, Bluesky, and Threads emerging where X has receded, many users and observers have celebrated the promise of these new services and their visions of alternative governance structures that empower consumers. Drawing on a large-scale textual analysis of platform moderation policies, capabilities, and transparency mechanisms, as well as semistructured group interviews with developers, administrators, and moderators of federated platforms, we found that federated platforms face considerable obstacles to robust and scalable governance, particularly with regard to persistent threats such as coordinated behavior and spam. Key barriers identified include underdeveloped moderation technologies and a lack of sustainable financial models for trust and safety work. We offer four solutions to the collective safety and security risks identified: (1) institutionalize shared responses to critical harms, (2) build transparent governance into the system, (3) invest in open-source tooling, and (4) enable data sharing across instances.

Content Modeling in Multi-Platform Multilingual Social Media Data
Arman Setser, Libby Lange, Kyle Weiss, Vlad Barash
https://tsjournal.org/index.php/jots/article/view/136

An increase in the use of social media as the primary news source for the general population has created an ecosystem in which organic conversation commingles with inorganically seeded and amplified narratives, which can include public relations and marketing activity but also covert and malign influence operations.
An efficient and easily understandable analysis of such data is important, as it allows relevant stakeholders to protect online communities and free discussion while better identifying activity and content that may violate social media platform terms of service. To accomplish this, we propose a method of large-scale social media data analysis that allows multilingual conversations to be analyzed in depth across any number of social media platforms simultaneously. Our method uses a text embedding model, i.e., a natural language processing model that holds semantic and contextual understandings of language. The model uses an “understanding” of language to represent posts as coordinates in a high-dimensional space, such that posts with similar meanings are assigned coordinates close together. We then cluster and analyze the posts to identify online topics of conversation existing across multiple social media platforms. We explicitly show how our method can be applied to four different datasets, three consisting of Chinese social media posts related to the Belt and Road Initiative and one relating to the Russia-Ukraine war, and we find politically influenced conversations that contain misleading information relating to the Chinese government and the Russia-Ukraine war.

Fingerprints of Conspiracy Theories: Identifying Signature Information Sources of a Misleading Narrative and Their Roles in Shaping Message Content and Dissemination
Soojong Kim, Kwanho Kim, Haoning Xue
https://tsjournal.org/index.php/jots/article/view/152

This study investigates the role of information sources in the propagation and reception of misleading narratives on social media, focusing on the case of the Chemtrail conspiracy theory, a false claim that the trails in the sky behind airplanes are chemicals deliberately spread for
sinister reasons. We collected data from Facebook Pages and Groups discussing the conspiracy theory. We specifically focused on identifying and analyzing “signature” information sources, which are repeatedly used by online communities engaged in the discussion of a misleading narrative but are not widely used by other communities. The findings indicate that messages referencing signature sources contain more death-, illness-, risk-, and health-related words, convey more negativity, and elicit more negative reactions from users, compared with those without signature sources. The insights from this study could contribute to the development of effective strategies to monitor and counter the spread of misleading narratives in digital spaces.

Effects of Browsing Conditions and Visual Alert Design on Human Susceptibility to Deepfakes
Emilie Josephs, Camilo Fosco, Aude Oliva
https://tsjournal.org/index.php/jots/article/view/144

The increasing reach of deepfakes raises practical questions about people’s ability to detect false videos online. How vulnerable are people to deepfake videos? What technologies can help improve detection? Previous experiments measuring human deepfake detection have historically omitted a number of conditions present in typical browsing. Here, we operationalized four such conditions (low prevalence, brief presentation, low video quality, and divided attention) and found in a series of online experiments that all four lowered detection relative to baseline, suggesting that the current literature underestimates people’s susceptibility to deepfakes. Next, we examined how AI assistance could be integrated into the human decision process.
We found that a model that exposes deepfakes by amplifying their artifacts increased detection rates, and also led to higher rates of incorporating AI feedback and higher final confidence than text-based prompts. Overall, this suggests that visual indicators that introduce distortions on fake videos may be effective at mitigating the impact of falsified video.

Copyright (c) 2024 Journal of Online Trust and Safety