https://tsjournal.org/index.php/jots/issue/feed
Journal of Online Trust and Safety
2024-09-18T09:42:19-07:00
Journal of Online Trust and Safety trustandsafetyjournal@stanford.edu
Open Journal Systems

The Journal of Online Trust and Safety is a cross-disciplinary, open access, fast peer-review journal that publishes research on how consumer internet services are abused to cause harm and how to prevent those harms.

https://tsjournal.org/index.php/jots/article/view/213
Nuances and Challenges of Moderating a Code Collaboration Platform
2024-08-02T06:13:19-07:00
Margaret Tucker margaret-tucker@github.com; Rose Coogan literarytea@github.com; Will Pace will-pace@github.com
Published 2024-09-18T00:00:00-07:00. Copyright (c) 2024 Journal of Online Trust and Safety

https://tsjournal.org/index.php/jots/article/view/215
A Multi-Stakeholder Approach for Leveraging Data Portability to Support Research on the Digital Information Environment
2024-08-05T14:43:06-07:00
Zeve Sanderson zns202@nyu.edu; Lama Mohammed lrm413@nyu.edu
Published 2024-09-18T00:00:00-07:00. Copyright (c) 2024 Journal of Online Trust and Safety

https://tsjournal.org/index.php/jots/article/view/204
A Survey of Scam Exposure, Victimization, Types, Vectors, and Reporting in 12 Countries
2024-06-12T09:05:45-07:00
Mohamed Houtti mhoutti@google.com; Abhishek Roy abhishekroy@google.com; Venkata Narsi Reddy Gangula narsi@google.com; Ashley Walker amwalker@google.com

Scams are a widespread issue with severe consequences for both victims and perpetrators, but existing data collection is fragmented, precluding global and comparative local understanding. The present study addresses this gap through a nationally representative survey (n = 8,369) on scam exposure, victimization, types, vectors, and reporting in 12 countries: Belgium, Egypt, France, Hungary, Indonesia, Mexico, Romania, Slovakia, South Africa, South Korea, Sweden, and the United Kingdom. We analyze six survey questions to build a detailed quantitative picture of the scams landscape in each country, and compare across countries to identify global patterns. We find, first, that residents of less affluent countries suffer financial loss from scams more often. Second, we find that the internet plays a key role in scams across the globe, and that GNI per capita is strongly associated with specific scam types and contact vectors. Third, we find widespread underreporting, with residents of less affluent countries being less likely to know how to report a scam. Our findings contribute valuable insights for researchers, practitioners, and policymakers in the online fraud and scam prevention space.

Published 2024-09-18T00:00:00-07:00. Copyright (c) 2024 Journal of Online Trust and Safety
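The cross-country comparison described in this abstract relates self-reported scam outcomes to national income. As a rough, purely illustrative sketch (not the authors' analysis), the snippet below uses made-up respondent records and placeholder GNI-per-capita figures to compute per-country scam-loss rates and their Spearman rank correlation with GNI per capita:

```python
# Illustrative sketch only: the respondent records and GNI figures are
# hypothetical placeholders, not the study's data. It shows one simple way to
# relate per-country scam-loss rates to national income with a rank correlation.

from collections import defaultdict
from scipy.stats import spearmanr

# Hypothetical respondent records: (country_code, reported_financial_loss)
responses = [
    ("ZA", True), ("ZA", True), ("ZA", False),
    ("ID", True), ("ID", False), ("ID", True),
    ("FR", False), ("FR", True), ("FR", False),
    ("SE", False), ("SE", False), ("SE", False),
]

# Hypothetical GNI per capita in USD (real values would come from, e.g., World Bank data).
gni_per_capita = {"ZA": 6_780, "ID": 4_870, "FR": 45_900, "SE": 63_500}

# Share of respondents in each country who report losing money to a scam.
totals, losses = defaultdict(int), defaultdict(int)
for country, lost_money in responses:
    totals[country] += 1
    losses[country] += lost_money
loss_rate = {c: losses[c] / totals[c] for c in totals}

# Rank correlation between national income and scam-loss rate.
countries = sorted(loss_rate)
rho, p_value = spearmanr([gni_per_capita[c] for c in countries],
                         [loss_rate[c] for c in countries])
print({c: round(loss_rate[c], 2) for c in countries}, round(rho, 2))
```

A rank correlation is used here only because it is a simple, distribution-free summary of a monotonic association across a handful of countries; the study's own statistical approach may differ.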
https://tsjournal.org/index.php/jots/article/view/197
Characteristics and Prevalence of Fake Social Media Profiles with AI-generated Faces
2024-06-12T09:06:04-07:00
Kaicheng Yang yang3kc@gmail.com; Danishjeet Singh singhdan@iu.edu; Filippo Menczer fil@iu.edu

Recent advancements in generative artificial intelligence (AI) have raised concerns about its potential to create convincing fake social media accounts, but empirical evidence is lacking. In this paper, we present a systematic analysis of Twitter (X) accounts using human faces generated by Generative Adversarial Networks (GANs) for their profile pictures. We present a dataset of 1,420 such accounts and show that they are used to spread scams, disseminate spam, and amplify coordinated messages, among other inauthentic activities. Leveraging a feature of GAN-generated faces (consistent eye placement) and supplementing it with human annotation, we devise an effective method for identifying GAN-generated profiles in the wild. Applying this method to a random sample of active Twitter users, we estimate a lower bound for the prevalence of profiles using GAN-generated faces between 0.021% and 0.044%, or around 10,000 daily active accounts. These findings underscore the emerging threats posed by multimodal generative AI. We release the source code of our detection method and the data we collect to facilitate further investigation. Additionally, we provide practical heuristics to assist social media users in recognizing such accounts.

Published 2024-09-18T00:00:00-07:00. Copyright (c) 2024 Journal of Online Trust and Safety
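The detection cue named in this abstract, consistent eye placement, reflects the fact that common GAN face pipelines such as StyleGAN are trained on aligned face crops, so generated faces tend to have their eyes in nearly the same spot, while real profile photos vary widely. The authors release their own detection code; the sketch below is not that code but a minimal outline of the idea, assuming eye centers have already been extracted with any face-landmark detector, and using placeholder canonical coordinates and a placeholder threshold that would need calibration on known GAN images. A Wilson lower bound is included as one standard way to turn a count of flagged accounts in a random sample into a conservative prevalence estimate; the paper's estimator may differ.

```python
# Illustrative sketch only, not the authors' released detector.
# Canonical eye positions and the threshold are placeholders to be calibrated
# on images known to be GAN-generated.

from math import hypot, sqrt

# Placeholder canonical eye centers, as fractions of image width/height.
CANONICAL_LEFT_EYE = (0.38, 0.48)
CANONICAL_RIGHT_EYE = (0.62, 0.48)
THRESHOLD = 0.02  # max average deviation (fraction of image size) to flag

def gan_eye_score(left_eye, right_eye, width, height):
    """Average normalized distance of detected eye centers from the canonical spots."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    d_left = hypot(lx / width - CANONICAL_LEFT_EYE[0], ly / height - CANONICAL_LEFT_EYE[1])
    d_right = hypot(rx / width - CANONICAL_RIGHT_EYE[0], ry / height - CANONICAL_RIGHT_EYE[1])
    return (d_left + d_right) / 2

def looks_gan_generated(left_eye, right_eye, width, height):
    """Flag a profile picture as a GAN-face candidate for human review."""
    return gan_eye_score(left_eye, right_eye, width, height) < THRESHOLD

def wilson_lower_bound(flagged, sampled, z=1.96):
    """Lower end of a 95% Wilson interval for the flagged proportion."""
    p = flagged / sampled
    denom = 1 + z * z / sampled
    center = p + z * z / (2 * sampled)
    margin = z * sqrt(p * (1 - p) / sampled + z * z / (4 * sampled ** 2))
    return (center - margin) / denom

# Example: eye centers detected in a hypothetical 400x400 profile picture.
print(looks_gan_generated((152, 193), (247, 191), 400, 400))
# Example: 3 flagged accounts in a hypothetical random sample of 10,000 active users.
print(f"{wilson_lower_bound(3, 10_000):.5f}")
```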
https://tsjournal.org/index.php/jots/article/view/206
Algorithmic Impact Assessments at Scale: Practitioners’ Challenges and Needs
2024-06-17T10:46:36-07:00
Amar Ashar amarashar@gmail.com; Karim Ginena kginena@gmail.com; Maria Cipollone mariacip@gmail.com; Renata Barreto rbarreto@berkeley.edu; Henriette Cramer henriette.cramer@gmail.com

Algorithmic Impact Assessments (AIAs) are often suggested as a tool to help identify and evaluate actual and potential harms of algorithmic systems. While the existing literature on AIAs provides a valuable foundation, critical understanding gaps remain, including the lived experiences of practitioners who implement assessments and a lack of standardization across industry. Such gaps pose significant risks to the usefulness of assessments in the responsible development of algorithmic systems. By conducting 107 assessments with practitioners who build personalization, recommendation, and subscription systems at a large online audio-streaming platform, along with 8 semi-structured stakeholder interviews, we attempt to bridge this gap by identifying practitioners’ challenges when applying AIAs that might hinder their effectiveness. The paper analyzes whether harms described in the literature related to machine learning and recommendation systems are similar to the concerns practitioners have. We find that the challenges practitioners encounter can be grouped into three categories: technical and methods, infrastructure and operations, and resourcing and prioritization. We also describe ways for teams to more effectively mitigate concerns. This paper helps bridge gaps between the theory and practice of AIAs, advances understanding of the potential harms of algorithmic systems, and informs assessment practices to serve their intended purpose.

Published 2024-09-18T00:00:00-07:00. Copyright (c) 2024 Journal of Online Trust and Safety

https://tsjournal.org/index.php/jots/article/view/205
Factors Associated with Help-Seeking Among Online Child Sexual Abuse Material Offenders
2024-06-12T09:04:52-07:00
Tegan Insoll tegan.insoll@suojellaanlapsia.fi; Valeriia Soloveva valeriia.soloveva@suojellaanlapsia.fi; Eva Díaz Bethencourt eva.diaz.bethencourt@suojellaanlapsia.fi; Anna Katariina Ovaska anna.ovaska@suojellaanlapsia.fi; Juha Nurmi juha.nurmi@tuni.fi; Arttu Paju arttu.paju@tuni.fi; Mikko Aaltonen mikko.e.aaltonen@uef.fi; Nina Vaaranen-Valkonen nina.vaaranen-valkonen@suojellaanlapsia.fi

The proliferation of child sexual abuse material (CSAM) online is a global epidemic. Programs to prevent child sexual abuse perpetration have recently been developed, and mounting evidence suggests that these measures can mitigate the risk of abuse. This study investigates the factors associated with help-seeking among CSAM offenders to inform efforts to increase uptake and strengthen the effectiveness of such interventions. We analyze survey responses from 4,493 individuals who self-report using CSAM. Of these respondents, 55.0% report that they would like to stop using CSAM; however, only 13.8% have sought help, and only 3.2% have accessed help. The respondents most likely to want to stop using CSAM are those who report difficulties with mental health, in daily life, and related to use of CSAM. The respondents most likely to have sought treatment include those who report that they have sought contact with children, have experienced self-harm or suicidal thoughts, are in contact with other offenders, experience difficulties related to CSAM and in daily life, and view CSAM depicting toddlers and infants. We complement the findings with insights from a survey of 255 help-seeking individuals participating in an online self-help program to stop CSAM use.

Published 2024-09-18T00:00:00-07:00. Copyright (c) 2024 Journal of Online Trust and Safety