Journal of Online Trust and Safety
https://tsjournal.org/index.php/jots

The Journal of Online Trust and Safety is a cross-disciplinary, open-access, fast peer-review journal that publishes research on how consumer internet services are abused to cause harm and how to prevent those harms.

Contact: trustandsafetyjournal@stanford.edu
Issue date: Wed, 18 Sep 2024


A Survey of Scam Exposure, Victimization, Types, Vectors, and Reporting in 12 Countries
https://tsjournal.org/index.php/jots/article/view/204

Scams are a widespread issue with severe consequences for both victims and perpetrators, but existing data collection is fragmented, precluding both global and comparative local understanding. The present study addresses this gap through a nationally representative survey (n = 8,369) on scam exposure, victimization, types, vectors, and reporting in 12 countries: Belgium, Egypt, France, Hungary, Indonesia, Mexico, Romania, Slovakia, South Africa, South Korea, Sweden, and the United Kingdom. We analyze six survey questions to build a detailed quantitative picture of the scam landscape in each country and compare across countries to identify global patterns. We find, first, that residents of less affluent countries suffer financial loss from scams more often. Second, we find that the internet plays a key role in scams across the globe, and that GNI per capita is strongly associated with specific scam types and contact vectors. Third, we find widespread underreporting, with residents of less affluent countries being less likely to know how to report a scam. Our findings contribute valuable insights for researchers, practitioners, and policymakers in the online fraud and scam prevention space.

Authors: Mo Houtti, Abhishek Roy, Narsi Gangula, Ashley Marie Walker
Copyright (c) 2024 Journal of Online Trust and Safety. License: CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0)
Published: 18 Sep 2024
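The comparative findings above rest on associating country-level affluence with scam outcomes. As a rough illustration of that kind of analysis (not the authors' published code or data), the sketch below computes a Spearman rank correlation between approximate GNI per capita and per-country victimization rates for the 12 surveyed countries; the GNI figures are ballpark approximations and the loss rates are invented placeholders.

```python
# Illustrative sketch of the cross-country association reported above:
# rank-correlate GNI per capita with the share of respondents reporting
# financial loss from scams. GNI figures are rough approximations; the
# loss rates are invented placeholders, NOT the paper's data.
from scipy.stats import spearmanr

surveyed = {
    # country: (approx. GNI per capita in USD, hypothetical loss rate)
    "Belgium":        (54_000, 0.09),
    "Egypt":          ( 4_100, 0.24),
    "France":         (45_000, 0.10),
    "Hungary":        (19_000, 0.15),
    "Indonesia":      ( 4_600, 0.22),
    "Mexico":         (10_800, 0.19),
    "Romania":        (15_700, 0.16),
    "Slovakia":       (21_000, 0.14),
    "South Africa":   ( 6_800, 0.21),
    "South Korea":    (35_000, 0.11),
    "Sweden":         (62_000, 0.08),
    "United Kingdom": (48_000, 0.10),
}

gni = [g for g, _ in surveyed.values()]
loss = [l for _, l in surveyed.values()]

# A strongly negative rho would mirror the paper's first finding:
# residents of less affluent countries lose money to scams more often.
rho, p = spearmanr(gni, loss)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```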
Characteristics and Prevalence of Fake Social Media Profiles with AI-generated Faces
https://tsjournal.org/index.php/jots/article/view/197

Recent advancements in generative artificial intelligence (AI) have raised concerns about its potential to create convincing fake social media accounts, but empirical evidence is lacking. In this paper, we present a systematic analysis of Twitter (X) accounts that use human faces generated by Generative Adversarial Networks (GANs) as their profile pictures. We present a dataset of 1,420 such accounts and show that they are used to spread scams, disseminate spam, and amplify coordinated messages, among other inauthentic activities. Leveraging a feature of GAN-generated faces (consistent eye placement) and supplementing it with human annotation, we devise an effective method for identifying GAN-generated profiles in the wild. Applying this method to a random sample of active Twitter users, we estimate a lower bound on the prevalence of profiles using GAN-generated faces of between 0.021% and 0.044%, or around 10,000 daily active accounts. These findings underscore the emerging threats posed by multimodal generative AI. We release the source code of our detection method and the data we collect to facilitate further investigation. Additionally, we provide practical heuristics to help social media users recognize such accounts.

Authors: Kaicheng Yang, Danishjeet Singh, Filippo Menczer
Copyright (c) 2024 Journal of Online Trust and Safety. License: CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0)
Published: 18 Sep 2024
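The detection method described above hinges on the fact that GAN face generators such as StyleGAN produce aligned portraits, so the eyes sit at nearly fixed positions in the frame. Below is a minimal sketch of that heuristic using the open-source face_recognition library; the reference coordinates and tolerance are assumed for illustration and are not the authors' published values.

```python
# Minimal sketch of the eye-placement heuristic; reference positions
# and tolerance below are illustrative guesses, not the paper's values.
import numpy as np
import face_recognition

# Assumed eye-center positions as fractions of image width/height,
# reflecting the alignment GAN generators impose on face images.
REF_LEFT = np.array([0.38, 0.48])
REF_RIGHT = np.array([0.62, 0.48])
TOLERANCE = 0.03  # assumed maximum normalized deviation

def looks_gan_generated(image_path: str) -> bool:
    image = face_recognition.load_image_file(image_path)
    h, w = image.shape[:2]
    landmarks = face_recognition.face_landmarks(image)
    if len(landmarks) != 1:
        return False  # heuristic only applies to single-face portraits
    # Average each eye's landmark points to get its center, then
    # normalize by image size so the check is resolution-independent.
    left = np.mean(landmarks[0]["left_eye"], axis=0) / (w, h)
    right = np.mean(landmarks[0]["right_eye"], axis=0) / (w, h)
    return bool(np.linalg.norm(left - REF_LEFT) < TOLERANCE
                and np.linalg.norm(right - REF_RIGHT) < TOLERANCE)
```

Because an ordinary portrait photo can coincidentally pass such a check, the abstract notes that the automated signal is supplemented with human annotation before profiles are labeled.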
Algorithmic Impact Assessments at Scale: Practitioners' Challenges and Needs
https://tsjournal.org/index.php/jots/article/view/206

Algorithmic Impact Assessments (AIAs) are often suggested as a tool to help identify and evaluate actual and potential harms of algorithmic systems. While the existing literature on AIAs provides a valuable foundation, critical gaps in understanding remain, including the lived experiences of practitioners who implement assessments and a lack of standardization across industry. Such gaps pose significant risks to the usefulness of assessments in the responsible development of algorithmic systems. By conducting 107 assessments with practitioners who build personalization, recommendation, and subscription systems at a large online audio-streaming platform, together with 8 semi-structured stakeholder interviews, we attempt to bridge this gap by identifying the challenges practitioners face in applying AIAs that might hinder their effectiveness. The paper analyzes whether the harms described in the literature on machine learning and recommendation systems match the concerns practitioners raise. We find that the challenges practitioners encounter fall into three categories: technical and methods, infrastructure and operations, and resourcing and prioritization. We also describe ways for teams to mitigate concerns more effectively. This paper helps bridge gaps between the theory and practice of AIAs, advances understanding of the potential harms of algorithmic systems, and informs assessment practices so that they serve their intended purpose.

Authors: Amar Ashar, Karim Ginena, Maria Cipollone, Renata Barreto, Henriette Cramer
Copyright (c) 2024 Journal of Online Trust and Safety. License: CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0)
Published: 18 Sep 2024


Factors Associated with Help-Seeking Among Online Child Sexual Abuse Material Offenders
https://tsjournal.org/index.php/jots/article/view/205

The proliferation of child sexual abuse material (CSAM) online is a global epidemic. Programs to prevent child sexual abuse perpetration have recently been developed, and mounting evidence suggests that these measures can mitigate the risk of abuse. This study investigates the factors associated with help-seeking among CSAM offenders to inform efforts to increase the uptake and strengthen the effectiveness of such interventions. We analyze survey responses from 4,493 individuals who self-report using CSAM. 55.0% of respondents report that they would like to stop using CSAM; however, only 13.8% have sought help, and only 3.2% have accessed help. The respondents most likely to want to stop using CSAM are those who report difficulties with mental health, in daily life, and related to their use of CSAM. The respondents most likely to have sought treatment include those who report that they have sought contact with children, have experienced self-harm or suicidal thoughts, are in contact with other offenders, experience difficulties related to CSAM and in daily life, and view CSAM depicting toddlers and infants. We complement these findings with insights from a survey of 255 help-seeking individuals participating in an online self-help program to stop CSAM use.

Authors: Tegan Insoll, Valeriia Soloveva, Eva Díaz Bethencourt, Anna Katariina Ovaska, Juha Nurmi, Arttu Paju, Mikko Aaltonen, Nina Vaaranen-Valkonen
Copyright (c) 2024 Journal of Online Trust and Safety. License: CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0)
Published: 18 Sep 2024


Nuances and Challenges of Moderating a Code Collaboration Platform
https://tsjournal.org/index.php/jots/article/view/213

Authors: Margaret Tucker, Rose Coogan, Will Pace
Copyright (c) 2024 Journal of Online Trust and Safety. License: CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0)
Published: 18 Sep 2024


A Multi-Stakeholder Approach for Leveraging Data Portability to Support Research on the Digital Information Environment
https://tsjournal.org/index.php/jots/article/view/215

Authors: Zeve Sanderson, Lama Mohammed
Copyright (c) 2024 Journal of Online Trust and Safety. License: CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0)
Published: 18 Sep 2024