New York Times Discusses Journal Commentary
A September 2023 article in the New York Times about fact checking discussed a Journal of Online Trust and Safety commentary titled "Future Challenges for Online, Crowdsourced Content Moderation: Evidence from Twitter's Community Notes."
MIT Technology Review Discusses Journal Commentary
An October 2023 article in the MIT Technology Review about artificial intelligence discussed a Journal of Online Trust and Safety commentary titled "Toward Better Automated Content Moderation in Low-Resource Languages."
Inside Higher Ed Cites Journal Article
An April 2023 article in Inside Higher Ed about military files leaked on Discord cited a Journal of Online Trust and Safety article titled "Pride and Professionalization in Volunteer Moderation: Lessons for Effective Platform-User Collaboration."
The Federal Trade Commission Cites Journal Articles in its Report on Combating Online Harms
In June 2022, the Federal Trade Commission issued its "Combatting Online Harms Through Innovation" report to Congress. The report cites Journal articles on perceptual hashing, transparency reporting, and crowdsourced moderation.
Journal Article Evaluating YouTube's Recommendation Algorithm Receives Widespread News Coverage
In September 2022, the article "Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden" received news coverage from NBC, The Verge, Tech Policy Press, and TechCrunch. The article found that YouTube "users who were more skeptical of the [2020 U.S.] election's legitimacy were more likely to be recommended content that featured narratives about the legitimacy of the election."
The Guardian Publishes a Story on Study Published in the Journal
On March 1, 2022, The Guardian published an article summarizing findings from "Risk Factors for Child Sexual Abuse Material Users Contacting Children Online," an article in the Journal of Online Trust and Safety. The journal article anonymously surveyed individuals searching for child sexual abuse material on dark web search engines. The Guardian article describes the paper as "the largest ever survey on the thoughts and behaviours of people who watch child sexual abuse material (CSAM) online."
The Washington Post Highlights Findings from Crowdsourced Fact-checking Study in the Journal of Online Trust and Safety
On March 1, 2022, The Washington Post published an article on Twitter's Birdwatch, a project to crowdsource fact-checking on the platform. To assess how reliable crowdsourced fact-checking efforts are, authors Will Oremus and Jeremy B. Merrill spoke with Joshua Tucker (NYU) about a paper he co-authored in the Journal titled "Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-checking." From the story:
“Crowdsourcing fact checks can be dicey if not done carefully, said Joshua Tucker, co-director for the NYU Center for Social Media and Politics. He co-authored a recent study, published in the Journal of Online Trust and Safety, which found that people struggled to identify false news stories, performing no better than random guessing in many contexts. The study did not attempt to replicate Birdwatch’s approach, which relies on self-selecting volunteers, but it did indicate that certain more sophisticated approaches to crowdsourcing might have some potential as part of a larger fact-checking project — especially if that project includes professional fact-checkers, which Birdwatch so far does not.”
Twitter Cites Journal of Online Trust & Safety Article in Announcement of Future Transparency Plans
On December 3, 2021, Twitter announced its transparency plans to expand access to data beyond information operations, directly citing research published by Camille Francois and evelyn douek in the Journal of Online Trust and Safety. In their paper, Francois and douek argue that information operations disclosure reporting regimes like those at Facebook, Twitter, and Google run the risk of becoming "isolated accidents of history...rather than the beginning of a new era of transparency." The paper warns that the outsized emphasis on certain categories of online influence operations comes at the expense of "examining the vast corpus of 'closest cousin' online manipulative behaviors." In Twitter's blog post announcing the company's plans for 2022 and beyond, Yoel Roth and Vijaya Gadde wrote that "information operations are just one area of public concern," citing Francois and douek's paper in agreement that other content moderation domains deserve attention as well.
Journal Commentary Featured in Senate Testimony
Stanford Professor Nathaniel Persily included an article published in the Journal of Online Trust and Safety in his recent testimony before the U.S. Senate Committee on Homeland Security and Governmental Affairs. The article, "A Proposal for Researcher Access to Platform Data: The Platform Transparency and Accountability Act," argues that Congress needs to act immediately to develop "an unprecedented system of corporate data-sharing, mandated by government for independent research in the public interest." Persily's work informed the Platform Accountability and Transparency Act, which would increase transparency into social media platforms and was recently introduced by Senators Chris Coons, Rob Portman, and Amy Klobuchar.
The Journal of Online Trust and Safety Featured in the Platformer Newsletter
Journal co-editor Shelby Grossman gave an exclusive first look at the Journal's first issue in an interview with journalist Casey Newton, published in the Platformer newsletter.