Abstract
Online platforms continue to grapple with the spread of false information about the COVID-19 pandemic, especially about the safe and effective COVID-19 vaccines. Some users who disseminate vaccine misinformation report being bullied by other users in response to their anti-vaccine messages. Such reports pit a platform’s prerogative to reduce the spread of misinformation against its obligation to protect users from online harassment. To resolve this tension, we present a framework that evaluates user interactions on three criteria: intensity, specificity, and persistence. This approach can help content moderators determine when criticism of anti-vaccine messages by other users crosses into harassment. After exploring the framework and its theoretical underpinnings, we report the results of an experimental survey (n=21) comparing moderation decisions made under a new policy framework for our social media platform, Patio, against those made solely under our existing community guidelines. The framework yields a statistically significant improvement in the overall accuracy and precision of moderation decisions involving potential harassment of users who spread COVID-19 vaccine misinformation. We conclude by considering the limitations of our analysis and avenues for further research.