Online platforms are increasingly held to account for the content their users post. Content regulation has long been a secondary concern for platforms, but as they have turned their attention to content governance, they have typically drawn their regulatory model from offline legal frameworks built around sanctioning and punishing rule violators. This study takes an alternative approach, also drawn from legal scholarship, based on motivating voluntary rule following by emphasizing the fairness of platform rules and the justice of the processes used to communicate content moderation decisions. Using a survey (n = 10,487) sent to rule violators on Twitter, paired with an analysis of participants’ platform behaviors, this study examines the relationship between people’s judgments of the procedural justice of an enforcement action and their likelihood of reoffending. We find that those who felt more fairly treated during their enforcement were less likely to recidivate (beta = -.05, p < .001). This finding, along with the study’s others, points to an opportunity for platforms to place a stronger focus on people’s experience with enforcement systems as a potential pathway for reducing recidivism.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright (c) 2022 Journal of Online Trust and Safety