Intense public and regulatory pressure following revelations of Russian interference in the 2016 US election led social media platforms to develop new policies to demonstrate how they had addressed the troll-shaped blind spot in their content moderation practices. This moment also gave rise to new transparency regimes that endure to this day and have unique characteristics, notably: the release of regular public reports of enforcement measures; the provision of underlying data to external stakeholders and, sometimes, the public; and collaboration across industry and with government. Despite these positive features, platform policies and transparency regimes related to information operations remain poorly understood. Underappreciated ambiguities and inconsistencies in platforms’ work in this area create perverse incentives for enforcement and distort public understanding of information operations. Highlighting these weaknesses matters because platforms are expanding content moderation practices in ways that build on the methods used in this domain. Yet even as these practices expand, platforms are not continuing to invest in the accompanying transparency regimes, and the early promise and momentum behind these pockets of transparency are being lost as public and regulatory focus turns to other areas of content moderation.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.