Do Users Want Platform Moderation or Individual Control? Examining the Role of Third-Person Effects and Free Speech Support in Shaping Moderation Preferences. Shagun Jhaver, Amy Zhang. arXiv, Jan 5, 2023. https://arxiv.org/abs/2301.02208
Abstract:
Online platforms employ commercial content moderators and use automated
systems to identify and remove the most blatantly inappropriate content
for all users. They also provide moderation settings that let users
personalize their preferences for which posts they want to avoid seeing.
This study presents the results of a nationally representative survey
of 984 US adults. We examine how users would prefer three categories
of norm-violating content (hate speech, sexually explicit content, and
violent content) to be regulated. Specifically, we analyze whether users
prefer platforms to remove such content for all users or leave it up to
each user to decide whether and how much to moderate it.
We explore how presumed effects on others (PME3) and support for
freedom of expression, the two factors prior literature identifies as
critical to attitudes toward social media censorship, shape user
attitudes about this choice. We find that perceived negative effects on
others and support for free speech are significant predictors of a
preference for personal moderation settings over platform-directed
moderation for each speech category. Our findings show that platform
governance initiatives need to account for both the actual and
perceived media effects of norm-violating speech categories to increase
user satisfaction. Our analysis also suggests that people see personal
moderation tools not as an infringement on others' free speech but as a
means of asserting greater agency in shaping their social media feeds.
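The abstract reports that PME3 and free speech support are "significant predictors" of moderation preference but does not describe the statistical model. As a minimal illustration of how such a predictor analysis is commonly run, here is a sketch using a logistic regression on a binary preference outcome; the variable names, scales, and simulated data below are assumptions for illustration, not the authors' code or data.

    # Hypothetical sketch: regress a binary moderation preference on
    # PME3 and free speech support. All data here are simulated; the
    # paper's actual variables and model specification may differ.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 984  # matches the reported sample size; responses are simulated

    df = pd.DataFrame({
        # Assumed 1-7 scale: presumed negative effect of hate speech on others
        "pme3_hate": rng.integers(1, 8, n),
        # Assumed 1-7 scale: support for freedom of expression
        "free_speech": rng.integers(1, 8, n),
    })

    # Simulated binary outcome: 1 = prefers personal moderation settings,
    # 0 = prefers platform-directed removal for all users.
    logit = 0.3 * df["pme3_hate"] + 0.25 * df["free_speech"] - 3.0
    df["prefers_personal"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    # Fit the logistic regression and inspect coefficient significance.
    model = smf.logit("prefers_personal ~ pme3_hate + free_speech", data=df).fit()
    print(model.summary())

In a sketch like this, positive and statistically significant coefficients on both predictors would correspond to the abstract's finding that higher perceived harm to others and stronger free speech support each predict preferring personal moderation settings.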