Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization. Mark Ledwich, Anna Zaitsev. arXiv Dec 24 2019. https://arxiv.org/abs/1912.11211
Abstract: The role that YouTube and its behind-the-scenes recommendation algorithm play in encouraging online radicalization has been suggested by journalists and academics alike. This study directly quantifies these claims by examining the role that YouTube's algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas and analyze the flow of recommendation traffic between each group. After conducting a detailed analysis of the recommendations received by each channel type, we refute the popular radicalization claims. On the contrary, these data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels, with a slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube's recommendation algorithm does not promote inflammatory or radicalized content, as previously claimed by several outlets.
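The analysis the abstract describes amounts to aggregating scraped recommendations into a category-to-category flow matrix: for each labeled channel group, where does the algorithm send viewers? The sketch below illustrates one way such an aggregation could be done; the column names (`source_channel`, `recommended_channel`), the `channel_category` mapping, and the example labels are hypothetical placeholders, not the authors' actual data schema or pipeline.

```python
import pandas as pd

# Hypothetical input: one row per scraped recommendation, linking the
# channel of the watched video to the channel of the recommended video.
recs = pd.DataFrame({
    "source_channel":      ["ChannelA", "ChannelA", "ChannelB", "ChannelC"],
    "recommended_channel": ["ChannelB", "ChannelC", "ChannelB", "ChannelA"],
})

# Hypothetical channel -> political category labels
# (the paper hand-labels roughly 800 channels).
channel_category = {
    "ChannelA": "Partisan Right",
    "ChannelB": "Mainstream Media",
    "ChannelC": "Partisan Left",
}

recs["source_cat"] = recs["source_channel"].map(channel_category)
recs["target_cat"] = recs["recommended_channel"].map(channel_category)

# Flow matrix: rows are the category the viewer starts in,
# columns are the category the algorithm recommends,
# normalized so each row sums to 1.
flow = pd.crosstab(recs["source_cat"], recs["target_cat"], normalize="index")
print(flow)
```

Row-normalizing the matrix makes it easy to see, for each starting category, what share of recommendation traffic flows towards mainstream versus fringe groups.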
V. LIMITATIONS AND CONCLUSIONS
There are several limitations to our study that must be considered in future work. First, the main limitation is the anonymity of the data set and the recommendations: the recommendations the algorithm provided were not based on videos watched over extended periods. We expect, and have anecdotally observed, that the recommendation algorithm becomes more fine-tuned and context-specific after each video that is watched. We currently have no way of collecting such information from individual user accounts, but our study shows that the anonymous user is generally directed towards more mainstream content than extreme content. Similarly, anecdotal evidence from a personal account shows that YouTube suggests content very similar to previously watched videos while also directing traffic towards more mainstream channels. That is, contrary to prior claims, the algorithm does not appear to stray into suggesting videos several degrees away from a user's normal viewing habits.

Second, the video categorization in our study is partially subjective. Although we have taken several measures to bring objectivity into the classification and analyzed the agreement between labelers by calculating intraclass correlation coefficients, there is no way to eliminate bias entirely. There is always a possibility of disagreement and ambiguity in the categorization of political content. We therefore welcome future suggestions to help us improve our classification.

In conclusion, our study shows that one cannot proclaim that YouTube's algorithm, in its current state, is leading users towards more radical content. There is clearly plenty of content on YouTube that one might view as radicalizing or inflammatory. However, the responsibility for that content lies with the content creators and the consumers themselves. Shifting the responsibility for radicalization from users and content creators to YouTube is not supported by our data; the data show that YouTube does the exact opposite of what the radicalization claims suggest. YouTube engineers have said that 70 percent of all views result from recommendations [38]. Combining this remark with the fact that the algorithm clearly favors mainstream media channels, we believe it is fair to state that the majority of views are directed towards left-leaning mainstream content. We agree with Munger and Phillips (2019) that the scrutiny for radicalization should be directed at the content creators and the supply of and demand for radical content, not at the YouTube algorithm. If anything, the current iteration of the recommendation algorithm is working against the extremists.

Nevertheless, YouTube has conducted several deletion sweeps targeting extremist content [29]. These actions might be ill-advised. Deleting extremist channels from YouTube does not reduce the supply of such content [50]; the banned content creators migrate to other, more permissive video hosting sites. For example, a few channels that were initially included in the Alt-right category of the Ribeiro et al. (2019) paper are now gone from YouTube but still exist on alternative platforms such as BitChute. The danger we see here is that there are no algorithms directing viewers from extremist content towards more centrist materials on these alternative platforms or the Dark Web, making deradicalization efforts more difficult [51]. We believe that YouTube has the potential to act as a deradicalization force. However, it seems that the company will first have to decide whether the platform is meant for independent YouTubers or whether it is just another outlet for mainstream media.
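The limitations above mention checking labeler agreement with intraclass correlation coefficients. The paper does not specify its tooling, so the snippet below is only a sketch of how such a check could be run, here using the `pingouin` library on a hypothetical long-format table of labeler ratings; the column names and the numeric rating scale are illustrative assumptions.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format ratings: each labeler scores each channel
# on a numeric left-right scale. Column names are illustrative only.
ratings = pd.DataFrame({
    "channel": ["ChannelA"] * 3 + ["ChannelB"] * 3 + ["ChannelC"] * 3,
    "labeler": ["L1", "L2", "L3"] * 3,
    "score":   [2, 2, 3, 5, 4, 5, 1, 1, 2],
})

# Intraclass correlation coefficients across labelers; ICC2/ICC2k
# (two-way random effects) are common choices when the same fixed
# set of raters scores every target.
icc = pg.intraclass_corr(
    data=ratings, targets="channel", raters="labeler", ratings="score"
)
print(icc[["Type", "ICC", "CI95%"]])
```

A high ICC would indicate that labelers largely agree on channel placement, which is the kind of evidence the authors cite for the robustness of their categorization.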
A. The Visualization and Other Resources
Our data, channel categorization, and data analysis used in this study are all available on GitHub for anyone to see. Please visit the GitHub page for links to the data and the data visualization. We welcome comments, feedback, and critique on the channel categorization as well as the other methods applied in this study.
B. Publication Plan
This paper has been submitted for consideration at First Monday.
Response to critique on our paper "Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization": https://medium.com/@anna.zaitsev/response-to-critique-on-our-paper-algorithmic-extremism-examining-youtubes-rabbit-hole-of-8b53611ce903