All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation. Sarah Kreps, R. Miles McCain, and Miles Brundage. Journal of Experimental Political Science, November 20, 2020. https://doi.org/10.1017/XPS.2020.37
Rolf Degen's take: https://twitter.com/DegenRolf/status/1329736218721050624
Abstract: Online misinformation has become a constant; only the way actors create and distribute that information is changing. Advances in artificial intelligence (AI) such as GPT-2 mean that actors can now synthetically generate text in ways that mimic the style and substance of human-created news stories. We carried out three original experiments to study whether these AI-generated texts are credible and can influence opinions on foreign policy. The first evaluated human perceptions of AI-generated text relative to an original story. The second investigated the interaction between partisanship and AI-generated news. The third examined the distributions of perceived credibility across different AI model sizes. We find that individuals are largely incapable of distinguishing between AI- and human-generated text; partisanship affects the perceived credibility of the story; and exposure to the text does little to change individuals’ policy views. The findings have important implications in understanding AI in online misinformation campaigns.
Note:
The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: doi:10.7910/DVN/1XVYU3. This research was conducted using Sarah Kreps’ personal research funds. Early access to GPT-2 was provided in-kind by OpenAI under a non-disclosure agreement. Sarah Kreps and Miles McCain otherwise have no relationships with interested parties. Miles Brundage is employed by OpenAI.