Rational Silence and False Polarization: How Viewpoint Organizations and Recommender Systems Distort the Expression of Public Opinion
Abstract
Social media platforms are one of the most important domains in which artificial intelligence (AI) has already transformed the nature of economic and social interaction. AI enables the massive scale and highly personalized nature of online information sharing that we now take for granted. Extensive attention has been devoted to the polarization that social media platforms appear to facilitate. However, a key implication of the transformation we are experiencing due to these AI-powered platforms has received much less attention: how platforms impact what observers of online discourse come to believe about community views. These observers include policymakers and legislators, who look to social media to gauge the prospects for policy and legislative change, as well as developers of AI models trained on large-scale internet data, whose outputs may similarly reflect a distorted view of public opinion. In this paper, we present a nested game-theoretic model to show how observed online opinion is produced by the interaction of the decisions made by users about whether and with what rhetorical intensity to share their opinions on a platform, the efforts of viewpoint organizations (such as traditional media and advocacy organizations) that seek to encourage or discourage opinion-sharing online, and the operation of AI-powered recommender systems controlled by social media platforms. We show that signals from ideological viewpoint organizations encourage an increase in rhetorical intensity, leading to the rational silence of moderate users. This, in turn, creates a polarized impression of where average opinions lie. We also show that this observed polarization can be amplified by recommender systems that, pursuant to a platform's incentive to maximize engagement, encourage the formation of viewpoint communities online that end up seeing a skewed sample of opinion.
Unlike in existing models, these well-known online phenomena are attributed here not to distortion in the formation of opinions, nor to the seeking out of like-minded others, but rather to the interaction of the incentives of users, viewpoint organizations, and platforms implementing recommender systems. In addition to showing how these interactions can play out in simulations, we also identify practical strategies platforms can implement, such as reducing exposure to signals from ideological viewpoint organizations and a tailored approach to content moderation.
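The rational-silence mechanism sketched in the abstract can be illustrated with a toy simulation. This is not the paper's nested game-theoretic model; it is a minimal sketch under assumed parameters (a rhetorical-intensity norm `tau` set by viewpoint-organization signals, a per-unit cost `c` of exaggerating beyond one's true opinion, and a posting benefit `b`, all hypothetical). Under these assumptions, moderates for whom exaggeration is too costly stay silent, so the average extremity of observed posts exceeds the average extremity of true opinions:

```python
import random

def simulate(n=10000, tau=0.6, b=0.1, c=0.5, seed=0):
    """Toy rational-silence model (illustrative only; not the paper's model).

    Users hold opinions x drawn uniformly from [-1, 1]. A viewpoint-organization
    signal sets a rhetorical-intensity norm tau: posting requires |intensity| >= tau.
    Exaggerating beyond one's true opinion costs c per unit of exaggeration, and
    posting yields a fixed benefit b, so a user posts iff b >= c * max(tau - |x|, 0).
    """
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n)]
    posted = []
    for x in opinions:
        exaggeration = max(tau - abs(x), 0.0)
        if b >= c * exaggeration:           # posting is individually rational
            intensity = max(abs(x), tau)    # meet the intensity norm
            posted.append(intensity if x >= 0 else -intensity)
    true_extremity = sum(abs(x) for x in opinions) / len(opinions)
    observed_extremity = sum(abs(s) for s in posted) / len(posted)
    silent_share = 1 - len(posted) / n
    return true_extremity, observed_extremity, silent_share

true_e, obs_e, silent = simulate()
print(f"true mean |opinion|  = {true_e:.2f}")   # roughly 0.50 for uniform opinions
print(f"observed mean |post| = {obs_e:.2f}")    # noticeably higher: observed polarization
print(f"share staying silent = {silent:.2f}")   # the rationally silent moderates
```

With these parameters, users post only if |x| >= tau - b/c = 0.4, so about 40% of (moderate) users stay silent and observers see a sample whose average extremity is well above the population's, even though no one's underlying opinion has changed.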