With the advent of feed algorithms, social media streams are no longer simple time-ordered lists of posts from accounts users follow. Instead, algorithms curate what users see by inserting posts from accounts they do not follow and reshuffling the order of content, which can push down and thereby effectively hide posts from followed accounts. Platforms deploy these personalised feeds primarily to maximise users’ engagement, though potentially also for other purposes. Concerns that social media algorithms may distort attitudes and affect social and political outcomes are widespread (e.g. Pariser 2011, Settle 2018, Sunstein 2018, Persily and Tucker 2020, Rose-Stockwell 2023, Braghieri et al. 2025). These algorithms may promote content that grabs users’ attention, such as extreme and toxic posts. They may also prioritise posts that reinforce users’ existing beliefs and misconceptions, contributing to the creation of polarised information environments, so-called filter bubbles.
Yet, surprisingly, prior large-scale rigorous experiments found that political attitudes were unaffected by switching the feed algorithms off (Guess et al. 2023). In particular, a study conducted with Meta during the 2020 US election showed that replacing users’ algorithmically curated feed with a chronological one changed what users saw on their feeds and decreased their engagement, but did not measurably affect political attitudes, partisanship, or polarisation.
This finding leaves important open questions: are the commonly expressed worries about the algorithms unwarranted? Could exposure to algorithmic feeds have effects on attitudes that persist even after the algorithm is turned off, implying that there is an asymmetry in the effects of switching an algorithmic feed on versus off? Do the results generalise to other platforms? Should one expect some political attitudes to be more malleable and more easily affected by the algorithm than others in highly polarised and partisan environments?
The experiment
In a new paper (Gauthier et al. 2026), we present the results from an experiment we conducted on X (formerly Twitter) in the summer of 2023. X offered a rare opportunity for studying feed algorithms without platform cooperation. Users could choose between two feeds: a chronological feed (‘Following’ tab) that showed posts from accounts users followed in reverse chronological order, and an algorithmic feed (‘For you’ tab) that both reordered content and added posts from accounts users did not follow. We recruited about 5,000 active X users, randomly assigned them to either the algorithmic or the chronological feed, and paid them to stay on their assigned feed setting for seven weeks.
Some participants were thus required to switch feed settings, while others kept their previously used feed setting. This design allowed us to study two distinct treatments: switching the algorithm on for users who had previously relied on a chronological feed, and switching it off for users who had previously used the algorithmic feed. We measured the effects of this intervention on user engagement, political views, and choices of which accounts to follow.
Switching the algorithm on moved opinions toward the right; switching it off had no effect
First, as one would expect given that the algorithm maximises engagement, users who were switched from the chronological to the algorithmic feed spent more time on X than those who remained on the chronological feed. Our second finding is new relative to the prior literature: political attitudes were strongly and significantly affected by switching the algorithm on. Exposure to the algorithmic feed shifted users’ political views in a pro-Republican direction. After seven weeks of experiment-induced exposure to the algorithmic feed, these users were more likely to prioritise policy issues typically emphasised by Republicans, such as inflation, immigration, and crime, over policy issues typically emphasised by Democrats, such as healthcare and education, compared to users who stayed on the chronological feed.
They were also, on average, more inclined to consider criminal investigations into Donald Trump unacceptable, viewing them as undermining democracy and the rule of law. They were also more likely to take a pro-Kremlin stance regarding Russia’s invasion of Ukraine and express negative sentiments towards Ukrainian leadership and Joe Biden’s support of Ukraine. We illustrate some of these effects with selected outcomes in panel (a) of Figure 1.
Figure 1 Effects of switching the algorithm on and off on engagement and selected political attitudes
(a) Users initially on chronological feed switched to algorithmic feed
(b) Users initially on algorithmic feed switched to chronological feed
Notes: Selected attitudinal outcomes. Panel (a) shows the effects of switching users from the chronological to the algorithmic feed; panel (b) shows the effects of switching users from the algorithmic to the chronological feed. Bars show mean survey outcomes (engagement with X, policy priorities, opinions about criminal investigations into US President Donald Trump, and sentiments towards Ukrainian President Volodymyr Zelenskyy) after seven weeks of assigned feed exposure, grouped by users’ initial feed (panels a and b) and by treatment feed (grey and red bars). 95% confidence intervals reported. For details, see Gauthier et al. (2026).
Source: Gauthier et al. (2026).
These results are driven by effects among users who self-reported as Republican or Independent in the pre-treatment survey, consistent with a common finding in the persuasion literature that persuasion is most effective for positively predisposed audiences (e.g. Adena et al. 2015).
Equally striking is what we do not find. First, switching users from the algorithmic feed to the chronological feed had essentially no effect on political attitudes (Figure 1, panel (b)). This is fully consistent with the Meta study (Guess et al. 2023) and suggests that this null result generalises across platforms.
Second, we found no effect on self-reported partisanship or polarisation for either switching the algorithm on or off. This suggests that algorithms may change views on current policy issues and policy priorities, but not users’ more rigid partisan identity.
How algorithms leave a lasting footprint
At first glance, the asymmetry of the effects of switching the algorithm on and off on political attitudes is puzzling. If algorithmic curation pushes opinions in one direction, why does removing it not reverse those effects? The answer lies in how the algorithm shapes users’ behaviour.
To understand the mechanism, we analysed both the content shown to users on their feed and the accounts they chose to follow. First, we asked users to run a purpose-built Google Chrome extension that downloaded the content of their feeds under both feed settings. These data provided direct evidence of what the X algorithm promoted in the summer of 2023. Compared with the chronological feed, the algorithmic feed showed many more posts that had already generated high engagement (likes, comments, and reposts).
With regard to politics, the algorithmic feed had a significantly higher share of political content, and within that, it prioritised right-wing content much more than left-wing content. It showed significantly more posts from political activists (defined as regular users who post a lot about politics and who could not be classified as media, governments, or organisations), both on the right and on the left, but showed fewer posts from traditional news outlets, also both on the right and on the left. Even though we found substantial heterogeneity in the share of right-wing content in the feeds of Republican-leaning and Democrat-leaning users, the share of right-wing content among all political content was significantly higher in the algorithmic feed than in the chronological feed for both groups of users. We illustrate which content the algorithm promotes in Figure 2. (The content differences between chronological and algorithmic feed settings are the same irrespective of whether we include user fixed effects.)
Figure 2 What the algorithm promotes, by self-declared partisanship
(a) Democrats
(b) Republicans and Independents
Notes: Content of feeds, by self-declared partisanship. Average content shown to users in each feed setting: the chronological feed (grey) and the algorithmic feed (red). 95% confidence intervals reported. For details, see Gauthier et al. (2026).
Source: Gauthier et al. (2026).
Second, we find that exposure to the algorithm changes which accounts users choose to follow. Users who switched to the algorithmic feed became more likely to follow political activist accounts, especially right-wing activists. In contrast, we see no changes in followed accounts for users who switched to the chronological feed. This explains the asymmetry in effects: the algorithm nudges users toward new sources, and users continue to follow them even after the algorithm is switched off (i.e. they do not actively unfollow those accounts). The influence of these sources thus persists even when the algorithmic feed is no longer in use.
Implications for policy and platform design
These results carry important implications for the debate about regulating social media algorithms.
Our results strongly suggest that social media feed algorithms are not politically neutral. They can influence what people believe, and those effects may outlast the algorithm itself because they change users’ online behaviour, most notably their choice of which accounts to follow. This calls for a serious discussion about the regulation of feed algorithms.
Furthermore, our findings highlight that algorithms can shape political attitudes without increasing self-reported partisan polarisation or changing partisan identity, at least not in the short run. This challenges the common tendency to equate political influence solely with polarisation. Subtle shifts in issue priorities and beliefs about current events may be just as consequential for democratic outcomes. Further, if people use these platforms over years, one cannot rule out that influence on such priorities and beliefs accumulates over time and eventually also changes more deeply held political identities.
Finally, our study underscores the importance of studying platforms independently and in real-world settings. As algorithms, content, and user behaviour may evolve rapidly, and their effects depend on platform-specific design choices and incentives, systematic monitoring of social media algorithms is needed (Aridor et al. 2025).
References
Adena, M, R Enikolopov, M Petrova, V Santarosa, and E Zhuravskaya (2015), “Radio and the rise of the Nazis in prewar Germany”, The Quarterly Journal of Economics 130(4): 1885–940.
Aridor, G, R Jiménez-Durán, R Levy, and L Song (2025), “A practical guide to running social media experiments”, VoxEU.org, 8 June.
Braghieri, L, S Eichmeyer, R Levy, and M Mobius (2025), “Article-level slant and polarisation of news consumption on social media”, VoxEU.org, 17 April.
Gauthier, G, R Hodler, P Widmer, and E Zhuravskaya (2026), “The political effects of X’s feed algorithm”, Nature, doi:10.1038/s41586-026-10098-2.
Guess, A, N Malhotra, J Pan, et al. (2023), “How do social media feed algorithms affect attitudes and behavior in an election campaign?”, Science 381(6656): 398–404.
Pariser, E (2011), The filter bubble: How the new personalized web is changing what we read and how we think, Penguin Press.
Persily, N, and J Tucker, eds. (2020), Social media and democracy: The state of the field, prospects for reform, Cambridge University Press.
Rose-Stockwell, T (2023), Outrage machine: How tech amplifies discontent, disrupts democracy – and what we can do about it, Hachette Books.
Settle, J E (2018), Frenemies: How social media polarizes America, Cambridge University Press.
Sunstein, C R (2018), #Republic: Divided democracy in the age of social media, Princeton University Press.