Unmasking the bots: Researcher warns of threat to democratic processes
Bots not only shape political discourse in Canada, they can be used to amplify specific narratives, influence public opinion and reinforce ideological divides, Sophia Melanson Ricciardone’s research shows.
By Andrea Lawson
January 14, 2025
Social media bots are exerting significant influence on the political discourse in Canada and could negatively impact democratic processes if left unchecked, a McMaster researcher warns.
Sophia Melanson Ricciardone, a postdoctoral fellow in the Department of Psychology, Neuroscience and Behaviour, examined the 2019 SNC-Lavalin scandal and found bots were not just passive digital entities, but actively shaped the online conversation.
Melanson Ricciardone’s findings were published in the International Journal of Digital Humanities last month.
The SNC-Lavalin affair centred on accusations that Prime Minister Justin Trudeau and his office pressured the attorney general to intervene in a criminal case against the Quebec-based engineering firm. The Ethics Commissioner found Trudeau violated the Conflict of Interest Act, leading to the resignation of several key officials.
Analyzing tweets from March 14 to April 9, 2019, Melanson Ricciardone found bots significantly influenced the language of human users without their explicit awareness, reinforcing political echo chambers and potentially undermining core democratic values.
We asked her about bots, political discourse and the implications of her research.
What inspired you to investigate the influence of AI bots on political discourse?
It was 2019, nearing the tumultuous end of the first Trump presidency, and political polarization in the United States was accelerating at an alarming rate.
My academic background in social science, communication and media studies led me to reflect deeply on the role of social media in shaping two critical dimensions of the public sphere.
First, I considered the communication styles employed by politicians when engaging with the public on social media.
Second, I examined how these political communication styles, in turn, shape the ways the public discusses and navigates important political issues.
Finally, I contemplated how politicians’ use of AI and bots to automate the circulation of their messaging on social media could introduce another significant layer in the dynamic of political communication and polarization within online spaces.
I’m curious about SNC-Lavalin as the example — this was in 2019. Presumably there are more and “smarter” bots out there now?
Yes, indeed, which suggests that my findings may be even more significant today if I were to examine a more contemporary issue! I often wonder what I would find if I were to replicate this study using another important political topic relevant to our current political climate.
I do intend to explore this phenomenon further with future research, but given the evolution of generative AI technologies, there is good reason to assume that the bots circulating political content now are even more adept at microtargeting constituents than they were in 2019.
What are the potential implications of your findings for policymakers and social media platforms?
The most obvious implication is that bots can significantly shape the language and focus of political conversations, even more effectively than human-to-human interactions. This highlights the capacity of bots to amplify specific narratives, influence public opinion, and reinforce ideological divides.
A related implication is that bots can be used to contribute to the polarization of political discourse by reinforcing echo chambers, particularly in emotionally charged contexts.
This dynamic can hinder productive dialogue and critical engagement with diverse perspectives, entrenching partisan viewpoints and undermining our collective ability to cooperate as a polity. That capacity for cooperation is a cornerstone of democracy, which seems increasingly fragile.
By influencing linguistic constructs and the framing of key issues like the SNC-Lavalin affair, bots may shape voters’ perceptions of political events and candidates without constituents’ explicit awareness, which raises concerns about the fairness and integrity of elections, as digital manipulation can undermine informed decision-making.
Given that bots are integrated into threads of political discourse on social media platforms like Twitter without constituents’ explicit awareness or consent, one could argue that the use of bots by political candidates and their affiliated parties to influence how Canadians think about important political issues violates their fundamental rights and freedoms according to Chief Justice Dickson’s interpretation of Sections 2(a) and 2(b) of the Charter.
The implication is that additional measures for regulating political communication with constituents via social media should be pursued to safeguard Canadians’ right to freedom of thought, belief, opinion, and expression.
What can the public do to be more aware of the influence of bots in online political discussions?
There are countless actions the public can take, but if I had to prioritize one, it would be fostering awareness. At the heart of my research lies Interaction Alignment Theory, a framework widely applied in social psychology and the social sciences. This theory examines how individuals, as social interlocutors, adapt and synchronize their linguistic, cognitive and behavioural patterns to achieve mutual understanding, coherence and effective communication.
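The alignment idea behind this theory — that interlocutors gradually converge on one another’s word choices — can be made concrete with a toy measure. The sketch below is purely illustrative and is not the method used in the published study: it scores how lexically similar each reply is to the message it responds to, using simple word-count vectors and cosine similarity. The example message pairs are invented for demonstration, not real tweets from the SNC-Lavalin data.

```python
# Toy sketch of lexical alignment between paired messages.
# Illustrative only; not the analysis pipeline from the published study.
import math
import re
from collections import Counter

def word_counts(text):
    """Lowercased word-frequency vector for a message (punctuation stripped)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity of two word-frequency vectors (0 = no overlap, 1 = identical)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def alignment(source_msgs, reply_msgs):
    """Mean lexical similarity between each source message and the reply it drew."""
    sims = [cosine_similarity(word_counts(s), word_counts(r))
            for s, r in zip(source_msgs, reply_msgs)]
    return sum(sims) / len(sims) if sims else 0.0

# Hypothetical bot posts and human replies (invented for illustration):
bot_posts = ["trudeau pressured the attorney general",
             "the ethics commissioner found a violation"]
human_replies = ["so the attorney general was pressured?",
                 "a violation, according to the commissioner"]
print(round(alignment(bot_posts, human_replies), 3))  # prints 0.698
```

A higher score means the replies echo more of the source’s vocabulary. A real analysis would, of course, need to control for topic (people discussing the same event share words regardless of influence) and track whether similarity rises over time.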
If people become aware that such content exists and understand its potential use in shaping opinions on critical political issues, this awareness could, in theory, reduce the effect bots have on the way we think about important political issues, thus reducing political polarization.
The challenge lies in understanding how our social brains are wired and how that wiring influences our responses to bot-generated content, especially since a lack of transparency prevents us from distinguishing a tweet generated by a bot from one created by a human peer.
This dynamic can prime us in subtle ways, reinforcing political beliefs and aligning us with content that resonates with our values even when its consequences do not actually serve those values, especially within the echo chambers that bots help cultivate. These digital spaces often amplify messages that mirror our deepest convictions, creating a feedback loop that intensifies polarization.
Learning about the fundamentals of our social human nature can release us from the cognitive traps that bot-generated content sets for us within online spaces.
Understanding this dynamic is essential for mitigating the power of bots in our online spaces, especially when they are used for botaganda: the use of automated bots to circulate misleading, biased, or manipulative content, typically for political, commercial, or ideological purposes.
Or, put another way, enhancing our digital literacy requires that we tap into our knowledge about what it is about being human that creates cognitive blind spots when engaging with bot-generated content on social media.