- Trends: Artificial Intelligence, Polarization, Social Media
- Sector: IT and Communications
- Countries: Global
In a globally fragmented landscape, political polarization has emerged as one of the greatest threats to modern democracies. This phenomenon erodes trust in institutions, distorts public discourse, and jeopardizes the stability of electoral processes. Far from being a minor or temporary issue, severe polarization undermines the fundamental principles that keep democracies running.
Among its most damaging effects, legislative bodies are reduced to rubber-stamp entities, executive powers grow at the expense of other checks and balances, and attacks on the independence of the judiciary intensify. Polarization undermines essential norms, such as accepting electoral defeat, which are crucial for the coexistence of political diversity.
Today, polarization has been accelerated by the rise of two high-impact technologies: digital social networks and artificial intelligence. Both have profoundly changed how public debate unfolds and how information is manipulated.
However, while these tools have gained prominence over the last few decades, their role in polarization is more about amplification than origin. Various studies confirm that political polarization has much deeper roots than social media. A study published by Cambridge University Press, From Backwaters to Major Policymakers: Policy Polarization in the States, 1970–2014 (1), suggests that polarization in the U.S. began intensifying in the 1970s, with a sharp rise starting around 2000, well before platforms like Facebook or X gained significant influence.
Even so, it’s undeniable that social media has sped up this process. Digital platforms have transformed how citizens inform themselves, engage in debate, and make political decisions.
The algorithms driving these platforms aren’t designed to promote moderation or constructive dialogue. Instead, they prioritize content that generates more interaction, which often means amplifying more polarizing, emotionally charged messages. A recent study by LLYC, The Hidden Drug (2), based on an analysis of over 600 million messages, confirmed that polarization in social conversations across Latin America increased by 39% between 2018 and 2022. Thus, social networks have become amplifiers of polarization, pushing people towards more extreme positions and stifling democratic conversation.
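To make that incentive structure concrete, here is a minimal Python sketch of engagement-based ranking. Everything in it (the Post fields, the weights, the sample posts) is invented for illustration; it is not any platform’s actual system, only the general pattern of scoring content by the interaction it generates.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted above likes because they trigger
    # follow-on impressions; these weights are invented for the example.
    return post.likes + 3 * post.comments + 5 * post.shares

feed = [
    Post("Measured policy analysis", likes=120, shares=4, comments=10),
    Post("Outrage-bait about the other side", likes=90, shares=60, comments=150),
]

# Sorting purely by engagement puts the polarizing post first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):6.0f}  {post.text}")
```

Because the score counts interactions without regard to accuracy or tone, the outrage-bait post outranks the measured one, which is precisely the dynamic the LLYC data points to.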
The role of social media has evolved over time. In the first half of the 2010s, open platforms like Facebook and X dominated the public and political space. The most notable case was the Cambridge Analytica scandal, which came to light in 2018 and revealed how personal data harvested from Facebook had been misused to profile over 87 million people and target them during Donald Trump’s 2016 election campaign and the Brexit referendum. The episode demonstrated the immense power that open social networks could wield over democratic processes.
In recent years, the landscape has changed drastically. Closed platforms like WhatsApp and Telegram have taken over the political and social conversation. According to Statista, as of April 2024, WhatsApp had nearly 3 billion users (3), marking a 50% increase since early 2020. This growth has changed the playing field: instead of happening on open, transparent networks where content can be monitored, much of the political conversation has moved to closed, opaque environments where misinformation can spread unchecked.
In these more closed spaces, radicalization can grow within small, intimate circles, making it harder to detect and control. Political radicalization that used to occur openly on platforms like X is now incubated in private spaces like WhatsApp, from which it can jump back into more visible networks. This dynamic has played a key role in recent political upheavals, such as the attack on the U.S. Capitol in January 2021, where much of the planning was later revealed to have occurred on platforms like WhatsApp and Parler, a niche network that also promotes privacy and closed communication.
Given the scale of these challenges, judicial systems and regulators in several countries have begun to intervene. A clear example is Judge Alexandre de Moraes in Brazil, who, on August 30, 2024, ordered the immediate suspension of X (formerly Twitter) after the platform refused to remove six user profiles linked to former president Jair Bolsonaro. Elon Musk, the owner of X, refused to comply with the order, calling the judge a “dictator.” This standoff highlights the growing importance of content moderation and the delicate balance between free speech and the fight against misinformation.
Another notable case is the August 24, 2024, arrest of Pavel Durov, founder and CEO of Telegram, in France. Durov was detained for allegedly failing to cooperate with French authorities and for not implementing effective moderation measures on his platform, which allowed the proliferation of illegal activities and harmful content. These cases reflect how the impact of social networks on electoral processes and political polarization has forced judicial systems to take a hard stance despite the complex tensions between regulation and freedom of expression.
However, local actions have limited reach when dealing with a phenomenon that is, by nature, global and cross-border. Disinformation operations don’t respect national borders, and digital capitalism has created an international economy of disinformation.
A study by Qurium (4) revealed that, in 2022, Iranian activists from the #MeToo movement were targeted by disinformation campaigns orchestrated by Pakistani digital marketing firms. These transnational operations show how bad actors can hire disinformation services in countries with looser regulations, making the fight against this phenomenon even more challenging.
In this context, artificial intelligence (AI) can act as a key accelerant of political polarization. AI plays a triple role: first, AI-driven recommendation algorithms determine what content users see, amplifying the most engaging material, which is often the most polarizing. Second, micro-targeting based on personal data allows political actors to reach specific population segments with tailored messages that can manipulate voting behavior. Finally, generative AI has enabled disinformation on an unprecedented scale: deepfakes, synthetic videos and audio, have evolved from a technological curiosity into powerful tools for manipulating audiences.
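The micro-targeting mechanism can likewise be sketched in a few lines of Python. The voter records, segment rules, and messages below are entirely hypothetical; real operations infer segments from far richer behavioral data using trained models, but the pattern of matching a tailored message to a profiled individual is the same.

```python
# Illustrative micro-targeting sketch with invented data and rules.
voters = [
    {"name": "Voter A", "age": 68, "interests": ["pensions", "gardening"]},
    {"name": "Voter B", "age": 24, "interests": ["climate", "housing"]},
]

def segment(voter: dict) -> str:
    # Hand-written rules standing in for a predictive model.
    if voter["age"] >= 60:
        return "retirees"
    if "climate" in voter["interests"]:
        return "young_green"
    return "general"

# Each segment receives a message tuned to its presumed concerns.
messages = {
    "retirees": "Candidate X will protect your pension.",
    "young_green": "Candidate X has the boldest climate plan.",
    "general": "Candidate X puts your family first.",
}

for voter in voters:
    print(voter["name"], "->", messages[segment(voter)])
```

Each voter sees only the pitch crafted for their profile, which is what makes micro-targeted persuasion so hard to scrutinize publicly.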
A recent case, exposed in 2024, involved a network of Iranian accounts dismantled by OpenAI as part of a disinformation campaign surrounding the U.S. presidential elections. This network used AI to generate fake content, including text, images, and videos designed to influence public opinion. Generative AI, with its ability to create synthetic content nearly indistinguishable from reality, poses a new challenge to electoral integrity.
The issue of deepfakes is especially concerning. In 2024, Grok, X’s AI, was accused of generating hyper-realistic images of politicians like Donald Trump, Kamala Harris, and Joe Biden, depicting them in compromising situations that never actually occurred. These images not only raised alarms among fact-checking services but also underscored how difficult it is to detect and stop the spread of disinformation in today’s environment.
A report by the Stanford Internet Observatory (5), in collaboration with Georgetown University’s Center for Security and Emerging Technology, published in early 2023, warned about the impact of large language models (LLMs) on disinformation. These models allow bad actors to design and execute campaigns at low cost and on an unprecedented scale. The report emphasizes that LLMs’ ability to generate persuasive, long-form content that’s hard to identify as malicious poses a growing risk to democracies.
In terms of regulation, most countries are ill-prepared to face these challenges. While some nations, like China, have attempted to introduce rules for AI-generated content, such as requiring watermarks on synthetic videos, most lack robust legal frameworks to tackle the issue. Moreover, there is a risk that regulations could be misused to control information rather than protect the integrity of democratic processes.
The combination of political polarization, disinformation, and the growing power of AI poses an existential threat to electoral processes and modern democracies. As these technologies evolve, governments and societies must find ways to mitigate their corrosive effects without undermining freedom of expression. The question remains whether we will be able to regulate these tools in time to protect the integrity of our democracies or if we are destined for an era of manipulated elections, extreme polarization, and institutional distrust.
(1) From Backwaters to Major Policymakers: Policy Polarization in the States, 1970–2014
(2) The Hidden Drug
(3) Number of unique WhatsApp mobile users worldwide from January 2020 to June 2024
(4) Qurium
(5) Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations
Miguel is an expert telecommunications engineer with over 20 years of experience developing natural language processing solutions and AI technologies. At LLYC, he leads a team of experts focused on designing and deploying innovative AI-based solutions. He also heads the firm’s Data Analytics specialty, working with large datasets. In 2008, he founded Acteo, a company that partnered with LLYC on innovative reputation measurement and data analysis projects. [Spain]