GPO 2025

Dealing with the New WMDs - Weapons of Mass Disinformation

Weapons of Mass Destruction defined the 20th century, but today, WMDs take a new form: ‘Weapons of Mass Disinformation.’ Jean-Marc Rickli argues that Geneva can remain the central hub for disarmament diplomacy by tackling these new WMDs.

Geneva Policy Outlook
Jan 20, 2025
5 min read

By Jean-Marc Rickli

What is the centre of gravity of a democracy? What functions must a political institution perform to fulfil its duties and guarantee its survival?

The social contract is democracy’s centre of gravity. A cornerstone of Rousseau’s, Hobbes’, and Locke’s political philosophy on the state, the social contract relies on trust. Citizens abide by government-defined rules, which restrict certain freedoms in exchange for state protection of security and rights. For this to happen, citizens must trust in their governments and institutions.

Access to information is a crucial enabler of trust, allowing citizens to judge the efficacy of a government’s policies and to form the opinions on which democratic elections rest. Verified information is a critical currency in democracies, and schools and professional media are crucial guarantors of trustworthy information. Yet these gatekeepers are increasingly overwhelmed by disinformation.

Disinformation Dramaturgy

Disinformation, defined as “the deliberate creation and dissemination of false and/or manipulated information with the intent to deceive and/or mislead”, is not a new phenomenon. It has been a feature of human communication for centuries, at least since Roman times. Yet with the development of communication technologies, information has become increasingly vulnerable to disinformation.

The invention of the Internet made information ubiquitous and accessible worldwide. Social media has allowed for the democratisation of free speech and the emergence of information bubbles and echo chambers, thus enabling the increasing weaponisation of narratives.

The use of artificial intelligence (AI) through machine learning, deep learning, and now generative artificial intelligence (GenAI) heralds a new era for disinformation where subversion, the deliberate attempt to undermine a legitimate authority, is becoming a key feature of power politics and global influence.

In August 2023, Wired reported that it was possible to generate propaganda at scale, 24 hours a day, 7 days a week, for less than $400. Since then, the cost has fallen almost fourfold. An American citizen, John Mark Dougan, used GenAI to create a “web of deception”: as of May 2024, 167 Russian disinformation websites masquerading as independent local U.S. news sources. AI content farms are growing exponentially, from 49 AI-generated news and information websites identified in 2023 to 1,090 in October 2024. “Pink slime” websites, i.e., fake sites funded by partisan groups posing as neutral local news outlets, now outnumber actual local daily newspapers. In a survey published in April 2024, 31% of US adults reported feeling uncertain most of the time, and 52% unsure, about the accuracy of information on the 2024 election. Citizens increasingly mistrust information, which contributes to the erosion of trust in democracies.

Democracy, Disrupted

In 2024, 50% of the world’s population was eligible to elect its government or parliament, and fears were high that AI would be used to manipulate elections. Examples of AI-enabled manipulation abounded: robocalls impersonating President Biden’s voice during the Democratic primaries, a deepfake video of Bollywood actor Aamir Khan endorsing a political party during the Indian elections, and deepfake images of Donald Trump with Black Americans intended to shore up their political support. However, the impact of these manipulations on election results was marginal, leading some to argue that AI’s effect on elections was overblown or, as the Washington Post stressed, that it did not sway results.

Such arguments overlook a vital point: the main impact of AI-enabled disinformation is less about swaying any individual’s political opinion than about instilling doubt in the legitimacy of political institutions and the outcomes of democratic processes. AI can now generate disinformation at scale, saturating and polluting the information space, blurring the line between what is false and what is true, and ultimately lending subjective interpretations and perceptions the same weight as objective facts. This culminates in a global epistemic crisis characterised by the abandonment of Enlightenment-era values grounded in rational, analytic human reason.

The proliferation of augmented and virtual reality tools (such as Apple’s Vision Pro headset, released in February 2024), combined with emerging neurotech devices that monitor brain activity to read a person’s emotional state (such as Emotiv’s headsets), offers dystopian prospects for the future of disinformation. These technologies could eventually enable targeted individual manipulation at scale. While that level of maturity has not yet been reached, their exponential advancement means the human brain will increasingly become a battlefield for cognitive warfare, which seeks to control what people think in order to control how they act.

In the future, the malicious use of immersive (metaverse) and invasive (and later non-invasive) neurotechnologies such as brain-computer interfaces (e.g. devices developed by Neuralink), combined with AI, could become weapons of this new type of warfare if these technologies are not appropriately regulated.

Disinformation Diplomacy: A Multilateral Response

International institutions are beginning to address disinformation through instruments such as UN General Assembly Resolution 76/227, which invites states to counter disinformation through adequate measures, the UNESCO action plan, the EU Code of Practice on Disinformation, and OECD efforts to tackle disinformation. States are also taking action: in 2021, Chile became the first country to enshrine the protection of “mental integrity” and neurodata in its constitution, and in 2023 its Supreme Court became the first to rule on a neuro-privacy case. Nevertheless, these different initiatives require multilateral coordination.

As the birthplace of the World Wide Web, Geneva has played a pivotal role in global communication and digital governance through multilateral institutions like the International Telecommunication Union and the Internet Governance Forum, as well as UN-led initiatives such as the Open-Ended Working Group and the Group of Governmental Experts on Cyberspace.

Geneva is uniquely positioned to become the global governance hub for combating disinformation and for building an emerging regime of subversion control, modelled on those of arms control. The exponential growth of AI-enabled disinformation has created a new class of WMD: Weapons of Mass Disinformation, whose underlying technologies are mainly developed by the private sector. The imperative is to prevent these technologies from becoming the WMDs of the 21st century, which, although not lethal, could prove as disruptive as the Weapons of Mass Destruction of the 20th century, and in containing which Geneva has an equally important role to play.


About the Author

Jean-Marc Rickli is the Head of Global and Emerging Risks at the Geneva Center for Security Policy.

Disclaimer
The opinions expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of the Geneva Policy Outlook or its partner organisations.