With only a few days left before the European Parliament elections, fear that foreign actors are trying to influence the democratic voting process has spread rapidly across the continent. On a daily basis, news headlines point fingers at the “bad actors” allegedly responsible for the downfall of the West and at the role that social media plays in the process.
Over the past two years, terms like “disinformation,” “misinformation,” and “fake news” have become a growing part of the Western lexicon – “fake news” was named “Word of the Year” by Collins Dictionary in 2017, and “misinformation,” the act of spreading false information unintentionally, received the same honor from Dictionary.com in 2018. At the same time, Western observers have become increasingly fixated on the use of information as a tool of conflict, especially by Russia. What began as curiosity about the hybrid tactics of the 2008 Russo-Georgian conflict grew into concern in the aftermath of Russia’s 2014 invasion and occupation of Crimea, and then into an obsession following Russia’s interference in the 2016 US elections. Now, as Europe faces the worrisome rise of populist and nationalist parties across several of its member states, these hybrid tactics have begun to feel like existential threats. In turn, what used to be sober-minded geopolitical analysis is now increasingly tinged with panic.
Every day, news outlets across Western countries publish articles exposing the latest cases of fake news, alerting the public to the risk of being targeted by foreign influence, and blaming social media platforms for facilitating the spread of disinformation. The debate around these issues has been heightened to the point that it now disrupts public discourse, both locally and globally. But isn’t this exactly what such “bad actors” aim to achieve in the first place?
The so-called “Gerasimov Doctrine,” named after the current chief of the General Staff of the Russian Armed Forces, explicitly refers to the weaponization of the information sphere as a tactic of modern warfare. To be clear, however, Russia did not invent the practice of manipulating information and targeting specific audiences, and disinformation is not a phenomenon born in the era of social media. Political actors have engaged in propaganda – the use of biased information to influence an audience and advance an agenda – for centuries. Disinformation is a subset of propaganda, referring specifically to the intentional spread of explicitly false information.
Although disruptive state actors, political parties, and individuals are the ones who purposefully disseminate disinformation, a growing discontent with social media has come to permeate public debate across Europe, the United States, and the rest of the Western world. In the past few years in particular, media investigations have held social network giants accountable for enabling, rather than preventing and controlling, a range of malicious activities on their platforms. Social media companies have been accused, among other things, of allowing “bad actors” to spread disinformation on their platforms, of collecting users’ data to build targeted political and commercial campaigns, of amplifying hostile narratives, and of exploiting so-called “echo chambers” to deepen polarization on sensitive political issues. Because these activities, combined, have had a significant impact on political discourse – and, in some alleged cases, on electoral results – across all continents, from the United States to the Brexit referendum, from Latin America to South Asia and Europe, public debate has begun to cast social media companies themselves as a hazard to democracy.
Denying the role that social media has played in shaping society and public discourse over the past decade would be short-sighted, but what do we gain from demonizing it? By design, social media platforms let communication and information travel around the globe in real time, making the world’s population more interconnected and borderless than ever before, while simultaneously exposing billions of people to targeted, inaccurate, or intentionally manipulated information. More comprehensive regulation of social networks, promoting best practices and increasing user protection, is an absolute necessity; yet governments, tech giants, media, and civil society have been pointing fingers at each other for too long without developing an adequate response strategy.
When implementing what is known as the “chaos doctrine,” Russia – though not Russia alone – aims to expand its sphere of influence in Europe and over the West by undermining democratic institutions politically, socially, and economically. The strategy is simple: erode citizens’ trust in institutions and intensify tensions within a system that has grown unstable. This is achieved, for example, by polarizing public opinion, injecting hostile narratives into political debate to deepen discontent and suspicion, and promoting forces within the system that disrupt unity and cohesion. In other words, the oldest war strategy: “Divide et impera.” Thus, by deflecting responsibility onto one another instead of working together to protect their citizens from malicious influence operations, decision-makers across all sectors involuntarily serve the ultimate goal of those who undermine democratic institutions by nurturing chaos. Over time, the same tactics have been learned and applied by domestic political parties and actors, which – either independently or as channels for foreign interests – have become primarily responsible for disrupting national political discourse and systems across states.
So far, the measures implemented by governments and civil society organizations to counter information operations by malicious actors have been only partially adequate and still too limited, especially given the caliber and continuously changing nature of the threat. While social media platforms have centralized sources of information that were once diversified across television, press, and radio, the development of new technologies has made producing and virally spreading disinformation far easier and faster than countering it. The fast-changing nature of these tactics has rendered fact-checking alone ineffective, while restrictive content measures taken by social media companies, even when based on quality analysis, still risk being mistaken for censorship.
To limit the disruptive impact of manipulated information targeting different audiences, therefore, governments, tech companies, media, and civil society need to collaborate to foster digital resilience across populations around the globe, making them less vulnerable to disinformation, influence operations, and manipulation. Promoting digital resilience means giving citizens the tools to navigate the digital space – to move from passively consuming massive volumes of information online to filtering for quality, applying critical thinking, selecting truthful sources and information, and, consequently, making informed decisions in their day-to-day lives. While continuing to build policies that regulate the information environment and protect it from malicious actors, both domestic and foreign, governments, tech companies, and civil society need to promote coordinated digital information literacy programs that teach citizens how to defend themselves against hostile influence operations. Fostering digital resilience – building a world population resistant to the infiltration of disinformation, misinformation, and similar disruptive forces – is the only sustainable way to fight bad actors.
Although information operations aimed at dividing Western democracies initially gained significant traction by infiltrating a system unprepared to respond, citizens around the world are becoming increasingly skeptical and are starting to react. Building digital resilience may take time, and it is a more complex process than debunking false information case by case, but if it is built across borders through joint efforts by governments, civil society, and tech companies, it will be a stronger and more sustainable way to render disinformation and influence operations ineffective.