JPRI Occasional Paper 56 (July 2018)
The Social Media Revolution: Political and Security Implications
NATO Parliamentary Assembly
Note from JPRI editors: In 2013, JPRI co-hosted the opening meeting of a North Atlantic Treaty Organization Parliamentary Assembly (NATO PA) delegation visit to California. Since that meeting, JPRI has periodically checked back with this inter-parliamentary body to gauge prevailing thinking among foreign policy elites in NATO member states regarding global security issues. Here, we are pleased to share a NATO PA policy report on a relatively new security concern—the “weaponization” of social media.
Part I. Introduction
Part II. Social Media and Democratic Governance
Part III. The “Weaponization” of Social Media: Case Studies on Daesh and Russia
Part IV. Current Responses to Social Media’s Security Challenges
Part V. Conclusions and Recommendations for Future Action
The rise of social media is one of the most far-reaching manifestations of the Information Age. Facilitated by the rapid growth in internet-enabled mobile devices, the proliferation of social media in recent years has been truly extraordinary. For many across the globe, social media is inseparable from many of our most fundamental activities: keeping in touch with friends and family, finding a job, expanding our social circles, and making sense of the world around us. According to a 2016 Pew Research Center report, 62% of U.S. citizens rely at least in part on social media sites for their news (Gottfried and Shearer). Another study based on a survey of over 50,000 people across 26 countries revealed that, for young adults (respondents aged 18-24), social media has already surpassed television as a primary news source (Newman, et al.).
While the dramatic transformation of information and communication technology (ICT) has affected all aspects of life, this report focuses on the political and security implications of the social media revolution. Changing patterns of communication, computing, and information storage are challenging notions such as privacy, identity and national borders. The profound changes inherent in this revolution are also changing the way we look at security, often in unanticipated ways, and demand innovative responses. Platforms such as Twitter and Facebook amplify individual voices and lower the cost for people to connect more intimately, to communicate, and to organize among themselves and with their governments. At the same time, the anonymity possible on social media can embolden those who propagate hate speech just as readily as it protects those fighting authoritarian regimes from reprisal. Furthermore, social media provides new opportunities for those who seek to disrupt the liberal democratic world order by abusing the intrinsic openness of the cyber domain. Terrorist organizations use social media as a recruiting and propaganda tool. The platforms are also exploited by states that seek to influence and undermine liberal democracies, their government institutions and their social fabric—at times, to great effect. This has become known as the “weaponization” of social media.
Because internet-based social media is such a recent phenomenon, the full consequences are difficult to foresee. The aim of this policy report is first and foremost to raise awareness and launch a discussion on this emergent theme and to offer some initial thoughts on ways to counter the malicious use of social media.
II. Social Media and Democratic Governance
The social media revolution has had a profound impact on democratic institutions and political life across the globe. Over the course of the last decade, citizens in general and political actors in particular have used social media sites such as Twitter and Facebook to challenge the political establishment and rally voices across the political spectrum. In the United States, for example, over a third of social media users regularly direct their activity to commenting on government and politics. Twitter reported that the 2016 U.S. presidential election was tweeted about over one billion times, and nearly 128 million accounts in the United States discussed the presidential race on Facebook. The President of the United States, Donald Trump, has highlighted the value of communication via social media, particularly Twitter, which allows him to connect directly to the public, bypassing certain mainstream media outlets that, according to President Trump, produce “fake news.” To the north, in Canada’s 2015 federal election, civil society teamed up with Google to find innovative ways to increase voter turnout.
Social media is more than just a sounding board: user activity on these sites may be predictive of voting behavior. After the U.S. presidential election, researchers found a strong correlation between the candidate a voter followed on Twitter and whom that individual voted for on election day (Thompson). In some cases, pollsters even found that user activity on Facebook was more predictive of the U.S. election than traditional polls. During the United Kingdom’s EU referendum, scholars observed more activity on and support for the “Leave” campaign on Instagram and Twitter than for the “Remain” campaign. Although activity does not necessarily mean support, commentators concluded that campaigners underappreciated the popularity of “Leave” on social media and how that would translate into votes (Polonski).
The ability of social media to turn any individual into an information actor benefits civil society and human rights activists in both democratic and authoritarian states. Social media lowers the cost of communication across internet-enabled devices to help movements overcome isolation or fragmentation. Similarly, social media produces information cascades—when dissenting and risk-taking first-movers express their grievances, those who may have otherwise not participated feel more comfortable joining in. This has two effects: the public sphere grows and protests can be coordinated across large geographic areas; also, the cost of repression increases because, thanks to social media, certain regions (e.g., the Middle East) have “developed a robust infrastructure for publicizing abuse of protestors” (Lynch).
Among the most prominent examples where social media played a central role in large-scale political mobilization are the Iranian protests of 2009, when people took to the streets to protest suspected fraud in the re-election of President Mahmoud Ahmadinejad, forcing the Iranian regime to temporarily suspend access to social media until the government regained control of the crisis. However, it was the Arab Spring in 2011 that most clearly demonstrated the power of social media. The Facebook-organized protests on 25 January 2011 drove Egyptians to public squares across the country to demand bread, dignity, and freedom. Eventually, Egyptian President Hosni Mubarak’s 29-year regime toppled under civilian protest and military pressure. Another powerful example of social media’s role in mass mobilization was the pro-democracy movement of 2013-2014 in Ukraine that ousted President Viktor Yanukovych. Twitter and Facebook were used to organize and solidify protesters, and to enable key figures to communicate effectively with demonstrators.
Social media also empower human rights and anti-corruption activists. An example of this can be found in what happened when a prominent Russian anti-corruption crusader, Alexei Navalny, produced a 50-minute video that revealed the stunning wealth of Prime Minister Dmitry Medvedev, exploiting, among other things, Medvedev’s passion for posting pictures on social media. Russia’s state media ignored Navalny’s video, but it spread rapidly across Russian society through social networks and YouTube.
Protesters in NATO countries routinely use social media networks. The Occupy Wall Street campaign in New York in 2011 and the protests in Istanbul’s Gezi Park in 2013 are but two examples. In the latter case, Twitter was so effective that the Turkish government temporarily disabled the service for Turkish users during the demonstrations. Also in Turkey, social media played a critical role in defeating the coup attempt in July 2016. President Recep Tayyip Erdogan famously broadcast his address to the nation via FaceTime on his smartphone. His message urging people to take to the streets was quickly disseminated via Twitter, Facebook, WhatsApp and other social channels.
However, the correlation between the emergence of social media and democratization is not as strong as one would hope. Adroit use of social media does not necessarily cultivate productive discourse nor does it automatically strengthen democratic institutions. Further, not all actors are necessarily interested in democratizing their societies. An important characteristic of online political activity is how deeply segregated it is. A data journalist at MIT’s Media Lab studying the 2016 U.S. presidential election suggests that political commentary online is segregated because users occupy ideological or issue-area “bubbles” (e.g., immigration or gun rights) within which they conform. Whether segregated networks lead to polarized politics is unclear.
User preference algorithms and social media “bots” seem to play an important role. Facebook and Twitter enable self-segregation in that they are designed to provide personal, curated content based on individual user preferences. Both platforms use algorithms to curate content for users. Using data collected on their past behaviors and preferences, these algorithms filter content displayed to an individual user. This curation increases the likelihood of engagement with like-minded users and exposure to pictures, discussions, news, and opinions that support an individual user’s preferences. It also reduces the likelihood of exposure to dissenting or conflicting views (Lee; Thompson). Notably, Facebook and other platforms are reluctant to introduce a “dislike” button.
The increased frequency of debate does not necessarily translate into a robust exchange of conflicting or diverse ideas. Social media “bots” increase polarization by manufacturing and disseminating content that reinforces skewed user beliefs. “Bots” are easily programmable accounts on Facebook and, especially, Twitter that automatically generate content. Often, authentic account holders who receive fabricated content do not know they are interacting with “bots” (Guilbeault and Woolley). “Bots” are in widespread use and have already demonstrated a capacity for disruption. For example, a study found that a sizeable share of pro-Trump and pro-Clinton tweets during the U.S. presidential campaign were generated by “bots” programmed to search for and disseminate specific messaging instantaneously. A single “bot” account can send out thousands of tweets a day, drowning out real Twitter users who may offer relevant, and potentially productive, dialogue on social media. Searching for content using key terms, “bots” are designed to redistribute (e.g., retweet) this material without verifying its validity. Marginal and/or extremely partisan actors can co-opt trending discussions to give their issue areas traction, especially when they program “bots” to redistribute content on their behalf.
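The amplification dynamic described above can be sketched in a few lines of Python. This is a hypothetical simulation: the keywords, post counts, and bot numbers are invented for illustration, and no real platform API is involved.

```python
# Hypothetical illustration: a handful of keyword-triggered "bot"
# accounts drowning out organic users in a message stream.
# All keywords, posts, and counts are invented for this sketch.

KEYWORDS = {"election", "candidate"}

def is_relevant(text):
    """Keyword match: the simple trigger an amplification bot might use."""
    return any(word in text.lower() for word in KEYWORDS)

def run_stream(organic_posts, n_bots, reposts_per_bot):
    """Each bot blindly reposts every matching organic post, without
    verifying it, multiplying that post's share of the visible stream."""
    stream = list(organic_posts)
    for _ in range(n_bots):
        for post in organic_posts:
            if is_relevant(post):
                stream.extend([f"RT: {post}"] * reposts_per_bot)
    return stream

organic = ["Election rumor X", "Cat photos", "Local weather report"]
stream = run_stream(organic, n_bots=5, reposts_per_bot=20)

# One matching post, amplified by 5 bots x 20 reposts = 100 copies,
# now dominates a stream that began with only 3 organic posts.
share = sum(1 for p in stream if "rumor" in p.lower()) / len(stream)
print(f"{len(stream)} posts, rumor share: {share:.0%}")  # 103 posts, rumor share: 98%
```

Even in this toy setup, five automated accounts turn one fringe message into 98% of the visible traffic, which is the mechanical core of the disruption the studies above describe.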
The relative success of many anti-establishment parties in the Euro-Atlantic area may be attributed to skillful social media strategies. Often the most prolific political accounts are far-left and far-right anti-establishment party leaders and groups. Their accounts usually post more online content, use colorful and even inflammatory language, and interact more intimately with their constituents than their more mainstream counterparts (The Economist, 2015).
Social media networks have facilitated the propagation of false and disruptive stories, which users accept at face value. The danger is that “fake news” has already started to shake the confidence citizens have in their institutions and leaders. The proliferation of alternative, nontraditional media sites has accelerated this trend in recent years. The incentive to misinform on social media networks is, in fact, profit. Dramatic and often false stories increase clicks on sites trying to attract readers. The advertisement payment structure used by Google and Facebook is based on this “per click” model (Alexander and Silverman). False stories can be rapidly transmitted to multiple websites, gaining traction in the news cycle before content editors at major news agencies have time to intervene and question sources. A survey conducted by Ithaca College in New York found that 40% of local newsrooms do not have procedures to fact-check social media content before it is included in a newscast (Adornato). This can have devastating consequences for public perceptions on any number of issues. For example, polls suggest that there is a positive relationship between the proliferation of “fake” or hyper-partisan news and increased negative perceptions of one’s government. Gallup’s polling data supports the claim that mistrust in government is on the rise and is reaching record highs.
In sum, social media have had a profound effect on democracies and in authoritarian countries. Social media can make societies more pluralistic, but not in the traditional sense. It may be better, instead, to describe the confluence of democracy, political activism and social media as “chaotic pluralism” as some experts suggest. This is a pluralism that offers a diversity of mobilized voices and movements, but that is often unpredictable, unstable, and unsustainable (Margetts et al.). While political engagement on social media has enriched democratic discourse and opened new avenues for information flow, it has also entrenched users within ideological cocoons. The loudest and most engaged voices online are producing deep political change, but those calls increasingly come from polar ends of the political spectrum.
III. The “Weaponization” of Social Media: Case Studies on Daesh and Russia
The scale of the social media revolution cannot but have an effect on global security. There is a growing interest among some state and non-state actors in using social media against their adversaries—a process that Thomas Elkjer Nissen, of the Royal Danish Defence College, refers to as the “weaponization” of social media. Nissen identifies several ways of using social media for military purposes, including intelligence collection, psychological warfare and even command and control (C2) activities. For example, opposition groups in Syria that have no formal C2 structure resort to using social media for coordinating and synchronizing actions, and in some cases giving commands or direction (Nissen). Nigel Inkster, former deputy chief of Britain’s Secret Intelligence Service (MI6), notes that for intelligence officers, social media analysis can provide an unprecedentedly fine-grained picture, because images taken at ground level can often yield more information than satellite or aerial reconnaissance footage. Activities on social media are virtual, but they can have real-life effects, for instance by instigating mass protests, runs on banks, or attacks on certain groups, or by portraying individuals as the enemy (Lange-Ionatamishvili and Svetoka).
Examples of convergence between social media and military domains are legion. These include: U.S. airstrikes in June 2015 against a Daesh command center located thanks to a social media post by a Daesh militant; the monitoring of Twitter feeds from Tripoli by NATO intelligence officers during the Libya campaign; the extensive tweeting by dedicated teams of the Israeli army during the 2014 conflict in Gaza, sometimes engaging directly in online exchanges with Hamas operatives; and the confusion over a fake news story on Twitter that led Pakistan’s Defense Minister to threaten the use of nuclear weapons against Israel. During a military exercise in June 2016, Australian intelligence analysts were able to identify the location, equipment, and organization of opposing forces participating in the exercise by analyzing information freely available on social media.
While Allied and partner militaries have had success in operationalizing social media platforms in combat, non-state actors such as Daesh and states such as Russia have also achieved a high degree of proficiency in weaponizing this new medium.
A. Daesh and Social Media
Daesh is not the first terrorist organization to grasp the importance of social media. Members of Hamas have reportedly used platforms such as Facebook and Twitter to disseminate their ideology. Al-Shabaab used Twitter to claim credit for its attack on the Nairobi Westgate shopping mall, posting pictures of it in near-real-time. In April 2015, the Al-Nusra Front (now known as Jabhat Fateh al-Sham) launched a social media-enabled campaign called “Mobilize” that managed to recruit some 5,000 children to join its ranks. Several NATO Allies have experienced home-grown terrorist attacks that were inspired through online means of communication: for instance, perpetrators of some high-profile attacks in the West were inspired by online sermons of radical preacher Anwar al-Awlaki (Ruane).
However, it is widely agreed that Daesh has elevated the malicious use of social media to a new level. Daesh seems to have grasped the feature of social networks called the “power curve.” On one end of this curve, few dominant contributors drive the conversation on the network in the so-called “broadcast mode.” On the other end, networks scale down to very small groups where high-quality conversations take place (“conversation mode”). Modern terrorists have figured out that the advantage is to work both ends of the curve: they manage to get a dominant influencer to convey their messages, while also luring individuals into small group conversations where they can attract new recruits or radicalize the other discussants (Carafano). The architecture of Twitter is particularly attractive for Daesh because it is well suited for anonymous communications with a broad audience and enables a faster recovery when accounts are suspended (Shaheen).
Experts from NATO Strategic Communications Centre of Excellence (StratCom) have analyzed Daesh’s Twitter traffic network and discovered that the terrorist group developed a so-called Core-Periphery structure on Twitter: namely, that there was a high number of accounts with low centrality measures (peripheral), and only a few accounts with high centrality scores (core group). However, the core group was responsible for 76% of the traffic. It is plausible that the core group accounts are managed by an even smaller group of Daesh operatives. Daesh’s offshoots in other parts of the Middle East and North Africa (MENA) region may retain a degree of autonomy, but overall, Daesh’s messaging machine appears to be highly centralized and coordinated (Shaheen).
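The Core-Periphery pattern StratCom describes can be illustrated with a toy retweet network. The account names and edge counts below are invented; the point is only that a simple centrality count exposes the few core accounts behind most of the traffic.

```python
from collections import Counter

# Toy retweet network: (original_author, retweeter) pairs. A few "core"
# accounts author most of the redistributed content, while many
# "peripheral" accounts each retweet once. All data here is invented.
retweets = (
    [("core1", f"peri{i}") for i in range(40)] +
    [("core2", f"peri{i}") for i in range(30, 60)] +
    [("peri1", "peri2"), ("peri3", "peri4")]
)

# Out-degree centrality: how often each account's content is retweeted.
out_degree = Counter(author for author, _ in retweets)

total = sum(out_degree.values())
core = out_degree.most_common(2)          # the few high-centrality accounts
core_share = sum(n for _, n in core) / total
print(f"core accounts {[a for a, _ in core]} "
      f"generate {core_share:.0%} of traffic")
```

In this toy graph, two accounts out of several dozen generate 97% of the traffic, the same skew (a few core accounts behind the bulk of the messaging) that StratCom observed at scale.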
It may seem that centralization of Daesh’s social media activity is a liability because it means the group’s core accounts can be identified and closed. However, Daesh operatives have developed a series of measures to overcome this problem. Typically, Daesh cyber operatives create several idle Twitter accounts that are part of a network surrounding a core account. Once a core account is closed, an idle account is activated and is either turned into a core account itself or it informs the rest of the followers—using a system of hashtags and symbols—about the identity of the old account when it re-opens under a new name. Daesh also uses certain techniques to avoid detection and closure of accounts. For instance, usernames and the URLs (Uniform Resource Locators) of its core accounts are periodically changed, which enables these accounts to elude URL-based detection software used by state security services. Daesh operatives also slightly alter popular Daesh-related images to avoid detection by image recognition software. Daesh operatives seem to understand well the dangers of using Twitter’s native geo-tagging function which provides a GPS-produced tag with geographic coordinates attached to each tweet. In fact, in December 2014, Daesh issued an edict forbidding its fighters from turning on Twitter’s geo-tagging function (Shaheen). Finally, Daesh social-media operators know how to post tweets, including links, hashtags and images, in a way that would not trigger Twitter’s spam-detection algorithms (Farwell).
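The image-alteration tactic works because many automated filters match exact fingerprints (cryptographic hashes), which change completely when even a single pixel changes; robust matching requires a perceptual hash instead. A minimal, self-contained sketch using a synthetic 4x4 “image” of brightness values (all values invented for illustration):

```python
import hashlib

# Minimal sketch: why a tiny alteration defeats exact-fingerprint
# matching. The 4x4 "image" is a synthetic grid of brightness values.

def sha256_fingerprint(image):
    """Exact fingerprint: any single-value change flips it entirely."""
    raw = bytes(v for row in image for v in row)
    return hashlib.sha256(raw).hexdigest()

def average_hash(image):
    """Toy perceptual hash: one bit per pixel, set if the pixel is
    brighter than the image's mean. Robust to tiny perturbations."""
    pixels = [v for row in image for v in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if v > mean else "0" for v in pixels)

original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]

# "Slightly altered" copy: one pixel nudged by a single unit.
altered = [row[:] for row in original]
altered[0][0] = 201

print(sha256_fingerprint(original) == sha256_fingerprint(altered))  # False
print(average_hash(original) == average_hash(altered))              # True
```

The one-pixel change produces a completely different cryptographic hash, so an exact-match blocklist misses the altered copy, while the perceptual hash is unchanged. Detection systems that rely on exact fingerprints are therefore easy to evade with trivial edits.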
In sum, Daesh seems to have developed remarkable technological proficiency and has effectively turned into a Twitter hydra. It is estimated that since 2013, tens if not hundreds of thousands of Daesh Twitter accounts have been suspended or deleted (Shaheen), but that did not prevent Daesh from generating as many as 90,000 tweets every day (Schmitt). Daesh’s Core-Periphery approach and a clever use of hashtags give the terrorist group a high degree of visibility on social media. For instance, as Daesh marched into Mosul, its supporters produced up to 44,000 tweets a day, making the group’s message among the most prominent when one searched Twitter for “Baghdad” (Farwell). According to RAND, the number of active Daesh opponents on social media is six times larger than pro-Daesh accounts. Yet, Daesh supporters routinely out-tweet opponents, producing 50% more tweets per day (Bodine-Baron et al.).
Other characteristics of Daesh’s use of social media include:
➢ Daesh understands well the importance of visual content on social media: it is estimated that 88% of Daesh content is visual (63% pictures, 20% video, 5% graphics) (NATO StratCom, 2016a), which is particularly attractive to the younger generation. This visual content is of very professional quality;
➢ Daesh tweets in several languages including English, Arabic, German, Farsi, Hindi, and French;
➢ Daesh messages are relevant to current news, short and easy to digest;
➢ Daesh also hijacks popular hashtags, such as those linked with the FIFA World Cup in Brazil (Farwell) or #Bruxelles and #Belgique, which emerged in the wake of the terrorist attacks in Brussels and were originally intended to express support for the victims (NATO StratCom, 2016b);
➢ It is also estimated that at least 16% of Daesh accounts were in fact automated (“bots”) (Shaheen).
On the ground, Daesh is losing territory. In 2016, the so-called Caliphate lost over 50% of its territory in Syria and over 70% in Iraq. 2017 brought further losses. Even so, it is believed that Daesh will increase its online activities. The sophistication of the Daesh social media machine creates the impression of a solid and effective organization, seemingly one that is worth joining (NATO StratCom, 2016a). Daesh presents itself on social media as a true defender of Islam and an agent of change. Its image as a brutal and fearsome fighting machine is combined with warmer images, for instance showing foot soldiers eating Snickers bars and nurturing kittens (Farwell). Experts observe that since 2015, Daesh has produced more content geared to normalizing the so-called Caliphate, than content depicting violence (Matejic). This narrative appears to be quite successful in attracting new recruits for Daesh.
It is estimated that since the outbreak of the conflict in 2011, more than 30,000 people, including about 5,000 EU citizens, have travelled to Syria and Iraq to join the ranks of terrorist organizations. It is difficult to assess how many of them were radicalized and recruited via social networks, but according to the U.S. Department of Justice, most young terrorist recruitment is linked to social media. Recruitment typically starts on a public platform as an exchange of radical ideas, and then the conversation moves to one of the encrypted platforms (such as WhatsApp, Kik or Telegram) where the recruitment can continue in private. Daesh has an elaborate system of questioning potential candidates to ensure that they are not intelligence operatives (NATO StratCom, 2016b).
In addition to propaganda and recruitment, Daesh also uses social media to provide technological advice and guidance to its followers. Social media is also a vital part of Daesh’s fundraising strategy (NATO StratCom, 2016b). However, Daesh tries to minimize the use of social media for command-and-control functions in order to conceal the identities and locations of its leadership (Farwell).
Still, social media is a double-edged sword and is being used by counter-terrorism agencies to collect information and prevent terrorist attacks. For instance, Israeli security services use specially developed algorithms to monitor the social media accounts of young Palestinians to identify potential terrorists and, in some cases, have been able to prevent suicide attacks (The Economist, 2016a). NATO StratCom analysts also argue that, given enough data, they could infer the total number of future recruits Daesh gathers from platforms such as Twitter, and also deduce the total number of fighters on the ground, some of their attributes (age, gender, level of education etc.), and thus provide limited predictions on potential tactics and strategies employed (Shaheen).
B. Social Media as a Foreign Policy Tool: The Case of Russia
President Vladimir Putin’s Russia exploits and mobilizes new and old forms of media through information operations to achieve its foreign policy goals. The Kremlin has “weaponized” information, turning media into a weapon of mass deception/distraction and a de facto extension of its military and diplomacy. The roots of this strategy can be traced back to the Soviet era when the USSR employed methods such as “reflexive control” and “active measures” to mislead, manipulate and intimidate its opponents in the West. The effectiveness of these methods during the Cold War was limited. However, the rise of the Internet and social media has opened remarkable new opportunities for the Kremlin’s information warfare.
Moscow’s intention to use information and cyber space as critical elements of national security is articulated in a number of documents—notably, the 2014 Military Doctrine, 2015 National Security Strategy and 2015 Information Security Doctrine. These documents portray Russia as a victim of the West’s “information aggression,” stress the need to counteract information threats to Russia’s sovereignty and security, and advocate for the development of effective means to influence public opinion abroad. In his oft-cited article outlining the principles of hybrid warfare, Russia’s Chief of General Staff Valery Gerasimov pointed out, inter alia, that “[t]he information space opens wide asymmetrical possibilities for reducing the fighting potential of the enemy” (NATO StratCom, 2015). Speaking at the State Duma on 22 February 2017, Defense Minister Sergey Shoigu announced that “information operations forces have been established that are expected to be a far more effective tool than all we used before for counter-propaganda purposes” (Rettman).
According to Timothy Thomas, a renowned expert on Soviet/Russian information warfare, Russia views information war as having two aspects: information-technical and information-psychological. The former includes technological means to collect useful digital data. The latter includes the concept of “psycho viruses” designed to influence the attitudes and behavior of the population. Another prominent expert on Russia, Mark Galeotti, notes that the Kremlin’s focus on information warfare and other hybrid techniques “reflects the parsimonious opportunism of a weak but ruthless Russia trying to play a great power game without a great power’s resources.” Former Deputy Director of the National Security Agency John Chris Inglis believes that Russia is 10 years ahead of the United States in using social media for information operations (Calabresi).
The objectives of Russia’s information warfare are two-fold: first, for the state to monopolize the information space within Russia in order to “neutralize” external information activities targeting Russians, “particularly young Russians, with the goal of undermining traditional Russian spiritual and moral values”; and, second, to project Russia’s interests abroad using new technological capabilities.
In terms of domestic media control, President Putin came to power with a clear agenda of building an unchallenged “power vertical,” gradually subduing all key stakeholders, including media outlets, and placing them under the control of the Kremlin. Under President Putin’s watch, Russia’s ratings by Freedom House have progressively deteriorated; the country has been listed in the “not free” category since 2005. Until recently, Internet access in Russia had been essentially unrestricted. However, the freedom of online activity in Russia has been jeopardized by a series of measures adopted in the past few years. The Blogger Registration Law requires bloggers with more than 3,000 followers to register as media outlets and gives the authorities the right to access users’ information. Another law allows the government to shut down any website, a right already used to block the websites of opposition figures Alexei Navalny and Garry Kasparov. The law on personal data storage requires Internet service providers who handle Russian customer data to keep their servers physically on Russian soil, enabling security institutions to monitor their activities (Giles). New “anti-terrorism” legislation allows government authorities to penalize or even imprison Russian citizens for re-posting or “liking” articles on social media that the regime considers hostile (Gregory). Russia’s government authorities have also forced a change of ownership of the country’s social media giant VKontakte.
Reportedly, Russia is stepping up its cyber cooperation with China and studying the “Great Firewall of China” method to control the Internet. In July 2017, President Putin signed a law that bans the use of so-called virtual private networks (VPNs) and other ‘anonymizer’ technologies. These technologies allowed Internet users to mask their identity by funneling their online activity through a third-party’s computer. Users were then able to access online material banned by state-controlled internet service providers. By banning VPNs, Russia’s government is essentially able to censor the Internet through an approach similar to China’s.
Finally, pro-Russia hackers and “trolls” regularly target opposition politicians and journalists. This includes frequent “Distributed Denial of Service” attacks against the remnants of free media, such as Ekho Moskvy radio station and Novaya Gazeta newspaper, and through the online dissemination of compromising materials (kompromats) on the regime’s opponents, obtained from Russia’s security services.
While consolidating domestic media control, Moscow skillfully exploits the pluralistic nature of the media in Western societies and the fact that Western governments have little control over the media in their countries. The West’s economic and information resources are infinitely greater, but Russia’s disinformation machine appears to have the edge due to its professionalism, lack of scruples and ethical boundaries. RAND experts have characterized Russia’s approach to propaganda as “the firehose of falsehood” because of its two distinctive features: high numbers of channels and messages and a shameless willingness to disseminate partial truths or outright fictions (Paul and Matthews). In recent years, Russia has significantly increased its footprint in global media by spending hundreds of millions of U.S. dollars to enhance its multi-language outlets such as RT and Sputnik.
The Kremlin’s external information strategy is also effective because, unlike the Soviet Union, President Putin’s Russia does not project a clear ideology; its propaganda machine does not have to convince audiences that Russia’s model is superior. RT and Sputnik do not focus on Russia. The goal is to demoralize and divide Western societies and to establish moral equivalence between Russia and the West by promoting the notion of Western hypocrisy. For instance, the Kremlin’s response to extensive Western reporting that Russia’s parliamentary and presidential elections were rigged was to suggest that elections in other countries are no better (Inkster). According to Matthew Sussex, an expert in Russian foreign and security policy, “the Russians have picked up that across the West there is a widespread apathy amongst voters and mistrust of politics and government. Anything you can do to increase that distrust serves Russian interests.” This approach is also relatively inexpensive, as it does not require engaging in time- and money-consuming investigative journalism. As a result, while Russia was unable to prevent the deterioration of its global image in the wake of its aggression against Ukraine, its cyber and information activities have still managed to contribute to growing general uncertainty and fragmentation in the West.
The explosion in the use of social media provides additional opportunities for Russia to influence populations and politicians in targeted countries. The nature of the social media techniques discussed in the present paper is conducive to the Kremlin propaganda strategy, which is to confuse rather than convince and to challenge the notion that objective truth exists. Russia’s information warriors react to major international events with remarkable speed and reach out to wide international audiences, disseminating pro-Kremlin narratives and spreading unverified or falsified stories and conspiracy theories. According to RAND experts, people assume that information repeated from multiple sources must be true, while paying little attention to the credibility of those sources (Paul and Matthews). In social psychology, this is referred to as the “illusory truth effect.” In the context of social media, where sheer volume of information can substitute for confirmation of it, Russia’s information machine is highly effective in capitalizing on this trait, making wide use of “trolls” and “bots” to achieve its objectives.
The Kremlin has engaged in an intensive social media-driven disinformation campaign that was launched during the Euro-Maidan revolution in Ukraine and continues today. Since 2014, Russia’s online information warriors have flooded social media with fabricated reports or doctored images of atrocities allegedly committed by the Ukrainian forces, including the torture and murder of children, the use of civilians for organ trafficking, and even acts of cannibalism. A number of wild conspiracy theories mushroomed on Russian social media following the downing of Malaysia Airlines flight MH17 in 2014, with the aim of convincing the public that the objective truth about the incident will never be established. Exploiting the fact that information on social media is often conveyed through images, pro-Kremlin sources widely portrayed Ukraine and Ukrainians in contexts of fascist symbolism and violence. These campaigns are designed to confuse social media users rather than to persuade them. Counter-propaganda teams such as StopFake.org and EU Mythbusters continue exposing Russia’s fake social media reporting on Ukraine on an almost daily basis.
Russia’s fake news campaigns on social media have increasingly targeted Western audiences as well. In November 2016, German Chancellor Angela Merkel expressed her concern that “social bots” and “trolls” could be used to sway public opinion during the upcoming electoral campaign in Germany as they were in France and the United States. The head of the German intelligence agency also raised concerns about Russia’s potential interference in Germany’s election through the use of fake news. NATO continues to be the target of Russia’s “trolls,” the most recent example being the dissemination of a fake story about a teenage Lithuanian girl raped by a German soldier who was deployed in Lithuania as part of NATO’s enhanced Forward Presence (eFP) mission in the Baltic States and Poland. In recent years, Russia has also visibly stepped up its information attacks against its Nordic neighbors Denmark, Sweden and Finland. Russia has also allegedly targeted countries outside of the Euro-Atlantic community. For instance, in May 2017, it was reported that U.S. intelligence and law enforcement officials concluded that pro-Russia hackers were behind a cyber-attack on the Qatar News Agency, planting a fake news story that contributed to a major crisis among several Gulf states and the United States.
The Kremlin’s information warriors use various approaches to spread disinformation: a) creating multiple social media accounts, including authoritative-sounding ones, such as the Finnish-language accounts @Vaalit and @Eduskuntavaalit (Elections, Parliamentary Elections) (Giles); b) hijacking accounts (e.g., the Twitter account of the Swedish TV4 channel and a Twitter account opened in Swedish Defense Minister Peter Hultqvist’s name); and c) hijacking hashtags (e.g., Russia’s Ministry of Foreign Affairs used #UnitedforUkraine, a hashtag created by the U.S. State Department in support of Ukraine, to post tweets with comments by Foreign Minister Sergey Lavrov).
Russia’s state-sponsored “trolls” have also been used to spread panic: in 2014, a coordinated campaign of hundreds of tweets triggered alarms in the United States by reporting an alleged chemical accident in a Louisiana factory. A New York Times investigation traced the tweets to a location in St. Petersburg. Another example involved terrifying the people of Donbas (Ukraine) who “learned” from social media that the regional water supply had been poisoned (NATO StratCom, 2016a). The success of such disinformation campaigns could encourage similar endeavors on a larger scale in the future.
These “trolls” have also conducted orchestrated attacks designed to intimidate and silence the Kremlin’s critics, such as Finnish journalist Jessikka Aro, who experienced an extraordinary degree of online harassment, including the publication of details of her personal life. Another prominent target was Eliot Higgins, the founder of the investigative journalism network Bellingcat, which has been reporting on Russia’s activities in Ukraine. The pro-Kremlin hacker group CyberBerkut hacked his email, iCloud, and social media accounts and posted his personal pictures, a scanned copy of his passport, his girlfriend’s name and other private information online. In May 2017, a Canadian research organization, the Citizen Lab, published a report unmasking a large-scale cyber campaign against more than 200 high-profile Kremlin critics (including government officials, journalists and civil society activists) in 39 countries. The goal of this campaign was to steal personal digital data, doctor it and then leak it in order to discredit the victims. Aggressive and intimidating pro-Russia trolling has led several media portals, such as Reuters and CNN, to close off their comment sections. Unfortunately, such policies have also curtailed the possibility of meaningful online debate.
Russia targets its adversaries not only on an individual level, but also on an industrial scale. The use of social media, for instance by Western military personnel posted in Ukraine, provides Russia’s government agencies and their sympathizers with an opportunity to harvest large amounts of personal data. Pro-Russia information warriors have used such data in the past to harass and intimidate: for example, in January 2014, when individuals taking part in the Maidan protests in Kyiv were sent threatening SMS messages, and in November 2015, when Polish military personnel were telephoned en masse (Giles). Pro-Russia hackers have also reportedly sent tailored messages carrying malware to more than 10,000 Twitter users in the U.S. Defense Department with the aim of getting access to and control of the victim’s phone or computer as well as their Twitter accounts (Calabresi). Now and into the future, the Kremlin will likely continue using these tools to demoralize and incapacitate its adversaries—a particularly important reality for NATO Allies participating in the eFP mission.
Finally, social media can reinforce messages spread by more traditional media channels, such as RT and Sputnik. RT produces a tweet every two minutes, many of them shared hundreds of times. However, analysis shows that most RT retweets and Facebook post “likes” come from relatively few followers: of the 50 accounts that most often retweet RT, 16 are probably “bots” (The Economist, 2016b). These manipulations have contributed to reinforcing RT’s claim to be one of the world’s leading media outlets. It has to be noted that the reinforcing relationship between social and traditional media works both ways. For instance, when Russia’s Ria Novosti news agency re-published a clearly fabricated report about 3,600 U.S. tanks to be deployed in Poland (the actual number was 87), it lent a certain credibility and wider attention to a story that had been produced by an obscure group of Donbas-based online propagandists.
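Findings like the 16-of-50 figure cited above typically rest on behavioral heuristics applied to account activity. The sketch below is a toy illustration only: the thresholds, weights and sample data are all invented, and real detection systems use far richer features (posting cadence, content similarity, network structure).

```python
# Toy heuristic in the spirit of bot-detection analyses; all thresholds,
# weights and sample data below are invented for illustration.

def bot_likelihood(retweets_per_day: int, followers: int, account_age_days: int) -> int:
    """Return a crude 0-100 score; higher means more bot-like."""
    score = 0
    if retweets_per_day > 100:   # sustained high-volume amplification
        score += 50
    if followers < 50:           # amplifies heavily yet almost nobody follows it
        score += 30
    if account_age_days < 30:    # freshly created account
        score += 20
    return score

accounts = [
    {"name": "news_fan_01", "rts": 450, "followers": 12, "age": 7},
    {"name": "longtime_reader", "rts": 3, "followers": 800, "age": 2400},
]

flagged = [a["name"] for a in accounts
           if bot_likelihood(a["rts"], a["followers"], a["age"]) >= 70]
print(flagged)  # ['news_fan_01']
```

Even such a crude score illustrates why amplification campaigns are detectable: the accounts doing the amplifying rarely look like organic readers.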
Russia’s use of social media is highly sophisticated and resourceful and poses a real challenge to the Euro-Atlantic community. However, the Kremlin is not invincible in this field. Since the beginning of Russia’s aggression against Ukraine, the West has substantially increased its awareness and understanding of Russia’s information warfare. Techniques are being developed to identify “trolls” and “bots” with greater accuracy. In Latvia and Lithuania, for example, volunteer communities who call themselves “elves” act as a kind of civilian national guard, identifying pro-Russia bots and debunking fake news. Furthermore, social media also presents a certain liability for Russia: the careless use of social media by Russia’s soldiers deployed in Donbas and Crimea has provided ample and convincing evidence of Russia’s military involvement in Ukraine, discrediting the Kremlin’s denials. That said, Russia can be expected to further develop information warfare capabilities and techniques in response to Western counter-measures. Therefore, Russia’s information activities are likely to remain one of the key challenges for the Euro-Atlantic community in the foreseeable future.
IV. Current Responses to Social Media-related Security Challenges
It is increasingly understood that the challenges of the social media revolution for national and international security are complex and require the combined efforts of international, regional and national authorities and the private sector as well as subnational and transnational groupings of individual activists. NATO has taken some steps to incorporate the social media dimension into its activities, particularly with respect to public outreach. NATO has more than 1.2 million followers on Facebook and more than 400,000 on Twitter. NATO’s Secretary General, the Supreme Allied Commander Europe (SACEUR), and other senior officials have been using social media, some more actively than others. In 2017, NATO launched the #WeAreNATO campaign online to “explain NATO’s core mission of guaranteeing freedom and security”. NATO Assistant Secretary General for Public Diplomacy Tacan Ildem explained that the campaign seeks to educate and inform the younger generations in NATO member states as well as the wider world about NATO’s role in global security. According to the NATO Military Public Affairs Policy booklet, NATO personnel are reminded to exercise caution while using social media and “advised to consult with their chain of command before publishing NATO-related information and imagery to the internet.” In September 2014, the Supreme Headquarters Allied Powers Europe (SHAPE) adopted a social media directive that identifies best practices for using social media to enhance NATO’s engagement with key audiences during peacetime and military operations.
Since the start of the Russia-Ukraine conflict, NATO has stepped up its communication capabilities and strengthened its Public Diplomacy Division. It has increased public outreach assistance to partner countries such as Ukraine and Georgia. NATO’s “NATO-Russia relations: the facts” website uses facts to debunk myths promoted by the Kremlin on issues such as NATO enlargement or the alleged NATO threat to Russia. In January 2014, several Allied nations took a significant step when they established a NATO Strategic Communications Centre of Excellence (StratCom) in Riga, Latvia. The Centre has produced a series of leading-edge studies that indicate how NATO and its members can counter hostile and disruptive cyber activities. The NATO Science and Technology Organisation has also developed the Digital and Social Media Playbook, a continually updated information-environment assessment tool aimed at understanding the goals and methods used by adversaries in the information space.
NATO is also beginning to incorporate overt information operations through social media in its military exercises. During Trident Juncture 2015, for example, participants trained on how to quickly produce high volumes of pro-NATO content through official accounts on social media to counter anti-NATO messaging. It was established during this exercise that anti-NATO sentiment decreased gradually as the messaging from pro-NATO voices (in local languages) increased. It needs to be stressed that, at this time, NATO doctrine does not foresee the use of covert information operations, such as the use of fake identities, bots, and trolling against target audiences. Furthermore, psychological operations in general can only be used in the context of a military operation declared by the North Atlantic Council (NATO StratCom, 2016a).
The EU’s efforts to counter fake online news and hostile propaganda have been concentrated in two new institutions: the East Stratcom Task Force and Europol’s Internet Referral Unit (IRU). The former, also referred to as the EU “Myth-busters,” is a team of ten nationally-seconded diplomats tasked with exposing Russia’s online disinformation on a daily basis. It disseminates its findings on its website, via email and on social media platforms. It does not have a separate budget and relies heavily on data provided by a network of more than 400 experts, journalists, officials, NGOs and think tanks in over 30 countries. In November 2016, the European Parliament adopted a resolution calling for an increase in the Task Force’s capabilities. The IRU is tasked with monitoring terrorist content on the Internet and social media platforms and working with service providers to flag and remove such content. According to a July 2016 report, the IRU has assessed and referred over 11,000 messages across 31 online platforms for removal. As a result, the online providers in question removed over 91% of this content (Morelli and Archick).
A number of measures have been adopted in recent years by NATO members on a national level. One of the leading counter-propaganda tools deployed by the United States is the State Department’s Global Engagement Center (GEC), created in 2011 and re-branded and strengthened in 2016. GEC is charged with coordinating U.S. counterterrorism (mainly counter-Daesh) messaging to foreign audiences, primarily by nurturing a global network of “positive messengers,” including NGOs and investigative journalists. GEC is quite active on Twitter, and its tactics include promoting anti-radical messages using pro-Daesh hashtags such as #accomplishmentsofISIS. U.S. authorities have also taken action regarding other security-related uses of social media. These include a directive signed in May 2016 by then Director of National Intelligence James Clapper that permits the collection of publicly-available social media information on potential federal employees during the security clearance process. (It is important to underline that this policy places restrictions on federal agencies to protect privacy rights. For instance, investigators cannot request or require the individual to provide passwords to private accounts, or collect information on individuals other than the individual being investigated unless there is a clear national security concern.) Another example is the Countering America’s Adversaries Through Sanctions Act of July 2017, adopted by an overwhelming majority in the U.S. Congress, which imposes sanctions on Russia following the U.S. intelligence community’s conclusion that pro-Russia agencies hacked into the servers of the Democratic National Committee and released information with the intent of influencing the outcome of the U.S. presidential election.
In the United Kingdom, a dedicated police Counter Terrorism Internet Referral Unit (CTIRU) identifies online content that it assesses as contravening national terrorism legislation, and refers such content to internet platforms. An internet platform would then voluntarily remove the content if it agrees that there is a breach of its terms and conditions. CTIRU does not remove content itself. Since its inception in February 2010, CTIRU has established relationships with over 200 communication service providers and has secured the removal of more than 260,000 pieces of terrorist-related content. The public broadcaster BBC joined the fight against fake news by boosting Reality Check, a fact-checking service that will work with Facebook. In 2015, the British army reportedly created “the 77th Brigade” comprised of experts skilled in using social media to conduct non-lethal information operations and to counter hostile messaging.
Canada, too, is concerned about fake news and other hostile uses of social media. The House of Commons Standing Committee on Canadian Heritage recently examined this issue as part of a broader study of Canada’s changing media landscape. The Canadian government views the collection of reliable data and identification of international best practices for countering terrorist messaging as core elements of its counter-terrorism strategy. The Canadian Network for Research on Terrorism, Security and Society (TSAS) has been an important element in achieving these goals. Established in 2010 under the auspices of Public Safety Canada, TSAS’s national and international network of affiliated academics have been contributing to the global body of knowledge on terrorist use of social media and counter-narrative strategies.
Authorities in Germany, France and the Czech Republic have grown increasingly concerned about social media-based attacks on their political systems. In December 2016, the German Interior Ministry proposed creating a Centre of Defence Against Disinformation to tackle fake news on the internet and to promote a new culture of online behavior, including the rejection of the use of social media bots. Eight French news organizations, including Agence France-Presse (AFP), BFM TV, L’Express and Le Monde, teamed up with Facebook and Google to launch new fact-checking tools designed to root out fake news. Any news report deemed fake by two of the project’s partners would be tagged as such. The French newspaper Le Monde has also set up a fact-checking unit, Les Décodeurs, and plans to design a hoax-busting database that will enable readers to distinguish fake news sites from verified ones. The Czech government has announced the creation of the Centre Against Terrorism and Hybrid Threats, with 20 full-time specialists tasked with tackling disinformation—predominantly about migrants—spread by the Kremlin’s information warriors.

Given the characteristics of the new global information environment, actions by governments and traditional media will not suffice to counter the “weaponization” of social media. Decisive action by the handful of social media companies that control this medium is critical to a successful response. Recently, major social media companies have launched several new initiatives. In December 2016, Facebook, Microsoft, Twitter and YouTube announced the creation of a shared database of “hashes”—unique digital “fingerprints”—for violent terrorist imagery, terrorist recruitment videos and other images that will be removed from these platforms.
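The shared hash database can be illustrated in miniature. The sketch below uses exact SHA-256 digests for simplicity, whereas the industry system relies on perceptual hashes that also match re-encoded or slightly altered copies of an image or video; all sample content here is invented.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Digital 'fingerprint' of a piece of content (SHA-256 hex digest)."""
    return hashlib.sha256(content).hexdigest()

# Shared database of fingerprints of known prohibited content (invented sample).
known_hashes = {fingerprint(b"known-prohibited-video-bytes")}

def should_remove(upload: bytes) -> bool:
    """Check an upload against the shared hash database."""
    return fingerprint(upload) in known_hashes

print(should_remove(b"known-prohibited-video-bytes"))  # True
print(should_remove(b"ordinary-cat-video-bytes"))      # False
```

Sharing fingerprints rather than the content itself is what makes the scheme workable across competing platforms: no company has to hand over user data, only short digests of material already judged to be prohibited.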
In June 2017, these four companies announced the creation of the Global Internet Forum to Counter Terrorism, an information-sharing platform among tech giants with the aim of making their services inhospitable to violent extremists. In April 2017, Facebook took action against or removed 30,000 fake accounts from its site in France leading up to the French presidential election. Twitter claims to have removed 235,000 accounts for promoting terrorism in the first six months of 2016. Some politicians argue that more could be done. The photo-sharing platform Instagram launched a keyword moderation tool that prevents abusive comments from being posted and curbs the effectiveness of online trolling by automatically hiding comments that contain inappropriate and/or offensive words as pre-determined by the account holder. A new extension for the Google Chrome web browser, First Draft NewsCheck, helps users authenticate images and videos and share their findings with other users. Google is also collaborating with YouTube on a program called the Redirect Method to target aspiring Daesh recruits and ultimately dissuade them from joining the group. Using keywords and phrases that people attracted to Daesh commonly search for, this program redirects users to Arabic- and English-language YouTube clips such as testimonials from former extremists, imams denouncing Daesh’s corruption of Islam, and clips depicting the dysfunctional nature of Daesh’s so-called Caliphate.
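A keyword moderation tool of the kind described above can be approximated in a few lines. This is a simplified sketch with an invented word list; the essential point is that the blocked terms are supplied by the account holder, not by the platform.

```python
import re

def hide_comment(comment: str, blocked_words: set) -> bool:
    """Return True if the comment should be hidden from the account's page."""
    # Tokenize to whole words so that blocking "cat" does not hide "category".
    tokens = re.findall(r"[a-z']+", comment.lower())
    return any(tok in blocked_words for tok in tokens)

blocked = {"idiot", "loser"}  # pre-determined by the account holder
print(hide_comment("You are such an idiot!", blocked))  # True
print(hide_comment("Great photo, love it!", blocked))   # False
```

Real moderation tools layer many refinements on top of this (misspellings, emoji substitutions, multilingual lists), but the account-holder-configured filter is the core mechanism.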
While social media companies are taking some actions to remove terrorist-related content, there is growing pressure on them to do more in this area. In May 2017, the Home Affairs Select Committee of the British Parliament published a report which said that social media firms are “shamefully far” from tackling illegal and dangerous content and repeatedly “failing to remove illegal content when asked to do so.” The Committee urged the British government to consider requiring social media firms to contribute to the cost of the police’s Counter Terrorism Internet Referral Unit as well as imposing “meaningful fines” on companies which fail to remove illegal content within a strict timeframe. Reportedly, the UK and France are already working on policies to create a new legal liability for tech companies that fail to take action against unacceptable content. In June 2017, German legislators passed the Network Enforcement Act (popularly known as the Facebook law), which fines social media and internet technology companies up to EUR 55 million if they do not remove malicious content within 24 hours of it being posted.
Despite all this activity, there remains uncertainty about the effectiveness of these new policies. For instance, some are skeptical of industry information-sharing platforms, noting that social media firms remain competitors and there is no commercial incentive for them to share information. As well, free speech advocates such as Joe McNamee, executive director of European Digital Rights, are concerned about proposals that give private companies the discretion and responsibility for deciding what content is good for the public interest; they believe that such initiatives could backfire. Social media moderators may also lack the expertise required to determine whether or not they are dealing with terrorists. For instance, Facebook mistakenly censored a group of supporters of Chechen independence—Independence for Chechnya!—labeling these government dissidents as terrorists.
V. Conclusions and Recommendations
Like every major technological invention, the explosion of social media presents both challenges and opportunities. Hostile non-state actors and aggressive authoritarian states have shown a remarkable ability and willingness to exploit this new medium to pursue their agenda. The Euro-Atlantic community’s response so far can be described as haphazard, uncoordinated and irresolute. To a degree, this has to do with ethical and legal constraints pertaining to democratic societies. Nevertheless, there are a number of steps that NATO member states should seriously consider in order to better adapt to the new realities of the Information Age.
The general public, and especially those in younger generations, need to be taught to be cautious about manipulation on social media. Techniques are being developed to recognize the use of trolls and bots, and these techniques should be widely shared—much like the efforts underway in Swedish primary schools, where improving digital competence includes teaching children how to differentiate between reliable and unreliable sources. With respect to protecting the electoral process, governments, political parties, and electoral commissions should study best practices such as the approach used by France’s new president, Emmanuel Macron, whose skilled technical team thwarted the Kremlin’s attempts to harm his campaign. Social media users should also be familiar with security measures to protect their private information. Schools and the mainstream media should promote the value of genuine, fact-based debate and critical thinking, and encourage social media users to come out of their virtual bubbles, expand their interactions on social media and engage in constructive exchanges with people holding different views.
In the age of information overflow, people will continue to look for trusted information sources. Responsible media can remain competitive, provided that it embraces innovative technological solutions to help assess the veracity of social media messages with “breaking news” potential. For instance, the UK-based international news agency Reuters developed an algorithm based on how many people follow the source of the news and the structure of messages themselves. This gives Reuters enough confidence to tweet a breaking news story itself, thereby staying relevant in the fast-paced information environment. As Jamie Shea, NATO Deputy Assistant Secretary General for Emerging Security Challenges, puts it: “The [traditional] media must not be bullied into silence but focus on traditional reporting and fact checking. A disoriented public will turn back to quality journalism— provided it still exists. Governments must empower press councils to enforce objective standards in the media by exposing and penalizing outlets that deliberately convey fake news.”
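Reuters has not published its algorithm, so the sketch below is purely hypothetical: it combines the two signals the text mentions, how widely the source is followed and structural features of the message itself, with invented weights and thresholds.

```python
def veracity_score(source_followers: int, text: str) -> int:
    """Hypothetical 0-100 credibility score; all weights are invented."""
    score = 0
    if source_followers > 10_000:   # widely followed source
        score += 50
    if not text.isupper():          # not all-caps sensationalism
        score += 20
    if text.count("!") <= 1:        # measured tone
        score += 30
    return score

print(veracity_score(250_000, "Explosion reported near central station, police on scene."))
print(veracity_score(40, "SHOCKING!!! YOU WON'T BELIEVE THIS!!!"))
```

A score above a chosen threshold might give an editor enough confidence to run a breaking story; a production system would of course weigh many more signals, such as source history and cross-confirmation.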
NATO member states that have not yet done so should create or designate specific government units to conduct—in cooperation with social media companies—round-the-clock monitoring of detrimental uses of social media, exposing fake news and hostile propaganda, and countering them with facts. Academic research and think tanks specializing in online communications should be further supported in order to stay ahead of the curve. Existing NATO and EU capabilities such as NATO’s Public Diplomacy Division and the EU’s East Stratcom Task Force should be provided with additional financial and technological capabilities as well as human resources to continue providing credible online responses as often as possible (even if matching the speed of fake news reporting might never be feasible). Policy towards classified intelligence information should be revisited to allow public diplomacy officers to use less sensitive information, including satellite imagery, in order to refute disinformation.
Institutions should routinely revisit their social media policies, adjust the content and the format of their communications to the needs of mobile users (messages should be short, coherent, graphic, targeted and numerous), and incorporate social media aspects in training and exercises for their personnel. Defensive measures to protect the identity and home addresses of soldiers’ families should be put in place across the Alliance. In military headquarters, a capacity to utilize social media should be built into every level of command rather than reserved exclusively for public affairs and intelligence officers. With due caution, social media and messaging platforms could offer convenient and user-friendly options for command and control—the FBI-Apple decryption dispute suggests that commercial security protocols are at least as efficient as many governmental ones (Tunnicliffe & Tatham, 2017).
In addition to the heightened social media presence of those spreading a democratic, moderate and facts-based narrative, certain restrictive measures are also necessary to curtail the social media activity of terrorists and state-sponsored trolls. As the RAND Corporation has warned, “don’t expect to counter the firehose of falsehood with the squirt gun of truth” (Paul and Matthews). Cooperation with the social media industry to remove extremist content, hate speech and fake news from online platforms should continue, and the most influential information warriors, for instance Russia’s chief propagandists, should be subjected to Western sanctions.
In the case of Daesh’s core-periphery social media structure, NATO StratCom experts suggest focusing on removing entire clusters of accounts associated with Daesh, whether they are active or inactive, in order to prevent idle accounts taking over propaganda broadcasting when the active ones are closed. This approach would increase the incremental transaction costs for terrorists’ social media activities, forcing them to continually rebuild their infrastructure from the ground up (Shaheen). These activities of security services also need to be better coordinated across the Euro-Atlantic community.
Since most social media tools are owned by private, multinational companies, cooperation with these companies needs to improve. National measures to take down unlawful content are often ineffective because, in most cases, this content is hosted beyond national borders. It is therefore important to incentivize industry to voluntarily develop and use anti-trolling and fact-checking software and to step up network monitoring. To pre-empt excessive governmental regulation of the cyber domain, it would be preferable if social media companies were to adopt strict internal policies themselves. Social media companies should also continue revisiting some of their newly created tools for identifying harmful content to make sure they are not counter-productive, as well as adapting algorithms to boost good investigative journalism rather than sensationalist headlines. While demanding that social media platforms such as Twitter and Facebook assume greater responsibility for removing terrorist messaging and fake news, Western governments should do so in a constructive and cooperative manner. They must take into account the fact that Western companies do not have a monopoly over social media, and that users can quickly migrate to other platforms, such as the acclaimed China-based WeChat (although it is currently mainly tailored to the Chinese market). Governments should also help train social media moderators to increase their competence in recognizing terrorist and extremist content and activities.
Civil society is a powerful ally of democratic governments in fighting extremism and fake news. Support for grassroots initiatives such as Stopfake.org (to expose the Kremlin’s fake news) and the mobilization of credible local leaders as well as “elves” (the volunteer hunters of trolls) could give Western societies the edge in the information space.
While the West may have invented social media, nothing in their genesis promised that these networks or their users would adopt the best of Western values. Countering these new threats should be elevated to a high priority among NATO member states. Terrorist and other hostile uses of social media have already resulted in the loss of human life, and have threatened to weaken and divide the Western world. Yet, it is important for the Euro-Atlantic community to maintain the moral high ground in social media use and to refrain from using the methods of its unscrupulous opponents. Openness, pluralism and inclusion are key to separating truth from falsehood. The author hopes that this report will contribute to a growing realization of the magnitude and importance of this challenge.
 This report was produced by the Sub-Committee on Democratic Governance under the Committee on Democratic Governance, NATO Parliamentary Assembly, with Jane Cordy serving as lead rapporteur. Jane Cordy, a member of the Senate of Canada, has served as vice president of the NATO PA and is currently vice-chair of the Committee on Democratic Governance. This report was originally approved for publication by the NATO PA on October 7, 2017. We at JPRI are grateful for permission to reprint it here. The report has been lightly edited—primarily to reformat the introduction and bibliography, and to condense endnotes.
“Social media” are defined by the following characteristics: users create personal profiles/accounts, making them completely or partly public, and user profiles and the content they generate are networked. Various social media platforms have their own specificities. For instance, Twitter focuses on short messages, Instagram specializes in pictures and videos, and LinkedIn in professional/career information. Facebook is the most comprehensive platform. Some messaging platforms such as WhatsApp are also referred to as social media, although they are mostly used for chatting and file exchanges among small groups of people, often between two users. In 2005, only 5% of adults in the United States used at least one of these platforms; by 2011, that share had grown to 50%, and it currently stands at almost 70%. Some 88% of young adults (ages 18-29) in the United States are on Facebook. Globally, there were about 2.7 billion social media users in January 2017 (37% of the world's population), an increase of almost half a billion from January 2016. Facebook alone has almost 2 billion users. Interestingly, the fastest growth is in developing countries.
Daesh is an acronym derived from the terrorist organization's original Arabic name, variously translated as “Islamic State of Iraq and Syria,” “Islamic State of Iraq and al-Sham,” or “Islamic State of Iraq and the Levant.”
The founder of VKontakte, Pavel Durov, left the company in 2014, citing the difficulty of remaining true to “those principles on which our social network is based.” Freedom House is an independent watchdog organization dedicated to the expansion of freedom and democracy around the world. Freedom House uses a rating system to assess the political rights and civil liberties enjoyed by individuals in specific countries. The scores are assigned each year through evaluation by a team of in-house and external analysts and expert advisers from the academic, think tank, and human rights communities. The analysts use a broad range of sources, including news articles, academic analyses, reports from nongovernmental organizations, and individual professional contacts.
According to a recent study by the University of Oxford, around 45% of highly active Twitter accounts in Russia are bots.
Analysis and publications by the NATO Strategic Communications Centre of Excellence (StratCom) can be accessed at https://www.stratcomcoe.org/publications.
For instance, experts note that Facebook's early efforts to debunk disinformation by marking a story “disputed” seem to have driven more traffic to those stories. Facebook was urged to call a spade a spade and change that designation to “false.”
The architecture of Wikipedia is a case in point: its largely accurate content results from the fact that anyone can contribute material, and anyone can challenge that material by providing verifiable sources. Through this open process, numerous revisions reduce the biases, inconsistencies and inaccuracies in Wikipedia's content.
Adornato, Anthony C. “Forces at the Gate: Social Media’s Influence on Editorial and Production Decisions in Local Television Newsrooms.” Electronic News 10, no. 2 (June 2016): 87-104. doi: 10.1177/1931243116647768.
Bodine-Baron, Elizabeth, Todd Helmus, Madeline Magnuson, and Zev Winkelman. Examining ISIS Support and Opposition Networks on Twitter. Santa Monica, CA: RAND Corporation, 2016.
Calabresi, Massimo. “Inside Russia’s Social Media War on America.” Time, 18 May 2017.
Carafano, James Jay. “Twitter Kills: How Online Networks Became a National-Security Threat.” The Heritage Foundation Defense Commentary, 8 June 2015.
Duggan, Maeve, and Aaron Smith. The Political Environment on Social Media. Washington, D.C.: Pew Research Center, 25 October 2016.
“Extreme Tweeting.” The Economist, 19 November 2015.
Farwell, James P. “The Media Strategy of ISIS.” Survival: Global Politics and Strategy 56, no. 6 (November 2014): 49-55.
Giles, Keir. The Next Phase of Russian Information Warfare. Riga, Latvia: NATO Strategic Communications Centre of Excellence, 2016.
Gottfried, Jeffrey, and Elisa Shearer. News Use Across Social Media Platforms 2016. Washington, D.C.: Pew Research Center, 26 May 2016.
Gregory, Paul Roderick. “Under Russia’s New Extremism Laws, Liking My Writings On Ukraine Could Mean Jail Terms.” Forbes, 29 August 2016.
Guilbeault, Douglas, and Samuel Woolley. “How Twitter Bots Are Shaping the Election.” The Atlantic, 1 November 2016.
Inkster, Nigel. “Information Warfare and the US Presidential Election.” Survival: Global Politics and Strategy 58, no. 5 (September 2016): 23-32.
“Israel Is Using Social Media to Prevent Terrorist Attacks.” The Economist, 18 April 2016a.
Lange-Ionatamishvili, Elina, and Sanda Svetoka. “Strategic Communications and Social Media in the Russia-Ukraine Conflict.” In Cyber War in Perspective: Russian Aggression Against Ukraine, edited by Kenneth Geers, 103-111. Tallinn, Estonia: NATO Cooperative Cyber Defence Centre of Excellence, 2015.
Lee, Timothy B. “Facebook’s Fake News Problem, Explained.” Vox, 16 November 2016.
Lynch, Marc. “After Egypt: The Limits and Promise of Online Challenges to the Authoritarian Arab State.” Perspectives on Politics 9, no. 2 (June 2011): 301-310.
Margetts, Helen, Peter John, Scott Hale, and Taha Yasseri. Political Turbulence: How Social Media Shape Collective Action. Princeton, NJ: Princeton University Press, 2017.
Matejic, Nicole. “Content Wars: Daesh’s Sophisticated Use of Communications.” NATO Review Magazine, 2016.
Morelli, Vincent L., and Kristin Archick. “European Union Efforts to Counter Disinformation.” CRS Insight, 1 December 2016.
NATO StratCom. Internet Trolling as a Hybrid Warfare Tool: The Case of Latvia. Riga, Latvia: NATO Strategic Communications Centre of Excellence, 2015.
NATO StratCom. Social Media as a Tool of Hybrid Warfare. Riga, Latvia: NATO Strategic Communications Centre of Excellence, 2016a.
NATO StratCom. Daesh Recruitment: How the Group Attracts Supporters. Riga, Latvia: NATO Strategic Communications Centre of Excellence, 2016b.
NATO StratCom. New Trends in Social Media. Riga, Latvia: NATO Strategic Communications Centre of Excellence, 2016c.
Newman, Nic, with Richard Fletcher, David A. L. Levy, and Rasmus Kleis Nielsen. Reuters Institute Digital News Report 2016. Oxford, UK: University of Oxford Reuters Institute for the Study of Journalism, 2016.
Nissen, Thomas Elkjer. #TheWeaponizationOfSocialMedia: @Characteristics_of_Contemporary_Conflicts. Royal Danish Defence College, 2015.
Paul, Christopher, and Miriam Matthews. The Russian “Firehose of Falsehood” Propaganda Model: Why It Might Work and Options to Counter It. Santa Monica, CA: RAND Corporation, 2016.
Pettigrew, Erin. “How Facebook Saw Trump Coming When No One Else Did.” Medium, 9 November 2016.
Polonski, Vyacheslav. “Impact of Social Media on the Outcome of the EU Referendum.” In EU Referendum Analysis 2016: Media Voters and the Campaign, edited by Daniel Jackson, Einar Thorsen, and Dominic Wring, section 63. Bournemouth, UK: Bournemouth University Centre for the Study of Journalism, Culture and Community, July 2016.
Rettman, Andrew. “Russian Military Creates ‘Information Force,’” EU Observer, 23 February 2017.
Ruane, Kathleen Ann. The Advocacy of Terrorism on the Internet: Freedom of Speech Issues and the Material Support Statutes. Congressional Research Service, 8 September 2016.
Schmitt, Eric. “U.S. Intensifies Effort to Blunt ISIS’ Message.” The New York Times, 16 February 2015.
Shaheen, Joseph. Network of Terror: How Daesh Uses Adaptive Social Networks to Spread Its Message. Riga, Latvia: NATO Strategic Communications Centre of Excellence, November 2015.
Silverman, Craig, and Lawrence Alexander. “How Teens In The Balkans Are Duping Trump Supporters With Fake News.” BuzzFeed News, 4 November 2016.
Thompson, Alex. “Journalists and Trump Voters Live in Separate Online Bubbles, MIT Analysis Shows.” VICE News, 8 December 2016.
Tunnicliffe, Ian, and Steve Tatham. Social Media—The Vital Ground: Can We Hold It? Carlisle, PA: U.S. Army War College, 2017.
“Tweetaganda.” The Economist, 10 September 2016b.
Wakefield, Jane. “Social Media ‘Outstrips TV’ as News Source for Young People.” BBC News, 15 June 2016.