Disinformation, Cybersecurity, and Online Influence

On October 10, 2019, the United States Embassy hosted a lecture and discussion entitled “Disinformation, Cybersecurity, and Online Influence: Lessons from Eastern Europe”.

My notes are intended as an introduction, with pointers to the appropriate sources for more detailed analysis.

 

Comments on the Russian cyberwarfare campaign

The discussion began with a substantial introduction and explanation from Nina Jankowicz (Wilson Center, Washington DC), who studies the intersection of democracy and technology in Central and Eastern Europe.

Firstly, Jankowicz distinguished the Russian population from the Russian government. She has no doubt that the Russian government is engaged in cyberwarfare against the West. This campaign is not tied to any particular ideology: the government will exploit groups from anywhere on the ideological spectrum (for example, through so-called GONGOs, government-organized NGOs) whenever they can be used to destabilise Western democracies.

Russian troll farms use fake experts to lend credence to their claims. Their output also includes malinformation, that is to say, information which is true but spread with malign intent, as well as the promotion of fringe views which are not necessarily pro-Kremlin but which help to destabilise the target state, to sow suspicion of institutions and conventional news outlets, or to encourage distrust in the democratic process.

She added that similar techniques have been used in the West, most notably in the American presidential election campaign and the UK Brexit referendum.

The 2016 Netherlands referendum on the EU-Ukraine Association Agreement was another example of Russian involvement: trolls worked to undermine the Ukrainian position and to spread misinformation. Wild claims that the deal would mean Dutch soldiers having to fight in Ukraine, or that Ukrainian neo-Nazis would be free to attack the Netherlands, were widely circulated.

 

An inadequate response to the cyber-security threat

The current approach in the West is focused on knocking down trolls when they pop up. This “whack-a-troll” approach is clearly ineffective and inadequate:

Tackling the problem is challenging and needs a long-term strategy, building education in digital and media literacy and thinking skills from an early age. Many of the appeals, however, are emotional and not open to rational analysis; here people need to understand how the system works and how they are being manipulated.

Worldwide, Facebook is the major source of the problem. Facebook’s users are not its clients; its clients are the advertisers, and the users are merely a resource. This will not change unless Facebook’s business model changes.

Facebook identified the potential market for political and commercial exploitation of its “community of likes”:

“How Cambridge Analytica turned Facebook ‘likes’ into a lucrative political tool”, by Carole Cadwalladr and Emma Graham-Harrison, The Guardian, 17 March 2018

These techniques were used extensively during the Brexit campaign in the UK, and in the Ukrainian presidential elections as well. While Facebook staffed a whole “war room” for the EU elections, it did not have a single representative in Ukraine, its third fastest-growing market worldwide; that alone indicates how serious the company is about changing its business model towards more ethical behaviour.

So far there is an insufficient intellectual and ethical foundation for monitoring the social consequences of innovation, which is too often erroneously confused with progress.

Media and digital literacy are indispensable in countering disinformation and cyberwarfare. They enable individual media consumers to:

  • check authors and sources, and verify claims against other sources;
  • check text by pasting an extract into a search engine, which will turn up instances of plagiarism or known false information (see the sketch after this list);
  • check pictures with a reverse image search;
  • practise social media scepticism: be aware of which of your friends post unchecked stories and fake news, and do not “like”, re-post, or otherwise respond to unchecked stories yourself;
  • practise safe surfing and cyber hygiene: always use two-factor authentication, and use a password manager to generate and store hard-to-crack passwords;
  • think and check before sharing: never share something you have not read, and never share material whose source you have not verified.
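
As a small illustration of a few of these habits, here is a minimal Python sketch (my own, not from the talk): it builds an exact-phrase search URL for a pasted text extract, builds a reverse-image-search URL, and generates a hard-to-crack random password of the kind a password manager would store. The DuckDuckGo and TinEye URL formats are assumptions about two particular services; any comparable engine would do.

    import secrets
    import string
    from urllib.parse import quote_plus

    def phrase_search_url(extract: str) -> str:
        # Quoting the extract asks the engine for an exact-phrase match,
        # which quickly surfaces copies of the same text elsewhere.
        # (DuckDuckGo's query format is assumed here as one example engine.)
        return "https://duckduckgo.com/?q=" + quote_plus(f'"{extract}"')

    def reverse_image_search_url(image_url: str) -> str:
        # TinEye accepts an image URL as a query parameter (format assumed).
        return "https://tineye.com/search?url=" + quote_plus(image_url)

    def random_password(length: int = 20) -> str:
        # The secrets module draws from a cryptographically strong source,
        # unlike the random module, so it is suitable for passwords.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    if __name__ == "__main__":
        print(phrase_search_url("Dutch soldiers will have to fight in Ukraine"))
        print(reverse_image_search_url("https://example.com/suspect.jpg"))
        print(random_password())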

 

Future threats of surveillance capitalism

Secondly, future threats were explored at the meeting by Christopher Zotter, who writes for the Austrian current-affairs weekly Profil, and by Alexander Schatten of SBA Research, an Austrian research centre for information security.

(1) We were shown historic examples of the manipulation of visual evidence from the pre-digital era, in which photographs were retouched to create different impressions. Those pictures can be found on:

(2) We were warned that new threats arise from surveillance, which enables the prediction and manipulation of individual behaviour by politicians and marketers alike. Social media is a form of surveillance capitalism in which consumer data are treated as a resource for generating more profit.

People are willingly (though perhaps not knowingly) installing their own surveillance kit. Amazingly, devices like Amazon’s Alexa, or new smart TVs whose built-in microphones are enabled by default and can listen to everything you say, are being accepted into people’s homes. “What can go wrong?” we were asked with grim irony.

(3) We were shown examples of real-time re-enactment (“deep video portraits”), where software allows videos to be manipulated to create false visual evidence. A person can be made to appear to say anything, and these “deep fakes” can be hard to identify. As the technology becomes commercially available, we shall need new techniques to distinguish real videos from deep fakes.

“It’s Getting Harder to Spot a Deep Fake Video”

Overall, much food for thought and a warning; much here is well worth following up.

 


 
