Facebook just removed three more misinformation networks from its platforms for engaging in foreign interference in African elections, this time on behalf of Russian financier Yevgeniy Prigozhin, who was previously indicted by the US Justice Department for meddling in the 2016 US elections.
Each of these networks created webs of inauthentic accounts to mislead others about who they were and what they were doing. The networks included:
- 66 Facebook accounts
- 83 Facebook Pages
- 37 Facebook Groups
- 12 Instagram accounts
Originating in Russia, the networks spent over $87,000 on Facebook ads between April 2018 and October 2019, and focused on Madagascar, Central African Republic, Mozambique, DR Congo, Côte d’Ivoire, Cameroon, Sudan and Libya. The networks succeeded in accumulating:
- 1,144,000 accounts following one or more Pages
- 1,750 people following one or more Groups
- 32,850 people following one or more Instagram accounts
Researchers at the Stanford Internet Observatory said the networks used a variety of techniques across the African countries. Some accounts supported a specific party or candidate, while others supported Russian deals for gold, diamonds and other natural resources. Russia appears to want to groom a new generation of African leaders and undercover agents, and push out former colonial powers such as France and the UK.
Previous Facebook-Based Meddling in African Elections
Earlier this year, Facebook announced that nefarious actors had spent $800,000 on Facebook to influence African elections. It removed 265 fake Facebook and Instagram accounts, Facebook Pages, Groups and events involved in a coordinated attempt to influence political events and elections in Nigeria, Senegal, Togo, Angola, Niger, Tunisia, Latin America and Southeast Asia.
That time, Facebook said that an Israeli commercial entity, Archimedes Group, which openly advertised its disinformation campaign services, used its network of fake Facebook accounts to run its Pages, disseminate its content, and artificially increase engagement.
Russian IRL Madagascar Election Manipulations
Russians are not just using Facebook to try to manipulate elections. The New York Times recently exposed an overt attempt at influencing Madagascar’s elections. Russian operatives used Facebook, of course, but they also paid people to show up to rallies or to run or not run for president. From the NY Times:
In some vital ways, the Madagascar operation mimicked the one in the United States. There was a disinformation campaign on social media and an attempt to bolster so-called spoiler candidates. The Russians even recruited an apocalyptic cult leader in a strategy to split the opposition vote and sink its chances.
In a comic twist of fate, Russia was not able to win with its initial candidate, who lost an early vote, so it switched to a stronger candidate who eventually did win.
And why did Russia care about Madagascar? It seems that Russian financier Yevgeniy Prigozhin has an interest in a chromium mine there, and this was his way of keeping control of his asset.
What to Do About Facebook-Based Misinformation?
Social media misinformation campaigns are not new, nor are they confined to African countries. Russia used fake Facebook accounts in the US elections and the UK’s Brexit vote to misinform voters. So what can we do about fake news on fake Facebook accounts?
1. We can support better government regulation.
For example, the UK Government’s Online Harms White Paper recommends a package of online safety measures, including legislative and non-legislative measures to make companies more responsible for their users’ safety online, especially children and other vulnerable groups.
2. We can stop promoting Facebook in ICT4D
In addition to its fake account problem, Facebook has ongoing data privacy failures – over 21 major privacy scandals in 2018 alone. This brings to the fore another way we can combat what has become a liability in our work: we can stop promoting Facebook in ICT4D.
3. We can demand Facebook police its content
Recently Facebook announced that it would not restrain any political speech – even when that speech was obviously spreading misinformation. At the same time, Twitter took the opposite approach and banned all political ads. Which policy do you think is better for democracy?
4. We can just accept Facebook with all its issues
Finally, there is still one critical aspect that underlies this whole fake Facebook account issue. As Steve Song commented (on Facebook, of course), Facebook and its many subsidiaries are where most of our constituents start conversations and reach new constituents.
That’s why – for better or worse – many development organizations have Facebook chatbots, Facebook Groups, and a WhatsApp engagement strategy.
Very interesting as usual!
You just butchered (IMHO) the poor New York Times TWICE 🙂
“Hew York Times” […] – also not sure about “operativers” […] “NY Tomes”
Thankfully, you don’t read ICTworks for the sparkling spelling and gregarious grammar.
This is a really important issue – thanks for posting on it! One thing I would add is a need to improve critical digital literacy. It’s hard not to react to dramatic headlines, but searching for the confirmed evidence in each story and thinking critically about who wrote the piece and who benefits from it are essential skills we value too little. Furthermore, in this polarized political context, we should promote self-reflection – developing awareness of our own biases – and try not to immediately shut off when somebody from the other side explains their views.