A year before the US presidential election, the world's largest social media companies are still failing to prevent manipulation of their platforms.
Researchers from the NATO Strategic Communications Centre of Excellence conducted a four-month experiment on Facebook, Twitter, YouTube and Instagram, beginning in May.
They purchased social media engagement on 105 different posts across the four platforms from manipulation service providers (MSPs), companies that customers pay for clicks to inflate their social media presence.
For just 300 euros, NATO StratCom bought 3,530 comments, 25,750 likes, 20,000 views and 5,100 followers across the four platforms.
The researchers were able to identify the accounts – 18,739 in total – used to deliver the purchased interactions. This in turn allowed them to see which other pages these inauthentic accounts interacted with on behalf of other customers.
The results of the experiment are striking: four weeks after the purchase, four out of five of the purchased engagements were still online, and three weeks after a sample of fake accounts was reported to the companies, 95% of those accounts were still active.
The results, contained in a report released today, were shared in advance with a small number of media outlets, including BuzzFeed News. They suggest that malicious and inauthentic activity enabled by MSPs often goes undetected, and raise significant concerns that attempts by malicious state and non-state actors to interfere in democratic processes are not being effectively exposed and combated.
"Social media manipulation is the new frontier for opponents who seek to influence elections, polarize public opinion, and derail legitimate political debate," the report says. NATO's Strategic Communications Centre of Excellence is a NATO-accredited international military organization; it is not part of the NATO command structure.
The vast majority of the interactions from the fake accounts identified by the researchers involved commercial and brand content. However, StratCom observed that the same accounts also engaged with 721 political pages, including 52 official government profiles and the accounts of two heads of state.
Experts fear that trends similar to those seen in Europe can now be observed in the US elections. Trevor Davis, a professor at George Washington University's Institute for Data, Democracy and Politics, told BuzzFeed News: "The accounts identified and flagged as fraudulent during the European Parliament elections have now been repurposed and redeployed for the 2020 US presidential election, and specifically the Democratic primaries."
Davis added: "This does not appear to be on behalf of a particular campaign. The goal could simply be to sow distrust and division."
NATO StratCom analysts also found interactions such as likes on the pages of leaders of major countries, political parties in the European Parliament, and individual candidates at all levels of elections across Europe, as well as on political pages in Russia, Ukraine and India. The researchers also identified political accounts focused on Armenia, Georgia, Israel, Taiwan, and Tunisia, suggesting that the use of MSPs is a global problem.
It is not known who is behind the interactions with these accounts. The owners of the pages being boosted could themselves pay MSPs for the engagement, but it could also come from supporters, or even from opponents trying to smear a politician or political group.
Yuri Kadobnov / Getty Images
US President Donald Trump and Russian President Vladimir Putin shake hands at a joint press conference in Helsinki in July 2018. Russia used Facebook to influence the 2016 elections, former special counsel Robert Mueller concluded.
MSPs form the heart of a growing cottage industry, based primarily in Russia, that sells clicks, comments, and other forms of social media engagement. Their work is not technically illegal, and they operate openly.
The NATO StratCom report exposes what it calls the "black market" of social media manipulation.
The researchers identified hundreds of service providers, virtually all of Russian origin. Their offerings range from bots that view videos and retweet posts on Twitter to more elaborate accounts that require direct human involvement and can stay online for years before they are discovered.
As part of the experiment, the researchers also created their own inauthentic accounts, which they used to upload the content to be boosted by the MSPs. The report finds that the platforms are better at detecting attempts to create fake accounts: Facebook blocked 80% of the accounts created by NATO StratCom, Twitter 66% and Instagram 50%. YouTube, however, blocked none of the profiles.
The four platforms also varied significantly in removing inauthentic comments. YouTube was the only service to correct manipulated view counts, while Instagram removed only 5% of the comments and did not correct fake likes or views.
YouTube, which is owned by Google, was also the most expensive platform to buy engagement on, which the researchers took as a sign that it is harder to manipulate. Ten euros bought 3,267 views on YouTube, compared with more than 13,000 views on Instagram.
Twitter proved the most effective platform at countering misuse of its services, because the purchased engagement took longer to appear on the site.
Ultimately, the MSPs delivered the services the NATO StratCom researchers had purchased. Despite specific differences between the social media platforms, all four performed poorly overall against the researchers' seven criteria.
The majority of the inauthentic accounts identified during the test were found on Instagram. But Instagram, Facebook, YouTube, and Twitter all largely failed to remove the specific accounts identified by NATO's StratCom researchers. One hundred such accounts were reported to each company, and only 4.5% were removed: Facebook removed 12, YouTube none, and Twitter and Instagram three each.
The report concludes that Facebook, Instagram, Twitter and YouTube are still not adequately addressing improper behavior on their platforms and the threat posed by the growing manipulation industry.
Existing approaches such as self-regulation do not work effectively, the report says.
A Facebook spokesman told BuzzFeed News: "Fake engagement tactics remain a challenge for the entire industry. We make massive investments every day to find and remove fake accounts and engagement. However, this is only part of our effort to prevent coordinated inauthentic behavior, which has led to the removal of over 50 sophisticated networks worldwide in the past year." Facebook has filed several lawsuits against companies that sell inauthentic engagement on its platforms.
Yoel Roth, head of site integrity at Twitter, said: "Creating fake accounts, paying for engagement, and deliberately manipulating our service is prohibited. This is an ever-evolving challenge, but we have invested significant technical resources in this problem and are committed to continued improvement. In addition, we fully disclose all incidents of state-backed operations in a public archive and publish our platform manipulation removal figures every six months in the Twitter Transparency Report."
A YouTube spokesman said: "We take abuse of our systems, including attempts to artificially inflate video view counts, very seriously. YouTube has developed and invested in proprietary technology for well over a decade to prevent the artificial inflation of video views. While no anti-spam system will ever be perfect, our teams work very hard to keep spam views to less than one percent of all views. We have additional safeguards in place to mitigate the impact of such views on all of our systems. We also periodically audit and validate the views that videos receive and, where necessary, remove fraudulent views and take other action against offending channels."
The full NATO StratCom report can be found here.