The same sorts of organizations that once made their money performing “black SEO”—using fraudulent means to raise paying customers’ search engine ranks, often for illicit reasons—are now diving into a whole new sort of online manipulation. Researchers at security threat tracking company Recorded Future have found companies selling disinformation campaign capabilities similar to the ones used by Russian “troll factories” during the 2016 US presidential campaign and other state-sponsored information operations.
In a report issued this month, researchers from Recorded Future’s Insikt Group describe how they engaged two providers of advertising disinformation services to assess the threat posed by such operations. Both disinformation operators were advertising services on Russian-language underground forums alongside purveyors of hacking tools and other criminal activities. But one of the services also has a public Internet presence, offering less illicit marketing services through an open website.
“Both of these companies, their bread and butter is negative takedown stuff—discrediting your opponent or competitor,” Recorded Future Director of Analysts Roman Sannikov told Ars in an interview. “But they can also promote companies, using the same networks of social media accounts.”
To gain greater insight into how these services operate, Sannikov and others at Recorded Future approached the groups advertising trolling services as prospective customers. One group was engaged, on behalf of the fictional company's owner, to create positive social media spin for a fictitious company Recorded Future had created, purportedly based in England. The other was engaged by a fictional competitor to attack the company on social media and besmirch its heretofore nonexistent reputation.
“The [initial] investigation only took about six weeks,” Sannikov said, “and most of that was trying to milk information from the actors about their activities.” Once they were paid and tasked, Sannikov explained, “it only took seven to ten business days” before the work was done.
Make me some friends
The group tasked with creating a positive social media presence advertised a menu of services.
Prices for placing articles varied based on the site targeted. The group's contact, referred to as "Raskolnikov" in Recorded Future's report on the research, said that he could publish as many articles as the researchers wanted—and even claimed to be able to place stories in the Financial Times and on Buzzfeed (though with a very high price tag).
When the researchers pulled the trigger, Raskolnikov quickly demonstrated the group’s proficiency. “When we created this fake persona of this company, we initially tried to create a social media presence, and the social media platforms were essentially blocking the profiles we wanted to create,” Sannikov noted—because with rising concerns about disinformation operations, the social network operators have made it much more difficult to set up a business profile. “However,” he said, “the company that we hired to promote us was able to have a social media presence for us with likes and followers and all sorts of stuff within three or four days. It was surprising to us how quickly they could do that and make us look like an established company.”
The positive spin campaign also managed to gather more than 100 followers for the fictional company on each platform, including accounts with images and names from the fake corporation’s website. Based on the comment content, it appeared that the followers were a mix of bots or trolls spreading disinformation and real people who had been drawn into the fakery.
The effort also included the creation of generic news articles about how great the fictional company was. The researchers opted for two articles at lesser-known outlets—and within two weeks, after some rewrites because of the non-native English used in the articles, the fake company had articles placed. One was “a less-established media outlet,” according to the report, while the other was “a very reputable source that had published a newspaper for nearly a century.”
The total price tag for the positive spin effort was $1,850.
Let the hate flow
The second group, engaged through a contact the researchers labeled "Dr. Zhivago," appeared to be an old hand at the disinformation game, with a more nuanced approach to mounting campaigns and a very precise pricing scheme.
With the fictional company now enjoying a positive social media presence, Sannikov's team engaged Dr. Zhivago to tear it down by questioning its business practices. Dr. Zhivago told the researchers the full effect of the campaign would take a month or two to kick in because, as the researchers wrote, "a successful disinformation operation happens in phases by gradually introducing an intentionally false narrative in an organic manner."
The first stage focused on placing articles on news websites. The pricelist for targeted sites was divided into “low profile,” “medium profile,” and “top level” sites—with Reuters.com, Newsmax, and Mashable listed in the “top level” available resources. Once articles were posted, “aged” accounts—older social media profiles less likely to be flagged as fraudulent—would repost the stories, and then other accounts would comment and repost.
The total price tag for the negative campaign was $4,200—still well within the budget of individuals or businesses looking to smear a competitor.
Both disinformation operators offered to go even harder at targets. Dr. Zhivago offered to file a complaint with law enforcement claiming that the targeted (fake) company was involved in human trafficking. Raskolnikov was quick to offer negative services as well, including “takedown” operations to ruin the reputation of a competitor or “sink an opponent in an election,” among other things.
Sannikov said that the proficiency demonstrated by the two groups "shows that this is something they do a lot of… they did this so efficiently and delivered on everything they claimed, which indicates this isn't [the] first time they've done it." He couldn't say how widespread these types of services are. "Hopefully it isn't [widespread]," he said, "but we want to get information on these sorts of operations out there before this becomes the DDoS [Distributed Denial of Service attack] of the next generation."