Oct. 30, 2017
Russian Influence Reached 126 Million Through Facebook Alone

WASHINGTON — Russian agents intending to sow discord among American citizens disseminated inflammatory posts that reached 126 million users on Facebook, published more than 131,000 messages on Twitter and uploaded over 1,000 videos to Google’s YouTube service, according to copies of prepared remarks from the companies that were obtained by The New York Times.

The detailed disclosures, sent to Congress on Monday by companies whose products are among the most widely used on the internet, came before a series of congressional hearings this week into how third parties used social networks and online services to influence millions of Americans before the 2016 presidential election.

The new information goes far beyond what the companies have revealed in the past and underlines the breadth of the Kremlin’s efforts to lever open divisions in the United States using American technology platforms, especially Facebook. Multiple investigations of Russian meddling have loomed over the first 10 months of President Trump’s term, with one leading to the indictments on Monday of Paul Manafort, Mr. Trump’s former campaign chief, and others.

In its prepared remarks sent to Congress, Facebook said the Internet Research Agency, a shadowy Russian company linked to the Kremlin, had posted roughly 80,000 pieces of divisive content that was shown to about 29 million people between January 2015 and August 2017. Those posts were then liked, shared and followed by others, spreading the messages to tens of millions more people. Facebook also said it had found and deleted more than 170 accounts on its photo-sharing app Instagram; those accounts had posted about 120,000 pieces of Russia-linked content.

Previously, Facebook had said it identified more than $100,000 in advertisements paid for by the Internet Research Agency.

The Russia-linked posts were “an insidious attempt to drive people apart,” Colin Stretch, the general counsel for Facebook who will appear at the hearings, said in his prepared remarks. He called the posts “deeply disturbing,” and noted they focused on race, religion, gun rights, and gay and transgender issues.

Facebook, Mr. Stretch said, was “determined to prevent it from happening again.”

The new information also illuminated when Facebook knew there had been Russian interference on its platform. Several times before the election last Nov. 8, Facebook said, its security team discovered threats targeted at employees of the major American political parties from a group called APT28, which United States law enforcement officials have previously linked to Russian military intelligence operations.

Facebook cautioned that the Russia-linked posts represented a minuscule amount of content compared with the billions of posts that flow through users’ News Feeds every day. Between 2015 and 2017, people in the United States saw more than 11 trillion posts from Pages on Facebook.

Twitter, in its prepared remarks, said it had discovered more than 2,700 accounts on its service that were linked to the Internet Research Agency between September 2016 and November 2016. Those accounts, which Twitter has suspended, posted roughly 131,000 tweets over that period.

Outside of the activity of the Internet Research Agency, Twitter identified more than 36,000 automated accounts that posted 1.4 million election-related tweets linked to Russia over that three-month period. The tweets received approximately 288 million views, according to the company’s remarks.

Twitter noted that the 1.4 million Russia-linked election tweets represented less than three-quarters of one percent of all election-related tweets during that period.

Google, in its prepared statement, said it had also found evidence that the Internet Research Agency bought ads on its services and created YouTube channels to upload short videos about divisive social issues, including law enforcement, race relations and Syria.

Google said it had found 18 channels that were “likely associated” with the Russian agents and that posted political videos to YouTube. All told, those accounts — now suspended — uploaded more than 1,100 videos totaling 43 hours of content from 2015 through the summer of 2017. Google said those videos generally had very low view counts, adding up to about 309,000 views between mid-2015 and late 2016. Only 3 percent of the videos had more than 5,000 views, and there was no evidence that the accounts had targeted American viewers, the company said.

The internet search giant also confirmed earlier reports that the Internet Research Agency had purchased search and display ads from it. Google said the group had bought $4,700 in ads, but none of them had targeted users by their political leanings, a targeting option that Google added before the election.

Google had been investigating a separate $53,000 in purchases of ads with political material that were made from Russian internet addresses or buildings, but discovered that those were not related to the Kremlin.

“While we found only limited activity on our services, we will continue to work to prevent all of it, because no amount of interference is acceptable,” wrote Richard Salgado, Google’s director of law enforcement and information security, and Kent Walker, Google’s general counsel. The two men were scheduled to testify at separate congressional committees on Tuesday and Wednesday.

For Facebook, Google and Twitter, the discovery of Russian influence by way of their sites has been a rude awakening. The companies had long positioned themselves as spreading information and connecting people for positive ends. Now the companies must grapple with how Russian agents used their technologies exactly as they were meant to be used — but for malevolent purposes.

That has led to thorny debates inside the companies. For Facebook, the problem is less straightforward than finding Russia-linked pages and taking down content. Executives worry about how stifling speech from non-American entities could set a precedent on the social network — and how it could potentially be used against other groups in the future.

So Facebook has focused on the issue of authenticity — or the fact that the Russian agencies did not identify themselves as such — as a reason for taking down the accounts.

“Many of these ads did not violate our content policies,” Elliot Schrage, vice president of policy and communications at Facebook, said in a company blog post earlier this month. “That means that for most of them, if they had been run by authentic individuals, anywhere, they could have remained on the platform.”

Earlier this month, Senators Amy Klobuchar and Mark Warner introduced a bipartisan bill to require internet companies to identify those who paid for political ads on the tech companies’ platforms.

Facebook has been promoting its strengthened advertising disclosure policies as an attempt to pre-empt the bipartisan bill. Last week, Facebook began rolling out new features that provide insight into who is paying for ads, and it will maintain a publicly viewable database of ads purchased on the network.

The company is also stepping up its counterintelligence and security measures. Facebook has said it is working with Twitter, Google and other companies to spot sophisticated threats earlier, and will continue to coordinate with law enforcement when appropriate. The company said it shuttered 5.8 million fake accounts in October 2016, and removed 30,000 accounts attempting to influence the French elections this year.

Google also said it plans to increase its transparency for political ads. The company is working to issue an annual report about who is buying political ads and how much they are spending.

The company also said it planned to create a publicly accessible database of the election ads that ran on Google’s AdWords — for example, web search ads — and on YouTube. Google said it would identify the advertisers paying for political ads within a link accessible from the ad.

But Google said it did not intend to take any further action against RT, the state-backed Russian news channel, which has built a massive online audience through YouTube. The American intelligence community has described RT as the Kremlin’s “principal international propaganda outlet,” but Google said the organization had not violated any of its policies or misused the service.

Last week, by contrast, Twitter said it would ban RT and Sputnik, another Kremlin-backed news organization, from advertising on its service.