Videos pushing false claims of voter fraud and bogus Covid-19 cures have racked up millions of views on YouTube. Partisan groups posing as online news outlets have set up shop on Facebook. Foreign trolls are masquerading as American activists on Instagram to sow division around the Black Lives Matter protests.
Four years after an election in which Russia and some far-right groups unleashed a wave of false, misleading and divisive online messages, Silicon Valley is losing the war against online misinformation that could sway voters in November.
Social media companies are grappling with an avalanche of misleading and divisive messages from political parties, foreign governments and hate groups in the months leading up to this year’s presidential election, according to more than two dozen national security policymakers, misinformation experts, hate speech researchers, fact-checking groups and tech executives, as well as a POLITICO review of thousands of social media posts.
The tactics, many of which aim to deepen divisions among Americans already traumatized by a deadly pandemic and record job losses, echo years of efforts by the Russian government to stoke confusion around the U.S. presidential election. But this time the attacks are far more insidious and sophisticated, with harder-to-detect fakes, more countries pushing covert agendas and a flood of American groups copying their methods.
And some of the misleading messages have been amplified by major media outlets and top American politicians, including President Donald Trump. In one case last week, he used his social media accounts to claim, without evidence, that mail-in voting would create “the most inaccurate and fraudulent election in history.”
Silicon Valley’s efforts to keep up with the new forms of fakery have fallen short, according to researchers and some lawmakers. And the challenges keep mounting.
“November will be like the Super Bowl of disinformation tactics,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab in Washington, D.C., which tracks online falsehoods. “You name it, the American election is going to have it.”
Anger over the social media giants’ inability to win the whack-a-mole game against false information was a recurring theme at last week’s congressional hearing with tech’s top CEOs, where Facebook chief Mark Zuckerberg fended off complaints that his company has profited from misinformation about the coronavirus pandemic. A case in point, said House antitrust chair David Cicilline (D-R.I.), was the five hours it took Facebook to remove a Breitbart video falsely touting hydroxychloroquine as a cure for Covid-19.
The post had been viewed 20 million times and drawn more than 100,000 comments before it was removed, Cicilline said.
“Doesn’t that suggest, Mr. Zuckerberg, that your platform is so big that, even with the right policies in place, you can’t contain deadly content?” Cicilline asked.
The companies reject accusations that they have failed to combat misinformation, pointing to their efforts to remove or label false content, including posts about Covid-19, a politically charged public health crisis.
Since the 2016 election, Facebook, Twitter and Google have collectively spent tens of millions of dollars on new technology and personnel to track online falsehoods and stop them from spreading. They’ve issued policies against political ads that masquerade as regular content, updated internal rules on hate speech and removed millions of extremist and false posts so far this year. In July, Twitter banned thousands of accounts linked to the fringe QAnon conspiracy theory in the most sweeping action yet to stem its spread.
Google announced another effort Friday, saying that starting Sept. 1 it would begin penalizing websites that distribute hacked materials and advertisers who take part in coordinated disinformation campaigns. If those policies had been in place in 2016, advertisers would have been barred from posting screenshots of the emails that Russian hackers stole from Hillary Clinton’s campaign.
But despite being among the richest companies in the world, the internet giants still can’t monitor everything posted on their global networks. The companies also disagree about the scale of the problem and how to fix it, giving purveyors of misinformation room to exploit weaknesses in the platforms’ defenses.
Domestic flashpoints such as the Covid-19 health crisis and the Black Lives Matter movement have given bad actors even more targets for sowing division.
The difficulties are considerable: foreign interference campaigns have evolved, domestic groups are copying their techniques, and political campaigns have adapted their strategies.
At the same time, the social media companies face partisan scrutiny in Washington that gives their judgment calls about what to leave up or take down even greater political weight: Trump and other Republicans accuse the companies of systematically censoring conservatives, while Democrats criticize them for letting too many falsehoods circulate.
Researchers say it’s impossible to know how comprehensive the companies have been in removing bogus content because the platforms often put conditions on access to their data. Academics have had to sign non-disclosure agreements promising not to criticize the companies to gain access to that information, according to people who signed the documents and others who refused to do so.
Experts and policymakers warn that the tactics are likely to grow even more sophisticated in the coming months, including the possible use of so-called deepfakes, fake videos created with artificial intelligence, to produce realistic-looking footage that undermines the opposing side.
“As more data is accumulated, people are going to get better at manipulating communication to voters,” said Robby Mook, campaign manager for Hillary Clinton’s 2016 presidential bid and now a fellow at the Harvard Kennedy School.
Foreign interference campaigns evolve
Researcher Young Mie Kim was scrolling through Instagram in September when she came across a strangely familiar pattern of partisan posts across dozens of social media accounts.
Kim, a professor at the University of Wisconsin-Madison specializing in political communication on social media, noticed a number of the seemingly unrelated accounts using tactics favored by the Russia-linked Internet Research Agency, a group that U.S. national security agencies say carried out a multiyear misinformation effort aimed at disrupting the 2016 election — in part by stoking existing partisan hatred.
The new accounts, for example, pretended to be local activists or politicians and targeted their highly partisan messages at battleground states. One account, called “iowa.patriot,” attacked Elizabeth Warren. Another, “bernie.2020_,” accused Trump supporters of treason.
“It stood out immediately,” said Kim, who tracks covert Russian social media activity targeted at the U.S. “It was very prevalent.” Despite Facebook’s efforts, it appeared the IRA was still active on the platform. Her hunch was later confirmed by Graphika, a social media analytics firm that provides independent analysis for Facebook.
The social networking giant has taken action on at least some of these covert campaigns. A few weeks after Kim found the posts, Facebook removed 50 IRA-run Instagram accounts with a total of nearly 250,000 online followers — including many of those she had spotted, according to Graphika.
“We’re seeing a ramp up in enforcement,” Nathaniel Gleicher, Facebook’s head of cybersecurity policy, told POLITICO, noting that the company removed about 50 networks of falsified accounts last year, compared with just one in 2017.
Since October, Facebook, Twitter and YouTube have removed at least 10 misinformation campaigns involving accounts tied to authoritarian countries such as Russia, Iran and China that targeted people in the United States, Europe and elsewhere, according to company statements.
But Kim said Russia’s tactics in the U.S. are evolving faster than the social media companies can identify and delete the accounts. Facebook has 2.6 billion users, a gigantic universe in which bad actors can hide.
In 2016, the IRA’s tactics were often unsophisticated, like buying Facebook ads in Russian rubles or producing crude, easily identifiable fakes of campaign logos.
This time, Kim said, the group’s accounts are operating at a higher level: they have become better at impersonating both candidates and parties; they’ve moved from creating fake advocacy groups to impersonating actual organizations; and they’re using more seemingly nonpolitical and commercial accounts to broaden their appeal online without raising red flags to the platforms.
The Kremlin has already honed these new approaches abroad. In a series of European votes, including last year’s European Parliament elections and Catalonia’s independence referendum in 2017, Russian groups trialed new disinformation tactics that are now being deployed ahead of November, according to three EU and NATO policymakers involved in those analyses.
Kim said one likely reason for foreign governments to impersonate legitimate U.S. groups is that the social media companies are reluctant to police domestic political activism. While foreign interference in elections is illegal under U.S. law, the companies are on shakier ground if they take down posts or accounts put up by Americans.
Facebook’s Gleicher said his team of misinformation experts has been cautious about moving against U.S. accounts that post about the upcoming election because they do not want to limit users’ freedom of expression. When Facebook has taken down accounts, he said, it was because they misrepresented themselves, not because of what they posted.
Still, most forms of online political speech face only limited restrictions on the networks, according to the POLITICO review of posts. In invite-only groups on Facebook, YouTube channels with hundreds of thousands of views, and Twitter messages that have been shared by tens of thousands of people, partisan — often outright false — messages are shared widely by those interested in the outcome of November’s vote.
Russia has also grown brasher in its use of state-backed media, as has China, whose presence on Western social media has ballooned since last year’s protests in Hong Kong. Russia’s RT television network and China’s CGTN have used their extensive social media reach to spread false information and divisive messages.
Moscow- and Beijing-backed media have piggybacked on hashtags related to the Covid-19 pandemic and recent Black Lives Matter protests to flood Facebook, Twitter and YouTube with content stoking racial and political divisions.
In June, Facebook began adding labels to posts from some state-backed media outlets to let users know where the content comes from, but it does not add similar disclaimers when users themselves post links to the same state-backed content.
China has been particularly aggressive, with high-profile officials and ambassadorial accounts promoting conspiracy theories, mostly on Twitter, that the U.S. had created the coronavirus as a secret bioweapon.
Twitter eventually placed fact-checking disclaimers on several posts by Lijian Zhao, a spokesperson for the Chinese foreign ministry with more than 725,000 followers, who pushed that falsehood. But by then, the tweets had been shared thousands of times as the outbreak surged this spring.
“Russia is doing right now what Russia always does,” said Bret Schafer, a media and digital disinformation fellow at the German Marshall Fund of the United States’ Alliance for Securing Democracy, a Washington think tank. “But it’s the first time we’ve seen China fully engaged in a narrative battle that doesn’t directly affect Chinese interests.”
Other countries, including Iran and Saudi Arabia, have also stepped up disinformation activity aimed at the United States over the past six months, according to two national security policymakers and a disinformation analyst, all of whom spoke on condition of anonymity because of the sensitivity of their work.
Copycat extremist groups
American groups have watched foreign actors successfully push falsehoods online and have followed suit.
Disinformation experts say that since 2016, far-right activists and white supremacists have begun emulating the Kremlin as they sow division and spread political messages to millions of social media users.
“In terms of volume and engagement, domestic misinformation is the more widespread phenomenon. It’s not close,” said Emerson Brooking, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab.
Earlier this year, for example, “Western News Today,” a Facebook page posing as a media outlet, began sharing links to racist content from VDARE, a website the Southern Poverty Law Center has described as promoting anti-immigrant hate.
Other accounts followed within minutes, posting the same racist content and links to VDARE and other far-right groups across several pages, coordinated behavior that Graphika said mimicked the Russian IRA’s tactics.
Previously, many of these hate groups had shared posts from their own social media accounts, with little or no success. Now, by posing as others, they can spread their messages beyond their far-right bubbles online, said Chloe Colliver, head of the digital research unit at the Institute for Strategic Dialogue, a London-based think tank that tracks online hate speech.
And despite impersonating online groups with little or no connection to one another, the accounts pushing VDARE messages appeared to be part of a coordinated campaign, according to Graphika.
Eventually, Facebook removed the accounts, along with others tied to the QAnon movement, a conspiracy theory that depicts Trump as battling elite pedophiles and a liberal “deep state.”
The company stresses that the takedowns target misrepresentation, not right-wing ideology. But Colliver said such distinctions have become harder to draw: far-right groups’ tactics have grown increasingly sophisticated, making it difficult to trace who is running these political campaigns online.
“The dividing line is how you distinguish foreign from domestic, state from non-state content,” she said.
In addition to targeted takedowns, tech companies have adopted broader policies to combat misinformation. Facebook, Twitter and YouTube have banned what they call manipulated media, for instance, to try to curtail deepfakes. They’ve also taken broad swipes at voting-related misinformation by banning content that deceives people about how and when to vote, and by promoting authoritative sources of information on voting.
“Elections are different now and so are we,” said Facebook spokesperson Kevin McAlister. “We have created new products, partnerships and policies to make sure these elections are safe, but we are in an ongoing race with nation-state actors who are evolving their tactics as our defenses improve.”
“We will continue to collaborate with law enforcement and industry peers to protect the integrity of our elections,” Google said in a statement.
Twitter is gaming out scenarios to anticipate the misinformation that could surface in future election cycles, the company said, learning from each election since the 2016 U.S. race and tuning its platform accordingly.
“Every year is an election year on Twitter: we’re a global service and our decisions reflect that,” said Jessica Herrera-Flanigan, Twitter’s vice president of public policy for the Americas.
Critics counter that these policies are undermined by uneven enforcement. Political leaders get a pass on misleading messages that would be flagged or removed if they came from other users, they say, though Twitter in particular has grown more aggressive in acting on such messages.
Political campaigns learn and adapt
It’s not just online extremists honing their tactics. U.S. political groups are also finding ways around the sites’ efforts to impose transparency on political advertising.
After the 2016 vote, the companies created databases of paid political ads to make clear when voters were being targeted with partisan messages. Google and Facebook now require political advertisers worldwide to verify their identities before buying ads. The search giant has also restricted so-called microtargeting, the use of granular user data to aim ads at narrow slices of the electorate. Twitter went furthest, banning nearly all political ads late last year.
But American political groups have found a way around these policies: creating partisan media outlets, straight out of Russia’s 2016 playbook.
To Michigan voters, outlets like The Gander and the Grand Rapids Reporter might at first look like scrappy newsrooms filling the gap left by years of layoffs and underinvestment in local news. Both post social media updates about life in the battleground state, mixing political coverage slanted toward Democratic or Republican causes with stories about local communities.
In fact, both are part of national operations tied to Republican or Democratic operatives, according to a review of online posts, Facebook pages and corporate records. Bloomberg and the Columbia Journalism Review first reported their links to national political players.
The Gander is one of eight online publications under Courier Newsroom, which is owned by ACRONYM, a Democratic-linked nonprofit that aims to spend $75 million on digital ads to oppose Trump’s reelection. The Grand Rapids Reporter, for its part, is one of hundreds of news sites across the country controlled by people connected to the Republican Party, including Brian Timpone, who leads one of the groups behind those outlets.
Both groups have focused on promoting partisan stories on Facebook, Instagram and Twitter. Their pages have collectively garnered tens of thousands of likes, comments and other interactions, according to CrowdTangle, a Facebook-owned tool that analyzes engagement on social media.
But neither organization discloses its political affiliations on its Facebook page, and the social media giant classifies both as “news and media” operations, the same category as mainstream outlets like POLITICO and The Washington Post. It’s the same classification Facebook applied in 2018 to a partisan site promoted by then-House Intelligence Chairman Devin Nunes (R-Calif.), even though it was funded by his campaign.
Steven Brill, co-CEO of NewsGuard, an analytics firm that tracks misinformation, said his team has seen a steady buildup of paid messages from these partisan-backed news sites in recent months and expects more before the November election.
“They can get around the rules that Facebook and Twitter have imposed on political advertising because it looks like a glorious little independent local news operation,” he said. “You can only imagine what will happen between now and November.”
And while the social media companies’ policies have made political ads more transparent than in 2016, many partisan ads still slip through unflagged, sometimes for weeks.
On Facebook, more than half of the pages that ran political ads during a 13-month period ending in June 2019 concealed the identities of their backers, according to a New York University study.
On Google, political ads from various groups that violated company policies ran for months before being taken down, according to the company’s transparency report.
And on Twitter, which has officially banned all political ads, groups skirt the rules by paying for so-called issue ads on topics aligned with party platforms, promoting the Second Amendment or abortion rights, for example.
Caught in the political crossfire
And now, just months before the vote, the social media platforms are also caught in a content war between Republicans and Democrats, under pressure from campaigns, politicians and the president himself. It is a level of scrutiny that has only intensified since 2016.
On the left, Russia’s unchecked interference in the last presidential race, which U.S. national security agencies concluded was aimed at helping Trump, has soured Democrats on social media. On the right, allegations of corporate bias against conservative viewpoints have fueled Republican complaints about the companies’ moderation of political speech, as well as a recent Trump executive order that threatens the legal protections of sites that show bias in keeping up or taking down content.
The companies insist that political viewpoints play no role in their decisions, and in recent years they have in fact asked the federal government for guidance on what constitutes legal speech online. But the First Amendment largely bars the government from making such calls, and congressional efforts to pass rules for tracking political ads on social media have stalled over a divide between Republicans and Democrats on how to fix the problem.
That partisan divide may itself be a pawn in the disinformation wars. Kim, the University of Wisconsin-Madison researcher, said she has found evidence of foreign actors posing as American activists in an apparent effort to widen the gulf between left and right. They have posted incendiary messages, attacking the feminist movement, for example, or linking Trump supporters to Vladimir Putin, to stoke anger on both sides.
Republicans and Democrats appear to only agree that social media companies are a big part of the problem. How they should fix the issue is the subject of a deep, partisan divide that was on full display at a House Energy and Commerce subcommittee hearing on disinformation in June.
“Social media companies need to step up and protect our civil rights, our human rights and our human lives, not to sit on the sideline as the nation drowns in a sea of disinformation,” said the subcommittee’s chair, Rep. Mike Doyle (D-Pa.). “Make no mistake, the future of our democracy is at stake and the status quo is unacceptable.”
Minutes later, his Republican co-chair, Rep. Bob Latta (R-Ohio), chimed in. “We should make every effort to ensure that companies are using the sword provided by Section 230 to take down offensive and lewd content,” he said, before adding: “But that they keep their power in check when it comes to censoring political speech.”
With Washington split on how to handle the problem — and both foreign and domestic groups gearing up for November’s vote — misinformation experts are left wondering how bad, and widespread, the online trickery will be later this year.
“I didn’t see a meaningful drop in misinformation between 2016 and 2018,” said Laura Edelson, a researcher at NYU who has tracked the spread of paid-for political messages across social networks during recent electoral cycles. “The next trial will be the 2020 election, and I’m not optimistic.”
CORRECTION: An earlier version of this article mischaracterized the operations of some politically funded U.S. media outlets and how they disclose their partisan affiliations. Some, such as Courier Newsroom, operate openly partisan outlets and disclose their ties to their political backers.