Silicon Valley is losing the battle against election misinformation

Videos peddling false claims about voter fraud and Covid-19 cures draw millions of views on YouTube. Partisan activist groups pretending to be online news sites set up shop on Facebook. Foreign trolls masquerade as U.S. activists on Instagram to sow divisions around the Black Lives Matter protests.

Four years after an election in which Russia and some far-right groups unleashed a wave of false, misleading and divisive online messages, Silicon Valley is waging a war against online misinformation that could sway voters in November.

Social media companies are grappling with an avalanche of misleading and divisive messages from political parties, foreign governments and hate groups as the months tick down to this year’s presidential election, according to more than two dozen national security policymakers, misinformation and hate speech experts, fact-checking groups and tech executives, as well as a POLITICO review of thousands of social media posts.

The tactics, many of which aim to deepen divisions among Americans already traumatized by a deadly pandemic and record job losses, echo the Russian government’s years-long effort to stoke confusion around the 2016 U.S. presidential election. But the attacks this time are more insidious and sophisticated, with harder-to-detect fakes, more countries pushing covert agendas and a deluge of domestic groups copying their methods.

And some of the misleading messages have been amplified by major media outlets and top American politicians, including President Donald Trump. In one case last week, he used his vast social media following to claim, without evidence, that mail-in ballots would produce the most inaccurate and fraudulent election in history.

Silicon Valley’s efforts to stamp out the new forms of fakery have fallen short, according to researchers and some lawmakers. And the challenges are only growing.

“November will be like the Super Bowl of disinformation tactics,” said Graham Brookie, director of the Atlantic Council Digital Forensic Laboratory in Washington, DC, which tracks lies online. “You call it, the American election is going to have it.”

Anger at the social media giants’ inability to win the Whac-A-Mole game against fake information was a recurring theme at last week’s congressional hearing with tech’s top CEOs, where Facebook chief Mark Zuckerberg tried to rebut complaints that his company profits from misconceptions about the coronavirus pandemic. One prime example, said House antitrust chairman David Cicilline (D-R.I.): the five hours it took Facebook to delete a Breitbart video falsely touting hydroxychloroquine as a cure for Covid-19.

The post was viewed 20 million times and drew more than 100,000 comments before being removed, Cicilline said.

“Doesn’t that suggest, Mr. Zuckerberg, that your platform is so big that even with the right policies in place, you can’t contain deadly content?” Cicilline asked.

The companies deny accusations that they have failed to combat misinformation, pointing to their efforts to remove and restrict false content, including posts about Covid-19, a public health crisis that has become politically charged.

Since the 2016 election, Facebook, Twitter and Google have collectively spent tens of millions of dollars on new technologies and staff to track lies online and stop them from spreading. They have issued rules against political ads disguised as normal content, updated internal policies on hate speech, and removed millions of extremist and false posts so far this year. In July, Twitter banned thousands of accounts connected to the QAnon fringe conspiracy theory in its most sweeping move yet to stop its spread.

Google announced yet another effort Friday, saying it will begin penalizing websites on Sept. 1 that distribute hacked materials and advertisers who take part in coordinated misinformation campaigns. Had those policies been in place in 2016, advertisers wouldn’t have been able to post screenshots of the stolen emails that Russian hackers had swiped from Hillary Clinton’s campaign.

But despite being among the richest companies in the world, the internet giants still cannot monitor everything published on their global networks. The companies also disagree on the scale of the challenge and how to solve it, giving purveyors of misinformation openings to exploit gaps in the platforms’ defenses.

National flashpoints such as the Covid-19 health crisis and the Black Lives Matter movement have given bad actors more targets for sowing division.

The difficulties are considerable: foreign interference campaigns have evolved, domestic teams are copying these techniques, and political campaigns have adapted their strategies.

At the same time, social media companies are under partisan scrutiny in Washington that gives their judgments about what to leave up or take down even more political weight: Trump and other Republicans accuse the companies of systematically censoring conservatives, while Democrats criticize them for letting too many lies circulate.

Researchers say it’s impossible to know how comprehensive the companies have been in removing bogus content because the platforms often put conditions on access to their data. Academics have had to sign non-disclosure agreements promising not to criticize the companies to gain access to that information, according to people who signed the documents and others who refused to do so.

Experts and policymakers warn the tactics are likely to grow even more sophisticated in the coming months, including the possible use of so-called deepfakes, fake videos created with artificial intelligence to produce realistic-looking footage that undermines the opposing side.

“As more and more data accumulates, people will get better at manipulating communication with voters,” said Robby Mook, campaign manager for Hillary Clinton’s 2016 presidential bid and now a fellow at the Harvard Kennedy School.

Foreign interference campaigns are evolving

Researcher Young Mie Kim was browsing Instagram in September when she came across a strangely familiar pattern of partisan posts across dozens of social media accounts.

Kim, a professor at the University of Wisconsin-Madison who specializes in political communication on social media, spotted in a number of seemingly unrelated accounts the tactics favored by the Russian-linked Internet Research Agency, an organization that U.S. national security agencies say conducted a years-long disinformation effort aimed at disrupting the 2016 election, in part by amplifying existing partisan hatreds.

The new accounts, for example, claimed to be activists or politicians and aimed their highly partisan messages at battleground states. One account, called “iowa.patriot,” attacked Elizabeth Warren. Another, “bernie.2020,” accused Trump supporters of treason.

“It stood out immediately,” said Kim, who tracks covert Russian social media activity against the United States. “It was widespread.” Despite Facebook’s efforts, she realized, the IRA was still active on the platform. Her instinct was later confirmed by Graphika, a social media analytics company that conducts independent research for Facebook.

The social media giant has taken action against at least some of those covert campaigns. Weeks after Kim discovered the posts, Facebook deleted 50 IRA-run Instagram accounts with a combined total of roughly 250,000 online followers, many of them accounts she had spotted, according to Graphika.

“We’re seeing an increase in tempo,” Nathaniel Gleicher, Facebook’s head of cybersecurity policy, told POLITICO, noting that the company removed nearly 50 networks of fake accounts last year, compared with one in 2017.

Since October, Facebook, Twitter and YouTube have taken down at least 10 campaigns pushing false information involving accounts linked to authoritarian countries such as Russia, Iran and China that targeted people in the United States, Europe and elsewhere, according to company statements.

But Kim said Russia’s tactics in the U.S. have evolved faster than social media sites can identify and delete the accounts. Facebook has 2.6 billion users, a vast universe in which bad actors can hide.

In 2016, the IRA’s tactics were often unsophisticated, such as buying Facebook ads in Russian rubles or producing crude, easily identifiable fakes of campaign logos.

This time, Kim said, the group’s accounts are operating at a higher level: they have become better at impersonating candidates and parties; have moved from creating fake advocacy groups to mimicking genuine organizations; and use advertising and non-political accounts to expand their reach without tipping off the platforms.

The Kremlin has already honed these new approaches abroad. In a spate of European votes — most notably last year’s European Parliament election and the 2017 Catalan independence referendum — Russian groups tried out new disinformation tactics that are now being deployed ahead of November, according to three policymakers from the EU and NATO who were involved in those analyses.

Kim said one of the most likely reasons foreign governments pose as legitimate U.S. groups is that social media companies are reluctant to police domestic political activism. While foreign interference in elections is illegal under U.S. law, the companies are on shakier ground if they remove posts or accounts created by Americans.

Facebook’s Gleicher said his team of misinformation experts treads carefully around U.S. accounts posting about the upcoming election because they do not want to restrict users’ freedom of expression. When Facebook has deleted accounts, he said, it was because they misrepresented themselves, not because of what they posted.

Still, most forms of online political speech face only limited restrictions on the networks, according to the POLITICO review of posts. In invite-only groups on Facebook, YouTube channels with hundreds of thousands of views, and Twitter messages that have been shared by tens of thousands of people, partisan — often outright false — messages are shared widely by those interested in the outcome of November’s vote.

Russia has also become more brazen in how it uses state-backed media outlets — as has China, whose presence on Western social media has skyrocketed since last year’s Hong Kong protests. Both Russia’s RT and China’s CGTN television operations have made use of their large social media followings to spread false information and divisive messages.

Moscow- and Beijing-backed media have piggybacked on hashtags related to the Covid-19 pandemic and recent Black Lives Matter protests to flood Facebook, Twitter and YouTube with content stoking racial and political divisions.

Facebook began adding labels to posts created by some state-backed media outlets in June to let users know who is behind the content, though it does not add similar disclaimers when users themselves post links to the same state-backed content.

China has been particularly aggressive, with high-profile officials and ambassadorial accounts promoting conspiracy theories, mostly on Twitter, that the U.S. had created the coronavirus as a secret bioweapon.

Twitter eventually placed fact-checking disclaimers on several posts by Lijian Zhao, a spokesperson for the Chinese foreign ministry with more than 725,000 followers, who pushed that falsehood. But by then, the tweets had been shared thousands of times as the outbreak surged this spring.

“Russia is doing right now what Russia always does,” said Bret Schafer, a media and digital disinformation fellow at the German Marshall Fund of the United States’ Alliance for Securing Democracy, a Washington think tank. “But it’s the first time we’ve seen China fully engaged in a narrative battle that doesn’t directly affect Chinese interests.”

Other countries, including Iran and Saudi Arabia, similarly have upped their misinformation activity aimed at the U.S. over the last six months, according to two national security policy makers and a misinformation analyst, all of whom spoke on the condition of anonymity because of the sensitivity of their work.

Domestic extremist groups copycatting

U.S. groups have watched the foreign actors succeed in peddling falsehoods online, and followed suit.

Misinformation experts say that since 2016, far-right and white supremacist activists have begun to mimic the Kremlin’s strategies as they stoke division and push political messages to millions of social media users.

“In terms of volume and engagement, domestic misinformation is the more widespread phenomenon. It’s not close,” said Emerson Brooking, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab.

Earlier this year, for example, “Western News Today,” a Facebook page posing as a media outlet, began sharing links to racist content from VDARE, a website the Southern Poverty Law Center has described as promoting hatred of immigrants.

Other accounts followed within minutes, posting the same racist content and linking to VDARE and other far-right groups across several pages, coordinated actions that Graphika said mimicked the Russian IRA’s tactics.

Previously, many of these hate groups had shared posts from their own social media accounts with little or no success. Now, by pretending to be others, they could spread their messages beyond their far-right bubbles online, said Chloe Colliver, head of the digital research unit at the Institute for Strategic Dialogue, a London-based think tank that tracks online hate speech.

And by impersonating distinct online groups with little or no connection to one another, the accounts posting VDARE material avoided being flagged as a coordinated campaign, according to Graphika.

Eventually, Facebook removed the accounts, along with others tied to the QAnon movement, a conspiracy theory that casts Trump as battling elite pedophiles and a liberal “deep state.”

The company stresses that the removals targeted misrepresentation, not right-wing ideology. But Colliver said such distinctions have become harder to draw: far-right groups’ tactics have grown increasingly sophisticated, complicating efforts to identify who is running these political campaigns online.

“The dividing line is how to label foreign content in relation to domestic, state or non-state content,” she said.

Beyond individual takedowns, technology companies have adopted broader policies to combat misinformation. Facebook, Twitter and YouTube have banned what they call manipulated media, for example, to curb deepfakes. They have also targeted voting-related misinformation by banning content that misleads people about how and when to vote, and by promoting authoritative sources of voting information.

“Elections are different now and so are we,” said Kevin McAlister, a Facebook spokesman. “We have created new products, partnerships and policies to make sure these elections are safe, but we are in an ongoing race with bad actors who are evolving their tactics as our defenses improve.”

“We will continue to work with law enforcement and industry peers to protect the integrity of our elections,” Google said in a statement.

Twitter is testing scenarios to anticipate misinformation that could arise in future election cycles, the company said, learning from each election since the 2016 U.S. race and tuning its platform accordingly.

“It’s always an election year on Twitter — we are a global service and our decisions reflect that,” said Jessica Herrera-Flanigan, vice president of public policy for the Americas.

Critics have said those policies are undermined by uneven enforcement. Political leaders get a pass on misleading posts that would be flagged or removed from other users, they argue, though Twitter in particular has become more aggressive in taking action on such posts.

Political campaigns learn and adapt

It’s not just online extremists improving their tactics. U.S. political groups also keep finding ways to get around the sites’ efforts to force transparency in political advertising.

Following the 2016 vote, the companies created databases of political ads and who paid for them to make it clear when voters were targeted with partisan messaging. Google and Facebook now require political advertisers around the world to prove their identities before purchasing messages. The search giant also stopped the use of so-called microtargeting, or using demographic data on users to pinpoint ads to specific groups. Twitter has gone the furthest — banning nearly all campaign ads late last year.

But American political parties have found a way to dodge those policies — by creating partisan news organizations, following Russia’s 2016 playbook.

For voters in Michigan, media outlets like “The Gander” and “Grand Rapids Reporter” may first appear to be grassroots newsrooms filling the void left by years of layoffs and under-investment in local reporting. Both publish daily updates on social media about life in the swing state, mixing a blend of political reporting — biased toward either Democratic or Republican causes — with stories about local communities.

In fact, both are part of national operations tied to Republican or Democratic operatives, according to a review of online posts, Facebook pages and corporate records. Bloomberg and the Columbia Journalism Review first reported their links to national political groups.

“The Gander” is one of eight online publications run by Courier Newsroom, which is owned by ACRONYM, a Democratic-linked nonprofit that aims to spend $75 million on digital ads to counter Trump’s reelection. Similarly, “Grand Rapids Reporter” is one of many news sites across the country controlled by people connected to the Republican Party, including Brian Timpone, leader of one of the groups behind those outlets.

Both groups focus on promoting partisan stories on Facebook, Instagram and Twitter. Their pages have collectively drawn tens of thousands of likes, comments and other interactions, according to CrowdTangle, a Facebook-owned tool that analyzes engagement on social media.

But neither organization discloses its political affiliations on its Facebook pages, and the social media giant classifies them as “news and media” operations alongside mainstream outlets like POLITICO and The Washington Post. It is the same classification Facebook used in 2018 for a partisan site promoted by then-House Intelligence Chairman Devin Nunes (R-Calif.), even though it was funded by his campaign.

Steven Brill, co-CEO of NewsGuard, an analytics company that tracks misinformation, said his team has noticed a steady buildup of paid messages from these partisan-backed news sites in recent months, and expects more before the November election.

“They can avoid the rules that Facebook and Twitter have imposed on political advertising because they look like a glorious little independent local news operation,” he said. “You can only imagine what will happen between now and November.”

And while social media policies have made political ads more transparent than in 2016, many partisan ads still run without disclosure, sometimes for weeks.

On Facebook, more than half of the pages that ran political ads during a 13-month period ending in June 2019 concealed the identities of their funders, according to a New York University study.

At Google, partisan political ads that violated company policies ran for months before being removed, according to the company’s transparency report.

And on Twitter, which has officially banned all political ads, groups skirt the rules by paying for so-called issue ads on matters aligned with party platforms, for example, promoting the Second Amendment or abortion rights.

Caught in the political crossfire

And now, just a few months before the vote, social media platforms are also caught up in a content war between Republicans and Democrats, facing pressure from campaigns, politicians, and the president himself. It is a level of scrutiny that has only intensified since 2016.

On the left, Russia’s unchecked interference in the last presidential race, which U.S. national security agencies concluded was aimed at helping Trump, has soured Democrats on social media. On the right, claims of corporate bias against conservative viewpoints have led Republicans to decry moderation of political speech, including through a recent Trump executive order that threatens the legal liability protections of sites that show bias in allowing or removing content.

The companies insist that political viewpoints play no role in their decisions, and have even asked the federal government in recent years for guidance on what constitutes legal online speech. But the First Amendment largely bars the government from making such calls, and congressional efforts to legislate rules for tracking political ads on social media have stalled amid a partisan divide between Republicans and Democrats over how to fix the problem.

That partisan divide may itself have become a pawn in the disinformation war. Kim, the University of Wisconsin-Madison researcher, said she found evidence of foreign actors posing as American activists in an apparent effort to widen the gulf between left and right. They have posted incendiary messages, for example attacking the feminist movement or linking Trump supporters to Vladimir Putin, to sow anger on both sides.

Republicans and Democrats seem to agree only that social media companies are a big part of the problem. There is a deep partisan divide over how they should fix it, one on full display at a June hearing of the House Energy and Commerce Committee’s subcommittee on misinformation.

“Social media companies need to step up and protect our civil rights, our human rights and our human lives, not to sit on the sideline as the nation drowns in a sea of disinformation,” said the subcommittee’s chair, Rep. Mike Doyle (D-Pa.). “Make no mistake, the future of our democracy is at stake and the status quo is unacceptable.”

Minutes later, his Republican counterpart, Rep. Bob Latta (R-Ohio), countered. “We must do everything possible to ensure companies use the sword provided by Section 230 to remove offensive and obscene content,” he said, before adding, “But they must keep their power in check when it comes to censoring political speech.”

With Washington divided on how to address the challenge, and foreign and domestic groups gearing up for the November vote, misinformation experts wonder how serious and widespread online deception will become later this year.

“I haven’t seen a significant drop in misinformation between 2016 and 2018,” said Laura Edelson, a New York University researcher who has tracked the spread of paid political messages on social media over the past election cycles. “The next test will be the 2020 election, and I’m not optimistic.”

CORRECTION: An earlier version of this article mischaracterized the operations of some politically funded U.S. media outlets and how they disclose their partisan affiliations. Some, such as Courier Newsroom, operate openly as partisan media and disclose their political backers.
