MEXICO CITY — This spring, a doctored image claiming that the wife of the leading Mexican presidential candidate was the granddaughter of a Nazi ricocheted across Facebook and its messaging service, WhatsApp.
The post, shared 8,000 times before it was disproved, was part of a flood of fabricated stories that have spread on Facebook and its other services, including Instagram, ahead of Mexico’s July 1 presidential election — the country’s own version of the divisive misinformation that sought to influence the 2016 campaign across the border.
Determined to prevent a repeat of the abuses of its platform ahead of the U.S. midterm elections in November, Facebook has poured resources into election integrity, hiring thousands of content moderators and fact-checkers, deploying artificial intelligence, and conducting large sweeps of problematic accounts. Each new election is a test: Facebook’s security and civic teams are actively tracking 50 different elections in 2018 — and triaging those deemed “high risk” — amounting to a national election practically every week.
The Mexican election reflects the constantly mutating ways social media can be weaponized against democracy — and the immensity of Facebook’s global challenge.
Most of Facebook’s users live in countries like Mexico, where government corruption is endemic, distrust of the mainstream media is widespread, viral memes and WhatsApp messages are often perceived to be as credible as news stories, and the forces manipulating debate online are internal, tied to domestic political parties and other local actors.
“The hardest part is where to draw the line between a legitimate political campaign and domestic information operations,” said Guy Rosen, a top security executive at Facebook. “It’s a balance we need to figure out how to strike.”
Facebook said it is aware of many problematic pages in Mexico, such as the shadowy page that first posted the image of the wife of front-runner Andrés Manuel López Obrador, a leftist populist who threatens to unseat the party that has dominated Mexican politics for the past century. “Amor a México,” or “Love for Mexico,” has changed its name three times during the campaign season and was at one time run by a Mexican supporter of former president Felipe Calderón, according to research by The Washington Post and Animal Político, a local news organization. (Until she dropped out last month, Calderón’s wife was also running for president.)
In a talk for security experts in May, Facebook security chief Alex Stamos called such domestic disinformation operations the “biggest growth category” for election-related threats that the company is confronting. These groups, he said, are copying Russian operatives’ tactics to “manipulate their own political sphere, often for the benefit of the ruling party.”
This area is also the trickiest: While most democracies bar foreign governments from meddling in elections, Facebook sees internal operators as much harder to crack down on because of the free-speech issues involved.
After fact-checkers repeatedly flagged Amor a México’s content as problematic, Facebook this month punished the page with its highest-level “demotion,” dramatically reducing its likes, shares and other interactions to 17,000 on June 3 from 121,000 four days prior. Still, Amor a México has doubled its followers to 300,000 in the past few months. Facebook said it was investigating the page but declined to share any information about it.
In interviews, executives conceded that determining the origin and motivation of many page operators is too great an effort for a private company to manage. Instead, the focus is on limiting the reach of serial offenders, punishing behaviors often without being able to get to the source. The brunt of Facebook’s news vetting in Mexico falls to a small group of third-party fact-checkers, whose job is to play whack-a-mole — debunking one story at a time, with each taking several days to disprove.
Facebook’s limited forensics around false news in Mexico show how its aspiration to keep elections honest globally is still out of reach for the social network, despite the prominent role its service has come to play in many societies.
“This is the scale of [Facebook’s] challenge,” said Nathaniel Persily, a Stanford Law School professor and an expert on social media and politics. “It is almost impossible to wrap your mind around.”
The company did not have fact-checking partners outside the United States and Europe until March, when it funded a group in Mexico. Until last month, fewer than a dozen fact-checkers were tasked with debunking Mexican disinformation for the country’s 84 million Facebook users, along with tens of millions who use WhatsApp. In addition, several of the tools Facebook is launching in the United States, such as identifying the publishers of political ads and verifying pages with large followings, will not be operational before Mexico’s election.
One scalable product — first launched in the Alabama Senate race last year — that the company plans to deploy in the days before the Mexican election is a dashboard to monitor potentially false stories as they bubble up.
“It’s not fair to have a high set of standards in one country and not in another,” said Esteban Illades, editor of the Mexican magazine Nexos and author of a recent book about the country’s disinformation landscape. “The biggest challenge for Facebook in Mexico is not Russia, and it is not Macedonian teenagers. It is our broken system.”
Widespread manipulation of Facebook’s service during the 2016 U.S. presidential election woke the company up to the ways the social network could be abused by malicious operators, such as those in Macedonia, who profit off sensational news, and by Russian agents seeking to sow division in U.S. society, thereby imperiling the democratic process.
In response, chief executive Mark Zuckerberg gave himself the Sisyphean task of upholding the integrity of democracy around the world. Executives coordinate across countries to routinely conduct drills of pre-election disaster scenarios, such as last-minute hacks of a candidate’s account. Facebook has recruited dozens of subject matter experts, including former National Security Agency analysts, and has hired 15,000 moderators and security professionals to scan objectionable content, including potentially false stories. The company has purged thousands of fake accounts before elections in Germany, Italy and France.
Facebook for years pitched its service to government officials as a way to make the democratic process more transparent, and trained them in techniques to build audiences and engage voters. Diego Bassante, who left his post as an Ecuadoran diplomat in 2014 to join Facebook, was the first Latin American hire in what was then a tiny division focused on helping candidates and governments across the world become power users of the service. He helped the mayor of Buenos Aires broadcast on Facebook Live, one of the first times a Latin American politician had done so.
In Mexico, he led workshops for politicians and candidates. This year, Bassante got permission from Mexico’s electoral commission for the platform to broadcast the presidential debates, which were viewed by 11.8 million people, the company said.
In a region where politicians seldom interact with the public, experts said these efforts have helped facilitate a new form of transparency.
But after the 2016 U.S. election, when Facebook was reeling from the Russia controversy, a dramatic shift took place. All employees dealing with elections — even those in countries without Russian interference — had to consider what could go wrong, a directive that came straight from Zuckerberg.
“We in the region said, ‘Oh shoot, our job description just changed,’ ” Bassante recalled in Facebook’s Mexico City office, where he sits near a digital clock with neon-orange numbers that counts down — in days, hours, minutes and seconds — to the election.
In the highly charged contest, experts say there’s a thriving underground economy of political trolls for hire, groups allegedly funded by local candidates that spend their days flooding social media with sensational stories and attacks.
Bassante’s small staff coordinates with data scientists at Facebook’s Menlo Park, Calif., headquarters and with content moderators in Austin. From an office decorated with local art and posters celebrating nerd culture, they have a weekly call with executives managing elections around the region. Facebook has no full-time security officials in Mexico; all the security work is done remotely, such as a threat report prepared ahead of the Mexican election.
One morning last month, Bassante was conducting a manual sweep of accounts that impersonate political candidates and violate Facebook’s real-name policy. The company wasn’t using its artificial intelligence technology, he said, “because we want to be very careful not to accidentally take down a page.”
As part of its deal with electoral authorities to broadcast the debates, Facebook ran a national advertising campaign around news literacy, publishing in newspapers an infographic called “How to Spot Fake News,” similar to ads Facebook ran in India ahead of a big election there.
Facebook may have developed too-cozy relationships with candidates and governments in weak democracies, opening the door to bad actors who abuse its service, said Monika Glowacki, a researcher for the Oxford Computational Propaganda Project who is writing a case study about Mexico. “They invited them in,” she said.
And executives are aware that a broader crackdown can create thorny political questions when Facebook also cultivates relationships with officials, a strategy the company has doubled down on since the U.S. election.
Unlike in the United States, where Facebook’s AI systems automatically route most stories to fact-checking organizations, Facebook relies on ordinary people in Mexico to spot questionable posts. Many people flag stories as false simply because they disagree with them, executives said.
Fake news operators, experts said, get a boost from deep-seated cynicism born out of the fact that major newspapers and TV stations in Mexico have long been allied with the government. Officials, political campaigns and even drug cartels frequently pay journalists to write positive stories. One reason social media has exploded in Mexico — a country where every adult with an Internet connection is on Facebook — is because it is seen as a venue for alternative sources of information, an antidote to the climate of distrust.
Facebook and the local branch of Al Jazeera have funded Mexico’s first independent fact-checking organization, Verificado 2018, which means “verified.” Launched in March, it has roughly a dozen employees, mostly in their mid-20s, who so far have debunked 310 Facebook posts. (In late May, Facebook announced a new fact-checking partnership with the Agence France-Presse news agency in Latin America.)
Facebook has given checkers around the world customized software to input questionable stories, which are routed to Facebook’s data scientists in Menlo Park. But it wasn’t until this month that the company enabled Verificado to feed content other than news articles, such as memes and images, into the system. That’s critical to evaluating bogus information, because many falsehoods have evolved past text and links to images accompanied by extended captions, a format that is harder for software to spot.
With stories Verificado is able to check, such as the fabricated one about López Obrador’s wife, its team writes an article disproving the story. Facebook’s technology then demotes the made-up story in the news feed, decreasing its reach by 80 percent.
In such a cynical news environment, many “people don’t know the difference between an image and a news story,” said one fact-checker, Maria Jose Lopez.
Facebook’s tool can’t receive data from WhatsApp, where, fact-checkers say, violent or conspiratorial content spreads widely before reaching other social channels. To get around that, Verificado set up a WhatsApp hotline number to which people can report questionable stories.
For example, for weeks Verificado had been receiving messages about a video, in circulation on WhatsApp, of a crowd burning a man alive in the southern state of Tabasco. Accompanying text blamed López Obrador supporters. It took Verificado several weeks to produce an article about the incident. Because the fact-checkers couldn’t find witnesses, they cited local news stories that said the man was attacked for stealing a motorcycle.
Verificado isn’t able to issue verdicts on all stories. Earlier this year, a story alleged that Mexican drug kingpin Joaquín “El Chapo” Guzmán had been ordered by the government to kill López Obrador. The story was shared 80,000 times, including by a popular Facebook account that had a record of supporting the leftist candidate. Verificado said it didn’t attempt to research the story because it would have been too difficult to reach Guzmán, who is imprisoned in New York.
Illades, the magazine editor and author, said he was stunned to recently find out that Facebook was not verifying who pays for political ads in Mexico — a safeguard the company is introducing in the United States and Canada. Doing so in Mexico could have a big effect on transparency in the upcoming election, he said.
Executives said they were working as quickly as possible to support a growing ecosystem of fact-checking groups around the world. Since March, Facebook has helped fund groups in India, Colombia, Brazil, Indonesia and the Philippines.
Most countries staunchly oppose foreign intervention in democratic elections. But deciding what to do about locals who distort political debate in their own countries, and who sometimes hire themselves out to do so, is a thornier call, executives said.
Facebook clamps down on “coordinated inauthentic behavior,” said Nathaniel Gleicher, the company’s recently appointed head of cybersecurity policy and a former director in President Barack Obama’s National Security Council. But that allows for plenty of gray area: It prohibits fake accounts, but real people are permitted to post false information.
Recently, Gleicher said, Facebook decided to take a stronger stand against users who are deemed to be serial offenders, drawing still murkier lines.
Fostering such a lack of clarity, Gleicher said, was the strategy of Facebook’s enemies. “From everything I’ve seen and everything I’ve worked on, this behavior is designed to exploit grayness.”
Kevin Sieff contributed to this report.