Part 3 of a series on MSM and fake news.
Name the influential, international group that:
- Abetted Russian interference in American elections.
- Inflamed vigilantes to murder innocents in India.
- Incited deadly religious violence in Sri Lanka.
- Instigated genocide in Myanmar.
- Encouraged ethnic cleansing in Ethiopia.
- Livestreamed a massacre in New Zealand.
- Enabled online sex trafficking recruitment worldwide.
Hint: Most of us are members of this group.

Facebook had a brief Arab Spring fling with democracy long ago. Since then, it’s become the go-to tool for racists, religious extremists, and authoritarian regimes. The platform is programmed that way.
“Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”
Facebook Executives Shut Down Efforts to Make the Site Less Divisive, Wall Street Journal (2020)
Shocking, right? But not to Facebook. Their 2016 research had already told them “64% of all extremist group joins are due to our recommendation tools… Our recommendation systems grow the problem.”
In 2019, another Facebook leak revealed that “bad actors have learned how to easily exploit the systems… The structure of our platform and our ranking algorithms… give[s] them an enormous reach.”
People don’t just fall into radicalizing rabbit holes. Facebook pushes them.
The largest page on FB posting African American content is run out of Kosovo. That’s so weird! And genuinely horrifying.
How Communities Are Exploited On Our Platforms: A Final Look at the “Troll Fam” Pages, Facebook (2019)
The Atlantic’s Adrienne LaFrance says, “Facebook is not a media company. It’s a Doomsday Machine,” which she defines as “a device built with the sole purpose of destroying all human life.”
Ex-Facebook data scientist Sophie Zhang told BuzzFeed News she “found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions.” In a Guardian interview, she elaborated: “this activity is sufficiently valuable for their autocratic ambitions that they feel the need to do it so blatantly that they aren’t even bothering to hide.”
And this month, Facebook whistleblower Frances Haugen said, “I thought I knew how bad misinformation was. Then I learned what it was doing in countries that don’t speak English.”
Facebook is an agent of government propaganda, targeted harassment, terrorist recruitment, emotional manipulation, and genocide — a world-historic weapon that lives not underground, but in a Disneyland-inspired campus in Menlo Park, California.
Facebook Is a Doomsday Machine, The Atlantic (2020)
Farther, faster, deeper
Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.
The spread of true and false news online, Science (2018)
Facebook’s founding motto was “Move fast and break things.” Among the things social media has broken are journalism, democracy, and truth.
A 2018 study published in Science analyzed “verified true and false news stories” posted on Twitter. Across roughly 126,000 stories spread between 2006 and 2017, “it took the truth about six times as long as falsehood to reach 1500 people.” Fake news traveled six times faster and 100 times farther: “The top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people.”
Several tools on Iffy’s Disinfo Dashboard show the extent of mis/disinfo on the major social platforms. In May 2021:
- 54% of the top publishers on Facebook, as listed by NewsWhip and rated by Media Bias/Fact Check, had low or mixed factual ratings.
- 57% of the most shared sources on Twitter were low-quality, as calculated by CoVaxxy.
Iffy’s study of PolitiFact accuracy rulings for news sources placed social media dead last. As of June 2021, PolitiFact had checked 2,248 claims made on Facebook, Instagram, Twitter, and other social media. Only 85 of those (3.8%) were true or mostly true (data).
Assume everything on social media is wrong. Odds are, it is.
Fact-checked claims on social media are not only usually wrong but also a major waste of fact-checkers’ time. Of 100 Snopes fact-checks (from the last half of May 2021), two-thirds were of social media posts. By comparison, only 25 of those 100 checks were of mainstream-media articles, and just five were of stories from fake-news sites (data).
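The math behind these percentages is easy to rerun. A quick sketch, using only the counts cited above:

```python
# Recompute the shares cited above from the raw counts in the post.
checked = 2248   # claims PolitiFact had checked on social media (June 2021)
true_ish = 85    # of those, rated true or mostly true

print(f"{true_ish / checked:.1%} of checked social media claims held up")  # 3.8%

snopes_total = 100   # Snopes fact-checks sampled (late May 2021)
snopes_msm = 25      # checks of mainstream media articles
snopes_fake = 5      # checks of fake-news-site stories

print(f"{snopes_msm / snopes_total:.0%} of Snopes checks covered MSM")  # 25%
print(f"{snopes_fake / snopes_total:.0%} covered fake-news sites")      # 5%
```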
An insatiable habit
The company’s AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can’t fix the problem.
How Facebook got addicted to spreading misinformation, MIT Technology Review (2021)
At the top, I said social media has proved unwilling to change. Karen Hao, in the MIT Technology Review, explains why:
- “Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth.…”
- “If a model reduces engagement too much, it’s discarded.…”
- “The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough ‘to help prevent our platform from being used to foment division and incite offline violence.'”
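To make the pattern Hao describes concrete, here is a deliberately naive, hypothetical sketch of engagement-based ranking. It is not Facebook’s actual model or weights, just the shape of the incentive: when the objective is predicted clicks, comments, and shares, nothing in it asks whether a post is true or divisive, so outrage-bait wins.

```python
# A naive, hypothetical feed ranker: score posts purely by predicted
# engagement. Illustrates the incentive, not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_click: float    # predicted probability of a click
    p_comment: float  # predicted probability of a comment
    p_share: float    # predicted probability of a share

def engagement_score(post: Post) -> float:
    # Invented weights: comments and shares count more because they
    # keep users on the platform longest.
    return 1.0 * post.p_click + 3.0 * post.p_comment + 5.0 * post.p_share

feed = [
    Post("Local library extends weekend hours", 0.05, 0.01, 0.01),
    Post("THEY are lying to you about vaccines!!", 0.20, 0.15, 0.12),
    Post("City council passes routine budget", 0.03, 0.01, 0.00),
]

# The objective never asks whether a post is true or divisive,
# so the outrage-bait ranks first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.text}")
```

Swap in any weights you like; as long as the objective is engagement alone, the ranking rewards whatever provokes the strongest reaction.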
Hao’s article did prompt Facebook to finally deal with their disinfo issue — not with AI but with PR. Her editor’s Twitter thread is a series of “observations on the PR strategy Facebook has adopted in response,” documenting how a major corporation tries to discredit unfavorable press.
Two weeks later, a Facebook VP informed the world they’d found the problem: It’s your fault. “We need to look at ourselves in the mirror, and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along.”
The past two years have shown that without sufficient safeguards, people will misuse these tools to interfere in elections, spread misinformation, and incite violence.
Mark Zuckerberg, A Blueprint for Content Governance and Enforcement (2018)
Such prescience. He was absolutely right. Safeguards stayed insufficient, so misused tools interfered in elections, incited violence, and spread misinformation.
The German Marshall Fund’s Digital New Deal project “monitors engagement on social media with deceptive sites that masquerade as journalism.” Since 2018, engagement with deceptive sites (based on NewsGuard ratings) has more than doubled. “The level of engagement with deceptive content on Twitter and Facebook hit record highs in 2020.”
In 2020, Consumer Reports submitted “increasingly outrageous ads, probing the boundaries of what Facebook would approve.” Seven of these test ads, all with “false or dangerous information,” ran through the platform’s screening system. “Facebook approved them all.”
Complex systems fools
Humans are bad at predicting the performance of complex systems, even programmers. Especially the programmers. Our ability to create large & complex systems fools us into believing that we’re also entitled to understand them. I call it the Creator Bias, and it’s our number-one occupational disease.
Carlos Bueno, Mature Optimization Handbook (2013)
A Facebook engineer wrote that for the company almost a decade ago.
When people propose government regulation or corporate responsibility as solutions, they assume social media has the skills to stop fake news and conspiracy fantasies. History doesn’t support that hypothesis.
Facebook’s “engagement-based ranking,” and similar algos, “perpetuate biases and affect society in ways that are barely understood by their creators,” says Roddy Lindsay, who helped develop the company’s News Feed. “Facebook has had more than 15 years to demonstrate that algorithmic personal feeds can be built responsibly; if it hasn’t happened by now, it’s not going to happen.”
If social platforms are fighting lies and hate with AI, they’re losing. These headlines are from 2021:
- “Facebook still has Holocaust denial content three months after Mark Zuckerberg pledged to remove it” —USA Today
- “YouTube’s algorithm pushes hateful content and misinformation” —Politico
- “Facebook and YouTube spent a year fighting covid misinformation. It’s still spreading.” —The Washington Post
- “Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead” —Wall Street Journal
- “AI still sucks at moderating hate speech” —MIT Technology Review (also by Karen Hao, I’m a fan)
Forget advanced technologies: Facebook can’t even count. For years, Facebook “miscalculated the average time users spent watching videos on its platform,” overestimating the average by 60–80% (Wall Street Journal). Another major miscount, again in its favor, is detailed in a 2018 lawsuit filed in California: “For 18 to 34-year-olds, Facebook represents to advertisers a Potential Reach of 100 million people. But there are only 76 million 18 to 34-year-olds in the U.S.”
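The video miscount, as the WSJ reported it, came from a simple denominator error: the “average duration” divided total watch time by only the viewers who watched more than three seconds, not by everyone the video reached. A minimal sketch with made-up numbers:

```python
# Hypothetical numbers illustrating a denominator error like the one the
# WSJ described: averaging watch time over only viewers who passed a
# 3-second threshold, instead of over everyone the video reached.

watch_seconds = [0, 0, 1, 2, 4, 10, 30, 60]  # seconds watched by 8 viewers

# Honest metric: total watch time divided by everyone who saw the video.
honest_avg = sum(watch_seconds) / len(watch_seconds)

# Flawed metric: divide by only the views longer than 3 seconds.
qualified = [s for s in watch_seconds if s > 3]
inflated_avg = sum(qualified) / len(qualified)

print(f"average over all viewers: {honest_avg:.1f}s")      # 13.4s
print(f"average over 3s+ viewers: {inflated_avg:.1f}s")    # 26.0s
print(f"overstatement: {inflated_avg / honest_avg - 1:.0%}")  # 94%
```

A threshold on the denominator can only push the average up, which is why the error ran in Facebook’s favor.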
Social media likely lacks both the will and the skills to fix its fake-news problem. The malgorithms that fuel the widespread fires of mis/disinfo have grown too large, too wild, and too far beyond their creators’ ability to contain them.
‘Cause you can’t, you won’t, and you don’t stop
Well, you can’t, you won’t, and you don’t stop
Sure Shot, Beastie Boys, Ill Communication
Humans don’t change. Social media won’t change. Our only hope for change is a media that controls its role in spreading messages of lies and hate. I’ll detail the cause and potential cure in the final two posts.
Editors: Josef Verbanac and Claire Golding.