
Lawmakers’ latest idea to fix Facebook: Regulate the algorithm

Whistleblower Frances Haugen says the software that decides what we see in our social feeds is hurting us all. But reforming it won’t be easy.

Analysis by staff writer
October 12, 2021 at 9:00 a.m. EDT
Former Facebook employee Frances Haugen told lawmakers Oct. 5 what policies the company could adopt to make its products safer. (Video: The Washington Post, Photo: Matt McClain/The Washington Post)

On Facebook, you decide whom to befriend, which pages to follow, which groups to join. But once you’ve done that, it’s Facebook that decides which of their posts you see each time you open your feed — and which you don’t.

The software that makes those decisions for each user, based on a secret ranking formula devised by Facebook that includes more than 10,000 factors, is commonly referred to as “the news feed algorithm,” or sometimes just “the algorithm.” On a social network with nearly 3 billion users, that algorithm arguably has more influence over what people read, watch and share online than any government or media mogul.
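Facebook has never published that formula, but the general shape of an engagement-based ranker can be sketched in a few lines of code. In the illustrative Python sketch below, the signal names and weights are invented; the real system reportedly draws on thousands of signals and machine-learning predictions rather than a simple weighted sum.

```python
# A minimal, hypothetical sketch of an engagement-based feed ranker.
# Facebook's real formula is secret and uses thousands of signals; the
# signal names and weights below are invented for illustration only.
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str
    signals: dict = field(default_factory=dict)  # e.g. predicted likes, comments


# Invented weights: each predicted reaction contributes to one relevance score.
WEIGHTS = {
    "predicted_like": 1.0,
    "predicted_comment": 5.0,      # interactions count for more than passive views
    "predicted_share": 10.0,
    "close_friend_affinity": 3.0,  # the wedding photos beat the lunch post
}


def score(post: Post) -> float:
    """Combine weighted signals into a single number used to order the feed."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in post.signals.items())


def rank_feed(candidate_posts: list[Post]) -> list[Post]:
    """Show the highest-scoring posts first, rather than the newest."""
    return sorted(candidate_posts, key=score, reverse=True)
```

The key point is the ordering principle: posts surface by predicted engagement, not by when they were posted or by what a user explicitly asked to see first.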

It’s the invisible hand that helps to make sure you see your close friend’s wedding photos at the top of your feed, rather than a forgotten high school classmate’s post about what they had for lunch today. But because Facebook’s primary goal is to grab and hold your attention, critics say, it’s also prone to feed you that high school classmate’s post of a meme that demonizes people you disagree with, rather than, say, a balanced news story — or an engrossing conspiracy theory rather than a dry, scientific debunking.

That type of highly personalized, attention-seeking algorithm — and others much like it on apps such as TikTok, YouTube, Twitter and Facebook-owned Instagram — is what Facebook whistleblower Frances Haugen identified as the crux of the threat that social media poses to society.

“One of the consequences of how Facebook is picking out that content today is that it’s optimizing for content that gets engagement, or reaction,” Haugen said on the CBS show “60 Minutes.” “But its own research is showing that content that is hateful, that is divisive, that is polarizing — it’s easier to inspire people to anger than it is to other emotions.”

Amid a broader backlash against Big Tech, Haugen’s testimony and disclosures have brought fresh urgency to debates over how to rein in social media and Facebook in particular. And as lawmakers and advocates cast about for solutions, there’s growing interest in an approach that’s relatively new on the policy scene: regulating algorithms themselves, or at least making companies more responsible for their effects. The big question is whether that can be accomplished without ruining what people still like about social media — or running afoul of the First Amendment.

In the past year, at least five bills have been introduced or reintroduced in Congress that focus explicitly on the software programs that decide what people see on social media platforms. Beyond the United States, efforts to regulate such algorithms are advancing in the European Union, Britain and China.

“It’s heartening to see Congress finally beginning to focus on the heart of the problem,” Rep. Tom Malinowski (D-N.J.), who co-authored a bill to regulate algorithms, said in a phone interview last week. “The heart of the problem is not that there’s bad stuff posted on the Internet. It’s that social networks are designed to make the bad stuff spread.”


That marks a shift from earlier congressional hearings about Facebook, which tended to focus on what’s known as content moderation: social networks’ decisions to ban or allow certain types of posts. Those arguments tended toward stalemates, as lawmakers on the left wanted tech giants to crack down more aggressively on hate speech, conspiracy theories and falsehoods, while those on the right wanted to tie the tech giants’ hands to prevent what they claim is a form of censorship. Both were hemmed in by the First Amendment, which constrains the government’s power to regulate companies’ speech policies.

Some lawmakers and advocates are hopeful that swiveling the spotlight to the underlying design and incentives of social networks, including their recommendation systems, will illuminate common ground between the parties. These approaches take to heart the distinction between free speech, which is enshrined in the Constitution, and what researcher Renee DiResta has called “free reach,” which is not.

Feed-ranking algorithms have their benefits. At their best, they show people posts that they’re likely to find interesting, surprising or valuable, and that they might not have encountered otherwise — while filtering out the noise of humdrum updates or tedious self-promotion. They let people without large followings reach wide audiences with important messages, without going through established media gatekeepers. Some researchers say they have helped fuel social movements, from the Arab Spring to Black Lives Matter.

Yet their dark sides have gradually drawn more attention.

Among the internal research findings that Haugen publicized were some that suggested Instagram’s algorithm exploits teen girls’ insecurities to show them posts related to extreme dieting and even self-harm. (Experts say more research is needed to fully understand how Instagram affects mental health.) Another set of documents argues that changes made to Facebook’s news feed algorithm in 2018 and 2019, touted as encouraging “meaningful social interactions” between users, had the side effect of systematically promoting posts that sparked arguments and outrage.

That wasn’t Facebook’s intent, Haugen said. The intent, she explained, was to nudge its users to interact with one another more, which chief executive Mark Zuckerberg saw as critical to keeping the social network relevant as younger users gravitated to rivals such as Snapchat. Facebook offered a different rationale, saying its intent was to boost users’ well-being amid concern over the effects of passive “screen time.” Both agree that the company’s algorithm change included boosting posts that sparked comments, as opposed to just likes or views.
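Whatever the motive, the mechanics Haugen described come down to re-weighting: certain predicted interactions started counting for more than others. The toy numbers below, which are invented (Facebook has not disclosed its actual values), show how such a change can flip what rises to the top of a feed.

```python
# Hypothetical illustration of the kind of re-weighting Haugen described:
# after the 2018 change, posts predicted to draw comments counted for more
# than posts that only drew likes or passive views. All numbers are invented.
OLD_WEIGHTS = {"predicted_view": 1.0, "predicted_like": 1.0, "predicted_comment": 1.0}
NEW_WEIGHTS = {"predicted_view": 0.5, "predicted_like": 1.0, "predicted_comment": 15.0}


def msi_score(predicted: dict[str, float], weights: dict[str, float]) -> float:
    """A 'meaningful social interactions'-style score: a weighted sum of predictions."""
    return sum(weights[name] * value for name, value in predicted.items() if name in weights)


# A post likely to spark an argument (many comments) now outranks a post that
# is merely seen or liked, even if far more people would have viewed the latter.
argumentative = {"predicted_view": 100.0, "predicted_like": 5.0, "predicted_comment": 20.0}
pleasant = {"predicted_view": 500.0, "predicted_like": 50.0, "predicted_comment": 1.0}

assert msi_score(pleasant, OLD_WEIGHTS) > msi_score(argumentative, OLD_WEIGHTS)
assert msi_score(argumentative, NEW_WEIGHTS) > msi_score(pleasant, NEW_WEIGHTS)
```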

When researchers began to uncover the alarming side effects, those findings were downplayed and ignored by higher-ups, Haugen said — perhaps, she alleges, because the company had tied some of its performance bonuses to increasing the metrics associated with the change. Facebook has declined to comment on that particular allegation.


One way to regulate algorithms without directly regulating online speech would be to amend Section 230 of the Communications Decency Act, which shields websites and apps from being sued for hosting or moderating content posted by users. Several bills propose removing that protection for certain categories of harmful content that platforms promote via their algorithms, while keeping it in place for content they merely host without amplifying.

Forcing tech companies to be more careful about what they amplify might sound straightforward. But it poses a challenge to tech companies because the ranking algorithms themselves, while sophisticated, generally aren’t smart enough yet to fully grasp the message of every post. So the threat of being sued for even a couple of narrow types of illegal content could force platforms to adjust their systems on a more fundamental level. For instance, they might find it prudent to build in human oversight of what gets amplified, or perhaps move away from automatically personalized feeds altogether.

To some critics, that would be a win. Roddy Lindsay, a former Facebook data scientist who worked on the company’s algorithms, argued in a New York Times op-ed this week that Section 230 reform should go further. He proposes eliminating the liability shield for any content that social platforms amplify via personalized recommendation software. The idea echoes Haugen’s own suggestion. Both Lindsay and Haugen say companies such as Facebook would respond by abandoning their recommendation algorithms and reverting to feeds that simply show users every post from the people they follow.
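In code terms, the alternative Lindsay and Haugen describe is far simpler than a ranking model: no predictions at all, just every post from followed accounts, newest first. A minimal sketch, with hypothetical field names, might look like this:

```python
# A sketch of a non-personalized feed: filter to the accounts a user follows,
# then sort by recency instead of predicted engagement. Field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    author: str
    text: str
    created_at: datetime


def chronological_feed(posts: list[Post], followed: set[str]) -> list[Post]:
    """Every post from followed accounts, newest first, with no ranking model."""
    return sorted(
        (p for p in posts if p.author in followed),
        key=lambda p: p.created_at,
        reverse=True,
    )
```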

Nick Clegg, Facebook’s vice president for global affairs and communications, argued against that idea Sunday on ABC’s “This Week.”

“If we were just to sort of across the board remove the algorithm, the first thing that would happen is that people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content,” Clegg said. “Why? Because those algorithmic systems precisely are designed like a great, sort of giant spam filter to identify and deprecate and downgrade bad content.”

More than Facebook, social video platforms such as TikTok and YouTube rely on algorithms to elevate their users’ cleverest, best-produced videos over the mountains of amateurish efforts. It’s hard to imagine TikTok without its “For You” page, which draws heavily on a user’s viewing history to serve up videos tailored to their interests, including new spins on memes they’ve seen in the past.

The bill proposed by Malinowski and Rep. Anna G. Eshoo (D-Calif.) would take a more cautious approach, removing Section 230 protection only when platforms’ opaque algorithms promote content related to civil rights violations or international terrorism.

“We tried to design a remedy that’s narrowly tailored to the problem,” Malinowski said. “We’re not trying to kill the Internet. We’re not trying to end Facebook or YouTube.”


Along similar lines, Sen. Amy Klobuchar (D-Minn.) introduced a bill in July to remove the liability shield when platforms promote medical misinformation during a public health emergency.

From the other side of the aisle, Sen. Marco Rubio (R-Fla.) introduced a bill in June that would remove tech companies’ liability shield when they either promote or “censor” certain political viewpoints. While that bill has gained little traction, it reflects Republicans’ interest in limiting platforms’ content-moderation power along with their algorithms.

Any proposal to change Section 230 stirs controversy in tech policy circles. When Congress last amended it, in 2018, the goal was to curb online sex trafficking, but sex workers and researchers said the practical effect was to push online service providers toward heavy-handed crackdowns on an already vulnerable group.

“I’m generally concerned about reforms to Section 230,” said Allie Funk, a senior research analyst at the nonprofit Freedom House and co-author of its recent annual report on global Internet freedom. “What we’ve seen around the world is when we tweak protections against intermediary liability, you often have companies erring on the side of censorship and removing political, social and religious speech, particularly of those in marginalized communities.”

Funk argued that social media’s ills would be better addressed through a combination of stronger consumer privacy protections, competition policies that limit dominant platforms’ market power and transparency requirements.

Evan Greer, director of the nonprofit advocacy group Fight for the Future, worries that the Eshoo-Malinowski bill and others like it would force social networks such as Facebook to retreat to amplifying only sanitized content from whitelisted corporate partners. She argues that the underlying problem with social media companies is their business model, which relies on aggressive profiling of users to target them with content and ads. The solution to manipulative algorithms, she said, is to pass a data privacy law “strong enough to effectively kill this business model.”


Other ideas to regulate algorithms would leave Section 230 intact. A bipartisan bill called the Filter Bubble Transparency Act, which Haugen endorsed in her testimony, would require the largest social platforms to better explain their algorithms to consumers and to offer everyone the option of a feed that isn’t manipulated by ranking software.

“The more transparency consumers have with respect to how social media and other Internet platforms prioritize content on their services, the better,” Sen. John Thune (R-S.D.), one of the co-authors, said when the bill was reintroduced in June.

A pair of Democratic lawmakers, Rep. Doris Matsui (Calif.) and Sen. Edward J. Markey (Mass.), introduced the Algorithmic Justice and Online Platform Transparency Act in May. It would prohibit algorithms that discriminate on the basis of race, age, gender and other protected classes, not just on social media but in arenas such as housing and job ads. It would also require online platforms to submit descriptions of their algorithms for Federal Trade Commission review and to publish public reports on their content-moderation practices.

Daphne Keller, who directs the Program on Platform Regulation at Stanford University’s Cyber Policy Center, has thrown cold water on the idea of regulating what types of speech platforms can amplify, arguing that bills such as Eshoo and Malinowski’s would probably violate the First Amendment.

“Every time a court has looked at an attempt to limit the distribution of particular kinds of speech, they’ve said, ‘This is exactly the same as if we had banned that speech outright. We recognize no distinction,’ ” Keller said.

Proposals to limit algorithmic amplification altogether, such as Lindsay’s, might fare better than those that target specific categories of content, Keller added, but then social media companies might argue that their algorithms are protected under their First Amendment right to set editorial policy.


That isn’t an issue in China, where regulators are launching a three-year campaign to regulate algorithms for fairness, transparency and alignment with the government’s socialist ideals.

In Europe, the proposed Digital Services Act includes transparency provisions that would require platforms to disclose information about their algorithms and content-moderation practices to regulators and independent researchers.

One of the more creative approaches to the algorithm issue focuses on giving social media users the power to choose their own ranking system. Scholars Francis Fukuyama of Stanford and Barak Richman of Duke University propose requiring dominant networks such as Facebook to allow outside software developers to build and offer “middleware” — third-party programs that do the work of ranking users’ feeds and filtering content they don’t want to see. That would leave Facebook’s basic business model intact but diffuse its power over discourse, while giving people the power to opt for algorithms that don’t necessarily optimize for the growth and engagement metrics to which Facebook seems wedded.
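No middleware standard exists today, so any sketch is speculative. But the architectural idea is straightforward: the platform keeps hosting the content while delegating the ordering step to a ranker the user selects. The interface and class names below are hypothetical.

```python
# A rough sketch of the Fukuyama-Richman "middleware" idea: the platform hosts
# the posts but hands the ranking step to an interchangeable third-party ranker
# chosen by the user. The Ranker protocol and example rankers are hypothetical.
from typing import Protocol


class Ranker(Protocol):
    def rank(self, posts: list[dict]) -> list[dict]:
        """Return the posts in the order the user should see them."""
        ...


class ChronologicalRanker:
    def rank(self, posts: list[dict]) -> list[dict]:
        return sorted(posts, key=lambda p: p["created_at"], reverse=True)


class NewsOnlyRanker:
    def rank(self, posts: list[dict]) -> list[dict]:
        return [p for p in posts if p.get("category") == "news"]


def build_feed(posts: list[dict], user_chosen_ranker: Ranker) -> list[dict]:
    """The platform supplies the content; the user's chosen middleware decides the order."""
    return user_chosen_ranker.rank(posts)
```

The design choice is where the ordering logic lives: moving it outside the platform means the feed no longer has to optimize for the growth and engagement metrics to which Facebook seems wedded.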

Facebook, for its part, notes that it already offers users of its main app the option to revert to a mostly reverse-chronological news feed. Clegg also announced Sunday that the company will reduce the amount of politics in users’ feeds in favor of more content from their friends. And the company has said it would welcome some forms of tech regulation, potentially including privacy laws and Section 230 changes — just not the kind that would outlaw its business model or ranking algorithms.