
Facebook says the good it does outweighs the bad. But how many ‘likes’ make up for the hate?

You can’t maintain a social network of 3 billion people without a few algorithms, and Facebook has them in spades. There are algorithms that calculate what ads you are most likely to click on, algorithms that calculate which groups you are most likely to join and algorithms that decide which of your friends’ births, marriages, divorces and deaths merit prime placement at the top of your newsfeed – and which you won’t mind missing.

On Wednesday, in response to the growing advertiser boycott over Facebook’s failure to address hate speech, the executive Nick Clegg described a new kind of Facebook algorithm – one that calculates the social network’s moral worth. Writing for the advertising industry trade publication Ad Age, Clegg attempted to argue that the good on Facebook outweighs the bad.


“Focusing on hate speech and other types of harmful content on social media is necessary and understandable, but it is worth remembering that the vast majority of those billions of conversations are positive,” the former deputy prime minister wrote. “Look at what happened when the coronavirus pandemic took hold. Billions of people used Facebook to stay connected when they were physically apart.”

This is not the first time that a Facebook exec has hinted at such attempts to calculate the incalculable. (One imagines Clegg totting up the balance sheet at the end of the quarter: “I see that in the red we have this murder of a security officer allegedly carried out by extremists who met and coordinated their attack on Facebook but here’s one for the black: an adorable grandmother just liked a photo posted by her grandson who lives 500 miles away.”)

On 2 June, facing an unprecedented public protest by Facebook employees, Mark Zuckerberg told his staff that even if they disagreed with some of his decisions he hoped they agreed that “the net impact of the different things that we’re doing in the world is positive”, according to a transcript published by Vox. “I really believe it is,” he added.

As with all of Facebook’s algorithms, there is no transparency on how Facebook arrived at this net positive impact. We can only look at the outcomes and attempt to reverse engineer the decisions that produced them.

Take, for example, the campaign of genocide against the Rohingya minority in Myanmar. I don’t know exactly how Facebook accounts for its role in inciting the violence and ethnic cleansing that forced more than 700,000 Rohingya to flee the country as refugees, but I do know that no one at Facebook was fired over its deadly failures. No one resigned. No one staged a “virtual walkout”. No one put together a hastily arranged press appearance to quell outrage from advertisers.

It’s clear that according to Facebook’s moral calculus, the lives of people in the global south do not count for as much as the lives of people in its own country, but one need not struggle to find violence and harm from Facebook here, either. Let’s not forget that the Unite the Right rally in Charlottesville, Virginia, where Heather Heyer was murdered, started as a Facebook event.

A demonstrator holds up a photo of Heather Heyer during a demonstration against racism in Los Angeles. The Unite the Right rally where Heyer was killed started as a Facebook event. Photograph: Mike Nelson/EPA

Heyer’s killer has been convicted and sent to prison, but how does Facebook evaluate its role in the event? Does the calculation change at all when you consider that just a few weeks before Charlottesville, I sent Facebook a spreadsheet with links to 175 neo-Nazi, white nationalist and neo-Confederate hate groups that were using its platform to recruit and organize? And that Facebook had declined to take any action against the vast majority of them until after Heyer’s murder, when it belatedly cleaned house?

How many sewing circles or bird watching groups or kickball teams using Facebook tools does it take to make up for that?

When I read Clegg’s generic paean this morning to the “grandparents and grandchildren, brothers and sisters, friends and neighbors” who use Facebook’s tools to communicate, I couldn’t help but think about my own grandmother.

My grandmother was 105 and slowly dying of kidney failure when I went home for Thanksgiving last year. It was a joyful and dreadful holiday as we crammed as many special meals and celebrations into four days as we could. On Thanksgiving Day, I vividly remember my grandmother declaring that those of us who chose not to eat the sausage in our traditional Chinese-American sticky rice stuffing “don’t know how to live” – and feeling happy that it could never be said of my grandmother that she didn’t know how to live.

I also remember, just as vividly, glancing at my phone in my parents’ kitchen, and seeing that I had just been emailed another violent and racist rape threat. I remember looking away as soon as I could, as if that would wipe what I’d just seen from my mind. I remember thinking that I had to pretend nothing had happened, that I shouldn’t ruin Thanksgiving.

In mid-November, I had reported on the continued presence of white nationalist organizations on Facebook. I had again provided Facebook with a list of groups that were using its platform despite its stated policies against white nationalist hate, and Facebook had again failed to take action against them. What ensued was a vicious, weeks-long campaign of racist and sexist harassment, coordinated by white nationalist organizations (and amplified by Breitbart News).

The neo-Nazis and white nationalists I had written about published articles with my photograph that described me as a “racial molotov cocktail” with “the cunning of the Jew and the meticulous mathematical mind of a Chink”. They encouraged their followers to go after me too, and I received a steady stream of racist vitriol on Twitter, on Facebook and by email. I tried to ignore it as much as I could. I tried not to ruin Thanksgiving. The worst were the messages that referenced my family, or imagined my rape.


(I’ve struggled with whether I should admit to being affected by what happened – journalists aren’t supposed to show weakness, after all. I’ve worked hard to feel “fine” about all of this, but I have yet to reach a point where I can remember what happened without feeling my heart rate rise, without feeling an unwanted surge of adrenaline course through my body.)

I’m not saying that Facebook is solely responsible for the actions of every hate-addled individual who harassed me, let alone for the decisions made by Heyer’s murderer or Myanmar’s military.

But I do think that Facebook played a role in creating the conditions necessary for those things to happen. I think that not because I am a bitter and cynical reporter who is chasing clicks with outrage, but because over and over and over again reporters, researchers and activists have documented the real and devastating costs of Facebook’s algorithmic negligence and record of accommodating hate.

So when I hear Facebook touting all the good it has supposedly done for the world, I want to know just how it’s making that accounting, because I’m not prepared to say that it’s enough.

Hate is an existential threat to the people it targets, but it’s no threat at all to Facebook. The only existential threat to a $650bn multinational corporation is a threat to its revenues. That’s where the real calculations are taking place right now at Facebook. When hate hurt people, Facebook did nothing. Now that it’s hurting Facebook, we’ll see what it really values.