
Twitter’s photo algorithm prioritised white faces over black ones, company says it’s ‘got more analysis to do’

(Marten Bjork)

Twitter’s photo algorithm showed evidence of racial bias over the weekend.

The company said it was grateful that the issue had come to light and acknowledged it had more work to do to address racial bias in its systems.

Users found that when posting photos of black people and white people side by side in one image, the white person would overwhelmingly be chosen as the cropped preview on the timeline.

The issue was discovered when a Twitter user posted about Zoom’s facial recognition technology, which was erasing his black colleague’s head whenever the colleague used a virtual background.

When he tweeted about the issue, he noticed that Twitter’s preview crop was also prioritising his own white face over his colleague’s.

Other users attempted to recreate the experiment with other faces, including those of US Senate majority leader Mitch McConnell and former US president Barack Obama.

Twitter’s algorithm consistently prioritised Senator McConnell’s face as the preview image.

The issue also occurred when posting images of black cartoon characters versus white cartoon characters, and even dark-furred dogs against light-coloured dogs.

However, the issue seemed less pronounced on TweetDeck, a dashboard application for Twitter, which appeared to rely less on the content of images when displaying them.

Twitter’s chief design officer Dantley Davis tweeted a similar comparison, but clarified that this was “not a scientific test as it's an isolated example” and said that the company was “investigating the [neural network].”

A neural network is an artificially intelligent system that Twitter uses to decide the photos it displays on the timeline.

In a 2018 blog post, the company explained that its cropping algorithm performs a form of saliency detection.

This means the company attempts to show the things that people are most likely to look at – such as “faces, text, animals, but also other objects and regions of high contrast”.

This kind of algorithm runs on every image as it is cropped and posted in real time, albeit in a cut-down form, since Twitter is “only interested in roughly knowing where the most salient regions are”.

“Unfortunately, the neural networks used to predict saliency are too slow to run in production, since we need to process every image uploaded to Twitter and enable cropping without impacting the ability to share in real time,” the post says.
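For illustration, the general technique can be sketched in a few lines of code. The snippet below is a minimal sketch, not Twitter’s actual model: it uses OpenCV’s built-in spectral-residual saliency detector (available in the opencv-contrib-python package) to centre a fixed-size preview crop on the most salient point in an image. The function name salient_crop and the crop dimensions are hypothetical.

```python
import cv2
import numpy as np

def salient_crop(image, crop_w, crop_h):
    """Return a crop_w x crop_h crop centred on the most salient point."""
    # Compute a per-pixel saliency map in [0, 1] using the
    # spectral-residual method (requires opencv-contrib-python).
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if ok:
        # Centre the crop on the single most salient pixel.
        cy, cx = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    else:
        # Fall back to a centre crop if saliency computation fails.
        cy, cx = image.shape[0] // 2, image.shape[1] // 2

    # Clamp the crop window so it stays within the image bounds.
    h, w = image.shape[:2]
    x0 = min(max(cx - crop_w // 2, 0), max(w - crop_w, 0))
    y0 = min(max(cy - crop_h // 2, 0), max(h - crop_h, 0))
    return image[y0:y0 + crop_h, x0:x0 + crop_w]

# Example usage: preview = salient_crop(cv2.imread("photo.jpg"), 600, 335)
```

A production system such as Twitter’s instead trains a neural network to predict where people are likely to look, but the underlying idea is the same: score every region of the image for salience, then crop around the peak. Any bias in those scores flows directly into which faces appear in the preview.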

Twitter had previously used face detection for its cropping algorithm, but it would often miss faces or detect faces where there were none.

“Thanks to everyone who raised this. We tested for bias before shipping the model and didn't find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. We'll open source our work so others can review and replicate,” tweeted Liz Kelley, who works on Twitter’s communication team.

The algorithm’s flaw is “a very important question”, tweeted Twitter’s chief technology officer Parag Agrawal.

“To address it, we did analysis on our model when we shipped it, but needs continuous improvement. Love this public, open, and rigorous test – and eager to learn from this,” he continued.

Other social media companies, such as Facebook-owned Instagram, have also been criticised for racially biased algorithms.

Object-detection systems used in self-driving cars have been shown to be less accurate at recognising darker-skinned pedestrians, making the cars more likely to hit black people.

An algorithm used across the US to predict how likely prisoners are to reoffend was also found to be biased against black people.

Recently, a black man was wrongfully arrested based on a flawed match from a facial recognition algorithm. Experts say such algorithmic biases will exacerbate racial inequality.

Last year, the National Institute of Standards and Technology tested 189 algorithms from 99 developers and found that black and Asian faces were 10 to 100 times more likely to be falsely identified by the algorithms compared to white faces.
