
Widespread face mask use could make facial recognition less accurate

A commuter wearing a mask in Waterloo

As more people began to wear masks in public to help prevent the spread of coronavirus, iPhone owners quickly spotted a problem. With people’s noses and mouths covered, the phone’s facial recognition system used to unlock its devices stopped functioning, frustrating users.

Apple issued a quick fix: iPhones now instantly recognise when a person is wearing a mask and ask them to enter a passcode instead of attempting facial recognition. But the original inconvenience pointed to a bigger problem.

In recent weeks, security officials and technology companies have been left wondering about the future of the multi-billion-dollar facial recognition surveillance industry, while civil rights advocates fear widespread mask-wearing could make an already imperfect technology more susceptible to false matches.

Facial recognition works by analysing a photo or video of a face to read its geometry and collecting hundreds of unique markers, like the spacing of a person’s eyes, the size of their forehead and the length of their face.

Once this is mapped, the system creates a unique code that can be matched against a police watchlist, used to identify someone as they arrive at a border, or used to unlock a phone.
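In broad strokes, that matching step can be illustrated with a toy sketch. The landmark names, marker choices and tolerance below are illustrative assumptions, not any vendor's actual algorithm; real systems compare learned embeddings with hundreds of dimensions rather than three hand-picked distances.

```python
import math

def face_code(landmarks):
    """Reduce a dict of (x, y) landmarks to a tuple of geometric markers:
    eye spacing, forehead size and face length (all hypothetical)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (
        dist(landmarks["left_eye"], landmarks["right_eye"]),  # eye spacing
        dist(landmarks["brow"], landmarks["hairline"]),       # forehead size
        dist(landmarks["hairline"], landmarks["chin"]),       # face length
    )

def matches(code_a, code_b, tolerance=5.0):
    """Two codes match if every marker differs by less than the tolerance."""
    return all(abs(a - b) < tolerance for a, b in zip(code_a, code_b))

# An enrolled face and a probe face with slightly shifted landmarks.
enrolled = face_code({"left_eye": (30, 50), "right_eye": (70, 50),
                      "brow": (50, 60), "hairline": (50, 90), "chin": (50, 0)})
probe = face_code({"left_eye": (31, 50), "right_eye": (69, 50),
                   "brow": (50, 61), "hairline": (50, 89), "chin": (50, 1)})
print(matches(enrolled, probe))  # small landmark shifts still match
```

A mask removes the lower landmarks entirely, which is why algorithms that lean on markers below the nose degrade the most.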

Like Apple’s Face ID, which needs the eyes, nose and mouth to be visible in order to work, many algorithms trained on publicly available datasets focus on the full set of facial features, typically with an emphasis on the lower region around the lips. It is these algorithms, largely used by smartphones, that will continue to fail while people wear face masks, experts say.

Some of the big players may be less affected by being unable to scan the lower face, including Amazon’s AWS Rekognition service, which continues to work on footage of people wearing masks, says Dr Seena Rejal, founder of London-headquartered machine learning company Shapes AI.

Amazon has repeatedly refused to share what images Rekognition, which has been sold to police forces and governments around the world, is trained on, stating only that they were “legally” obtained.

"One surmises that they had used a lot of training data from security forces, where the faces were often concealed and thus, their model learnt to focus substantially on the upper region of the faces as well," Dr Rejal says.

But Jeff Bezos' machine learning unit has other problems to contend with. A study published last week claimed that Amazon’s facial recognition continues to present false matches, raising questions about its use by law enforcement.

The US Customs and Border Protection agency recently told Wired that its facial recognition systems continue to perform well even when individuals are wearing masks. The system was built by Japanese technology giant NEC, which also developed one used by the Metropolitan Police; that deployment has reportedly been stalled in the wake of Covid-19.

Several other private companies claim their facial recognition services remain effective. Many hail from the East and claim an advantage over Western companies, which are new to crowds covering their faces. In Asia, where populations regularly wear masks during flu season, companies have long needed to account for a loss of features when developing their systems.

Russia’s NtechLab says it has refined its algorithm for Covid-19, and China’s SenseTime, recently blacklisted by the US over its alleged role in helping the Chinese government oppress Uighur Muslims, claims to have reworked its product to control access to buildings and workplaces for people wearing face masks.

SenseTime says its system analyses eyes, eyebrows, and the bridge of the nose to create the perfect match.

But as Ray Walsh, digital privacy expert at ProPrivacy points out, there are more identifying features than just our faces to focus on.

“Of course, a full face covering that does everything in its power to completely disguise a face is going to prevent facial recognition from working. However, new algorithms are being developed that use things like gait recognition to successfully single people out - meaning that a combination of tracking technologies could one day work simultaneously to provide the police with new more invasive identification capabilities.

“The idea that algorithms could be tested on masked selfies is problematic, but it is also the only effective way to further train and develop algorithms to be able to identify people in masks.”

This research is already underway. Researchers are using 1,200 pictures taken from Instagram to create an open source “COVID19 Mask Image Dataset” to train facial recognition.

In addition, researchers in China compiled a database with more than 5,000 masked photos they took from social media.

Nikolai Grunin, an employee at NtechLab, a Russian facial recognition company

One interesting workaround could be using existing libraries of synthetic data, which can overlay virtual masks on faces.

“Using these, one may get a plethora of faces with masks and the same faces without masks on the order of hundreds of thousands, almost overnight,” Dr Rejal says.

The main challenge with this approach would be controlling factors such as lighting, facial angle and pose in the generated data.
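The augmentation idea can be sketched in a few lines. This is a toy illustration under stated assumptions: the image is a plain 2D grid of pixel values and the "mask" is a flat-coloured band over the lower face, whereas a real pipeline would warp a mask texture onto detected landmarks and vary lighting and pose.

```python
# Illustrative mask colour; real augmentation would use a textured overlay.
MASK_VALUE = 255

def add_virtual_mask(image, coverage=0.45):
    """Return a copy of `image` (list of pixel rows) with the bottom
    `coverage` fraction painted over, simulating a face mask."""
    height = len(image)
    mask_start = int(height * (1 - coverage))
    masked = [row[:] for row in image]  # copy so the original is untouched
    for y in range(mask_start, height):
        masked[y] = [MASK_VALUE] * len(masked[y])
    return masked

# A 10x10 stand-in "face" image with distinct pixel values.
face = [[i * 10 + j for j in range(10)] for i in range(10)]
training_pair = (face, add_virtual_mask(face))  # unmasked/masked pair
```

Run over an existing library of unmasked faces, this yields matched masked/unmasked pairs in bulk, which is what makes the "hundreds of thousands overnight" claim plausible.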

Dr Rejal does not believe that masks will be the undoing of security-grade facial recognition systems and years of research.

“We already have solutions that can do facial recognition with masks,” he says.

“AI researchers kind of know the ways to tackle this problem with good, applicable accuracy, and while we are not vouching for this to be perfect in terms of detecting true negatives, it can be nearly perfect in predicting true positives and negating false positives.”

But others argue that continuing to use facial recognition while people are wearing masks could result in inaccuracies and false matches.

“I don't see that any company will stop selling facial recognition, but I think the rate of false identifications will rise dramatically,” says Kostyantyn Shysh, co-founder of Traces AI, which uses other signals like clothing and hairstyle to track suspects and missing people.

“Traditional facial recognition technologies have been heavily criticized for resulting in false positives, discrimination, and prejudice - particularly towards minorities who were under-represented during the developmental stages for those algorithms,” says ProPrivacy’s Walsh.

“The fact that these kinds of technologies have been shown to result in false positives regularly in the past is highly concerning, particularly because of the prejudice and discrimination this leads to. This reveals that any use of algorithms that are trained to identify people in masks is likely to have a massively high failure rate, making it unfit for purpose.”