With many of us trying to get used to juggling home working with home schooling, and getting used to the exigencies imposed by lockdown and the accompanying worries about the health of our families and friends, we could be forgiven for having a face like thunder.
But could our faces soon be able to tell people much more than how we’re feeling?
With the recent spike in terror attacks in the UK, the Met announced on 24th January 2020 that it will begin using live facial recognition (“LFR”) technology on a much wider scale than ever before in an effort to keep our streets safe. But how does this technology work? How will it be used? Is it reliable? What risk does it pose to our privacy? In what circumstances is it lawful to use it?
What is live facial recognition?
Live facial recognition is a type of software which reads the geometrical dimensions of your face, such as the distance between your eyes, the length of your nose and the breadth of your cheekbones, in order to create a unique biometric code for your face.
This biometric code is then run against a database or “watchlist” of wanted individuals or those who pose a risk of harm to themselves or others. If the code is close enough to an entry on the watchlist, the software flags the potential match for verification by a human operator.
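For the technically curious, the matching principle can be sketched in a few lines of Python. The example below uses the open-source face_recognition library purely as an illustration; the Met’s actual system, its encoding format and its match threshold have not been published, so the file names, watchlist entries and the 0.6 tolerance here are all assumptions.

```python
# Illustrative sketch only: the open-source `face_recognition` library
# stands in for the (non-public) police system. File names, watchlist
# entries and the 0.6 tolerance are hypothetical.
import face_recognition

# Build a "watchlist": one biometric code (a 128-dimensional encoding)
# per reference photo.
watchlist = {}
for name, path in [("person_a", "person_a.jpg"), ("person_b", "person_b.jpg")]:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # keep the first face found in the photo
        watchlist[name] = encodings[0]

def scan_frame(frame_path, tolerance=0.6):
    """Encode every face in a camera frame and return candidate
    watchlist matches for a human operator to verify."""
    frame = face_recognition.load_image_file(frame_path)
    names = list(watchlist)
    candidates = []
    for encoding in face_recognition.face_encodings(frame):
        # Distance between biometric codes: smaller = more similar.
        distances = face_recognition.face_distance(
            [watchlist[n] for n in names], encoding)
        for name, distance in zip(names, distances):
            if distance <= tolerance:
                candidates.append((name, round(float(distance), 3)))
    return candidates

print(scan_frame("camera_frame.jpg"))
```

Note that the software only ever *suggests* a match; as described above, a human operator makes the final call.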
How is live facial recognition used?
There have been a number of trials of LFR in the UK over the last few years. However, it is only now being implemented as part of official police strategy.
The Met intends to deploy LFR cameras focused on particular areas throughout the capital. Anyone who passes through those areas will have their face captured by the cameras, which will run each face against the Met’s watchlist, with officers at the scene being alerted to any matches. An officer can then decide whether or not to approach that person.
The Met has stated that people will be alerted when entering an LFR zone and that any faces which do not generate a match will be “automatically and immediately deleted”. Any faces which generate a match will be stored for 31 days or, if an arrest is made, until any investigation or judicial process is concluded.
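A minimal sketch of how those stated retention rules might be applied is shown below. This is our illustration under assumed details (a simple in-memory store, UTC timestamps); the Met’s actual implementation is not public.

```python
# Hypothetical sketch of the stated retention policy; the Met's real
# implementation has not been published.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=31)
stored_matches = []  # (encoding, matched_identity, captured_at)

def process_capture(encoding, matched_identity=None):
    """On a watchlist match: alert and retain. Otherwise the biometric
    code is "automatically and immediately deleted" -- i.e. it is
    simply never persisted."""
    if matched_identity is None:
        return None  # no match: nothing is stored
    stored_matches.append(
        (encoding, matched_identity, datetime.now(timezone.utc)))
    return matched_identity  # alert officers at the scene

def purge_expired(open_investigations=frozenset()):
    """Drop matches older than 31 days, unless an arrest has led to an
    investigation or judicial process that is still ongoing."""
    now = datetime.now(timezone.utc)
    stored_matches[:] = [
        (enc, who, ts) for enc, who, ts in stored_matches
        if who in open_investigations or now - ts < RETENTION
    ]
```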
If deemed successful, it is likely that LFR will be rolled out to cities and towns throughout the country.
What are the risks?
On the face of it, LFR sounds like an effective way of monitoring dangerous individuals and responding quickly to incidents such as attempted terror attacks and other serious or violent crimes.
However, if LFR is deemed a success, and as the technology itself becomes less “state of the art” and more affordable, there may well be a desire to begin using it on a wider basis to identify the perpetrators of more minor offences such as shoplifting.
This could lead to the deployment of LFR cameras in most public places, giving law enforcement unprecedented access to our daily lives in the name of crime prevention. Whilst some might argue “you have nothing to fear if you have nothing to hide”, questions should be asked about the point at which the use of LFR becomes intrusive and a violation of our right to privacy.
There are also legitimate concerns about what else government authorities might want, or try, to do with all this LFR-generated data. Trust in government and law enforcement agencies remains low, and widespread implementation of LFR might be met with anger, resistance and concerns that the public is being spied on.
For example, in the Chinese city of Suzhou, citizens were publicly shamed on a social media account operated by local government for committing “uncivilised behaviour” such as wearing their pyjamas whilst walking outside, lying on park benches in an “uncivilised manner” and handing out flyers. LFR was used to identify the “offending” citizens and photos of their “crimes” were posted online together with their name, ID number and other personal details.
Also in China, LFR has been used to systematically identify and monitor members of the minority Muslim Uighur population, based on their appearance. The government keeps records of the Uighurs’ movements for “search and review” and argues this is to combat religious extremism. However, there are widespread concerns that China is using LFR for racial profiling with a view to suppressing the Uighurs and/or removing them from Chinese society, either through incarceration or “re-education camps”.
How reliable is live facial recognition?
Aside from the ethical questions which LFR presents, one must also consider whether it works, i.e. can LFR accurately match faces against a watchlist?
There are concerns that LFR fails in both directions: it can match two faces belonging to different people (a false positive) and it can fail to match two images of the same person (a false negative). One test conducted in the US in 2018 resulted in 28 members of the US Congress being incorrectly matched with individuals on a law enforcement watchlist.
Further, some forms of LFR have been reported to show racial bias, with Amazon’s LFR technology, “Rekognition”, coming under particular criticism for disproportionately high error rates on the faces of women and people of colour. In essence, Rekognition was much more likely to decide that two different women, or two different people of colour, were the same person. The detrimental implications of using such technology for law enforcement are obvious.
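To make that point concrete, the toy sketch below uses entirely synthetic similarity scores (no real benchmark data) to show how a fixed match threshold can produce very different false positive rates for different groups of people. This is the kind of audit that surfaced Rekognition’s problems; the group names, score distributions and threshold are all invented for illustration.

```python
# Toy demonstration with synthetic data: how a single match threshold
# can yield very different false positive rates across groups.
import numpy as np

rng = np.random.default_rng(0)

def false_match_rate(distances, threshold):
    """Fraction of different-person pairs wrongly scored as a match
    (false positives)."""
    return float(np.mean(distances <= threshold))

# Hypothetical face-distance scores for pairs of *different* people,
# split by group. A biased model scores one group's faces as
# systematically more similar to each other (lower distances).
group_scores = {
    "group_a": rng.normal(0.75, 0.10, 10_000),
    "group_b": rng.normal(0.62, 0.10, 10_000),  # closer to the cut-off
}

THRESHOLD = 0.6
for group, scores in group_scores.items():
    print(group, false_match_rate(scores, THRESHOLD))
# group_b's false positive rate is several times higher at the same
# threshold, so the same system is far less reliable for that group.
```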
Legal Limitations
In the UK, the Information Commissioner’s Office (ICO), the data protection regulator, released an opinion on the use of LFR in October 2019. In addition to calling for a statutory code of practice for the use of LFR, the ICO stated that the use of LFR for law enforcement purposes amounts to the sensitive processing of special categories of personal data in all circumstances. This is the case even where a photo of someone’s face is deleted shortly after it is run against a watchlist and generates no matches.
The Data Protection Act 2018 states that, in order to carry out any type of sensitive processing for law enforcement purposes, a party must either:
- obtain the consent of all data subjects to the processing; or
- show that the processing is “strictly necessary” for law enforcement purposes.
In both cases, the party would also have to have an appropriate policy document in place in respect of such sensitive processing. This would most likely take the form of a data protection impact assessment (DPIA), which weighs the risks that deploying LFR poses to the rights and freedoms of individuals against the benefits that using it would bring.
Given the high threshold for consent under the GDPR, it seems unlikely that a government authority could argue that data subjects have given valid consent to the use of LFR. As such, it would most likely have to show that the use of LFR was “strictly necessary”.
When is using live facial recognition “strictly necessary”?
The ICO has stated that a party must “consider the proportionality of the sensitive processing and the availability of viable alternatives to LFR”. The ICO is of the opinion that the use of LFR is more likely to be considered “strictly necessary” where it is:
- targeted;
- intelligence led; and
- time limited.
For example, if the Met has specific intelligence that suspected terrorists are likely to attempt to blow up Tower Bridge on 1st March, then deploying LFR at and around Tower Bridge in the days leading up to 1st March could be deemed “strictly necessary”.
The ICO has also indicated that the effectiveness of LFR in achieving the authority’s goals must be taken into account, as well as the scope and quality of the watchlist against which individuals’ faces are matched and the steps that have been taken to eliminate bias in the underlying LFR technology.
Conclusion
It will be interesting to see how effective the Met considers LFR to be as a means of enforcing the law over the coming months and years. If LFR is a success, we could see it being rolled out to other towns and cities across the country and being used in a wider set of circumstances.
However, we should always be mindful of the threats that the use of LFR poses to our rights, both inadvertently, in terms of racial bias within the technology itself, and directly, in terms of its intrusion into our daily lives and the temptation for government authorities to use it for other, more covert ends.
As per the ICO’s comments, it will be important to continue to encourage public engagement and debate on LFR in the months and years ahead, as it could ultimately affect all of us if rolled out on a wider level. People should be made aware of how LFR is used and of its possible shortcomings – the more we learn about this new technology, the more effectively we will be able to legislate for and regulate its use in the future.