Live Facial Recognition Technology

Posted by Patrick McCallum on 30 April 2020

With many of us trying to get used to juggling home working with home schooling, and getting used to the exigencies imposed by lockdown and the accompanying worries about the health of our families and friends, we could be forgiven for having a face like thunder.

But could our faces soon be able to tell people much more than how we’re feeling?

With the recent spike in terror attacks in the UK, the Met announced on 24th January 2020 that it would begin using live facial recognition technology (“LFR”) on a much wider scale than ever before in an effort to keep our streets safe. But how does this technology work? How will it be used? Is it reliable? What risk does it pose to our privacy? In what circumstances is it lawful to use it?

What is LFR?

LFR is a type of software which measures the geometric features of your face, such as the distance between your eyes, the length of your nose and the breadth of your cheekbones, in order to create a unique biometric code for your face.

This biometric code is then run against a database or “watchlist” of wanted individuals or those who pose a risk of harm to themselves or others. If there are any matches, the software flags them for verification by a human operator.
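As a loose sketch of how this matching step works (not the Met’s actual system: the vectors, names and threshold below are all invented for illustration), the software can be thought of as comparing a numeric face code against each stored code and flagging anything within a similarity threshold:

```python
import math

# Hypothetical biometric codes: each face reduced to a short vector of
# geometric measurements (eye distance, nose length, cheekbone breadth).
# Real systems use far longer vectors produced by a neural network.
WATCHLIST = {
    "SUSPECT-001": [0.42, 0.77, 0.31],
    "SUSPECT-002": [0.55, 0.62, 0.48],
}

MATCH_THRESHOLD = 0.05  # invented value; lower means stricter matching


def distance(a, b):
    """Euclidean distance between two biometric codes."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def screen_face(face_code):
    """Return candidate watchlist matches for a human operator to verify."""
    return [
        identity
        for identity, stored_code in WATCHLIST.items()
        if distance(face_code, stored_code) <= MATCH_THRESHOLD
    ]


# A passer-by's face is captured, encoded and screened.
print(screen_face([0.43, 0.76, 0.30]))  # ['SUSPECT-001'] -> operator review
```

The key point is that the software only proposes candidate matches; a human operator makes the final decision.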

How is LFR used?

There have been a number of trials of LFR in the UK over the last few years. However, it is only now that it is being implemented as part of official police strategy.

The Met intends to use LFR in cameras focused on particular areas throughout the capital. Anyone who passes through those areas will have their face captured by LFR cameras, which will then run each face against the Met’s watchlist, with officers at the scene being alerted to any matches. An officer can then decide whether or not to approach that person.

The Met has stated that people will be alerted when entering an LFR zone and any faces which do not generate a match will be “automatically and immediately deleted”. Any faces which generate a match will be stored for 31 days or, if an arrest is made, until any investigation or judicial process is concluded.

If the scheme is deemed successful, LFR is likely to be rolled out to cities and towns throughout the country.

What are the risks?

On the face of it, LFR sounds like an effective way of monitoring dangerous individuals and being able to respond quickly to incidents such as attempted terror attacks and other serious/violent crimes.

However, if LFR is deemed a success and as the technology itself becomes less “state of the art” and more affordable, there may well be a desire to begin using it on a wider basis to identify the perpetrators of more minor offences such as shoplifting.

This could lead to the deployment of LFR cameras in most public places, giving law enforcement unprecedented access to our daily lives in the name of crime prevention. Whilst some might argue “you have nothing to fear if you have nothing to hide”, questions should be asked about the point at which the use of LFR becomes intrusive and a violation of our right to privacy.

There are also legitimate concerns about what else government authorities might seek to do with all this LFR-generated data. Trust in government and law enforcement agencies remains low, and widespread implementation of LFR might be met with anger, resistance and concerns that the public is being spied on.

For example, in the Chinese city of Suzhou, citizens were publicly shamed on a social media account operated by local government for committing “uncivilised behaviour” such as wearing their pyjamas whilst walking outside, lying on park benches in an “uncivilised manner” and handing out flyers. LFR was used to identify the “offending” citizens and photos of their “crimes” were posted online together with their name, ID number and other personal details.

Also in China, LFR has been used to systematically identify and monitor members of the minority Muslim Uighur population, based on their appearance. The government keeps records of the Uighurs’ movements for “search and review” and argues this is to combat religious extremism. However, there are widespread concerns that China is using LFR for racial profiling with a view to suppressing the Uighurs and/or removing them from Chinese society, either through incarceration or “re-education camps”.   

How reliable is LFR?

Aside from the ethical questions which LFR presents, one must also consider whether it actually works: can LFR accurately match faces against a watchlist?

There are concerns that LFR fails in both directions: it can produce false positives (incorrectly matching two faces which should not generate a match) and false negatives (failing to match two faces which should). One test conducted in the US in 2018 resulted in 28 members of the US Congress being falsely matched with individuals on a law enforcement watchlist.
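To see how these two kinds of error are measured, consider a toy evaluation of a matcher at a single similarity threshold. The figures below are invented purely to show how the false positive and false negative rates are calculated:

```python
# Hypothetical results from testing a face matcher on known pairs of images.
# "Positive" means the system declared a match.
true_positives = 90     # same-person pairs correctly matched
false_negatives = 10    # same-person pairs the system missed
true_negatives = 9_720  # different-person pairs correctly rejected
false_positives = 280   # different people wrongly declared a match

# False positive rate: how often different people are wrongly "matched".
fpr = false_positives / (false_positives + true_negatives)
# False negative rate: how often the same person goes undetected.
fnr = false_negatives / (false_negatives + true_positives)

print(f"False positive rate: {fpr:.1%}")  # 2.8%
print(f"False negative rate: {fnr:.1%}")  # 10.0%
```

Tightening the threshold pushes false positives down but false negatives up, and vice versa; in a policing context both errors carry real costs, from wrongful stops to missed suspects.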

Further, some forms of LFR have been reported to show racial bias, with Amazon’s LFR technology “Rekognition” coming under particular criticism for disproportionately misidentifying women and people of colour. In effect, Rekognition was much more likely to confuse two different women, or two different people of colour, for the same person. The detrimental implications of using such technology for law enforcement are obvious.

Legal Limitations

In the UK, the Information Commissioner’s Office (ICO), the data protection regulator, released an opinion on the use of LFR in October 2019. In addition to calling for a statutory code of practice for use of LFR, the ICO stated that the use of LFR for law enforcement purposes amounts to the sensitive processing of special categories of personal data in all circumstances. This is the case even where a photo of someone’s face is deleted shortly after it is run against a watchlist and generates no matches.

The Data Protection Act 2018 states that, in order to carry out any type of sensitive processing for law enforcement purposes, a party must either:

  • obtain the consent of all data subjects to the processing; or
  • show that the processing is “strictly necessary” for law enforcement purposes.

In both cases, the party would also have to have an appropriate policy document in place in respect of such sensitive processing. This would most likely take the form of a data protection impact assessment (DPIA) weighing the risks that deploying LFR poses to the rights and freedoms of individuals against the benefits it would bring.

Given the high threshold for consent under the GDPR, it seems unlikely that a government authority could argue that data subjects have given valid consent to the use of LFR. As such, it is likely that it would have to prove that the use of LFR was “strictly necessary”.

When is using LFR “strictly necessary”?

The ICO has stated that a party must “consider the proportionality of the sensitive processing and the availability of viable alternatives to LFR”. The ICO is of the opinion that using LFR is more likely to be considered “strictly necessary” where it is:

  • targeted;
  • intelligence led; and
  • time limited.

For example, if the Met has specific intelligence that suspected terrorists are likely to attempt to blow up Tower Bridge on 1st March, then deploying LFR at and around Tower Bridge in the days leading up to 1st March could be deemed to be “strictly necessary”.

The ICO has also indicated that account must be taken of the effectiveness of LFR in achieving the authority’s goals, the scope and quality of the watchlist against which individuals’ faces are being matched, and the steps taken to eliminate bias in the underlying LFR technology.

Conclusion

It will be interesting to see how effective the Met considers LFR to be as a means of enforcing the law over the coming months and years. If LFR is a success, we could see it being rolled out to other towns and cities across the country and being used in a wider set of circumstances.

However, we should always be mindful of the threats to our rights which the use of LFR poses, both inadvertently in terms of racial bias within the technology itself and directly in terms of its intrusion into our daily lives and the temptation of government authorities to use it for other, more covert ends.

As per the ICO’s comments, it will be important to continue to encourage public engagement and debate on LFR in the months and years ahead, as it could ultimately affect all of us if rolled out on a wider level. People should be made aware of how LFR is used and its possible shortcomings – the more we learn about this new technology the more effectively we will be able to legislate and regulate its use in the future.

About the author

Patrick McCallum

Patrick is a solicitor in the commercial team who helps clients with their commercial contracts in both a business-to-business and business-to-consumer context.
