London Cops Switch on Facial Recognition

Coming after the Clearview controversy that pushed AI regulation to the forefront, this move is sure to raise the hackles of privacy advocates


Facial recognition cameras in London will soon start scanning crowds against watch lists that the police compile and update from time to time, the city’s Metropolitan Police has announced, in a step that flies in the face of the debate around regulating AI built on data gathered without consent.

“The use of live facial recognition technology will be intelligence-led and deployed to specific locations in London. This will help tackle serious crime, including serious violence, gun and knife crime, child sexual exploitation and help protect the vulnerable,” says a post on the Met Police website.

Attempting to sweeten the pill, the police say that the technology will only target specific locations and will use a bespoke watch list of individuals wanted for serious and violent offences. They also claim that the cameras will be “clearly signposted”, with officers handing out leaflets about the activity to passers-by.

In a press statement, assistant commissioner Nick Ephgrave said his department had taken a balanced approach to tracking criminals and stopping crime. “Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he says.

However, the statement makes no mention of the risk of racial bias that such a technology brings with it. Given that facial recognition systems learn from manually assembled data sets, there is an increased possibility of the technology causing further trouble for vulnerable groups already facing such discrimination.

Local-level protests against the move have already begun surfacing, with Liberty, a not-for-profit at the forefront of a campaign against racial discrimination in the UK, claiming in a series of tweets that the police failed to consider the human rights impact of the move and that its use would not pass the key legal test of being “necessary in a democratic society.”

The moot point now is what the rest of the world’s law enforcers will do, now that London has set the ball rolling. In recent times, public opinion was sharply split following disclosures that tech start-up Clearview.AI had handed over millions of facial records scraped from Facebook to several police establishments in the United States.

Even the tech community is divided over the use of facial recognition, with Google’s Sundar Pichai publicly supporting a temporary ban on its use, while Microsoft believes that such a ban could stymie the progress of a technology with the potential to help humanity.

TAGS: facial recognition, London, Police, Artificial Intelligence, Privacy, Human Rights