London’s Metropolitan Police has announced controversial plans to use live facial recognition technology to improve officers’ ability to identify suspects and police the British capital.
The Met said in a statement Friday the technology will be deployed to places where data indicates people responsible for serious and violent crimes, such as gun and knife attacks and child sexual exploitation, are most likely to be located. Clearly marked cameras will be focused on small, targeted areas to scan people’s faces as they walk by, it added.
“As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London,” Assistant Commissioner Nick Ephgrave said in a statement. “We are using a tried-and-tested technology. Similar technology is already widely used across the UK, in the private sector,” he added.
The technology, which is made by Japanese company NEC, is a standalone system not linked to any other imaging system, such as closed-circuit television, body worn video or automatic number plate recognition, the Met said.
The decision follows an October investigation into live facial recognition technology by the UK’s Information Commissioner’s Office, which raised serious concerns over privacy and accuracy. It flagged evidence that the technology discriminates against women and people of color — an issue that has been documented by federal researchers in the United States, where several cities have banned use of the technology.
A surge in violent crime in London has been linked to cuts to policing services, although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.
The Met says its hope for the AI-powered tech is that it will help it tackle serious crime, including serious violence, gun and knife crime and child sexual exploitation, and that it will “help protect the vulnerable.”
However, that phrasing carries some irony, given that facial recognition systems can be prone to racial bias, owing to factors such as bias in the data sets used to train AI algorithms.
So in fact there’s a risk that police use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.
A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.
Discussing the legal framework around facial recognition and law enforcement last week, Dr. Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate.”
A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge, although that case concerns the police’s own use of the technology, rather than, as with the Met, a private company (NEC) supplying the system to the police.
The decision comes as the European Union considers following some U.S. cities in banning the technology’s use altogether, underscoring the U.K.’s unusual level of openness to novel forms of surveillance.
The Met said operational deployments of the system across the capital would begin from Friday.
“The public rightly expect us to use widely available technology to stop criminals,” Assistant Commissioner Nick Ephgrave said in a statement.
Mr Ephgrave said the system could also be used to find missing children or vulnerable adults.
Trials of the cameras have already taken place on 10 occasions in locations such as Stratford’s Westfield shopping centre and the West End of London.
The Met said it tested the system during these trials using police staff whose images were stored in the database. The results suggested that 70% of wanted suspects would be identified walking past the cameras, while only one in 1,000 people scanned generated a false alert. Because the vast majority of people scanned are not on any watchlist, however, even that low false-alert rate means false alarms can outnumber genuine matches. An independent review of six of these deployments found that only eight of 42 matches were “verifiably correct”.

Campaigners have warned that accuracy may be worse for black and minority ethnic people, because the software is trained on predominantly white faces.
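The gap between the Met’s headline figures and the independent review’s findings is a base-rate effect, which a quick calculation illustrates. The 70% detection rate and 1-in-1,000 false-alert rate below are the Met’s trial figures; the crowd size and the number of watchlisted people present are hypothetical assumptions chosen purely for illustration.

```python
# Base-rate sketch using the Met's published trial figures.
# Crowd size and watchlist presence are hypothetical assumptions.
scanned = 10_000          # hypothetical number of passers-by scanned
on_watchlist = 5          # hypothetical watchlisted people in the crowd
detection_rate = 0.70     # Met trial figure: 70% of wanted suspects flagged
false_alert_rate = 0.001  # Met trial figure: one false alert per 1,000 scans

# Expected alerts of each kind
true_matches = on_watchlist * detection_rate
false_alerts = (scanned - on_watchlist) * false_alert_rate

# Share of all alerts that point at a genuinely wanted person
precision = true_matches / (true_matches + false_alerts)

print(f"expected true matches: {true_matches:.1f}")   # 3.5
print(f"expected false alerts: {false_alerts:.1f}")   # 10.0
print(f"share of alerts that are correct: {precision:.0%}")
```

Under these assumptions, false alerts outnumber genuine matches roughly three to one, even with a seemingly tiny error rate, which is broadly consistent with the independent review finding most matches incorrect.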