Facial Recognition Trial at London Bridge Flags Children | CityAM

by Chief Editor
The system scans faces within a designated “recognition zone”

The British Transport Police’s new live facial recognition trial has barely begun at London Bridge, and concerns are already mounting after earlier AI monitoring systems incorrectly flagged children as potential offenders.

Cameras went live at the station on 11 February as part of a six-month pilot set to identify people wanted for serious offences as they pass through major stations.

The system scans faces within a designated “recognition zone”, and compares them against a pre-set watchlist.

If the algorithm suggests a match, an officer then reviews the alert before deciding whether to intervene.

But previous AI monitoring trials on the Tube have raised concerns about accuracy. During earlier deployments aimed at tackling fare evasion at stations including Willesden Green, children walking closely behind parents were mistakenly flagged as potential offenders.

The latest system uses NEC’s NeoFace algorithm, which the National Physical Laboratory has independently tested.

Police say they are operating it under recommended settings designed to “minimise the likelihood of any false positive indication and adverse impact on equitability”, and that anyone not on the watchlist will not be identified and their biometric data will be deleted immediately.

Chief Superintendent Chris Casey, who is overseeing the pilot, said: “This is a trial of the technology to assess how it performs in a railway setting. The initiative follows a significant amount of research and planning and forms part of BTP’s commitment to using innovative technology to make the railways a hostile place for individuals wanted for serious criminal offences.”

Passengers who prefer not to pass through the recognition zone will be offered alternative routes, and signage has been installed at participating stations.

Growing scrutiny

The Met Police used the technology 231 times last year, scanning around four million faces and making 801 arrests “specifically as a result of LFR,” according to submissions made during a High Court judicial review.

Representing the Met, Anya Proops KC told the court that locating wanted individuals in London was “akin to looking for stray needles in an enormous, exceptionally dense haystack”.

She also said that the privacy intrusion was “only minimal” because data from non-matches is deleted “a fraction of a second” after capture.

However, campaign group Big Brother Watch and London resident Shaun Thompson are challenging the Met’s use of the technology.

Thompson was stopped at London Bridge in 2024 after being wrongly identified by facial recognition cameras mounted on a police van. He was held for around 30 minutes before being released.

Matthew Feeney of Big Brother Watch said: “We all want train passengers to travel safely, but subjecting law-abiding passengers to mass biometric surveillance is a disproportionate and disturbing response.”

“Facial recognition technology remains unregulated in the UK and police forces are writing their own rules.”

Elsewhere, a 2020 Court of Appeal ruling found that South Wales Police had used live facial recognition unlawfully, citing insufficient safeguards.

While forces now operate under a framework combining common law powers, data protection rules and human rights legislation, there is no single statute governing its deployment.

A Home Office consultation on how facial recognition should be regulated has yet to take place, even as trials continue.

Ministers have indicated that AI will play a central role in plans to modernise policing.
