The British software engineer and midwife accused of being CRIMINALS by faulty AI facial recognition software and why YOU should be worried by its rise

Alvi Choudhury, 26, was 115 miles from Milton Keynes, a city he had never visited, when an artificial intelligence (AI) system flagged him up for committing a crime there.

In spite of his protests of innocence, the software engineer was arrested at his home in Southampton by officers from Hampshire police, put in handcuffs and held in custody for ten hours.

It was not until the early hours of the next day, January 8, that they admitted he had been wrongly identified as a suspect by an automated facial recognition system that had matched him to CCTV images of a curly-haired Asian burglar who had stolen £3,000 from a Buddhist meditation centre in Bedfordshire a month earlier.

‘I was very angry because the kid looked about ten years younger than me,’ says Alvi, who sports a beard. ‘Everything was different. Skin was lighter. Suspect looked 18 years old. His nose was bigger. He had no facial hair. His eyes were different. His lips were smaller than mine.

‘I just assumed that the investigative officer saw that I was a brown person with curly hair and decided to arrest me.’

Worryingly, Alvi’s story is not unique.

For midwife Rennea Nelson – who was six months pregnant at the time – a facial recognition error in a B&M store in Romford, Essex, last year left her fearing for the health of her unborn baby.

‘I just walked in with my husband, Charles, and an alarm went off,’ she says. ‘Then someone from the staff came running towards me shouting, “You’re a thief! You’re a shoplifter!” 

‘It was traumatising and degrading. I had a high-risk pregnancy because I’d lost an earlier baby. I’d been told to avoid stressful situations and here was this man shouting at me, telling me my face was on a system that flagged up shoplifters. He was accusing me of stealing in front of other customers.’

And anti-knife campaigner Shaun Thompson, 39, was minding his own business, walking down Borough High Street near London Bridge on February 3, 2024, when he was stopped by police officers who demanded identity documents, repeatedly asked him to provide fingerprint scans, and inspected him for tattoos and scars.

Shaun, who was born in Jamaica but has lived in London since he was five, says an officer took him to one side. ‘He told me they had facial recognition in the area and that I had walked past his camera and been flagged up as being wanted,’ he says.

‘I asked him what I was wanted for and he said that is what they were trying to find out. He said if they could take my details they would figure it out. I was confused at this. None of what he was saying made any sense. I just knew I had not done anything to justify being stopped by the police.’

This incident led Shaun, alongside fellow claimant Silkie Carlo, to bring a challenge to the High Court arguing that the Metropolitan Police’s use of live facial recognition breached their right to privacy protected by Article 8 of the European Convention on Human Rights.

Last week the privacy campaigners lost – the court ruled that the force’s use of the technology does not breach the law. Indeed, in welcoming the ruling, Policing Minister Sarah Jones said the technology would be rolled out across the country with ‘record investment’, because ‘there can be no true liberty when people live in fear of crime in their communities’.

Yet Alvi, Rennea and Shaun are among a growing number of black and Asian people thrown up as ‘false positives’ by AI facial recognition systems. These systems scan the faces of people passing police ‘Live Facial Recognition’ (LFR) camera vans in real time, or examine crime scene images retrospectively, and compare them with pictures of wanted suspects on watchlists.

The technology measures facial characteristics. These form part of an individual’s biometrics, which also include fingerprints and irises. Because these identifiers are unique to the individual, the technology is (largely) effective – but also controversial.
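
In outline, most such systems work the same way: each face is boiled down to a string of numbers – an ‘embedding’ – and two faces are declared a match when those numbers are similar enough. The sketch below, in Python, is a minimal illustration of how that comparison is typically scored. It is not any vendor’s or police force’s actual code, and the 0.6 threshold is invented for the example.

```python
# Illustrative sketch only – not any real vendor's or force's code.
# Each face image is reduced to an 'embedding' (an array of numbers);
# two faces 'match' when their embeddings are similar enough.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two face embeddings, ranging from -1 to 1."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe: np.ndarray, watchlist_face: np.ndarray,
             threshold: float = 0.6) -> bool:
    # The threshold is a policy choice, not a law of nature: set it
    # lower and the system flags more 'suspects' – and makes more
    # false matches. The 0.6 here is invented for the example.
    return cosine_similarity(probe, watchlist_face) >= threshold
```

Everything that went wrong for Alvi, Rennea and Shaun happened on the far side of that threshold: a stranger’s numbers landed close enough to theirs.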

‘Live Facial Recognition is the equivalent of having your fingerprints scanned as you walk down the street, without your consent,’ warns Ruth Ehrlich, director of external relations for Liberty.

A camera on top of a Live Facial Recognition (LFR) van during a demonstration of facial recognition technology by Surrey and Sussex Police at Surrey Police headquarters last year

The technology used by British police is provided by German firm Cognitech. It performs around 25,000 comparisons each month against the Police National Database, which holds 20 million ‘mugshots’, along with images captured by CCTV and other cameras where offences have taken place.

It is almost 100 per cent accurate if you are white – but less so if you happen to be black or brown. In Rennea’s case, the technology that wrongfully flagged her up was provided by private UK-based company Facewatch, whose shoplifter-identifying software is spreading into shops, malls and supermarkets.

Home Office research published in December highlighted significant inaccuracies for ethnic groups, showing that retrospective AI facial recognition generated false positives for black people at 5.5 per cent and Asian people at 4 per cent, compared with just 0.04 per cent for white people. Astonishingly, among black women the failure rate rose to 9.9 per cent – 100 times higher than for white women.
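
Percentages that small can sound reassuring, so it is worth doing the arithmetic. The back-of-envelope sketch below applies the reported error rates to a purely hypothetical 10,000 searches per group per month; the volume is invented for illustration, and only the rates come from the research.

```python
# Back-of-envelope arithmetic: the false positive rates reported above,
# applied to a hypothetical 10,000 searches per group per month.
# The volume is invented; only the rates come from the quoted research.
searches_per_month = 10_000

rates = {"white": 0.0004, "Asian": 0.04, "black": 0.055}
for group, rate in rates.items():
    print(f"{group}: roughly {searches_per_month * rate:.0f} wrongful flags a month")
# white: roughly 4 | Asian: roughly 400 | black: roughly 550
```

Same tool, same month – wildly different odds of a wrongful knock on the door.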

The research, conducted by the government-funded National Physical Laboratory – which helps set the standards by which everything is measured, from atomic clocks to biometrics – elicited a powerful response from the Association of Police and Crime Commissioners, who described it as shedding light ‘on a concerning in-built bias’.

The Association said: ‘In some circumstances, it is more likely to incorrectly match black and Asian people than their white counterparts.

‘It seems clear that technology has been deployed into operational policing without adequate safeguards in place.

‘Although there is no evidence of adverse impact in any individual case, that is more by luck than design. System failures have been known for some time, yet these were not shared with those communities affected, nor with leading sector stakeholders.’

Alvi, Rennea and Shaun might disagree that ‘there is no evidence of adverse impact in any individual case’. For them, the impact was real.

Alvi was arrested while working at the home he shares with his parents. As an IT professional, he was shocked that the technology threw up false positives for 4 per cent of Asian faces.

‘No tech company would ever put a system into production with a failure rate of one in 25,’ he says. ‘That’s horrific. It is filled with bugs. They said they had officers visually review it. That is even more concerning because that is probably racial discrimination.

‘You’ve probably just seen two brown people, even though they have completely different features and said, “Yeah, they look close enough. Let’s arrest them”.’

Met police facial recognition cameras watch over Christmas shoppers outside Tottenham Court Road station, on 1 December 2025, in London

Thames Valley Police, who had asked Hampshire police to arrest Alvi, say that another man, Eduard Zlatineanu, 23, had been arrested on the day of the crime and pleaded guilty at Aylesbury Crown Court just five days after Alvi’s arrest.

‘If they had done any actual detective work, they would have crossed me out straight away, even if their facial recognition system identified me as a suspect,’ says Alvi. Once he was taken in for interview, it took just ten minutes for detectives to decide he was not the man in the CCTV footage. ‘When I was released, police were laughing because they saw the footage and it was clearly two different people.’

Thames Valley Police admitted its mistake and said: ‘While we apologise for the distress caused to the complainant in this case, their arrest was based on the investigating officers’ own visual assessment that the individual matched the suspect in CCTV footage following a retrospective facial recognition match, and was not influenced by racial profiling.’

Nobody is 100 per cent sure why there are so many errors among black and brown people, but Jake Hurfurt, an investigator with the privacy group Big Brother Watch, says it is probably down to the way the AI systems are ‘trained’.

‘These machine learning algorithms are trained on massive data sets of mostly white faces,’ he says. ‘So it’s going to be better at identifying those.’ Campaigners and academics also say conscious or unconscious racial bias could play a part in determining how the AI results are interpreted or in how watchlists of suspects are compiled.
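
Hurfurt’s explanation can be demonstrated with a toy simulation. In the sketch below – every number invented, no real system involved – a model that has learned little person-specific detail for a group maps different members of that group onto similar embeddings, so complete strangers sail past the match threshold.

```python
# Toy simulation of training-data bias. If a model has learned only a
# generic 'prototype' for a group and little individual detail, then
# different people's embeddings end up looking alike – and false
# matches soar. Every number here is invented to illustrate the point.
import numpy as np

rng = np.random.default_rng(42)
DIM, THRESHOLD = 128, 0.8

def false_match_rate(individuality: float, trials: int = 5_000) -> float:
    """How often two DIFFERENT people clear the match threshold, given
    how much person-specific detail the model encodes for their group."""
    prototype = rng.normal(size=DIM)  # the generic 'face' the model sees
    hits = 0
    for _ in range(trials):
        person_a = prototype + individuality * rng.normal(size=DIM)
        person_b = prototype + individuality * rng.normal(size=DIM)
        sim = person_a @ person_b / (np.linalg.norm(person_a) * np.linalg.norm(person_b))
        hits += sim >= THRESHOLD
    return hits / trials

print(false_match_rate(individuality=1.0))  # much detail learned: ~0.0
print(false_match_rate(individuality=0.3))  # little detail learned: ~1.0
```

The model, in other words, has simply never been taught to tell certain faces apart.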

Dr Daragh Murray, reader in international law and human rights at Queen Mary School of Law, says we should all be worried about being wrongly identified by facial recognition systems, no matter the colour of our skin.

‘False positives are a problem for two reasons,’ he says. ‘First, individuals are wrongfully arrested or detained. This can have significant impacts in terms of reputation and employment. Although false positives may seem low, they will increase significantly as facial recognition is rolled out. 

‘Second, it fundamentally undermines trust in the police and their use of new technology. Our policing model is built on community policing. It is dependent on trust.’

Rennea, who works at the Queen’s Hospital in Romford, says it is not clear exactly why she was wrongly flagged up. She had been into the store weeks earlier, and that is when she had been filmed without her knowledge. 

‘They had my face, but when they reviewed the CCTV footage, they admitted that it was the person who had entered the store before me who had done the shoplifting,’ she says.

‘I don’t know why they put my face on the Facewatch list instead of hers, but it has been a very upsetting experience.’

A Live Facial Recognition (LFR) van is deployed on Briggate in Leeds, as West Yorkshire Police use the facial recognition technology for the first time in Yorkshire in November 2025

Rennea says she has since turned down the offer of a £20 compensation voucher from B&M and reported the company to the office of the Information Commissioner, the data protection watchdog. When she asked for copies of the footage and Facewatch biometrics, she was told it would cost her £800.

A spokesperson for B&M said: ‘This was a case of human error and we apologised to Ms Nelson at the time. Her image was removed from the Facewatch system immediately once the error was identified.’

The spokesperson added: ‘UK retailers are experiencing over 1,600 incidents of violence and aggressive shoplifting in their stores every day, and over 36 incidents a day involving a weapon.’

The digital collection of biometric information can be a massive resource in fighting crime – and even the wariest of privacy campaigners does not want to ban it completely. Increasingly, police forces report capturing wanted criminals during Live Facial Recognition sweeps of city centres, and outside football matches, rock concerts and even political rallies.

However, questions arise when this information is collected, stored and used without consent. Police forces say they dispose of biometric information once they have finished conducting sweeps. 

But, campaigners ask, what of future – possibly authoritarian – governments and law enforcement agencies? 

Yet if – probably when – AI facial recognition cameras replace ‘dumb’ CCTV ones on the high street, anonymity will be a thing of the past.

In January, Home Secretary Shabana Mahmood announced that the number of LFR surveillance vans would be increased from 10 to 50 nationally – a move that rang alarm bells for tech watchdog Professor William Webster, the country’s Biometrics and Surveillance Camera Commissioner.

‘At the moment, there isn’t a specific piece of legislation that allows police to use the technology, but neither are they barred from using it,’ he says. 

‘But I’ve been arguing for many years that any police force that uses face recognition will find themselves in a court of law because they will misidentify somebody and be legally challenged. This is now happening, and the only way to fix it is by having a piece of legislation that covers how they may, and may not, use it.’

The Commissioner, and campaign groups such as Big Brother Watch, Liberty and Privacy International, have all expressed concerns about the technology during a consultation on the Police Reform Bill, which closed in February. They want their objections to crystallise into policy when the Bill is published later this year.

‘Your fingerprints and your face are both biometric identifiers. And we don’t live in a society where the average person is happy to show their passport, give their fingerprints, have their identity checked, just for going to the shops,’ says Ruth Ehrlich of Liberty.

The extent to which the technology can be used, and how law enforcement scoops up our biometrics, are coming under increasing scrutiny – not least since Shaun and Big Brother Watch’s High Court challenge was dismissed this month.

Shaun and Silkie Carlo, director of Big Brother Watch, argued that ‘the force’s policy on where facial recognition can be deployed is so permissive that its use of the technology is not in accordance with the law’.

They pointed out that in 2025, the Met scanned 4.2 million people’s faces for biometric checks using LFR – without asking for permission. The technology, they argued, ‘treats the general public like suspects in a permanent police line-up’.

Shaun, who had a troubled youth, is now a dedicated member of Street Fathers, a voluntary organisation in south London that mentors young people and runs youth centres and music and art workshops. He also gives up his spare time to go on patrols, chatting to young people about the dangers of gang culture and persuading them to hand over knives.

In a statement to the court, he said: ‘I am concerned about the use of this technology on the streets of London. My main concern is that it is a flawed technology that will lead to more people being mistakenly identified and stopped by the police. It’s intrusive. What happened to me could have happened to anyone.’

Yet the court’s ruling suggests that AI facial recognition is likely to become a ubiquitous part of our lives. But one question still hangs over this whole issue: ‘Why should any of this concern me if I haven’t done anything wrong?’

Well, as Liberty’s Ruth Ehrlich points out: ‘Alvi, Rennea and Shaun hadn’t done anything wrong. And look what happened to them.’
