A report released Thursday by Cambridge University claims that UK police use of facial recognition technology (FRT) has breached numerous ethical and human rights obligations.
Published by the Minderoo Centre for Technology & Democracy, the report audited the use of FRT by the Metropolitan Police and the South Wales Police. Three main issues were outlined in the report: privacy, discrimination, and accountability. Privacy is a protected right under the Human Rights Act 1998 and can only be interfered with where doing so “is in accordance with the law and is necessary in a democratic society.” However, the report concluded that FRT usage was “very broad in scope” and therefore may not be in accordance with the requirements outlined in the Human Rights Act.
The second issue was discrimination, a common concern with large-scale FRT projects due to potential biases within the underlying AI technology. Under the Equality Act 2010, a public authority “must, in the exercise of its functions, have due regard to the need to eliminate discrimination.” However, the report concluded that:
The deployments were not transparently evaluated for bias in the technology or discrimination in its usage. For example, the Metropolitan Police did not publish an evaluation of the racial or gender bias in the technology before their live facial recognition trials. They also did not publish demographic data on the resulting arrests, making it hard to evaluate if the technology perpetuates racial profiling.
The final issue concerned accountability and oversight. The report found that “[t]here were also no clear redress measures for people harmed by the use of facial recognition.” Furthermore, “the ethics body overseeing South Wales Police’s [FRT] trials had no independent experts in human rights or data protection based on the available meeting notes.”
Overall, whilst FRT certainly has its uses in modern societies, as the report concluded, “we must ask what values we want to embed in technology.”