COMMENT – The Dangers of Facial Recognition Technologies
Kate Gouveia Pereira, Toronto Metropolitan University – Toronto, Canada
The task of identifying criminals through physical traits has long been pursued by law enforcement, but advances in artificial intelligence (AI) are creating dangerous new opportunities for detection. In the late nineteenth and early twentieth centuries, criminologist Cesare Lombroso popularized the “biological positivism” approach, which attempted to connect physical traits with criminality. This kind of “criminal anthropology” has been applied for decades with negative human rights consequences, including fostering discrimination and dehumanization. These damaging effects may now continue with the use of AI technologies such as facial recognition, because people can program their discriminatory beliefs into seemingly neutral technologies. I argue against the widespread use of facial recognition technology because it is susceptible to prejudices, faults, and discrepancies that threaten human rights.
The task of identifying criminals through physical traits such as facial features has long been pursued by law enforcement, but advances in artificial intelligence (AI) are creating new opportunities for detection. Facial recognition technology matches an image of a person against another image via recognition software to determine whether the person in both pictures is, in fact, the same individual (Gates, 2015). The technology is often used to track people through surveillance cameras, capturing a person’s image and then matching it against a database of images to identify them (Gates, 2015). Before today’s technology existed, photographs were used to identify criminals and prison escapees. Computer-assisted facial recognition began in the 1960s, and the technology has since been used for controversial experimental programs, such as a Stanford University study that sought to connect homosexuality with certain facial features (see Wang, 2018). As AI technology becomes more advanced – and more widely utilized by law enforcement and criminal justice systems – scholars warn that these technologies are prone to many of the same errors and biases as humans. In this critical reflection, I argue against the widespread use of facial recognition technology because it is susceptible to prejudices, faults, and discrepancies that threaten human rights – including the rights to freedom from discrimination, to equality before the law, to be presumed innocent until proven guilty, and to privacy (United Nations, 1948, Articles 2, 7, 11, 12).
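To make this matching process more concrete, the brief Python sketch below illustrates, in simplified form, how a one-to-many search might compare a captured face against a database of stored faces. The embedding vectors, similarity measure, and match threshold shown here are assumptions chosen for illustration only; they do not describe any specific system used by law enforcement.

import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face "embeddings" (numeric summaries of a face);
    # values closer to 1.0 indicate more similar faces.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe_embedding, database, threshold=0.8):
    # Compare one captured face against every stored face and report any
    # identity whose similarity exceeds the (arbitrary) threshold.
    # In a real system, embeddings would come from a trained face-recognition
    # model; here they are simply numeric vectors for demonstration.
    candidates = []
    for identity, stored_embedding in database.items():
        score = cosine_similarity(probe_embedding, stored_embedding)
        if score >= threshold:
            candidates.append((identity, score))
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

# Hypothetical usage with invented data:
gallery = {"person_a": np.array([0.1, 0.9, 0.2]), "person_b": np.array([0.8, 0.1, 0.3])}
probe = np.array([0.12, 0.88, 0.25])
print(match_face(probe, gallery))  # reports "person_a" as a candidate match

The threshold in such a sketch is where misidentification enters: any face scoring above it is flagged as a possible match, whether or not the person is actually the same individual.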
The desire to prevent crime and identify criminals has historically led to problematic attempts to link biological features with criminality. An important example of this work comes from criminologist Cesare Lombroso, who popularized the “biological positivism” approach, which attempted to classify humans based on their physical traits and claimed a connection between those traits and criminality (White et al., 2017). Lombroso believed that criminals were born rather than made, and that certain physical features represented different stages of evolution (White et al., 2017). He attempted to identify such criminals using a form of “criminal anthropology,” claiming that certain facial features, which he argued resembled those of apes, were linked to a person’s criminal inclination (Lombroso, 1911, p. 5). Lombroso argued that murderers and perverts were more likely to have large lips, which he observed is a trait common among Black people (Lombroso, 1911, p. 16). Other traits Lombroso marked as suspect were large ears, tattoos, extra fingers or toes, and abnormal teeth (White et al., 2017).
Lombroso’s ideas were met with enthusiasm when they were developed in the late nineteenth and early twentieth centuries, but a deeper look at biological positivism shows that these practices are deeply discriminatory and dehumanizing. Indeed, such forms of “criminal anthropology” have been used for decades, with negative human rights consequences. Many of the “criminal” traits Lombroso identified were common among people from lower socio-economic classes; the inability to access adequate dental and medical care, for instance, could explain features like irregular teeth (Dunnage, 2017). Yet these stereotypes shaped many law enforcement and criminal justice systems, including in Italy, where in 1958 the Scuola Superiore di Polizia police academy published guidelines describing how tattoos symbolized “corruption” and “degeneracy” and could be used to identify criminals (Dunnage, 2017). Such stereotypes, fostered in part by Lombroso’s work as a criminologist, continue to shape policing today – including through the widespread use of racial profiling. A 2007 survey in Toronto, Canada, showed that Black residents were more likely to be stopped by police than people of other backgrounds in the city (Wortley & Owusu-Bempah, 2011).
The damaging and lasting effects of biological positivism and Lombrosian ideologies can now be exacerbated by the development of AI technologies such as facial recognition, because humans can program their discriminatory beliefs into seemingly neutral technologies. Unlike humans, computers lack the capacity for common sense and rely completely on the data with which they have been programmed – even if that data is shaped by human bias. For instance, research conducted at the MIT Media Lab shows that facial recognition technology is 35% less accurate at identifying the faces of women of color than the faces of white men, which means Black women could be more vulnerable to misidentification and to being accused of crimes they did not commit (see Wang, 2018, p. 30). In cities like Toronto, where racial profiling is still practiced by law enforcement, the adoption of racially biased AI could lead to further discrimination and damage the already fragile relationship between the Black community and the police (Wortley & Owusu-Bempah, 2011). Much like the Stanford experiment that aimed to identify homosexuals or the Italian association of tattoos with criminality, a central concern about AI use is that humans will treat certain physical traits as “indicators” of perceived social deviance, thereby making harmful assumptions about innocence and guilt.
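As a hypothetical illustration of how such accuracy gaps can be detected, the short sketch below computes identification error rates separately for different demographic groups from labeled audit results. The group labels and records are invented for demonstration only and do not represent real data or the methodology of the study cited above.

from collections import defaultdict

def error_rates_by_group(records):
    # records: pairs of (group_label, was_correctly_identified).
    # Returns the share of misidentifications within each group.
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit records for illustration only; not real data.
audit = [
    ("darker-skinned women", False),
    ("darker-skinned women", True),
    ("lighter-skinned men", True),
    ("lighter-skinned men", True),
]
print(error_rates_by_group(audit))  # unequal error rates signal biased performance

When an audit of this kind shows markedly higher error rates for one group than another, the system’s apparent neutrality conceals a disparate risk of misidentification for that group.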
Despite these concerns, AI is increasingly used by law enforcement organizations – sometimes in partnership with private entities. Recently, the facial recognition app “Clearview AI” was declared illegal in Canada for collecting more than three billion photos from the Internet without user consent and compiling them into a database searched by police, raising concerns about privacy and police overreach (Hill, 2021). The app had been used by some 2,400 American law enforcement agencies and dozens of Canadian law enforcement agencies (Hill, 2021). It is notable that Black people are overrepresented in police databases, and that many law enforcement agencies have no regulations in place to remove the mugshots of innocent people from these databases once they have been acquitted (Bacchini & Lorusso, 2019). My concern is that the growing use of AI technology could cause more people to be racially profiled and accused of crimes they did not commit.
Facial recognition technologies are dangerous because they suffer from the same biases as their human programmers and can reinforce discriminatory law enforcement practices. While the promise of quick and efficient crime-solving based on physical traits has been appealing since Lombroso’s popularization of biological positivism, the truth is that these practices are discriminatory and run counter to human rights norms associated with equality and justice. AI technologies risk applying racist theories to today’s world on a massive scale, reinforcing stereotypes that foster racial discrimination and potentially harming other groups, such as LGBTQ+ individuals and members of marginalized ethnic communities. It is imperative that we recognize facial recognition’s potential for harm and work to prevent people from being wrongfully targeted by law enforcement.
References
Bacchini, F., & Lorusso, L. (2019). Race, again: How face recognition technology reinforces racial discrimination. Journal of Information, Communication and Ethics in Society, 17(3), 321-335.
Dunnage, J. (2017). The legacy of Cesare Lombroso and criminal anthropology in the post-war Italian police: A study of the culture, narrative and memory of a post-fascist institution. Journal of Modern Italian Studies, 22(3), 365-384.
Gates, K. (2015). Can computers be racist? Juniata Voices, 15, 5-17.
Hill, K. (2021, February 3). Clearview facial recognition app ruled illegal in Canada. The New York Times. Retrieved from https://www.nytimes.com/2021/02/03/technology/clearview-ai-illegal-canada.html
Lombroso, G. (1911). Criminal man, according to the classification of Cesare Lombroso. New York and London: The Knickerbocker Press. Retrieved from http://www.gutenberg.org/files/29895/29895-h/29895-h.htm
United Nations. (1948). Universal Declaration of Human Rights. Retrieved from https://www.un.org/en/about-us/universal-declaration-of-human-rights
Wang, J. (2018). What’s in your face? Discrimination in facial recognition technology (thesis). Georgetown University, Graduate School of Arts & Sciences.
White, R. D., Haines, F., & Eisler, L. D. (2017). Crime & criminology: An introduction to theory (3rd Canadian ed.). Oxford: Oxford University Press.
Wortley, S., & Owusu-Bempah, A. (2011). The usual suspects: Police stop and search practices in Canada. Policing & Society, 21(4), 395-407.
© Copyright 2025 Righting Wrongs: A Journal of Human Rights. All rights reserved.
Righting Wrongs: A Journal of Human Rights is an academic journal that provides space for undergraduate students to explore human rights issues, challenge current actions and frameworks, and engage in problem-solving aimed at tackling some of the world’s most pressing issues. This open-access journal is available online at www.webster.edu/rightingwrongs.