
Human Rights in the Age of AI: Potential Challenges of an Emerging Technology

May 1, 2025 | Volume XV, Issue 1

Petroslava Bratanova, Sciences Po – Paris, France

Human Rights in the Age of AI: Potential Challenges of an Emerging Technology


The study examines the potential threats to human rights posed by artificial intelligence (AI). It analyzes case studies, scholarly literature, and legal frameworks to identify key trends and patterns in AI-induced human rights violations. It focuses on issues such as data bias, the “black box” phenomenon, “false positives,” lack of transparency, and systemic reinforcement of societal inequalities. Findings indicate that a range of human rights can be affected by AI, including rights to adequate housing and freedom of expression. While AI has demonstrated significant benefits, its integration into decision-making processes must be guided by frameworks to mitigate risks to human rights.

Artificial intelligence (AI) has evolved into a technology that has revolutionized multiple sectors of society, including policing, banking, and healthcare. It has improved efficiency and reduced labor needs, enabling these sectors to allocate resources more effectively and respond more quickly. Unlike simple algorithms, which follow a set of predefined rules that do not develop or change over time, AI continually refines its own algorithms as it operates. Developers specify what the system should predict or classify, and the AI then identifies data patterns and correlations that connect different features with specific potential outcomes (European Crime Prevention Network, 2022).
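To make this contrast concrete, the brief Python sketch below compares a predefined rule with a model that infers its own decision boundary from labeled examples. The example is purely illustrative and is not drawn from the article’s sources; the banking-style threshold, the invented data, and the use of the scikit-learn library are assumptions chosen only to show the difference between a fixed rule and a learned pattern.

```python
# Illustrative sketch (not from the article): a fixed rule versus a model
# that learns its own decision boundary from labeled examples.
from sklearn.linear_model import LogisticRegression

def rule_based_flag(transaction_amount):
    """A simple algorithm: the threshold is predefined and never changes."""
    return transaction_amount > 10_000

# A learning system instead infers the pattern from historical, labeled data.
X = [[500], [900], [12_000], [15_000], [700], [20_000]]   # feature: amount
y = [0, 0, 1, 1, 0, 1]                                     # label: flagged or not
model = LogisticRegression().fit(X, y)

print(rule_based_flag(11_000))        # True: the rule is fixed by the developer
print(model.predict([[11_000]])[0])   # learned from whatever patterns the data contains
```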

Despite its positive attributes, AI poses potential risks to human rights – in part because it is developing more rapidly than regulatory frameworks can adapt (Gellers & Gunkel, 2022). This lack of regulation is increasingly important as AI becomes integrated into decision-making that determines people’s access to welfare, education, and justice. Critics contend that AI suffers from inherent data bias, lack of transparency and accountability, and inaccurate predictions. These shortcomings have the potential to violate fundamental human rights norms such as equality and non-discrimination. They contribute to violations of civil and political rights, including rights to privacy, freedom of peaceful assembly and expression, the right to presumption of innocence, and the right to a fair trial. Social, economic, and cultural rights at stake include rights to housing, employment, welfare, and essential services. This research highlights the need for further investigation of how AI can impact human rights. While AI has demonstrated significant benefits, its integration into decision-making processes must be guided by frameworks to mitigate risks to human rights.

Foundational Issues in AI Systems

The ways in which AI functions lead to several foundational problems that impact human rights, including violations of the core norms of equality and non-discrimination (United Nations, 1948, Articles 1-2).[1] First, biases in AI can occur if it is trained on data that reflects human bias or is otherwise skewed. Some argue that AI mirrors society and its stereotypes, much like the natural cognitive shortcuts humans use, and that such generalizations can sometimes even be helpful when making informed decisions (Schauer, 2006). AI has no moral standpoint of its own; it simply reproduces patterns, and perpetuating generalizations becomes harmful when the system applies assumptions in ways that unfairly marginalize certain groups. Algorithm-based predictive policing, for instance, has been compared to what law enforcement officers do when they use their “intuition” to stop and search people in the field (Sarre & Livings, n.d.; Redmayne, 2005). The difference between biased decisions made by people and those made by AI lies in their scale, scope, and systemic impact. While human biases are inherently limited to the individuals holding them, AI operates on a far broader scale and can affect far larger groups of people.

A second problem is that AI can make biased decisions even if the data used to train it is unbiased. Sometimes this relates to the “black box” phenomenon (IBM Data & AI Team, 2025); that is, the reasoning behind how AI identifies data patterns and correlations is not transparent to humans, which raises ethical concerns and makes it harder to anticipate the mistakes AI will make. For instance, we may not be able to trace why an applicant was rejected by a university or an employer (Blouin, 2021). Even when AI does not explicitly take factors such as race into account, those identity markers can be reflected in other indicators. For example, data about a neighborhood can reflect ethnicity or income level because some minorities or socio-economic classes are segregated in specific areas (Ferris et al., 2020). Finally, a “positive loophole” occurs when a self-reinforcing cycle allows biased AI predictions, such as associating certain demographics with criminality, to validate and perpetuate systemic biases, strengthening the inaccurate associations over time (European Crime Prevention Network, 2022).
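A minimal simulation can illustrate the proxy problem described above. The sketch below is an invented example rather than a depiction of any system cited in this article; the variable names, the synthetic data, and the assumed 90% correlation between neighborhood and group membership are illustrative assumptions. It shows how a model that never sees a protected attribute can still reproduce a disparity through a correlated feature.

```python
# Hypothetical illustration (not the systems discussed above): even when a
# protected attribute is excluded, a correlated proxy such as neighborhood
# can let a model reproduce the same disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)                                    # protected attribute (never given to the model)
neighborhood = np.where(rng.random(n) < 0.9, group, 1 - group)   # proxy: 90% aligned with group
# Historical outcomes that were themselves biased against group 1
outcome = (rng.random(n) < np.where(group == 1, 0.6, 0.2)).astype(int)

model = LogisticRegression().fit(neighborhood.reshape(-1, 1), outcome)
scores = model.predict_proba(neighborhood.reshape(-1, 1))[:, 1]
print("mean risk score, group 0:", round(scores[group == 0].mean(), 2))
print("mean risk score, group 1:", round(scores[group == 1].mean(), 2))
# The gap persists even though 'group' was never an input feature.
```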

Advocates of AI use in law enforcement argue that it can be highly beneficial for identifying perpetrators and preventing crimes, while critics point to its error rates and potential to do harm. The Santa Cruz Police Department in California saw a 19% reduction in theft due to AI use (Cimphony, 2023), supporting claims that AI can benefit law enforcement. Proponents make a utilitarian argument that justifies AI use, even if it sometimes produces “false positives” (identifying innocent people as criminals), because it can save more lives and prevent more crimes than traditional police work alone. Yet AI’s successes do not diminish its error rates or the damage it can cause in impacted communities; one study on the accuracy of predictive policing software showed only a 0.6% success rate for predicting robberies in 2018 (Sankin & Mattu, 2023). AI has a poor performance record among underrepresented populations, leading to detrimental impacts ranging from limited opportunities and restricted access to basic services to “false positives” and wrongful terminations (Larkin, 2024). The insufficient representation of certain groups in training data, for instance, might mean AI lacks accuracy in identifying minorities (Blouin, 2021). An early trial of facial recognition software by the London Metropolitan Police found that 98% of identifications were incorrect (Santow, 2020). This potential for error – and its disproportionate effect on marginalized groups – was illustrated in the U.S. case of Robert Williams, a Black American who was wrongly accused of theft after a facial recognition system misidentified him on a security tape (Hill, 2020). Ultimately, the use of AI in law enforcement is usually seen as legitimate only if it is proportionate and justified (Feldstein, 2019a), which requires careful consideration of how AI use impacts the enjoyment of fundamental human rights.
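The scale of such error rates is easier to grasp with a small worked example. The numbers below are invented for illustration and are not the Metropolitan Police or Markup figures; the point is simply that when genuine matches are rare among the people scanned, even a system with a low false-alarm rate will produce alerts that are mostly wrong.

```python
# Hypothetical arithmetic (invented numbers, not the figures cited above):
# when genuine matches are rare in a scanned crowd, even a seemingly accurate
# system produces mostly false positives.
def share_of_alerts_that_are_wrong(crowd_size, true_targets, hit_rate, false_alarm_rate):
    true_alerts = true_targets * hit_rate
    false_alerts = (crowd_size - true_targets) * false_alarm_rate
    return false_alerts / (true_alerts + false_alerts)

# 50,000 people scanned, 10 genuine targets, 80% hit rate, 0.1% false-alarm rate
print(round(share_of_alerts_that_are_wrong(50_000, 10, 0.80, 0.001), 2))  # ~0.86
```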

Threats to Civil and Political Rights

The issues outlined above can lead to the violation of civil and political rights, especially when AI is used for state surveillance. Systems such as facial recognition and so-called “smart policing” are used by 51% of liberal democracies to monitor migration and counter terrorism (Feldstein, 2019b). Authoritarian regimes are more likely to use AI surveillance unlawfully, such as to control and repress their citizens. Civil and political rights under threat include rights to privacy, freedom of peaceful assembly and expression, the right to presumption of innocence, and the right to a fair trial.

When misused, AI can facilitate violations of the right to privacy – that is, arbitrary or unlawful interference in one’s privacy and personal life (United Nations, 1948, Article 12; United Nations Human Rights, 1966a, Article 17). AI use in the United States illustrates how AI can threaten rights even in liberal democracies. While the right to privacy is not explicitly listed in the U.S. Constitution, it is derived from other enumerated rights and affirmed through a series of court decisions (Linder, n.d.). In Katz v. United States, the Court extended the interpretation of the Fourth Amendment to include a “reasonable expectation of privacy” (Cornell Law School, n.d.) that prohibits “the wrongful intrusion into one’s private activities” (Stimmel Law, n.d.). Yet AI now has extensive access to data, thanks in large part to the personal information shared on social media (Cataleta, 2020). Research shows how the Washington D.C. Metropolitan Police Department used AI-driven online surveillance tools to monitor individuals’ social media activity and track public protests from 2014 to 2022 (Dyson et al., 2024). The AI software Voyager was used to create 55,000 fake social media accounts and obtain information from approximately 1.2 million user profiles, which was provided to the D.C. police without users’ consent and against their expectations (Dyson et al., 2024). Another AI software, Dataminr (2025), provides police with information derived from social media about planned protests, their participants, locations, and timing (Dyson et al., 2024). During U.S. President Donald Trump’s first administration, the police used Sprinklr AI software to monitor specific political views by searching social media for the hashtags #ResistTrump, #ResistFascism, and #Anticapitalist (Dyson et al., 2024).

Notably, violations of privacy rights sometimes intersect with abuses of the freedoms of peaceful assembly and expression. The right to freedom of peaceful assembly, including the right to protest, is protected by international human rights law (United Nations, 1948, Article 20; United Nations Human Rights, 1966a, Article 21) and the First Amendment of the U.S. Constitution. AI-enhanced state surveillance can discourage people from engaging in protests and other forms of political participation, ultimately suppressing their political rights. Such surveillance also violates the right to freedom of expression, likewise stipulated in human rights frameworks (United Nations, 1948, Article 19; United Nations Human Rights, 1966a, Article 19) and the U.S. First Amendment. By monitoring critics of the government, AI surveillance creates a chilling effect and discourages individuals from expressing their political beliefs.

AI can also pose a threat to the right to presumption of innocence, which is legally enshrined in Article 14(2) of the International Covenant on Civil and Political Rights (see United Nations Human Rights, 1966a) and Article 6(2) of the European Convention on Human Rights (see European Court of Human Rights, 1950). Predictive policing disproportionately affects already-marginalized groups, violating the principles of equality and non-discrimination, and prompts the police to examine people on a presumption of suspicion, often before any crime has been committed (Gless, 2018). Predictive policing derives in part from the theory that crime is contagious (Berkowitz & Macaulay, 1971). Statistical data from 40 U.S. cities showed unusually high levels of crime after large-scale events such as the assassination of President John F. Kennedy, leading some to argue that certain individuals are predisposed to criminal violence when social restraints against it are diminished (Berkowitz & Macaulay, 1971). Building on this idea of crime as contagion, predictive policing trains AI with historical crime data to develop correlations connecting factors (such as socio-economic background and location) to predict areas, times, or circumstances where crimes are likely to occur (European Crime Prevention Network, 2022). This is problematic because the system creates a “positive loophole” through the over-policing of certain neighborhoods, particularly those with higher populations of marginalized groups; there are more arrests because there is more police presence, not because these neighborhoods are more prone to crime (European Crime Prevention Network, 2022). Yet these outcomes reinforce the algorithm’s perception that its predictions were correct all along, causing AI to keep making associations such as linking socio-economic background with criminality.
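A toy simulation, sketched below under illustrative assumptions not drawn from the EUCPN report, shows how this feedback dynamic can widen recorded disparities: two areas with identical underlying offending diverge in the data simply because incidents are recorded only where officers are sent, and patrols follow the model’s risk scores.

```python
# Toy simulation (assumptions are illustrative, not from the EUCPN report) of
# the "positive loophole": two areas with identical underlying offending, but
# incidents are only recorded where officers are actually sent, and the
# flagged "hotspot" receives most of the patrols.
true_rate = [1.0, 1.0]          # identical real offending in both areas
recorded = [6.0, 5.0]           # historical data slightly over-represents area 0

for year in range(6):
    risk_score_0 = recorded[0] / sum(recorded)
    # hotspot policy: the higher-scored area gets 80% of patrol time
    patrols = [8, 2] if risk_score_0 >= 0.5 else [2, 8]
    new_incidents = [true_rate[i] * patrols[i] for i in range(2)]
    recorded = [recorded[i] + new_incidents[i] for i in range(2)]
    print(f"year {year}: area-0 risk score {risk_score_0:.2f}, "
          f"recorded incidents {recorded[0]:.0f} vs {recorded[1]:.0f}")
# The gap in recorded crime widens each year even though true offending never differed.
```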

AI use can also threaten the right to a fair trial, which guarantees individuals a fair and impartial hearing by an independent tribunal (European Court of Human Rights, 1950, Article 6; United Nations Human Rights, 1966a, Article 14). Since 2020, courts in the Malaysian states of Sabah and Sarawak have used AI to assist in trials for drug possession under Section 12(2) of the Dangerous Drugs Act (Lim & Gong, 2020; e-Kehakiman Sabah & Sarawak, 2023). The AI software identifies patterns in past cases and produces sentencing recommendations, which the judge can choose to follow or not (Lim & Gong, 2020). Some claim that AI use in the judicial sector can make the system more transparent (e-Kehakiman Sabah & Sarawak, 2023) and reduce bias (Kleinberg et al., 2018) by processing large amounts of data without the cognitive constraints that shape human judgment (Callahan, 2023). Yet human cognition is shaped by cognitive shortcuts and social influences, and AI is trained on historical data of human decisions. If we accept that some level of bias exists in judicial decisions, then AI will learn this pattern and reflect it in its predictions, reaffirming bias rather than eliminating it (Javed & Li, 2024). Data shows that stereotypes in law enforcement can be activated without conscious awareness, which can lead to biased judgments (Graham & Lowery, 2004). This can be especially harmful in an ethnically and religiously diverse society like Malaysia (Noor & Manantan, 2022), where there are significant levels of discrimination (Komas, 2023). Research from the United States shows that when judges use AI predictions to justify their decisions, they can use AI as a scapegoat to avoid political fallout from controversial sentences and further entrench systemic bias (Esthappan, 2024). Further research shows that fairness is mathematically difficult to define, since it is an abstract notion with many nuances (Hao, 2019; Lim & Gong, 2020), and relevant factors may therefore be overlooked by AI in ways that significantly affect judicial decisions.
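The difficulty of formalizing fairness can be seen in a small worked example. The figures below are invented and do not describe the Malaysian courts or any cited study; they show only that two common fairness criteria, equal error rates and equal flagging rates across groups, can conflict on the very same predictions when the groups have different base rates.

```python
# Worked toy example (numbers invented): two common fairness criteria can
# conflict on the very same predictions when groups have different base rates.
group_a = {"n": 100, "actual_positive": 50}
group_b = {"n": 100, "actual_positive": 20}

# Suppose the model is perfectly accurate: it flags exactly the actual positives.
flag_rate_a = group_a["actual_positive"] / group_a["n"]   # 0.50
flag_rate_b = group_b["actual_positive"] / group_b["n"]   # 0.20

print("equal error rates across groups:", True)                                   # no false positives or negatives
print("equal flagging rates (demographic parity):", flag_rate_a == flag_rate_b)   # False
# Satisfying one definition of fairness here necessarily violates the other.
```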

Threats to Economic, Social, and Cultural Rights

Economic, social, and cultural rights are also threatened by AI use, especially the rights to housing, work, and welfare (Gellers & Gunkel, 2022). The right to an adequate standard of living includes the right to housing (United Nations, 1948, Article 25; United Nations Human Rights, 1966b, Article 11). State parties are obliged to ensure non-discrimination based on race, ethnicity, and socio-economic background (Compton & Hohmann, 2023). This includes immediate state actions such as repealing discriminatory laws, as well as longer-term provisions to provide housing to marginalized social groups through subsidies and the removal of impediments (Compton & Hohmann, 2023). But racial and socio-economic inequalities are perpetuated when AI is used by public or private actors (such as banks, landlords, or credit agencies) to restrict minorities’ access to housing, financing, or insurance. AI creates housing obstacles for marginalized communities in several ways. Some platforms monetize processes that were traditionally free, such as requiring applicants to pay for endorsements from previous landlords (Compton & Hohmann, 2023). Such financial requirements disadvantage the poor as they seek housing. AI can also render tenants more vulnerable to unjust eviction because some systems recommend eviction in cases of delayed payments and other issues without considering the circumstances – or because those recommendations are based on erroneous data (Compton & Hohmann, 2023). Further, AI smart home technologies often monitor tenants without transparency or consent, thereby violating the right to privacy in one’s home (Compton & Hohmann, 2023).

Research highlights how AI data can perpetuate discrimination in housing. Chris Robinson, a 75-year-old applicant to a California senior living community, was wrongly denied housing by an AI-based screening program that flagged him as a “higher-risk renter” because it associated him with a littering conviction (Burns, 2023). Yet research indicates that having a criminal record, especially for a minor offense, does not reliably predict how someone will perform as a tenant (Walter et al., 2017). Furthermore, landlords often apply very rigid criteria without considering the specific circumstances of each case. One study found that landlords reject all applicants with criminal records, regardless of the severity of the crime, and that AI can confuse the mere presence of eviction filings with actual completed evictions (So, 2022). Because housing is often the first step formerly incarcerated people take to rebuild their lives, restricted access to housing can lead to reoffending (Burns, 2023). All of these problems may lead to violations of the principle of non-discrimination, which is central to the right to adequate housing. Even though the companies providing AI services contend that the ultimate decisions are made by landlords (see United States District Court, District of Connecticut, 2021), a study found that landlords most often adopt AI recommendations without further examining the case and regardless of errors (Burns, 2023). In Robinson’s case, the AI made a mistake; the conviction belonged to another man in Texas, yet Robinson still lost the apartment (Burns, 2023).

AI can also pose threats to the right to work (United Nations, 1948, Article 23; United Nations Human Rights, 1966b, Article 6) and the right to welfare – that is, to social services and security “in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control” (United Nations, 1948, Article 25.1). This can be seen in Amazon’s use of AI to screen job applications, which demonstrated a bias favoring men because the system was trained on data from previously hired employees. Since the tech industry is male-dominated, the AI developed a correlation between successful applications and male applicants (Dastin, 2018). Once again, the bias built into the AI system leads to potential human rights violations, this time affecting people’s ability to secure employment. Court cases underscore AI’s threats to human rights, such as the case against the Dutch welfare fraud detection system, SyRI (Van Bekkum & Borgesius, 2021). The court ruled that, in addition to obstructing access to welfare, SyRI violated the right to privacy due to its lack of transparency, excessive data collection, lack of public disclosure and consent, and bias against people of lower socio-economic status (Van Bekkum & Borgesius, 2021). The SyRI case is important because it is one of the first high-profile instances in which a government was held accountable for using AI in a way that violated fundamental rights.

Conclusion

AI systems have become increasingly complex and deeply embedded in decision-making processes. While AI offers tangible benefits, such as reducing crime rates and improving access to services, it also poses risks to fundamental human rights. Because AI can violate the central principles of non-discrimination and equality, its negative effects reach civil and political rights as well as economic, social, and cultural rights. The creation of “positive loopholes,” the lack of transparency about how algorithms make inferences and reach decisions, and the use of biased data to train AI systems combine to amplify existing inequalities and perpetuate systemic injustices. Given the increasing autonomy of AI decision-making and plans for even deeper integration of AI into key social systems, this research highlights the need for further investigation of how AI can impact human rights. We must attend to the evolving landscape of AI technologies and the regulatory frameworks that follow, since AI will undoubtedly continue to shape the future.

References

Berkowitz, L., & Macaulay, J. (1971). “The Contagion of Criminal Violence.” Sociometry, 34(2): 238-260.

Blouin, L. (2021). “Can We Make Artificial Intelligence More Ethical?” University of Michigan-Dearborn. Retrieved from https://umdearborn.edu/news/can-we-make-artificial-intelligence-more-ethical.

Burns, R. (2023, June 29). “Artificial Intelligence Is Making the Housing Crisis Worse.” The Lever. Retrieved from https://www.levernews.com/artificial-intelligence-is-making-the-housing-crisis-worse/.

Callahan, M. (2023, February 24). “Algorithms Were Supposed to Reduce Bias in Criminal Justice – Do They?” The Brink. Boston University. Retrieved from https://www.bu.edu/articles/2023/do-algorithms-reduce-bias-in-criminal-justice/.

Cataleta, M. S. (2020). “Humane Artificial Intelligence: The Fragility of Human Rights Facing AI.” East-West Center. Retrieved from http://www.jstor.org/stable/resrep25514.

Cimphony. (2023). “AI Predictive Policing Accuracy: 2024 Analysis.” Retrieved from https://www.cimphony.ai/insights/ai-predictive-policing-accuracy-2024-analysis.

Cornell Law School (n.d.). “Invasion of Privacy.” Legal Information Institute. Retrieved from https://www.law.cornell.edu/wex/invasion_of_privacy.

Compton, C., & Hohmann, J. M. (2023). “AI and the Right to Housing.” In Human Rights and Artificial Intelligence: A Deskbook, edited by A. Quintavalla and J. Temperman, pp. 355-370. Oxford: Oxford University Press.

Dastin, J. (2018, October 18). “INSIGHT – Amazon scraps secret AI recruiting tool that showed bias against women.” Reuters. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/.

Dataminr. (2025, January 9). Homepage. Retrieved from https://www.dataminr.com/.

Dyson, I., Milner, Y., & Griffiths, H. (2024, April 30). “Documents Reveal How DC Police Surveil Social Media Profiles and Protest Activity.” Brennan Center. Retrieved from https://www.brennancenter.org/our-work/analysis-opinion/documents-reveal-how-dc-police-surveil-social-media-profiles-and-protest.

e-Kehakiman Sabah and Sarawak. (2023, May 1). “Artificial Intelligence (AI).” Retrieved from https://ekss-portal.kehakiman.gov.my/portals/web/home/article_view/0/5/1.

Esthappan, S. (2024). “Assessing the Risks of Risk Assessments: Institutional Tensions and Data Driven Judicial Decision-Making in U.S. Pretrial Hearings.” Social Problems.

European Court of Human Rights. (1950). European Convention on Human Rights. Retrieved from https://www.echr.coe.int/documents/d/echr/convention_ENG.

European Crime Prevention Network (EUCPN). (2022). “Artificial Intelligence and Predictive Policing: Risks and Challenges.” Retrieved from https://eucpn.org/sites/default/files/document/files/PP%20%282%29.pdf.

Feldstein, S. (2019a). “Findings and Three Key Insights.” In The Global Expansion of AI Surveillance, pp. 7-11. Carnegie Endowment for International Peace. Retrieved from http://www.jstor.org/stable/resrep20995.5.

Feldstein, S. (2019b, September 17). “The Global Expansion of AI Surveillance.” Carnegie Endowment for International Peace. Retrieved from https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847.

Ferris, G., Min, B., & Nayak-Oliver, M. (2020). “Automating Injustice: The Use of Artificial Intelligence and Automated Decision-making Systems in Criminal Justice in Europe.” Fair Trials. Retrieved from https://www.fairtrials.org/app/uploads/2021/11/Automating_Injustice.pdf.

Gellers, J. C., & Gunkel, D. J. (2022, April 1). “Artificial Intelligence and International Human Rights Law: Implications for Humans and Technology in the 21st Century and Beyond.” In Handbook on the Politics and Governance of Big Data and Artificial Intelligence, edited by A. Zwitter & O. J. Gstrein, pp. 430-455. Cheltenham: Edward Elgar Publishing.

Gless, S. (2018). “Predictive Policing: In Defense of ‘True Positives.’” In Being Profiled: Cogitas Ergo Sum. 10 Years of Profiling of the European Citizen, edited by E. Bayamlıoğlu, I. Baraliuc, L. Janssens, & M. Hildebrandt, pp. 76-83. Amsterdam: Amsterdam University Press.

Graham, S., & Lowery, B. S. (2004). “Priming Unconscious Racial Stereotypes about Adolescent Offenders.” Law and Human Behavior, 28(5): 483–504.

Hao, K. (2019, February 4). “This Is How AI Bias Really Happens – and Why It’s so Hard to Fix.” Technology Review. Retrieved from https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-reallyhappensand-why-its-so-hard-to-fix/.

Hill, K. (2020, June 20). “Wrongfully Accused by an Algorithm.” The New York Times. Retrieved from https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html.

IBM Data & AI Team. (2025, January 3). “Shedding light on AI bias with real world examples.” IBM. Retrieved from https://www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples.

Javed, K., & Li, J. (2024). “Artificial Intelligence in Judicial Adjudication: Semantic Biasness Classification and Identification in Legal Judgement (SBCILJ).” Heliyon, 10(9). e30184. Retrieved from https://doi.org/10.1016/j.heliyon.2024.e30184.

Kleinberg, J., Ludwig, J., Mullainathan, S., & Rambachan, A. (2018). “Algorithmic Fairness.” AEA Papers and Proceedings, 108: 22-27.

Komas, P. (2023, March 21). “Launch of the Malaysia Racism Report 2022.” KOMAS. Retrieved from https://komas.org/launch-of-the-malaysia-racism-report-2022/.

Larkin, Z. (2024, September 30). “AI Bias – What Is It and How to Avoid It?” Levity. Retrieved from https://levity.ai/blog/ai-bias-how-to-avoid.

Lim, C., & Gong, R. (2020, August). “AI Sentencing in Sabah and Sarawak.” Khazanah Research Institute. Retrieved from https://www.krinstitute.org/assets/contentMS/img/template/editor/200821%20AI%20in%20the%20Courts%20v3_02092020.pdf.

Linder, D. O. (n.d.). “The Right of Privacy: Is It Protected by the Constitution?” University of Missouri-Kansas City School of Law. Retrieved from http://law2.umkc.edu/faculty/projects/ftrials/conlaw/rightofprivacy.html.

Marcu, B. I. (2024, June 20). “The World’s First Binding Treaty on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law: Regulation of AI in Broad Strokes.” Future of Privacy Forum. Retrieved from https://fpf.org/blog/the-worlds-first-binding-treaty-on-artificial-intelligence-human-rights-democracy-and-the-rule-of-law-regulation-of-ai-in-broad-strokes/.

Noor, E., & Manantan, M. B. (2022). “Artificial Intelligence.” In Raising Standards: Data and Artificial Intelligence in South Asia, pp. 87-136. Asia Society.

Redmayne, M. (2005). “Review: Profiles, Probabilities, and Stereotypes, by F. Schauer.” Journal of Law and Society, 32(2): 333-339.

Sankin, A., & Mattu, S. (2023, October 2). “Predictive Policing Software Terrible at Predicting Crimes.” The Markup. Retrieved from https://themarkup.org/prediction-bias/2023/10/02/predictive-policing-software-terrible-at-predicting-crimes.

Santow, E. (2020). “Can artificial intelligence be trusted with our human rights?” AQ: Australian Quarterly, 91(4): 10-17.

Sarre, R., & Livings, B. (n.d.). “Artificial Intelligence and the Administration of Criminal Justice: Predictive Policing and Predictive Justice.” Australia Report. Retrieved from https://www.penal.org/sites/default/files/files/A-13-2023_0.pdf.

Schauer, F. (2006). Profiles, Probabilities, and Stereotypes. Cambridge: Harvard University Press.

So, W. (2022). “Which Information Matters? Measuring Landlord Assessment of Tenant Screening Reports.” Housing Policy Debate, 33(6): 1484-1510.

Stimmel Law. (n.d.). “The Legal Right to Privacy.” Retrieved from https://www.stimmel-law.com/en/articles/legal-right-privacy.

United Nations. (1948). Universal Declaration of Human Rights. Retrieved from https://www.un.org/en/about-us/universal-declaration-of-human-rights.

United Nations Human Rights. (1966a). International Covenant on Civil and Political Rights. Retrieved from https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights.

United Nations Human Rights. (1966b). International Covenant on Economic, Social and Cultural Rights. Retrieved from https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-economic-social-and-cultural-rights.

United States District Court, District of Connecticut (2021, March 12). “CONNECTICUT FAIR HOUSING CENTER and CARMEN ARROYO, individually and as next of friend for Mikhail Arroyo, Plaintiffs, v. CORELOGIC RENTAL PROPERTY SOLUTIONS, LLC.” Case No. 3:18cv00705-VLB. Retrieved from https://www.documentcloud.org/documents/20511454-govuscourtsctd125021191/#document/p1/a2022099.

Van Bekkum, M., & Borgesius, F. Z. (2021). “Digital welfare fraud detection and the Dutch SyRI judgment.” European Journal of Social Security, 23(4): 323-340.

Walter, R. J., Viglione, J., & Tillyer, M. S. (2017). “One Strike to Second Chances: Using Criminal Backgrounds in Admission Decisions for Assisted Housing.” Housing Policy Debate, 27(5): 734-750.

© Copyright 2025 Righting Wrongs: A Journal of Human Rights. All rights reserved. 

Righting Wrongs: A Journal of Human Rights is an academic journal that provides space for undergraduate students to explore human rights issues, challenge current actions and frameworks, and engage in problem-solving aimed at tackling some of the world’s most pressing issues. This open-access journal is available online at www.webster.edu/rightingwrongs.


[1] While international human rights law does not explicitly address AI, it guarantees protections that help guide government duty-bearers in the face of this new technology. Specific AI-focused regulations have started to emerge, including the European Ethical Charter on AI, the Council of Europe’s Framework Convention on AI, and the EU AI Act (Marcu, 2024).
