Artificial intelligence systems have become integral to modern decision-making, touching everything from job recruitment and credit scoring to healthcare and criminal justice. While these technologies promise greater efficiency and objectivity, real-world deployments have shown that AI can inadvertently perpetuate or even amplify biases and discrimination. In recent years, several high-profile legal cases have brought this issue into sharp focus, forcing courts to grapple with questions about fairness, accountability, and the future of automated decision-making.
The Foundations of AI Discrimination
To understand how AI systems can discriminate, it’s essential to recognize that these models learn from historical data. When that data reflects existing inequalities or social prejudices, the resulting algorithms may produce discriminatory outcomes, often without explicit intent on the part of their creators. This phenomenon is sometimes described as “bias in, bias out.”
“AI doesn’t create bias out of thin air; it reflects and amplifies the biases present in our society.”
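The mechanics are easy to demonstrate. The sketch below uses entirely synthetic data (every number is an assumption chosen for illustration): a simple classifier is trained on “historical” hiring decisions that penalized one group, and although the model never sees group membership directly, it learns to penalize a resume keyword that merely correlates with that group.

```python
# "Bias in, bias out" on synthetic data; every value here is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, size=n)        # 1 = historically disadvantaged group
skill = rng.normal(0.0, 1.0, size=n)      # identical distribution in both groups
# A resume keyword strongly correlated with group membership
# (the analogue of "women's" on a resume).
keyword = (rng.random(n) < np.where(group == 1, 0.8, 0.05)).astype(float)

# Historical decisions favored group 0 at equal skill, so the labels themselves are biased.
p_hire = 1.0 / (1.0 + np.exp(-(1.5 * skill - 1.2 * group)))
hired_past = (rng.random(n) < p_hire).astype(int)

# The model never sees `group`, only skill and the keyword...
X = np.column_stack([skill, keyword])
model = LogisticRegression().fit(X, hired_past)
pred = model.predict(X)

# ...yet it reproduces the historical gap through the keyword proxy.
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2%}")
print(f"learned keyword coefficient: {model.coef_[0][1]:.2f}")  # negative
```

Dropping the obvious proxy column rarely solves the problem in practice, because realistic data usually contains many weaker proxies that jointly encode the same information.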
Legal systems worldwide are beginning to confront these challenges, with courts scrutinizing not just the outcomes of AI systems, but also the processes by which they are developed, tested, and deployed.
Amazon’s Recruiting Tool: Gender Bias in Action
One of the most discussed examples of AI-driven discrimination emerged from within Amazon. In 2018, Reuters reported that Amazon quietly scrapped an AI recruiting tool after discovering that it systematically downgraded resumes containing the word “women’s,” such as “women’s chess club captain.” The system, trained on resumes submitted over a ten-year period (predominantly by men), learned to favor male candidates for technical roles.
Although the case never reached the courts, it became a touchstone for public debate and regulatory scrutiny. The Equal Employment Opportunity Commission (EEOC) in the United States and similar bodies in Europe cited this example in shaping guidelines for AI in hiring. The incident highlighted the challenges companies face in ensuring fairness, and it spurred calls for greater transparency and accountability in algorithmic decision-making.
The Legal Landscape: Title VII and Automated Hiring
In the United States, Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, religion, sex, or national origin. Legal scholars and regulators quickly pointed out that automated hiring tools, if shown to have a “disparate impact” on protected groups, could expose employers to liability—even if the bias was unintentional.
Courts have begun to consider whether AI systems are subject to the same standards as human decision-makers. In Griggs v. Duke Power Co. (1971), the U.S. Supreme Court established that employment practices that are facially neutral but have a discriminatory impact can violate Title VII. Although this case predates modern AI, its logic is being applied to automated systems today.
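Disparate impact is typically assessed statistically. One long-standing benchmark from the EEOC’s Uniform Guidelines on Employee Selection Procedures is the “four-fifths rule”: if a protected group’s selection rate falls below 80% of the highest group’s rate, the practice is flagged for adverse impact. A minimal check, using hypothetical counts, might look like this:

```python
# Sketch of a disparate-impact check in the spirit of the EEOC "four-fifths rule".
# The applicant and selection counts below are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rates = {
    "group_a": selection_rate(48, 100),   # 48% selected
    "group_b": selection_rate(30, 100),   # 30% selected
}

highest = max(rates.values())
for name, rate in rates.items():
    ratio = rate / highest
    verdict = "possible adverse impact" if ratio < 0.8 else "within the 4/5 threshold"
    print(f"{name}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} -> {verdict}")
```

Failing the four-fifths check is evidence, not proof: under the Griggs framework an employer may still defend a practice as job-related and consistent with business necessity, and courts weigh other statistical tests as well.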
COMPAS and Criminal Justice: The Loomis Case
In the criminal justice system, the use of risk assessment algorithms has sparked fierce debate—and litigation. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool is used in several U.S. states to assess the likelihood that a defendant will reoffend. In 2016, the Wisconsin Supreme Court heard the case State v. Loomis, in which Eric Loomis challenged the use of COMPAS in his sentencing.
“The court’s reliance on a proprietary algorithm—whose workings were not fully disclosed—raises serious questions about transparency and due process.”
Loomis argued that the proprietary, undisclosed workings of COMPAS made it impossible to challenge the accuracy of its risk scores, and that relying on them at sentencing violated his right to due process; he also objected to the tool’s consideration of gender. The court ultimately upheld the use of COMPAS but required that sentencing courts be cautioned about its limitations, including the lack of transparency about how scores are calculated and research suggesting that such tools may disproportionately classify minority defendants as higher risk.
This case became a catalyst for further legal scrutiny and academic research into the fairness of AI in the justice system. It also prompted some jurisdictions to reconsider the use of proprietary algorithms in sentencing, advocating for greater openness and independent auditing.
Algorithmic Transparency and the Law
One of the key legal questions raised by the Loomis case is whether defendants have a right to examine the algorithms used in their cases. While courts have sometimes treated algorithms as “trade secrets,” there is growing recognition that algorithmic transparency is essential to due process and fairness. Scholars and advocacy groups continue to push for clearer standards and legal requirements in this area.
Credit Scoring and Racial Bias: The Case of Apple Card
In 2019, Apple and Goldman Sachs launched the Apple Card, touting a high-tech, AI-driven approach to credit decisions. Shortly after launch, several customers, including prominent software developer David Heinemeier Hansson, publicly alleged that the algorithm granted women significantly lower credit limits than men, even when couples shared finances and the women had comparable or stronger credit profiles.
The controversy quickly drew the attention of the New York State Department of Financial Services (NYDFS), which launched an investigation into potential violations of the Equal Credit Opportunity Act (ECOA). The ECOA prohibits discrimination in any aspect of a credit transaction on the basis of race, color, religion, national origin, sex, marital status, or age.
The NYDFS investigation, concluded in 2021, found no violation of fair lending laws, but it warned that opaque algorithms in credit decisions pose significant risks. The case underscored the need for financial institutions to audit their AI models for disparate impact and to provide meaningful explanations for their decisions.
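What such an audit might look like in its simplest form is sketched below: comparing average credit limits by group within matched credit-score bands, so that an obvious financial variable cannot explain the gap on its own. The data, column names, and groupings are hypothetical.

```python
# Hypothetical audit sketch: compare average credit limits by group within matched
# credit-score bands. All records and column names are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":        ["A", "A", "A", "A", "B", "B", "B", "B"],
    "score_band":   ["700-749", "700-749", "750-799", "750-799",
                     "700-749", "700-749", "750-799", "750-799"],
    "credit_limit": [12_000, 13_000, 20_000, 21_000, 7_000, 8_000, 11_000, 12_000],
})

summary = (df.groupby(["score_band", "group"])["credit_limit"]
             .mean()
             .unstack("group"))
summary["B_to_A_ratio"] = summary["B"] / summary["A"]
print(summary)

# Large, consistent gaps within the same score band are a signal that the model
# or its inputs deserve closer scrutiny -- not, by themselves, proof of intent.
```

A real audit would control for many more variables and test whether the remaining gaps are statistically significant, but the logic is the same: look for group-level differences that legitimate factors cannot explain.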
Interpretable AI: A Legal and Ethical Imperative
The Apple Card episode highlighted a broader trend: regulators and courts are increasingly demanding that AI systems be not only fair, but also interpretable. In practice, this means that companies must be able to explain, in plain language, how and why an AI model arrived at a particular decision—especially when that decision impacts people’s access to critical resources.
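For linear or additive models, such explanations can be generated directly from the model itself. The sketch below decomposes the gap between an applicant’s score and a “typical” applicant’s score into per-feature contributions; the feature names, weights, and reference values are invented for illustration, and real systems often rely on more general attribution methods such as SHAP.

```python
# Hypothetical linear credit-scoring model turned into a plain-language explanation.
# Feature names, weights, and the "typical applicant" baseline are all assumptions.
features = {"income": 52_000, "credit_history_years": 4, "utilization": 0.65}
weights  = {"income": 2e-5,   "credit_history_years": 0.3, "utilization": -2.0}
typical  = {"income": 60_000, "credit_history_years": 8,   "utilization": 0.30}
intercept = 1.0

score = intercept + sum(weights[f] * features[f] for f in features)
typical_score = intercept + sum(weights[f] * typical[f] for f in features)

# For a linear model, per-feature contributions measured against the typical
# applicant sum exactly to the difference between the two scores.
contributions = {f: weights[f] * (features[f] - typical[f]) for f in features}

print(f"applicant score {score:.2f} vs. typical applicant {typical_score:.2f}")
for name, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    verb = "lowered" if c < 0 else "raised"
    print(f"  {name} = {features[name]} {verb} the score by {abs(c):.2f}")
```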
As AI continues to permeate banking, lending, and insurance, the ability to trace and justify algorithmic decisions is becoming a legal, ethical, and business imperative.
Healthcare Algorithms and Racial Inequity
Discrimination by AI is not limited to hiring, justice, or finance. In 2019, a study published in Science revealed that a widely used healthcare algorithm, deployed by hospitals and insurers across the United States, systematically underestimated the health risks of Black patients compared to white patients. The algorithm used prior healthcare spending as a proxy for health needs—a variable influenced by longstanding disparities in access to care.
“When AI systems measure what is easy to quantify, rather than what truly matters, they risk perpetuating the very inequities they are meant to address.”
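A toy simulation makes the mechanism concrete. In the sketch below (all numbers are assumptions, not estimates from the study), two groups have identical underlying health needs, but one group’s observed spending is suppressed by barriers to access; a program that enrolls the top decile by predicted spending then admits fewer members of that group despite equal need.

```python
# Toy simulation of the proxy problem: equal health need, unequal observed spending.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

group = rng.integers(0, 2, size=n)              # 1 = group facing access barriers
need = rng.gamma(shape=2.0, scale=1.0, size=n)  # identical true need in both groups

# Observed spending tracks need but is suppressed ~30% for group 1.
spend = need * np.where(group == 1, 0.7, 1.0) * rng.lognormal(0.0, 0.2, size=n)

# Use spending itself as a stand-in for a perfectly accurate spending predictor,
# and enroll the top 10% of "risk scores" in a care-management program.
cutoff = np.quantile(spend, 0.9)
enrolled = spend >= cutoff

for g in (0, 1):
    print(f"group {g}: mean true need {need[group == g].mean():.2f}, "
          f"enrolled {enrolled[group == g].mean():.1%}")
# Equal need, unequal enrollment: the model predicts spending faithfully,
# and in doing so reproduces the access gap.
```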
This revelation prompted swift policy changes at several healthcare organizations and attracted the attention of federal regulators. The U.S. Department of Health and Human Services (HHS) issued guidance urging the healthcare sector to evaluate AI tools for bias and to ensure compliance with anti-discrimination laws such as Title VI of the Civil Rights Act.
Global Perspectives: GDPR and Algorithmic Fairness in the EU
Beyond the United States, the European Union’s General Data Protection Regulation (GDPR) contains explicit provisions on automated decision-making. Articles 13-15 give individuals the right to meaningful information about the logic, significance, and consequences of automated decisions. Article 22 further grants people the right not to be subject to decisions based solely on automated processing if those decisions produce legal or similarly significant effects.
These provisions have figured in legal actions over discriminatory algorithmic practices by companies and governments alike. In 2020, for example, the District Court of The Hague ruled that SyRI, an automated system the Dutch government used to flag suspected welfare fraud and deployed chiefly in low-income, largely immigrant neighborhoods, violated the right to private life under the European Convention on Human Rights, and the government was forced to halt the program.
Regulatory Responses and Industry Standards
The growing body of legal cases and regulatory investigations has spurred action at multiple levels. Governments are developing AI-specific regulations, such as the EU’s proposed Artificial Intelligence Act, which classifies certain uses of AI as “high risk” and imposes strict requirements for transparency, human oversight, and bias mitigation.
Industry groups are also stepping in, with the IEEE, ISO, and other organizations publishing standards for AI ethics, fairness, and accountability. Companies are increasingly establishing internal review boards and engaging third-party auditors to assess their AI systems for discriminatory effects.
“Legal compliance is just the starting point; true fairness in AI requires ongoing vigilance, collaboration, and a commitment to social responsibility.”
The Human Element: Accountability and Redress
One recurring theme across legal cases is the need for human accountability in AI-driven decisions. Courts and regulators emphasize that delegating choices to algorithms does not absolve organizations of their legal and ethical responsibilities. When discrimination occurs, affected individuals must have access to meaningful remedies, including the ability to challenge decisions and seek redress.
Looking Ahead: Lessons from the Courts
Legal cases involving AI and discrimination have illuminated the complex interplay between technology, society, and law. They have made clear that algorithmic fairness is not a technical problem alone, but a societal challenge that requires input from engineers, lawyers, ethicists, and affected communities.
As AI becomes ever more embedded in daily life, ensuring that these systems are designed and deployed with care is not just a matter of regulatory compliance—it is a matter of justice and human dignity. The courts, in their evolving jurisprudence, are laying the groundwork for a future in which AI can serve as a tool for equity, rather than an amplifier of inequality.
With each new case, the boundaries of legal responsibility and technological possibility are being redrawn. For those building and deploying AI, the imperative is clear: to pursue not only innovation, but also fairness, transparency, and accountability, so that the promise of artificial intelligence can be realized for all.