Algorithmic Justice on Trial: Bias, Contestability, and Human Dignity

Executive Summary

Algorithmic systems increasingly govern who gets a job, a loan, healthcare, or the attention of law enforcement. Their rise presents a moral question: when machines make decisions that shape human lives, what standards must they meet to be legitimate? Recent controversies—such as a UK court finding that police use of facial recognition violated privacy and equality laws[1] and the U.S. Federal Trade Commission banning Rite Aid’s facial‑recognition system after it falsely tagged people of colour as shoplifters[2]—show that accuracy alone does not guarantee justice. Instead, algorithmic decisions must honour deeper principles of fairness, procedural rights and human dignity.

This article argues that algorithmic justice demands a return to first principles. Legitimacy is rooted not in compliance checklists but in three foundational pillars: fairness, contestability and dignity. Fairness requires that decision outcomes do not discriminate and that rules are applied consistently across groups. Contestability affirms that individuals must be notified, heard, and able to appeal when algorithms affect their rights. Dignity recognises each person as more than a data point, insisting that systems respect autonomy, avoid humiliation and preserve agency. These pillars emerge from constitutional norms, philosophical ethics and human rights law. They guide the assessment of metrics, audits and regulatory frameworks, rather than being derived from them.

The article proceeds by grounding each pillar in first principles and then mapping how current laws and tools attempt to instantiate them. Section 2 explores why algorithms are “on trial” and frames the social contract that binds any authority, human or machine. Section 3 derives fairness metrics from the normative question “what makes a decision fair?” and examines empirical bias studies and audit tools. Section 4 derives contestability from due process and examines notification, explanation and appeals mechanisms. Section 5 examines dignity as the intrinsic worth of persons and how algorithmic systems can harm or respect it. Section 6 compares global governance models (EU AI Act, Council of Europe Convention, NYC Local Law 144, Colorado AI Act and UK guidance), evaluating how well they embody these principles. Section 7 offers a playbook for leaders to embed first principles into design and oversight. The conclusion calls for a shift from reactive compliance to proactive justice.

The Trial of Algorithms: Why Justice Must Anchor Machines

In liberal democracies, authority is legitimate only if it can be justified to those subject to it. This insight, rooted in social contract theory and the rule of law, applies equally to automated decision‑makers. Algorithms now perform quasi‑judicial functions—determining eligibility for social benefits, ranking job candidates, predicting criminal risk, or surveilling public spaces. When such systems err, individuals can lose jobs, liberty or dignity, yet often have no recourse. Bridges v South Wales Police illustrates this tension: the UK Court of Appeal held that the police deployment of live facial recognition violated privacy rights and equality law because it failed to check whether the system discriminated[1]. The case underscored that state adoption of AI must be subjected to the same constitutional scrutiny as any other exercise of power.

First principles also demand that algorithms be accountable to those they affect. The FTC’s action against Rite Aid reveals what happens when organisations treat AI as a black box. The retailer used facial‑recognition software without rigorous testing, leading to false positive identifications that disproportionately targeted women and people of colour[2]. The decision to ban the practice for five years emphasises that algorithmic decision‑makers cannot hide behind vendor claims of accuracy or proprietary secrets[3]. Accountability requires transparent reasoning, evaluation and recourse.

Public expectations are evolving. The Royal Academy of Engineering notes that algorithms will soon pervade “most, if not all, aspects of decision‑making” and calls for ongoing public dialogue to build acceptance[4]. Civil society organisations such as Big Brother Watch describe certain applications—like retail facial‑recognition trials—as “deeply disproportionate and chilling”[5], arguing that they erode civil liberties. The Equality and Human Rights Commission warns that biased data and training methods embed discrimination[6]. The Public Law Project’s TAG register lists dozens of government algorithmic tools, many unreported in official registers[7]. These critiques echo the first‑principles demand that people must be able to scrutinise and contest authority, whether exercised by humans or machines.

Thus, algorithms are “on trial” not because technology is inherently bad, but because any decision‑maker—human or artificial—must justify its authority to the governed. To do so, it must meet three conditions: it must treat people fairly, allow them to be heard, and respect their dignity. The following sections derive these conditions and evaluate how current tools and regulations embody them.

Fairness from First Principles: Measuring and Mitigating Bias

What makes a decision fair? Philosophers offer multiple answers, but common threads include equal respect for persons, impartial application of rules and proportionality between treatment and merit. In the context of AI, fairness means that similar individuals should be treated similarly unless relevant differences justify different outcomes; decisions should not produce systematic disadvantages based on protected characteristics; and procedures must be transparent and consistent. These principles underpin anti‑discrimination law and align with Rawls’ idea of justice as fairness, which requires that social and economic inequalities be arranged to benefit the least advantaged.

From these principles we derive the need for bias metrics. A fair decision rule should produce comparable outcomes across protected groups if those groups do not differ in relevant ways. Statistical Parity Difference measures whether each group receives the positive outcome at similar rates; Equalised Odds assesses whether error rates (false positives and negatives) are equal; Predictive Parity evaluates whether a positive prediction carries equal accuracy across groups; and Calibration checks whether predicted probabilities correspond to actual outcomes uniformly. These metrics translate first principles into operational tests: a large disparity in statistical parity may indicate that a model privileges one group; unequal error rates may show that a model imposes unfair burdens on certain groups.
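
For readers who prefer formal notation, the four metrics can be written as follows, where Ŷ is the model’s decision, Y the true outcome, S the predicted score and A the protected attribute. These are standard textbook formulations rather than definitions taken from any source cited in this article:

```latex
% Statistical Parity Difference: gap in positive-decision rates between groups
\mathrm{SPD} = P(\hat{Y}=1 \mid A=a) - P(\hat{Y}=1 \mid A=b)

% Equalised Odds: error rates match across groups, for both outcomes
P(\hat{Y}=1 \mid Y=y, A=a) = P(\hat{Y}=1 \mid Y=y, A=b) \quad \text{for } y \in \{0,1\}

% Predictive Parity: a positive prediction is equally reliable for every group
P(Y=1 \mid \hat{Y}=1, A=a) = P(Y=1 \mid \hat{Y}=1, A=b)

% Calibration: predicted scores reflect actual outcome rates uniformly across groups
P(Y=1 \mid S=s, A=a) = s \quad \text{for all groups } a
```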

Empirical audits demonstrate why these metrics matter. The Gender Shades study tested commercial facial‑analysis systems and found that they were highly accurate for light‑skinned men but error rates rose to over 20% for dark‑skinned women, in part because training datasets were 77% male and 83% white[8]. Such disparities contravene the principle of equal respect: the system recognises some identities but misreads others. Audits have also revealed that automated credit scoring and hiring tools replicate historical discrimination, suggesting that “colour‑blind” algorithms can still be biased if trained on biased data.

To uphold fairness, organisations must adopt systematic bias audits and mitigation strategies. Open‑source tools such as AI Fairness 360 (AIF360), Fairlearn and Aequitas implement dozens of fairness metrics and algorithms to correct bias. The choice of metric depends on the context and the relevant notion of fairness; for example, in hiring, equal opportunity (equal true positive rates) may be more important than equal selection rates if some groups have been historically excluded. Fairness metrics should inform, not replace, ethical judgment; no single metric captures all aspects of fairness, and trade‑offs may arise. Mitigation techniques range from pre‑processing (reweighting or resampling data), to in‑processing (adding fairness constraints during training), to post‑processing (adjusting decision thresholds). These technical tools operationalise the principle that no individual or group should bear unfair burdens.
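
As an illustration of what such an audit might look like in code, the sketch below uses Fairlearn’s metric utilities. The dataset, column names and the 0.1 tolerance are hypothetical; a real audit would choose its metrics and thresholds from the ethical and legal analysis above, not from convention.

```python
# Minimal bias-audit sketch using Fairlearn (illustrative; data and tolerances are hypothetical).
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import (
    MetricFrame,
    demographic_parity_difference,
    equalized_odds_difference,
    false_positive_rate,
    true_positive_rate,
)

# y_true: observed outcomes, y_pred: model decisions, group: protected attribute
df = pd.read_csv("hiring_decisions.csv")            # hypothetical audit extract
y_true, y_pred, group = df["hired"], df["model_decision"], df["sex"]

# Per-group view of accuracy and the error rates that make up equalised odds
frame = MetricFrame(
    metrics={
        "accuracy": accuracy_score,
        "true_positive_rate": true_positive_rate,
        "false_positive_rate": false_positive_rate,
    },
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)            # metric values broken down by group
print(frame.difference())        # largest between-group gap for each metric

# Headline disparity measures
spd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
eod = equalized_odds_difference(y_true, y_pred, sensitive_features=group)
print(f"Statistical parity difference: {spd:.3f}")
print(f"Equalised odds difference:     {eod:.3f}")

# A simple screening rule; the acceptable gap should come from context, not convention.
if spd > 0.1 or eod > 0.1:
    print("Disparity exceeds the illustrative 0.1 tolerance - escalate for review and mitigation.")
```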

Law is beginning to codify these principles. NYC Local Law 144 requires employers using automated hiring tools to conduct bias audits and publish summaries[9]. Colorado’s AI Act imposes a general duty of reasonable care on developers and deployers of high‑risk systems and prohibits discriminatory treatment or disparate impact[10]. Developers must provide detailed product descriptions and deployers must disclose known risks and impact assessments[11]. These laws do not invent fairness; they operationalise longstanding anti‑discrimination principles and hold algorithmic systems to them. Penalties for non‑compliance, such as fines up to $20,000 in Colorado[12], reflect the view that fairness is not optional but a core requirement of legitimate decision‑making.

Contestability from First Principles: Ensuring the Right to Be Heard

Fair outcomes are necessary but not sufficient for justice; procedures must also be fair. This is captured by the principle of due process, which requires that individuals have an opportunity to understand, challenge and correct decisions that affect them. In administrative law, due process embodies the right to notice, a hearing before an impartial decision‑maker, and a reasoned explanation. These elements give practical meaning to the idea that people are moral agents, not objects of decision‑making.

Contestability is the application of due process to algorithmic systems. From the first principle that authority must be justifiable to those it affects, it follows that algorithmic decisions must be transparent and open to challenge. GDPR Article 22 enshrines this logic by giving individuals the right not to be subject to a decision based solely on automated processing that significantly affects them, and requiring meaningful human review, the ability to express one’s view, and the ability to contest the decision[13]. Similarly, the Council of Europe AI Convention obliges states to provide effective procedural guarantees and notify individuals when AI systems are used[14]. These rights flow from the principle that people must not be dominated by opaque systems.

A contestability workflow operationalises due process in algorithmic contexts:

  1. Notice: Individuals must know when an algorithm will be used to make or influence a decision about them. Without notice, people cannot exercise their rights. NYC’s Local Law 144 requires employers to notify candidates before using automated tools[15]. Colorado’s AI Act requires deployers to provide disclosures before a consequential decision and after an adverse decision[11].
  2. Explanation: To contest a decision, individuals need reasons. The Colorado AI Act (CAIA) grants individuals a right to an explanation of adverse decisions and to correct information[11]. The ICO’s guidance urges organisations to provide meaningful information about the logic involved, not just generic descriptions[6]. Explanation translates an opaque model into comprehensible terms, enabling dialogue.
  3. Human Review: Decision‑making must not be wholly delegated to algorithms. GDPR Article 22 mandates the right to obtain human review[13]. Colorado’s law similarly guarantees a right to have a human consider the matter[11]. Human reviewers should have the authority and expertise to override the algorithm and must understand its limitations.
  4. Appeal: When internal review fails, individuals need an independent avenue for redress. The UK’s Algorithmic Transparency Recording Standard (ATRS) encourages public bodies to publish information about algorithmic tools, facilitating external scrutiny and appeals[16]. Yet the Public Law Project’s TAG register shows that many systems remain undisclosed[7], limiting the effectiveness of appeals. Independent ombudsmen or regulators can provide impartial adjudication; the Ada Lovelace Institute recommends establishing such institutions, mirroring those in other regulated industries[17].

Contestability, then, is not an optional feature but a manifestation of the first principle that no authority is legitimate unless it can be questioned. By embedding notice, explanation, human review and appeals into algorithmic systems, organisations honour individuals’ status as rights holders and strengthen public trust. Without contestability, fairness metrics cannot prevent injustice, because errors and biases will inevitably occur. The ability to contest is the safety valve that makes automated decision‑making compatible with democratic values.
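
The workflow above is organisational as much as technical, but it only functions if each step leaves a durable record. The sketch below shows one possible data model for such an audit trail; the class and field names are hypothetical illustrations, not requirements drawn from any of the laws cited here.

```python
# Illustrative audit-trail data model for contestable algorithmic decisions.
# Class and field names are hypothetical; statutory requirements vary by jurisdiction.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Notice:
    sent_at: datetime                 # when the individual was told an algorithm would be used
    channel: str                      # e.g. "email", "letter", "portal"
    plain_language_summary: str       # what the system does and what is at stake

@dataclass
class Explanation:
    issued_at: datetime
    decision: str                     # e.g. "application declined"
    principal_reasons: List[str]      # main factors behind the adverse decision
    how_to_correct_data: str          # route for correcting inaccurate inputs

@dataclass
class HumanReview:
    reviewer_id: str
    reviewed_at: datetime
    outcome: str                      # "upheld", "overturned" or "modified"
    rationale: str                    # reviewer's reasons, recorded for oversight

@dataclass
class Appeal:
    filed_at: datetime
    body: str                         # e.g. internal committee, ombudsman, regulator
    resolved_at: Optional[datetime] = None
    resolution: Optional[str] = None

@dataclass
class ContestableDecision:
    subject_id: str
    system_name: str                  # should match any public register entry, e.g. under the ATRS
    notice: Notice
    explanation: Optional[Explanation] = None
    human_review: Optional[HumanReview] = None
    appeals: List[Appeal] = field(default_factory=list)

    def is_contestable(self) -> bool:
        """Notice was given and at least one route to challenge the decision exists."""
        return self.human_review is not None or bool(self.appeals)
```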

Dignity from First Principles: Recognising Persons in the Age of AI

Human dignity is the recognition that every person has inherent worth and should never be treated merely as a means to an end. Philosophers from Kant to contemporary human rights theorists agree that dignity demands respect for autonomy, privacy and the capacity to participate in social and political life. In legal terms, dignity underpins constitutional principles like due process and equal protection. It also informs international human rights instruments, such as the Universal Declaration of Human Rights and UNESCO’s Recommendation on the Ethics of AI, which emphasise that AI systems should uphold human rights and preserve dignity.

In algorithmic contexts, dignity is violated when individuals are reduced to datapoints or subject to surveillance without their consent. The EHRC warns that AI can perpetuate discrimination if it is trained on biased data or designed without attention to its social context[6]. Facial‑recognition misidentifications, such as those in the Rite Aid case[2], can lead to humiliation, wrongful accusations and chilling effects on public participation. Big Brother Watch’s description of retail facial‑recognition trials as “deeply disproportionate and chilling” captures how intrusive surveillance undermines the sense of safety and respect[5]. When people cannot know when they are being watched or judged by machines, they are deprived of the ability to manage their appearance and behaviour, a core aspect of dignity.

Respecting dignity requires designing for recognition and agency. Systems should give individuals control over how their data is used and allow them to correct misrepresentations. They should avoid making inferences about sensitive traits unless absolutely necessary, and should communicate purposes, capabilities and limitations clearly. The Royal Academy of Engineering cautions against the “myth of infallibility” and calls for transparent communication about risks and uncertainties[18]. Designing for dignity means allowing people to choose whether to engage with AI, to opt out of automated processing, and to receive remedies when harms occur. The Colorado AI Act embodies this by granting individuals the right to opt out of processing of personal data used by high‑risk AI systems[11].

Dignity also requires social licence—the acceptance by those affected that a system serves legitimate aims and is deployed proportionately. Public engagement, participatory design and independent oversight contribute to social licence. The Royal Academy of Engineering stresses that government and businesses must consult widely and establish mechanisms to detect and address mistakes[4]. The Ada Lovelace Institute argues that independent institutions with statutory backing are needed to assure the public that AI systems are trustworthy[17]. By grounding decisions in dignity, organisations move beyond mere compliance and build systems that people can accept as legitimate.

Comparative Governance: Evaluating Models Against First Principles

Different jurisdictions have responded to the challenges of algorithmic justice by enacting laws and developing standards. From a first‑principles perspective, these regimes can be evaluated on how well they embody fairness, contestability and dignity.

1. EU AI Act

The European Union’s AI Act classifies systems by risk and imposes stringent obligations on “high‑risk” systems in sectors like employment, education and law enforcement. Providers must undertake risk management, data governance, transparency and human oversight before placing systems on the market. Users must follow safe‑use instructions and monitor performance. Some practices—such as social‑credit scoring and untargeted real‑time biometric surveillance—are banned. From a fairness perspective, the Act attempts to pre‑empt harm through ex‑ante controls. It advances contestability by requiring transparency and post‑market monitoring, but individual rights hinge on broader EU law (GDPR). Its dignity contributions include banning intrusive surveillance practices, though critics argue that exceptions for law enforcement leave gaps.

2. Council of Europe AI Convention

The Council of Europe’s binding convention adopts a human‑rights lens. Article 15 mandates that states provide effective procedural guarantees and notify individuals when AI significantly affects their rights[14]. It emphasises participation and public debate in the design and deployment of AI systems. The convention thus aligns closely with contestability and dignity principles, making redress mechanisms a core requirement. Unlike the EU Act, it does not define risk categories but anchors obligations in human rights impacts. This aligns with the view that fairness and dignity are universal obligations, not contingent on risk assessments.

3. U.S. State and Local Initiatives: NYC Local Law 144 and Colorado AI Act

In the absence of federal legislation, U.S. states and cities have pioneered targeted laws. NYC Local Law 144 requires independent bias audits of automated employment decision tools and public disclosure of the results[15][19]. It enshrines notice and limited transparency but leaves contestability to existing employment law. From a first‑principles view, it recognises fairness (through audits) but only partially addresses contestability and dignity.

The Colorado AI Act is more comprehensive. It applies to developers and deployers of high‑risk systems that make or help make consequential decisions, such as in education, lending and housing[10]. The Act prohibits discriminatory treatment or disparate impact and imposes a general duty of reasonable care[10]. Developers must provide detailed product descriptions, and deployers must disclose system use, known risks and impact assessments[11]. Individuals must be notified before consequential decisions and after adverse decisions, and have rights to explanations, data correction and human review[11]. Violations carry penalties and developers may present an affirmative defence if they comply with recognised risk management frameworks[12]. From a first‑principles perspective, the CAIA makes fairness and contestability duties explicit and offers some dignity protections through opt‑out rights and transparency[10].

4. UK Approach: Guidance and Voluntary Standards

The UK has taken a pro‑innovation approach, relying on existing laws (GDPR, Equality Act 2010) supplemented by guidance. The Algorithmic Transparency Recording Standard (ATRS) encourages public bodies to publish information about AI tools[16], fostering transparency. However, only a few systems appear on the government’s register, while the civil society TAG register lists dozens[7]. The Information Commissioner’s Office (ICO) and Equality and Human Rights Commission issue guidance on fairness and discrimination[6], but enforcement remains limited. From a first‑principles standpoint, the UK approach recognises fairness and contestability but relies on voluntary adoption, potentially leaving dignity unprotected in practice.

Comparative Insights

  • Ex‑ante vs ex‑post: The EU and Colorado models impose ex‑ante obligations (risk assessments, audits) to prevent harm. NYC’s law audits systems already in use. Ex‑ante controls align with fairness and dignity by preventing harm before it occurs, but they may slow innovation. Ex‑post approaches allow for experimentation but rely heavily on contestability to rectify harms.
  • Rights vs duties: The Council of Europe convention emphasises rights to notification, explanation and redress. Colorado’s law supplements rights with affirmative duties (reasonable care, disclosure, risk management). A first‑principles approach suggests combining rights (to contest) with duties (to prevent harm) for robust legitimacy.
  • Enforcement: EU and Colorado regimes include penalties, reinforcing that fairness and contestability are mandatory. The Council of Europe convention relies on domestic enforcement. The UK’s reliance on voluntary disclosure weakens incentives. Enforcement mechanisms are essential to ensure compliance with normative principles.
  • Scope: Colorado and the EU Act apply across sectors, reflecting the generality of first principles; NYC’s law targets employment. Sector‑specific rules may tailor protections, but universal principles argue for broad coverage where decisions have significant effects.

Towards Dignity by Design: A First‑Principles Playbook

To operationalise the principles derived above, leaders should embed fairness, contestability and dignity into organisational practice. The following playbook translates normative commitments into concrete actions.

1. Implement Fairness by Design.

  • Define fairness at the outset: Identify the relevant notion of fairness for each use case (e.g., equal opportunity vs equal outcomes) based on ethical reasoning, legal requirements and stakeholder input. Document these definitions transparently.
  • Measure systematically: Use multiple metrics (statistical parity, equalised odds, predictive parity, calibration) to evaluate models. Interpret metrics through the lens of first principles: a disparity in selection rates suggests potential unequal respect; unequal error rates signal unfair burdens.
  • Mitigate proactively: Apply data‑rebalancing, fairness‑aware learning and threshold adjustments to address identified biases (a post‑processing sketch follows this list). Conduct audits throughout development and deployment. Use independent auditors to enhance credibility, as required by NYC Local Law 144 and Colorado’s CAIA[15][11].
  • Align incentives: Create accountability structures where teams are rewarded for improving fairness metrics and penalised for ignoring them. Embed fairness considerations into performance reviews and risk management.
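
As one concrete illustration of the mitigation step, the sketch below applies a post‑processing threshold adjustment with Fairlearn’s ThresholdOptimizer. The base model, data and the choice of equalised odds as the constraint are assumptions for the example only; any intervention should be re‑audited with the metrics described earlier and weighed against the ethical judgement the playbook calls for.

```python
# Post-processing mitigation sketch: pick group-specific decision thresholds
# so that error rates are more closely equalised (illustrative assumptions throughout).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.metrics import equalized_odds_difference

df = pd.read_csv("hiring_decisions.csv")                  # hypothetical audit extract
X = df.drop(columns=["hired", "sex"])
y = df["hired"]
group = df["sex"]

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=y
)

base_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Wrap the already-trained model and learn per-group thresholds on training data.
mitigator = ThresholdOptimizer(
    estimator=base_model,
    constraints="equalized_odds",      # target: similar true/false positive rates across groups
    prefit=True,
)
mitigator.fit(X_tr, y_tr, sensitive_features=g_tr)

y_pred = mitigator.predict(X_te, sensitive_features=g_te)

# Re-audit after mitigation; mitigation is not a substitute for judgement about the right metric.
print("Equalised odds difference after mitigation:",
      equalized_odds_difference(y_te, y_pred, sensitive_features=g_te))
```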

2. Operationalise Contestability.

  • Provide advance notice: Inform individuals when algorithmic systems will be used and explain potential impacts. Document how notice will be delivered and ensure it is understandable.
  • Offer meaningful explanations: Develop explanation templates that translate model logic into comprehensible narratives. Provide reasons for adverse decisions and guidance on how to improve outcomes. Record explanation delivery for accountability.
  • Ensure human review: Assign qualified reviewers who can override or adjust algorithmic decisions. Train them on due process, cognitive biases and the limitations of models. Track decisions overturned or modified to identify systemic issues.
  • Create appeal mechanisms: Establish independent appeal channels, whether through internal committees or external ombudsmen. Register systems publicly, following the ATRS template[16], and expand registration to cover private‑sector high‑risk systems. Maintain records of appeals and their outcomes.

3. Design for Dignity.

  • Minimise unnecessary data collection: Collect only what is needed for the decision; avoid sensitive traits unless directly relevant. Provide opt‑out options for non‑essential processing, as Colorado’s CAIA gives individuals the right to opt out of the processing of personal data used by high‑risk AI systems[11].
  • Protect autonomy: Frame algorithmic recommendations as aids, not commands. Empower human decision‑makers to weigh contextual factors. Offer alternatives to automated evaluations, so individuals are not forced into algorithmic systems.
  • Communicate limitations: Acknowledge uncertainties and possible errors. The Royal Academy of Engineering warns against portraying algorithms as infallible[18]. Transparency about limitations fosters informed consent and respect.
  • Engage stakeholders: Involve affected communities in design, testing and governance. Use participatory methods, such as citizen juries or ethics advisory boards, to integrate diverse perspectives. Evaluate whether systems align with community values and social licence.

4. Align with Evolving Governance.

  • Monitor legal developments: Stay informed about the EU AI Act, the Council of Europe convention, Colorado’s CAIA and other emerging laws. Adopt best practices proactively rather than waiting for enforcement.
  • Adopt risk management frameworks: Implement the NIST AI Risk Management Framework, which emphasises governance, mapping, measuring and managing risks[20]. Use these frameworks to structure internal processes and to claim safe‑harbour defences where laws provide them[12].
  • Collaborate with regulators and civil society: Share audit results and impact assessments with regulators, and work with advocacy groups to address systemic issues. Transparency and cooperation build trust and facilitate consistent standards.

By following this playbook, organisations can align their practices with the fundamental principles that confer legitimacy on decision‑making. This transition from reactive compliance to proactive justice not only mitigates legal risk but also strengthens public trust and social acceptance.

Conclusion: Toward a Just Algorithmic Society

Algorithmic systems are here to stay, but their legitimacy is not guaranteed. To earn trust, they must meet the same first principles that constrain human authority: fairness, contestability and dignity. Fairness ensures that decisions do not impose unjust burdens; contestability ensures that people can be heard and errors corrected; dignity ensures that individuals are respected as ends in themselves. Case studies of policing and retail illustrate the harms of ignoring these principles[1][2], while laws like the Colorado AI Act show how they can be operationalised[10][11].

This article has reinterpreted metrics, frameworks and regulations through a first‑principles lens. It has shown that bias audits derive from the ethical demand for impartial treatment; that contestability is an expression of due process; and that dignity requires design choices that recognise individuals as moral agents. Comparing global governance models reveals that no single jurisdiction has perfected algorithmic justice, but the best elements of each—European ex‑ante controls, the Council of Europe’s rights‑based approach, Colorado’s duties and opt‑outs, and the UK’s transparency initiatives—point toward a holistic framework.

The path forward is clear: treat algorithmic fairness, contestability and dignity not as technical afterthoughts but as foundational commitments. Translate them into metrics, workflows and design principles. Build independent institutions and legal frameworks that enforce them. Engage the public in shaping the norms that will govern our digital future. Only then can we harness the power of AI to serve justice and honour the intrinsic worth of every person.

[1] South Wales police lose landmark facial recognition case | Facial recognition | The Guardian

https://www.theguardian.com/technology/2020/aug/11/south-wales-police-lose-landmark-facial-recognition-case

[2] [3] Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards | Federal Trade Commission

https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without

[4] [18] ALG0046 - Evidence on Algorithms in decision-making

https://committees.parliament.uk/writtenevidence/79965/html/

[5] Sainsbury’s tests facial recognition technology in effort to tackle shoplifting | J Sainsbury | The Guardian

https://www.theguardian.com/business/2025/sep/02/sainsburys-tests-facial-recognition-technology-in-effort-to-tackle-shoplifting

[6] Artificial intelligence in public services | EHRC

https://www.equalityhumanrights.com/guidance/artificial-intelligence-public-services

[7] The Tracking Automated Government register - Public Law Project

https://publiclawproject.org.uk/resources/the-tracking-automated-government-register/

[8] Study finds gender and skin-type bias in commercial artificial-intelligence systems | MIT News | Massachusetts Institute of Technology

https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

[9] [15] [19] Automated Employment Decision Tools: Frequently Asked Questions

https://www.nyc.gov/assets/dca/downloads/pdf/about/DCWP-AEDT-FAQ.pdf

[10] [11] [12] A Deep Dive into Colorado’s Artificial Intelligence Act - National Association of Attorneys General

https://www.naag.org/attorney-general-journal/a-deep-dive-into-colorados-artificial-intelligence-act/

[13] Art. 22 GDPR – Automated individual decision-making, including profiling - General Data Protection Regulation (GDPR)

https://gdpr-info.eu/art-22-gdpr/

[14] CETS 225 - Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

https://rm.coe.int/1680afae3c

[16] Algorithmic Transparency Recording Standard - guidance for public sector bodies - GOV.UK

https://www.gov.uk/government/publications/guidance-for-organisations-using-the-algorithmic-transparency-recording-standard/algorithmic-transparency-recording-standard-guidance-for-public-sector-bodies

[17] New rules? | Ada Lovelace Institute

https://www.adalovelaceinstitute.org/report/new-rules-ai-regulation/

[20] Artificial Intelligence Risk Management Framework (AI RMF 1.0)

https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf

Kostakis Bouzoukas

London, UK