Ghost Work in the AI Economy: Unveiling the Hidden Labour Behind Intelligent Systems

Executive Summary

Artificial intelligence (AI) is often marketed as a technology that reduces labour: chatbots replace customer‑service agents, machine‑learning models sort résumés and computer vision powers driverless cars. However, every breakthrough in AI owes its success to human judgment. Millions of people around the world label images, transcribe audio, moderate user‑generated content and rank responses to make algorithms accurate, safe and polite. These “ghost workers” are largely invisible in corporate narratives yet are indispensable to AI’s functionality. Digital‑labour platforms have proliferated from 142 in 2010 to over 777 by 2020[1] and have mobilised tens of millions of workers[1], many of whom are paid poorly and spend substantial time on unpaid overhead[2]. This article exposes the hidden supply chain that powers AI, explains why ignoring this workforce poses ethical and business risks, and proposes frameworks for companies to measure, disclose and improve their labour practices. By adopting simple metrics and transparent governance, organisations can build AI systems that are trustworthy, resilient and aligned with emerging regulation.

1. Hidden Labour Behind AI

1.1 The Myth of Autonomy

AI systems are frequently portrayed as self‑sufficient machines that learn from data and operate autonomously. Marketing materials depict algorithms that “think” and “decide” on their own, fuelling public fascination with sentient machines. Yet virtually every modern AI model relies on human labour at multiple stages. Anthropologist Mary Gray and computer scientist Siddharth Suri coined the term ghost work for the micro‑tasks — annotation, transcription, moderation and testing — that humans perform to make digital systems appear seamless. Workers draw bounding boxes around pedestrians so self‑driving cars can recognise them, rate chatbot answers to teach polite behaviour and filter violent content to keep social media safe. This fragmented, piece‑rate work is deliberately presented as auxiliary so that the myth of autonomy endures. Recognising that AI’s “magic” is in fact a reflection of human cognition is the first step toward ethical governance.

1.2 Scale and Growth of the Workforce

The hidden workforce powering AI is vast and growing. The International Labour Organization reports that the number of digital‑labour platforms worldwide increased from 142 in 2010 to more than 777 in 2020[1]. Estimates suggest the number of people earning income on these platforms rose from 43 million in 2018 to roughly 78 million by 2023[1]. These figures underrepresent the millions more who work through subcontractors rather than directly on platforms. Workers are distributed across the globe: India and the Philippines host large annotation hubs, while Kenya, Nigeria and Venezuela have become centres for content moderation and safety testing. High‑income countries also contribute significant numbers of workers; a 2019 survey found that out of roughly 250,000 registered crowdworkers on a popular U.S. platform, more than 226,000 were based in the United States[3]. Such dispersion underscores that ghost work is a global industry embedded in both emerging and advanced economies.

1.3 Hidden Time and Unpaid Labour

Ghost work extends beyond the visible tasks performed. A field study of 100 crowdworkers on a large micro‑task platform tracked not only the time spent on assigned tasks but also the minutes spent searching for jobs, completing mandatory training, waiting for tasks to appear and contesting unfair rejections. The study found that about one‑third of the workers’ time was consumed by these unpaid activities, lowering median effective earnings from US\$3.76 to US\$2.83 per hour[2]. Unpaid overhead includes reading lengthy instructions, cross‑checking guidelines, managing multiple platform interfaces and performing rework after quality audits. Because these activities are invisible to requesters and not factored into pricing, corporate reports overestimate the cost‑savings from “automated” systems. For workers, the unpaid time translates into uncertainty, longer workdays and incomes that hover around or below local minimum wages. Understanding the hidden time invested in AI development is crucial for accurate cost accounting and fair compensation policies.
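To make the arithmetic concrete, the minimal sketch below recomputes an effective hourly wage once unpaid overhead is counted. The helper function and its naming are ours, and it assumes overhead is expressed relative to paid time; the input figures mirror the study cited above.

```python
def effective_hourly_wage(nominal_wage: float, overhead_factor: float) -> float:
    """Effective wage once unpaid overhead is counted.

    overhead_factor is unpaid hours divided by paid hours, e.g. 0.33 means
    that for every paid hour a worker spends a further 20 minutes searching
    for tasks, reading guidelines, waiting or contesting rejections.
    """
    return nominal_wage / (1.0 + overhead_factor)

# Figures from the crowdworker study cited above: a nominal median of
# US$3.76 per paid hour falls to roughly US$2.83 once overhead is included.
print(round(effective_hourly_wage(3.76, 0.33), 2))  # -> 2.83
```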

2. Mapping the AI Supply Chain

2.1 Layers of Human Input

Behind every deployed AI model lies a supply chain of human contributions. This chain can be broken down into several stages:

  1. Data sourcing and collection: Workers gather raw material by taking photographs, recording speech, scraping text or translating sentences. Often, they provide personal data that is later anonymised and aggregated.
  2. Annotation and labelling: Annotators classify images, transcribe audio snippets, tag entities in text, draw polygons around objects and identify emotional tone. Tasks vary from straightforward (“Does this image contain a dog?”) to complex (“Identify sarcasm in this tweet”).
  3. Content moderation: Moderators review user‑generated content to remove hate speech, graphic violence and sexual exploitation. They provide examples to help refine automated filters.
  4. Testing and evaluation: Human evaluators compare model outputs against ground truth, note errors and suggest improvements. In generative AI, raters rank alternative responses to teach models preferences through reinforcement learning.
  5. Fine‑tuning and reinforcement learning: Workers provide preference scores in reinforcement learning from human feedback (RLHF), offering nuanced judgments on tone, politeness and factuality. These ratings calibrate the models’ behaviours.
  6. Quality assurance and rework: Senior annotators or auditors review completed tasks, correct mistakes, ensure consistency across datasets and manage appeals.

These stages are often spread across multiple companies and countries. A major technology firm might hire a data‑annotation company, which then contracts regional vendors, who recruit freelancers through micro‑task platforms. Each intermediary takes a percentage of the revenue, leaving the workers at the base of the pyramid with the lowest pay and benefits. In one high‑profile case, a vendor billed US\$12.50 per hour to a large AI company for content safety work, while Kenyan moderators received only US\$1.32–US\$2 per hour[4]. Such discrepancies highlight the need for supply‑chain transparency and equitable distribution of value.
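A back-of-the-envelope calculation based on the figures reported above shows how little of the billed rate reaches the base of the pyramid; the variable names are ours and the snippet is purely illustrative.

```python
billed_rate = 12.50                              # per hour, billed by the vendor to the AI company
worker_rate_low, worker_rate_high = 1.32, 2.00   # per hour, reported take-home pay range

low_share = worker_rate_low / billed_rate
high_share = worker_rate_high / billed_rate
print(f"Workers receive roughly {low_share:.0%}-{high_share:.0%} of the billed rate")
# -> Workers receive roughly 11%-16% of the billed rate
```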

2.2 Quantifying Human Inputs: The AI Labour Intensity Index

To make the human contributions to AI visible and comparable, organisations need a common metric. We propose an AI Labour Intensity Index (ALII) that captures four dimensions:

  • Annotated hours per model: Calculate the total human hours spent collecting, cleaning, labelling and validating data for each AI system. This includes overhead such as task hunting, reading guidelines and rework.
  • Geographic dispersion: Record the share of labour hours performed in each country or region. This shows reliance on low‑income countries and informs assessments of global equity.
  • Fair‑wage ratio: Compare the median wage paid to annotators and moderators against local living wages or statutory minimum wages. A ratio below 1 indicates underpayment relative to the cost of living.
  • Overhead factor: Measure the percentage of unpaid or invisible labour relative to compensated time. High overhead signals inefficiencies in task design or platform features.

By computing ALII scores for different models and projects, companies can benchmark their practices, identify outliers and set improvement targets. Investors and regulators can use ALII to compare products and evaluate whether claimed efficiency gains reflect hidden labour costs.
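Because the ALII is defined here by its four dimensions rather than a single formula, the sketch below is one hypothetical way to compute them from per-group labour records; the class, field names and weighting choices are assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class LabourRecord:
    """One worker group's contribution to a model (all fields illustrative)."""
    country: str
    paid_hours: float
    unpaid_hours: float        # task hunting, reading guidelines, waiting, rework
    median_wage_usd: float     # median hourly wage actually paid
    living_wage_usd: float     # local living-wage benchmark

def alii_summary(records: list[LabourRecord]) -> dict:
    """Aggregate the four ALII dimensions for one model; weighting is an assumption."""
    total_paid = sum(r.paid_hours for r in records)
    total_unpaid = sum(r.unpaid_hours for r in records)
    total_hours = total_paid + total_unpaid

    # Share of all labour hours (paid and unpaid) performed in each country.
    dispersion: dict[str, float] = {}
    for r in records:
        dispersion[r.country] = dispersion.get(r.country, 0.0) + r.paid_hours + r.unpaid_hours
    dispersion = {country: hours / total_hours for country, hours in dispersion.items()}

    # Hours-weighted ratio of paid wages to local living wages (< 1 signals underpayment).
    fair_wage_ratio = sum(
        (r.median_wage_usd / r.living_wage_usd) * r.paid_hours for r in records
    ) / total_paid

    return {
        "annotated_hours": total_hours,                  # includes unpaid overhead
        "geographic_dispersion": dispersion,
        "fair_wage_ratio": fair_wage_ratio,
        "overhead_factor": total_unpaid / total_paid,    # unpaid vs. compensated time
    }

# Illustrative usage with made-up numbers:
example = [
    LabourRecord("KE", paid_hours=12_000, unpaid_hours=4_000, median_wage_usd=1.8, living_wage_usd=2.5),
    LabourRecord("US", paid_hours=3_000, unpaid_hours=600, median_wage_usd=14.0, living_wage_usd=16.0),
]
print(alii_summary(example))
```

A real implementation would also need agreed definitions of local living wages and a consistent way to capture unpaid time, which is exactly what the audits proposed in Section 5 would supply.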

2.3 Visualising the Supply Chain

For decision‑makers to appreciate the scope of ghost work, a simple table can summarise the stages, skills and typical labour conditions. Table 1 offers an illustrative overview.

| Stage | Example Activities | Skills Required | Common Labour Conditions |
| --- | --- | --- | --- |
| Data sourcing | Recording speech, taking photos, scraping web pages | Language skills, basic digital literacy | Piece‑rate pay, short‑term gigs, limited oversight |
| Annotation | Labelling images, transcribing audio, tagging entities | Attention to detail, domain expertise | Paid per task, quality bonuses, high rejection rates |
| Moderation | Reviewing user content, flagging hate speech | Cultural competence, emotional resilience | Low pay, exposure to disturbing content, mental‑health risks |
| Testing | Comparing outputs to ground truth, rating quality | Critical reasoning, subject‑matter knowledge | Variable pay, time‑boxed tasks, minimal recognition |
| Reinforcement | Ranking responses, providing preferences | Judgment, comprehension, communication | Fixed pay per task, iterative feedback cycles |
| Quality assurance | Auditing labels, managing rework | Senior expertise, consistency checks | Slightly higher pay, supervisory role |

3. Business and Ethical Implications

3.1 The Efficiency Illusion

Corporate narratives often emphasise AI’s ability to cut costs by replacing human labour. However, the efficiency story is incomplete. As seen in the crowdworker study, unpaid overhead constitutes about one‑third of total working time[2]. If business leaders calculate labour costs using only paid task time, they underestimate the true human effort embedded in their models. Similarly, a 2025 survey of U.S. data workers found that 66 % spent at least three hours per week waiting for tasks and were not compensated for this idle time[5]. Such inefficiencies erode worker earnings and may increase turnover. Companies that disregard these costs risk overoptimistic financial projections and hidden liabilities in their supply chains.

3.2 Hidden Costs and Reputational Risk

Undercompensated workers pose operational and reputational hazards. Poor pay and lack of support can lead to high turnover, disrupting continuity and quality control. Training new annotators or moderators takes time and resources, especially when tasks require domain knowledge or cultural sensitivity. If moderators suffer psychological harm from exposure to graphic content, they may leave abruptly, forcing companies to scramble for replacements and risking lapses in content safety. Public revelations about exploitative labour practices can trigger consumer backlash, investor pressure and regulatory scrutiny. Reports of Kenyan moderators earning less than \$2 per hour while vendors bill several times that amount[4] have already spurred calls for fair‑pay standards and stricter oversight. Companies that proactively address these risks by improving labour conditions and disclosing their practices will be better positioned to weather future controversies.

3.3 Ethical Displacement

AI’s labour‑saving narrative often masks a geographic displacement of work. Annotation and moderation tasks are outsourced to lower‑wage regions where currency differentials and limited job opportunities make micro‑task earnings attractive. Yet these wages frequently fall below local living standards. In Kenya, for example, content moderators contracted to support a major AI company were paid US\$1.32–US\$2 per hour, which is roughly equal to or slightly below the local minimum wage[4]. Bonuses and performance commissions can raise earnings marginally but are contingent on stringent accuracy and speed targets. The economic benefits of AI thus accrue disproportionately to technology companies and intermediaries in high‑income countries, while the burdens of low pay, job insecurity and psychological stress are borne by workers in the Global South. Addressing this ethical displacement requires recognising the real cost of human labour in AI and ensuring that efficiency gains are shared more equitably.

4. Global Distribution and Labour Conditions

4.1 Geographic Patterns and Wage Disparities

Ghost work is a global phenomenon with distinct regional patterns. Large crowdsourcing platforms recruit workers in countries such as India, Bangladesh, Venezuela and the Philippines, where internet access and English proficiency coexist with lower wage expectations. In Africa, Kenya and Nigeria have emerged as hubs for content moderation and data labelling because of relatively high education levels and time‑zone overlap with Europe and the United States. Workers in these regions may view platform work as a valuable source of income, yet wages often remain below local living standards. Case studies of Kenyan moderators show that even after including performance bonuses, salaries hovered around US\$1.32–US\$2 per hour[4]. At the same time, workers in high‑income countries perform similar tasks but earn higher absolute wages; however, their effective pay can still fall below local minimum wages once unpaid time is accounted for. Understanding these disparities is crucial for designing fair‑wage policies that reflect local costs of living rather than global averages.

4.2 Precarity and Worker Voice

Precarity characterises much of the ghost workforce. Workers face instability across multiple dimensions:

  • Wages: Compensation is typically piece‑rate, with pay contingent on task acceptance and quality. Accounting for unpaid overhead can reduce effective wages by more than 30 %[2]. Performance‑based bonuses, while appealing, can create pressure to cut corners or work long hours.
  • Job security: Contracts are short‑term, and accounts may be deactivated without explanation. When a subcontractor loses a contract, hundreds of workers can be left without income overnight[6].
  • Benefits: Few platform workers receive health insurance, sick leave or pension contributions. A 2025 survey of U.S. data workers found that only 23 % had employer‑provided health insurance[7].
  • Working conditions: Moderators are exposed to graphic violence, sexual exploitation and hate speech. Without adequate counselling, this exposure can cause burnout and trauma[8]. Annotators often juggle multiple clients with inconsistent guidelines and unstable internet connections.
  • Voice and bargaining power: Workers seldom have access to unions or formal grievance mechanisms. In the U.S. survey, more than half of respondents reported that the estimated times provided for tasks were unrealistic[9], yet they lacked leverage to negotiate.

Improving conditions along these axes is not only a moral imperative but also a strategic investment. Engaged, well‑compensated workers deliver higher‑quality data and lower turnover, which in turn reduces error rates and the need for costly rework.

4.3 Intersectional and Gender Dimensions

While comprehensive gender‑disaggregated statistics are scarce, qualitative evidence suggests that women and marginalised communities constitute a significant portion of the ghost workforce. Tasks requiring empathy and relational skills, such as content moderation and conversation rating, are often considered “feminised” and attract more women. Women may prefer flexible micro‑tasks that allow them to juggle domestic responsibilities and paid work. However, the lack of benefits and job security can exacerbate gender inequities, especially for single mothers or caregivers. Additional factors such as disability, language proficiency and access to infrastructure further shape participation in digital labour. A robust human‑labour audit should therefore collect data on gender, disability and other intersecting identities to inform targeted interventions and inclusive policies.

5. Governance and Solutions

5.1 Human‑Labour Audits and Labour Cards

Visibility is a prerequisite for accountability. Companies should conduct human‑labour audits for each major AI system. Such audits would document the number of workers involved, the tasks performed, hours spent, wage ranges, geographic locations and overhead factors. Aggregated data can then be published in labour cards accompanying model releases, much like model cards that describe technical performance. Labour cards could include:

  • Workers involved: Approximate number of annotators, moderators and auditors, plus their geographic distribution.
  • Compensation: Average hourly rate, fair‑wage ratio and overhead factor.
  • Working conditions: Task descriptions, exposure to sensitive content, availability of mental‑health support and grievance mechanisms.
  • Governance: Policies on labour standards, audit procedures and worker feedback.

Publishing labour cards would not require revealing proprietary training data. Instead, it would provide stakeholders with sufficient information to evaluate ethical practices. Over time, third‑party organisations or consortia could certify labour cards to standardise reporting across the industry.
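To make the proposal tangible, a labour card could be published as a small machine-readable record alongside a model card. The schema, field names and figures below are invented for illustration; a certifying body or consortium, as suggested above, would need to standardise them.

```python
import json

# Hypothetical labour card for a fictitious model release; every value is illustrative.
labour_card = {
    "model": "example-model-v1",
    "workers": {
        "annotators": 850,
        "moderators": 120,
        "auditors": 40,
        "geographic_distribution": {"KE": 0.35, "IN": 0.30, "PH": 0.20, "US": 0.15},
    },
    "compensation": {
        "average_hourly_rate_usd": 3.10,
        "fair_wage_ratio": 1.05,       # median wage vs. local living wage
        "overhead_factor": 0.18,       # unpaid time relative to paid time
    },
    "working_conditions": {
        "sensitive_content_exposure": True,
        "mental_health_support": "on-demand counselling",
        "grievance_mechanism": "independent hotline",
    },
    "governance": {
        "labour_standard": "internal fair-work policy",
        "last_audit": "2025-06",
        "worker_feedback_channel": True,
    },
}

print(json.dumps(labour_card, indent=2))
```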

5.2 Integration into Regulation and ESG Frameworks

Regulators in many jurisdictions are moving toward risk‑based AI governance. The European Union’s AI Act, for example, classifies systems by risk tier, from unacceptable and high‑risk down to limited‑ and minimal‑risk, and imposes different obligations accordingly[10]. Integrating labour transparency into this framework would ensure that high‑risk systems not only meet technical standards but also respect human rights. Companies could be required to include ALII scores and labour cards in conformity assessments. Similarly, due‑diligence laws such as the EU Corporate Sustainability Due Diligence Directive could be expanded to cover digital labour, obliging companies to trace their labour supply chains and address abuses. The Ada Lovelace Institute stresses that policymakers must assign responsibilities across AI supply chains and ensure that actors with fewer resources receive support[11]. This could involve establishing grievance mechanisms for workers and providing resources to smaller vendors to comply with reporting requirements.

In the investment realm, ESG frameworks are evolving to include labour considerations. Investors increasingly scrutinise working conditions alongside carbon emissions and diversity. Including ALII metrics and labour disclosures in ESG reports would allow capital to flow toward companies that uphold fair labour standards. Such integration aligns ethical imperatives with financial incentives.

5.3 Corporate Governance and Investor Incentives

Ethical labour practices should be embedded in corporate governance. Boards and executive committees overseeing AI development must expand their remit beyond technical safety and privacy to include human‑labour risks. They should review ALII scores and require that wages meet or exceed local living standards, that overhead factors are minimised and that mental‑health support is funded. Linking executive compensation to labour metrics could align incentives. For example, bonuses might depend in part on achieving fair‑wage ratios above 1 or reducing overhead by improving task design.

Investor pressure can accelerate change. Shareholder resolutions and proxy votes increasingly focus on social issues. An AI Labour Transparency Index published by an independent body could rank companies on disclosure, fair‑wage compliance, worker voice and grievance procedures. Procurement officers in government and large corporations could require minimum transparency scores from vendors. Just as environmental indices have spurred competition on sustainability, a labour index would create reputational incentives to treat workers fairly. Companies that lead on labour transparency might attract customers and investors who value responsible innovation.

5.4 Collaboration and Multi‑Stakeholder Governance

No single actor can solve the labour challenges in AI. Governments, multilateral organisations, companies, researchers, unions and civil society must collaborate. The International Labour Organization can develop guidelines for digital work, building on existing labour standards. The Organisation for Economic Co‑operation and Development (OECD) could harmonise reporting frameworks across countries. National regulators can enforce transparency obligations and provide resources for small and medium‑sized enterprises to comply. Worker cooperatives and unions can offer on‑the‑ground insights and advocate for fair conditions. Academic researchers can refine metrics like ALII and evaluate their impact. Open‑source communities might experiment with community‑owned data‑collection projects that share economic rewards with contributors. Multi‑stakeholder governance ensures that solutions are equitable and grounded in diverse perspectives.

Conclusion: Towards Fair and Trusted AI

AI’s promise of autonomy obscures a fundamental reality: machine intelligence is inseparable from human labour. Millions of people across continents — annotators, moderators, testers, auditors — contribute their time, judgement and emotional resilience so that algorithms can function. Yet these contributions remain largely unseen, undervalued and underpaid. Empirical studies reveal that one‑third of crowdworkers’ labour is unpaid[2], that workers in emerging economies sometimes earn barely US\$1.50 per hour[4] and that most data workers lack basic benefits[7]. Such conditions are at odds with the ethical aspirations of AI developers and the expectations of consumers and regulators.

This article has outlined a roadmap to transform ghost work from an invisible cost into a recognised component of AI development. By adopting the AI Labour Intensity Index, companies can quantify and monitor human inputs. Through labour audits and labour cards, they can disclose their practices and invite scrutiny. By embedding labour considerations into regulation, ESG frameworks and corporate governance, they can align ethical obligations with strategic imperatives. And by collaborating with workers, regulators and civil society, they can build systems that are not only smart but also just.

Ultimately, recognising the human engine behind AI is not a burden but an opportunity. Fair and transparent labour practices will improve data quality, reduce operational risks and foster trust among users and investors. As the AI economy matures, the most successful organisations will be those that treat human labour not as a disposable input but as an asset worthy of respect and investment. The future of AI depends on making its invisible workforce visible.


Frequently Asked Questions

1. What is “ghost work”?
Ghost work refers to the hidden human labour behind AI systems. These tasks include labelling images, transcribing audio, moderating user content, and testing model outputs. They are performed by real people—often via digital platforms or subcontractors—whose contributions are essential for AI but rarely acknowledged.

2. How large is the ghost‑work workforce?
Over the past decade, the number of platforms facilitating micro‑task and crowd work has grown from 142 in 2010 to more than 777 by 2020, attracting tens of millions of workers worldwide (an estimated 78 million by 2023). These workers are spread across regions such as South and Southeast Asia, Africa, Latin America, the United States and Europe. Many juggle multiple gigs to piece together a living wage.

3. What kinds of tasks do ghost workers perform?
Tasks span several layers of the AI supply chain: sourcing and collecting data, annotating and labelling it, moderating and flagging harmful content, testing model outputs, providing preference feedback for fine‑tuning, and conducting quality assurance. Each layer demands different skills, from basic data entry to cultural expertise and emotional resilience.

4. Why do ghost workers often earn so little?
Low earnings stem from piece‑rate pay models and significant unpaid overhead—time spent searching for tasks, reading lengthy guidelines, and waiting for work. Workers also face high rejection rates and short contracts, leaving them with little bargaining power and no benefits.

5. What is the AI Labour Intensity Index (ALII)?
The ALII is a proposed metric that quantifies the human labour embedded in an AI model. It measures four factors: total annotated hours, geographic distribution of work, the ratio of wages to local living standards, and the percentage of unpaid overhead. Companies can use it to benchmark their projects and identify where labour practices need improvement.

6. Why should businesses and investors care about ghost work?
Ignoring ghost work can lead to poor data quality, high turnover, reputational damage, and regulatory scrutiny. Underpaid, stressed workers may produce inconsistent labelling or leave abruptly, disrupting development schedules. Investors increasingly view fair labour practices as part of environmental, social and governance (ESG) risk management.

7. How can companies and regulators address ghost work issues?
Firms can begin by auditing their human labour supply chains, publishing “labour cards” to disclose hours worked and wage levels, and ensuring fair‑pay ratios. Boards should review labour metrics alongside safety and privacy considerations. Regulators can integrate labour disclosures into AI risk assessments and due‑diligence laws, while independent bodies could develop transparency indices to benchmark firms. Collaboration across industry, government, and civil society is essential for lasting change.


Resources:

[1] Digital labour platforms can advance social justice by focussing on worker welfare | International Labour Organization

https://www.ilo.org/resource/statement/digital-labour-platforms-can-advance-social-justice-focussing-worker

[2] [2110.00169] Quantifying the Invisible Labor in Crowd Work

https://arxiv.org/abs/2110.00169

[3] How Many Amazon Mechanical Turk Workers Are There in 2019?

https://www.cloudresearch.com/resources/blog/how-many-amazon-mturk-workers-are-there/

[4] [6] [8] OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | TIME

https://time.com/6247678/openai-chatgpt-kenya-workers/

[5] [7] [9] Ghost Workers in the AI Machine: U.S. Data Workers Speak Out About Big Tech's Exploitation - TechEquity Collaborative

https://techequity.us/2025/09/30/ghost-workers-in-the-machine/

[10] EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act

https://artificialintelligenceact.eu/

[11] Allocating accountability in AI supply chains | Ada Lovelace Institute

https://www.adalovelaceinstitute.org/resource/ai-supply-chains/

Kostakis Bouzoukas

London, UK