Privacy at Scale: Managing Data Rights Across Partner Ecosystems

Executive Insight
The modern boardroom sits at the intersection of ambition and accountability. In an era where value creation depends on data sharing across complex ecosystems, leaders face a simple paradox: the same networks that deliver exponential returns also magnify privacy, regulatory and reputational risk. The economics of trust make this paradox more than a compliance issue. Evidence shows that trusted companies grow 41 % faster and retain 51 % more customers[1], while a single breach can cost an average of $4.44 million[2] and erode customer loyalty. In 2025, digital trust is no longer a marketing slogan; it is a measurable source of competitive advantage.
This paper argues that building digital trust at scale requires more than privacy policies or reactive compliance. It demands an operating system of trust that hard‑codes purpose limitation and data minimisation into products, processes and partnerships, and measures them through board‑level service level objectives (SLOs). The operating system must be grounded in global frameworks—from the EU General Data Protection Regulation (GDPR) to the NIST Privacy Framework, ISO/IEC 27701, BS 10012, IEEE 7002 and W3C privacy principles—and must translate abstract principles into actionable metrics and controls. The result is an integrated approach that aligns shareholder value with societal expectations and regulatory requirements. By the end of this paper, boards will have a blueprint for embedding privacy at scale, monitoring progress, holding vendors accountable and capturing the trust dividend.
Foundations: Purpose Limitation and Data Minimisation
Purpose limitation: stopping purpose creep before it starts
At the heart of privacy law lies the purpose limitation principle. GDPR Article 5(1)(b) stipulates that personal data must be collected for specified, explicit and legitimate purposes and not further processed in ways incompatible with those purposes[3]. The UK’s Information Commissioner’s Office (ICO) interprets this to mean organisations must be clear from the outset about why they are collecting data, document those purposes, inform individuals, and ensure any new uses are compatible or based on consent or legal obligation[4]. Purpose limitation is often seen as a legal box to tick, but at scale it becomes a strategic discipline. Without it, data can drift into new hands and new uses, undermining trust and violating laws across jurisdictions. Boards should view purpose limitation as a design constraint and require explicit purpose declarations for every dataset and model. Purpose gates in code and contracts can make purpose statements enforceable rather than aspirational.
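As a concrete illustration of a purpose gate, the sketch below (in Python, with a hypothetical PURPOSE_REGISTRY and PurposeViolation exception) blocks any data access whose stated purpose is not among the purposes declared for that dataset. It is a minimal pattern under assumed names, not a prescribed implementation.

```python
# Minimal illustration of a "purpose gate": every data access must name a
# purpose, and that purpose must match what was declared for the dataset.
# The registry contents and names here are hypothetical.

PURPOSE_REGISTRY = {
    "crm.customers": {"order_fulfilment", "customer_support"},
    "web.clickstream": {"product_analytics"},
}

class PurposeViolation(Exception):
    """Raised when a requested use is not covered by a declared purpose."""

def purpose_gate(dataset: str, requested_purpose: str) -> None:
    declared = PURPOSE_REGISTRY.get(dataset, set())
    if requested_purpose not in declared:
        raise PurposeViolation(
            f"{dataset} is not declared for '{requested_purpose}'; "
            f"declared purposes: {sorted(declared)}"
        )

# A marketing job trying to read clickstream data for ad targeting would be
# blocked until a DPIA and a new declaration extend the registry.
purpose_gate("web.clickstream", "product_analytics")   # passes
# purpose_gate("web.clickstream", "ad_targeting")      # raises PurposeViolation
```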
Data minimisation: calibrating collection to the purpose
Complementing purpose limitation is the data minimisation principle. GDPR Article 5(1)(c) requires that personal data be “adequate, relevant and limited to what is necessary” for the stated purpose[5]. The ICO clarifies that organisations should collect only the personal data they need and delete anything unnecessary[6]. In practice, data minimisation means designing event schemas with only required fields, using privacy‑enhancing technologies (PETs) like synthetic data or differential privacy, and building automated retention and deletion routines. It is not about collecting no data; it is about calibrating collection to the purpose and using technology to derive value from minimal datasets. Boards should insist on evidence that teams have critically justified each attribute and on metrics that reveal when collections creep beyond necessity.
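A minimal sketch of what schema-level minimisation might look like in practice: events are validated against a whitelist of justified fields before ingestion. The ALLOWED_FIELDS map, event type and field names are hypothetical.

```python
# Illustrative schema check for data minimisation: events may carry only the
# fields justified for the declared purpose; anything extra is rejected before
# ingestion, prompting a schema review rather than silent collection creep.

ALLOWED_FIELDS = {
    "checkout_completed": {"order_id", "amount", "currency", "timestamp"},
}

def minimise_event(event_type: str, payload: dict) -> dict:
    allowed = ALLOWED_FIELDS[event_type]
    extra = set(payload) - allowed
    if extra:
        # Surfacing extras (rather than dropping them quietly) feeds the
        # Data Minimisation Index and forces a justification discussion.
        raise ValueError(f"{event_type} carries unjustified fields: {sorted(extra)}")
    return payload

minimise_event(
    "checkout_completed",
    {"order_id": "A-1", "amount": 42.0, "currency": "EUR",
     "timestamp": "2025-06-01T12:00:00Z"},
)
```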
Vendor accountability: turning partners into extensions of your controls
Modern enterprises depend on hundreds of vendors—from cloud providers and analytics platforms to marketing agencies and AI model builders. Third‑party risk is now board‑level risk. A single weak link can expose sensitive information, undermine compliance and damage reputation. Organisations must move beyond contractual boilerplate to tier vendors by the sensitivity and scale of data they handle, collect evidence of their controls, and embed audit rights and termination clauses in contracts. The Shared Assessments Standardized Information Gathering (SIG) Questionnaire offers a comprehensive set of questions used to evaluate third‑party or vendor risk, created with input from a diverse membership and updated annually[7]. For cloud providers, the Cloud Security Alliance Consensus Assessments Initiative Questionnaire (CAIQ) provides a structured set of yes/no questions that allow customers to evaluate a provider’s security, compliance and privacy posture[8]. Vendor assurance also requires independent attestations: SOC 2 reports assess controls relevant to security, availability, processing integrity, confidentiality and privacy; ISO/IEC 27001 and its privacy extension ISO/IEC 27701 verify that a vendor’s information security and privacy management systems meet international standards[9]. Boards should treat vendor assurance as a continuous discipline rather than a one‑time exercise. Evidence packs should be refreshed annually, and contracts should require vendors to document dataset provenance and accountability.
Metrics That Matter: Board‑Level Privacy SLOs
Principles are important, but without metrics they remain toothless. To embed privacy in decision‑making, boards need quantifiable indicators that tie into risk appetite statements and executive incentives. The following ten Privacy SLOs translate abstract principles into measurable outcomes:
1. Purpose‑Fit Ratio (PFR) – the percentage of datasets and models where the declared purpose matches actual use, validated through logs and policy checks. A high PFR (target ≥ 98 %) signals that purpose creep is under control.
2. Data Minimisation Index (DMI) – one minus the share of collected attributes that are not demonstrably necessary for the stated purpose, with a target ≥ 0.85. A higher DMI indicates lean data collection and disciplined schema design.
3. Retention Conformance (RC) – the percentage of records automatically deleted per retention schedule. Target ≥ 99.5 %. RC exposes whether data is being kept longer than necessary.
4. Vendor Assurance Coverage (VAC) – the proportion of Tier‑1 and Tier‑2 vendors with current SIG/CAIQ responses, SOC 2 or ISO 27001/27701 certifications and data protection impact assessments. Targets: Tier 1 = 100 %, Tier 2 ≥ 95 %.
5. Data Subject Request SLA (DSR‑SLA) – the percentage of data subject rights requests (access, deletion, correction) closed within contractual service level agreements (e.g., 30 days). Target ≥ 99 %.
6. PETs Adoption Rate (PAR) – the percentage of new models or datasets employing PETs (e.g., differential privacy, federated learning, homomorphic encryption, secure multi‑party computation, synthetic data) when handling sensitive data. Target ≥ 60 %.
7. Re‑Identification Residual Risk (RRR) – the percentage of anonymised datasets that meet defined thresholds for k‑anonymity and differential privacy parameters (e.g., k ≥ 10). Target ≥ 95 %.
8. Third‑Country Transfer Governance (TCTG) – the percentage of cross‑border transfers governed by valid mechanisms (Data Privacy Framework, Standard Contractual Clauses, Transfer Impact Assessments). Target 100 %.
9. Incident Transparency Lag (ITL) – median hours from discovery of a breach or incident to notifying regulators or affected users. Target ≤ 72 hours. Transparency reduces fines and builds trust.
10. Model Purpose Drift (MPD) – number of production models repurposed without a new DPIA or board sign‑off divided by total model changes. Target: zero. MPD ensures models aren’t quietly repurposed beyond their original scope.
These SLOs are not academic. They allow boards to put numbers against trust, monitor progress quarterly and connect privacy performance to compensation. They also enable benchmarking across industries and drive meaningful conversations with regulators and investors.
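To make the definitions concrete, the sketch below shows how two of the SLOs (PFR and DMI) might be computed from an inventory export. The record fields used here ("declared_purposes", "observed_uses", "attributes", "necessary_attributes") are assumptions for illustration, not a mandated schema.

```python
# Hedged sketch: deriving PFR and DMI from a dataset/model inventory export.

def purpose_fit_ratio(inventory: list[dict]) -> float:
    """PFR: share of items whose observed uses fall within their declared purposes."""
    in_fit = sum(
        1 for item in inventory
        if set(item["observed_uses"]) <= set(item["declared_purposes"])
    )
    return in_fit / len(inventory)

def data_minimisation_index(inventory: list[dict]) -> float:
    """DMI: one minus the share of collected attributes not demonstrably necessary."""
    collected = sum(len(item["attributes"]) for item in inventory)
    unnecessary = sum(
        len(set(item["attributes"]) - set(item["necessary_attributes"]))
        for item in inventory
    )
    return 1 - unnecessary / collected

inventory = [
    {"declared_purposes": ["fraud_detection"], "observed_uses": ["fraud_detection"],
     "attributes": ["txn_id", "amount", "device_id"],
     "necessary_attributes": ["txn_id", "amount"]},
]
print(purpose_fit_ratio(inventory), round(data_minimisation_index(inventory), 2))
```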
Standards and Policy Backbone: Translating Law into Control
Building a privacy operating system requires aligning with global frameworks and anticipating regulatory changes. The following standards form the backbone of privacy at scale:
GDPR and the EU AI Act: The regulatory canvas
The GDPR is the gold standard of data protection law. It articulates principles of lawful processing, including purpose limitation and data minimisation[3][5]; grants data subjects rights; and mandates accountability measures such as Data Protection Impact Assessments (DPIAs) and breach notification within 72 hours. It has extraterritorial reach, meaning that organisations operating globally must align with its requirements even when data flows across borders.
Building on the GDPR, the EU Artificial Intelligence Act introduces obligations specifically for AI systems. Its timeline is staged: certain prohibitions and AI literacy obligations take effect from February 2025[10], rules for general‑purpose AI models begin August 2025[11] and obligations for high‑risk AI systems phase in between 2026 and 2027[12][13]. Boards must treat these dates as critical milestones. Contracts with AI vendors should include assurances that their models have been tested and assessed in accordance with the Act, and that transparency obligations (e.g., labelling AI‑generated content) are met. Large‑scale providers will be subject to further requirements by 2030[14], underscoring the need for forward‑looking readiness.
OECD privacy principles and the NIST Privacy Framework: A risk management lens
The OECD Privacy Guidelines remain the conceptual bedrock of international privacy norms. They emphasise collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation and accountability. As AI and digital ecosystems proliferate, OECD guidance has begun to link these principles to AI governance, highlighting the need for purpose limitation and minimisation in automated decision‑making.
The NIST Privacy Framework 1.1 complements the GDPR by providing a voluntary, risk‑based tool to identify, assess and manage privacy risk. It outlines high‑level privacy risk management outcomes[15] and is designed to be used alongside the NIST Cybersecurity Framework. NIST’s 2025 update emphasises alignment between privacy and security, introduces a section on AI and privacy risk management, and preserves flexibility so organisations can adapt the framework to their context[16]. Boards should require cross‑functional teams to create NIST privacy profiles that map to the SLOs, thereby transforming guidelines into measurable outcomes.
ISO/IEC 27701 and BS 10012: Institutionalising privacy management
ISO/IEC 27701 extends ISO/IEC 27001 by specifying requirements for a Privacy Information Management System (PIMS)[17]. It gives organisations a systematic way to map data flows, identify privacy risks and implement controls. Certification demonstrates that an organisation’s privacy management practices align with international standards and helps build trust with partners and customers[18].
BS 10012 is a British standard that also sets out a PIMS framework. It addresses awareness, data sharing, disposal, retention, risk assessment and training[19] and emphasises that implementing a PIMS improves reputation and confidence in handling personal information[20]. Together, ISO 27701 and BS 10012 provide a management‑system approach that can be integrated with ISO 27001 or other quality systems. Boards should view them not merely as certification checkboxes but as a foundation for continuous improvement and audit readiness.
IEEE 7002 and W3C Privacy Principles: Engineering privacy into products
The IEEE 7002–2022 Standard specifies how to manage privacy issues for systems or software that collect personal data. It covers corporate data collection policies, quality assurance and the use of privacy impact assessments to identify and measure privacy controls[21]. IEEE 7002 provides practical guidance for engineers, bridging the gap between policy and code.
The W3C Privacy Principles, published in 2025, offer definitions and principles for building the web as a trustworthy platform. They highlight that individuals benefit when technology and policy work hand in hand[22]. These principles underpin emerging web APIs like Privacy‑Preserving Attribution (PPA), which generate aggregated advertising metrics without individual tracking. Boards should ensure product teams follow these engineering principles, particularly when building web applications that rely on third‑party scripts or cross‑origin data.
Mapping standards to controls: A holistic view
Standards and frameworks are only useful if they map to controls and metrics. Boards should maintain a standards map that shows how GDPR principles align with NIST functions, ISO 27701 clauses, IEEE 7002 activities and SOC 2 trust service criteria. For example:
· Purpose limitation → NIST Govern function; ISO 27701 clause 5.1.1 (documented purposes); SOC 2 privacy criterion on notice and consent.
· Data minimisation → ISO 27701 clause 5.3.3 (data minimisation); SOC 2 privacy criterion on collection.
· Incident response → NIST Communicate function; ISO 27701 clause 6.13 (breach notification); SOC 2 security and confidentiality criteria.
Mapping standards ensures that every principle has corresponding processes, controls, metrics and assurance mechanisms. It also helps identify overlaps and gaps so that resources are directed efficiently.
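One way to keep such a map auditable is to store it as version-controlled data that controls and SLOs can reference. The sketch below is illustrative; the clause labels simply mirror the examples above and the structure is an assumption, not a required format.

```python
# The standards map as data rather than a slide, so every control and metric
# can point back to the principles and frameworks it satisfies.

STANDARDS_MAP = {
    "purpose_limitation": {
        "gdpr": "Art. 5(1)(b)", "nist_pf": "Govern",
        "iso_27701": "5.1.1 (documented purposes)",
        "soc2": "Privacy - notice and consent", "slo": "PFR",
    },
    "data_minimisation": {
        "gdpr": "Art. 5(1)(c)",
        "iso_27701": "5.3.3 (data minimisation)",
        "soc2": "Privacy - collection", "slo": "DMI",
    },
    "incident_response": {
        "nist_pf": "Communicate",
        "iso_27701": "6.13 (breach notification)",
        "soc2": "Security and confidentiality", "slo": "ITL",
    },
}

def coverage_gaps(required_frameworks: set[str]) -> dict[str, set[str]]:
    """List principles that lack a mapping for any required framework."""
    return {
        principle: required_frameworks - set(mapping)
        for principle, mapping in STANDARDS_MAP.items()
        if required_frameworks - set(mapping)
    }

print(coverage_gaps({"gdpr", "nist_pf", "iso_27701", "soc2"}))
```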
When Privacy Fails and Scales: Lessons from Ecosystems
Purpose creep and vendor drift: How ecosystems erode trust
Data flows seldom respect organisational boundaries. In a typical partner ecosystem, personal data passes through apps, SDKs, analytics platforms, advertising networks and cloud providers. Without stringent purpose limitation and minimisation controls, purpose creep occurs: data collected for one service is quietly repurposed for advertising or behavioural profiling. The ICO warns that organisations must be clear from the outset about why they are collecting data and ensure any new use is compatible or based on consent[4]. Yet in practice, purpose declarations can be vague and enforcement weak. Boards must treat purpose creep as a systemic risk—akin to financial misstatement—and ask for evidence that code and contracts enforce purpose binding.
Vendor drift is equally pernicious. Even vendors that initially meet standards can backslide by adding new sub‑processors, changing data retention periods or failing to maintain their certifications. Because third‑party risk is now board‑level risk, organisations must implement continuous assurance: requiring regular updates of SIG/CAIQ questionnaires, verifying SOC 2/ISO 27701 certificates annually and triggering remediation or exit if vendors’ practices lapse. Vendor drift is not just a privacy issue; it can lead to data breaches, regulatory fines and reputational damage. Boards should monitor the Vendor Assurance Coverage (VAC) metric and hold executives accountable for keeping it at target levels.
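Continuous assurance lends itself to simple automation. The sketch below, with hypothetical field names, flags vendors whose attestations have lapsed or whose questionnaire answers have gone stale between annual reviews.

```python
# Illustrative drift check: surface vendors whose SOC 2/ISO attestations have
# expired or whose SIG/CAIQ answers are older than the agreed refresh window.

from datetime import date

def drifted_vendors(vendors: list[dict], max_questionnaire_age_days: int = 365) -> list[str]:
    today = date.today()
    flagged = []
    for vendor in vendors:
        cert_lapsed = vendor["attestation_expiry"] < today
        answers_stale = (today - vendor["sig_caiq_last_updated"]).days > max_questionnaire_age_days
        if cert_lapsed or answers_stale:
            flagged.append(vendor["name"])
    return flagged

watchlist = drifted_vendors([
    {"name": "analytics-co", "attestation_expiry": date(2025, 3, 31),
     "sig_caiq_last_updated": date(2024, 1, 15)},
])
print(watchlist)  # feeds the VAC metric and triggers remediation or exit
```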
Information‑sharing ecosystems: PETs as a path to collaboration
Collaborative analytics holds enormous promise: banks can pool data to detect fraud, hospitals can improve diagnostics and retailers can share supply‑chain information. Yet sharing raw personal data is often prohibited by law and imprudent from a trust perspective. Privacy‑enhancing technologies (PETs)—such as synthetic data, differential privacy, secure multi‑party computation and federated learning—offer a way forward. The UK FCA’s Synthetic Data Expert Group notes that synthetic data can expand data usage and support data sharing without revealing underlying sensitive information[23] and that it can improve fraud detection while mitigating bias[24]. The group emphasises the importance of balancing information sharing with data protection[25]. The Royal Society describes PETs as “Partnership Enhancing Technologies” because they maximise benefit while reducing harms and enable greater accountability through audit[26]. Boards should view PETs adoption not as an experiment but as a strategic imperative, tracked via the PETs Adoption Rate (PAR) metric and funded in innovation budgets.
Building the Operating System of Trust
Tiering vendors: Focus resources where risk is highest
Not all vendors are created equal. A critical first step is to tier vendors based on the sensitivity and volume of data they process:
1. Tier 1: Vendors that handle sensitive personal data, perform cross‑border transfers, train models on client data or impact large user populations. Examples include customer data platforms, AI model providers and payroll processors.
2. Tier 2: Vendors that process personal data at scale but do not train models. Examples include marketing automation tools and support ticket systems.
3. Tier 3: Vendors that handle no personal data (e.g., facility maintenance or office furniture suppliers).
Tier 1 and Tier 2 vendors should provide complete evidence packs: filled‑out SIG/CAIQ questionnaires, SOC 2 or ISO/IEC 27001/27701 certifications, DPIA summaries, retention schedules, privacy engineering notes, breach history and sub‑processor lists. Tier 3 vendors can provide a basic security attestation confirming that they do not handle personal data. Boards should insist that procurement processes assign a tier at contract inception and require “no artifact, no contract”—without an evidence pack, contracts are not signed.
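A "no artifact, no contract" rule can be enforced as a simple onboarding gate. The artifact labels below are illustrative shorthand for the evidence pack items listed above, not a mandated taxonomy.

```python
# Onboarding gate: contracts are held until the evidence pack for the vendor's
# tier is complete. Tiers 1 and 2 require the full pack; Tier 3 a basic attestation.

REQUIRED_ARTIFACTS = {
    1: {"sig_or_caiq", "soc2_or_iso27701", "dpia_summary", "retention_schedule",
        "privacy_engineering_notes", "breach_history", "subprocessor_list"},
    2: {"sig_or_caiq", "soc2_or_iso27701", "dpia_summary", "retention_schedule",
        "privacy_engineering_notes", "breach_history", "subprocessor_list"},
    3: {"no_personal_data_attestation"},
}

def contract_gate(tier: int, submitted: set[str]) -> list[str]:
    """Return missing artifacts; an empty list means the contract can proceed."""
    return sorted(REQUIRED_ARTIFACTS[tier] - submitted)

missing = contract_gate(1, {"sig_or_caiq", "soc2_or_iso27701", "dpia_summary"})
if missing:
    print("Hold contract signature; missing:", missing)
```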
Evidence and assurance: Building confidence through external validation
The SIG and CAIQ questionnaires cover governance, information protection, IT operations and incident management[27][8]. They are living documents; answers should be updated at least annually and whenever a vendor’s practices change. SOC 2 reports evaluate a service organisation’s controls relevant to security, availability, processing integrity, confidentiality and privacy. They provide independent assurance that a vendor’s controls meet the trust service criteria and should be non‑negotiable for Tier 1 vendors. ISO/IEC 27701 certification demonstrates the existence of a PIMS and alignment with privacy regulations[17][18]. When combined, these tools create a layered assurance stack: questionnaires provide depth, SOC 2 offers external attestation, and ISO 27701 shows the existence of a systematic privacy management approach.
Contracts are another pillar of assurance. Data processing agreements should include: purpose binding and minimisation clauses, audit rights with cure periods, breach notification within 72 hours, sub‑processor pre‑approval, data deletion on termination, and documentation of dataset provenance. Datasheets for datasets—as proposed by Timnit Gebru and colleagues—require creators to document the motivation, composition, collection process and recommended uses of a dataset[28]. Including datasheets in vendor evidence packs promotes transparency and helps identify potential biases[29]. Boards should also require vendors to maintain a model purpose registry that lists each model’s stated purpose and monitors changes. Without these contractual protections, companies risk misalignment between their privacy commitments and their partners’ practices.
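A model purpose registry entry can be as simple as a structured record linking purpose, legal basis, datasheet and DPIA. The field names and values below are assumptions intended to show the shape, not a required format.

```python
# Illustrative registry record: the contractual elements described above,
# captured in one reviewable structure. Changes to "purpose_changes" without a
# new DPIA would count towards the Model Purpose Drift (MPD) metric.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelPurposeEntry:
    model_id: str
    declared_purpose: str
    legal_basis: str
    datasheet_uri: str          # link to the vendor-supplied datasheet
    dpia_reference: str
    last_reviewed: date
    purpose_changes: list[str] = field(default_factory=list)

entry = ModelPurposeEntry(
    model_id="churn-scoring-v3",
    declared_purpose="customer retention outreach",
    legal_basis="legitimate interest (balancing test on file)",
    datasheet_uri="https://example.internal/datasheets/churn-v3",  # hypothetical
    dpia_reference="DPIA-2025-014",
    last_reviewed=date(2025, 6, 1),
)
```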
Implementing the SLO dashboard: Turning metrics into management
Metrics mean little without implementation. Boards should require a privacy dashboard that displays current versus target values for each SLO, fed by automated systems:
· PFR – computed by comparing declared purposes (stored in a model purpose registry) with actual usage logs.
· DMI – derived from comparing collected attributes with a canonical list of necessary attributes defined by engineers and legal counsel.
· VAC – calculated from vendor tiering and evidence pack completeness.
· RRR – measured through regular re‑identification risk tests using k‑anonymity and differential privacy budgets.
· TCTG – derived from the percentage of cross‑border transfers using approved mechanisms.
· ITL – computed from incident management logs.
Boards should review the dashboard quarterly; any metric below threshold should trigger a remediation plan and, if necessary, vendor termination or product redesign. Tying SLO performance to executive compensation ensures accountability.
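A minimal sketch of the quarterly threshold check: each SLO is compared against its target and breaches are flagged for remediation. Values and targets here are placeholders; lower-is-better metrics such as ITL and MPD would need an inverted comparison.

```python
# Dashboard threshold check: flag every SLO below its target for a remediation plan.

SLO_TARGETS = {
    "PFR": 0.98, "DMI": 0.85, "RC": 0.995, "VAC_T1": 1.00, "VAC_T2": 0.95,
    "DSR_SLA": 0.99, "PAR": 0.60, "RRR": 0.95, "TCTG": 1.00,
}

def slo_breaches(current: dict[str, float]) -> dict[str, tuple[float, float]]:
    """Return {slo: (current, target)} for every metric below its target."""
    return {
        name: (value, SLO_TARGETS[name])
        for name, value in current.items()
        if value < SLO_TARGETS[name]
    }

quarter = {"PFR": 0.99, "DMI": 0.82, "RC": 0.997, "VAC_T1": 1.00, "VAC_T2": 0.91,
           "DSR_SLA": 0.995, "PAR": 0.55, "RRR": 0.97, "TCTG": 1.00}
for slo, (value, target) in slo_breaches(quarter).items():
    print(f"{slo}: {value:.2%} vs target {target:.2%} - remediation plan required")
```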
Case‑Style Narratives: Bringing the Framework to Life
Consumer App: Replacing third‑party tracking with privacy‑preserving attribution
A global consumer app with hundreds of millions of users faced a familiar challenge: how to measure advertising effectiveness without tracking individuals across sites. Traditional approaches relied on device IDs and hashed emails, which are increasingly incompatible with privacy regulations and user expectations. Before adopting privacy‑preserving approaches, the company collected around twenty attributes per event and had a Purpose‑Fit Ratio of 75 % and a Data Minimisation Index of 0.4. Complaints were rising.
The company joined a working group developing Privacy‑Preserving Attribution (PPA), a web standard that produces aggregate statistics about how advertising leads to conversions without creating a risk to the privacy of individual web users[30]. PPA collates information from multiple origins, aggregates it using an approved service and adds noise to ensure differential privacy[31]. After deploying PPA, the app removed cross‑site identifiers, reduced collected attributes from 20 to 6, and declared explicit purposes for each dataset. Within six months, PFR increased to 99 %, DMI to 0.9, and data‑protection complaints fell by 35 %. Advertising revenue remained stable because aggregated metrics provided sufficient insight. The board cited alignment with W3C privacy principles[22] and the use of SLOs as key to maintaining trust while driving growth.
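The underlying idea of noisy aggregation can be illustrated in a few lines: report campaign-level conversion counts with calibrated Laplace noise instead of individual events. This is a conceptual sketch of differential privacy under assumed parameters, not the W3C PPA aggregation service itself.

```python
# Conceptual sketch: differentially private conversion counts via Laplace noise.

import random

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Add Laplace(sensitivity/epsilon) noise; one user changes a count by at most 1."""
    scale = sensitivity / epsilon
    # Laplace sample as the difference of two exponential draws with the same scale.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

campaign_conversions = {"spring_launch": 1842, "summer_sale": 963}   # placeholder data
noisy = {campaign: round(dp_count(count), 1) for campaign, count in campaign_conversions.items()}
print(noisy)  # advertisers see useful aggregates; no individual record is exposed
```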
Financial‑Services Consortium: Using PETs to enable fraud‑detection collaboration
A consortium of banks wanted to improve fraud‑detection models by pooling data across institutions. Secrecy laws and competitive concerns made direct data sharing impossible. Before the project, models were trained on limited local datasets, resulting in poor detection rates and high false positives. Data minimisation was poor as each bank collected more attributes than needed.
The consortium adopted federated learning with differential privacy and synthetic data. Each bank trained a local model on its dataset; model updates were aggregated using secure multi‑party computation. Synthetic datasets were generated from the federated model for benchmarking. The FCA’s Synthetic Data Expert Group emphasises that synthetic data can expand data usage and support data sharing without revealing underlying sensitive information[23] and can help fraud detection while mitigating bias[24]. The initiative conducted DPIAs to document risks and demonstrate compliance. After implementation, fraud‑detection accuracy improved by 20 %, the Data Minimisation Index increased from 0.5 to 0.9, and the consortium’s PETs Adoption Rate reached 70 %. Regulators commended the programme for aligning with the Royal Society’s view that PETs maximise benefits and reduce harms[26].
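Conceptually, the pattern looks like the sketch below: each bank contributes only a clipped model update, the aggregator averages them and adds noise. Real deployments layer in secure aggregation and formal privacy accounting; the shapes and parameters here are illustrative assumptions.

```python
# Illustrative federated round: clip each institution's update, average, add noise.

import numpy as np

def federated_round(local_updates: list[np.ndarray],
                    clip_norm: float = 1.0,
                    noise_std: float = 0.1) -> np.ndarray:
    """Average clipped local updates and add Gaussian noise before sharing the result."""
    clipped = [
        u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))  # bound each bank's influence
        for u in local_updates
    ]
    aggregate = np.mean(clipped, axis=0)
    return aggregate + np.random.normal(0.0, noise_std, size=aggregate.shape)

rng = np.random.default_rng(seed=0)
bank_updates = [rng.normal(size=8) for _ in range(5)]   # stand-ins for local model updates
global_update = federated_round(bank_updates)
```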
B2B SaaS Provider: Consolidating vendors and enforcing evidence packs
A B2B SaaS provider with 400 vendors faced regulatory scrutiny after a breach at a small marketing vendor exposed data from 200 000 customers. Before the breach, only 30 % of vendors had provided SOC 2 reports and there was no systematic purpose declaration or retention policy. Vendor Assurance Coverage was 45 %. The incident prompted a board‑level inquiry.
The company re‑tiered vendors and required Tier 1 vendors to submit updated SIG questionnaires and SOC 2 or ISO 27701 certificates[7]. Contracts were amended to include purpose binding, minimisation clauses, audit rights, breach notification and deletion on termination. A model purpose registry was introduced, and vendors were asked to provide datasheets for each dataset[28]. Non‑compliant vendors were given a remediation period, after which they were replaced. After one year, VAC rose to 98 % for Tier 1 and 90 % for Tier 2 vendors, Incident Transparency Lag dropped from 120 hours to 48 hours, and the board secured better cyber‑insurance terms. The company’s experience shows that vendor consolidation, evidence packs and contractual clauses are not administrative overheads; they are competitive differentiators.
Board Playbook: Twelve‑Month Roadmap and Risk Register
Roadmap to embed the operating system of trust
1. Quarter 1 – Inventory and Purpose Registry: Compile a master inventory of datasets, models and vendors. Create a model purpose registry that records declared purposes for each dataset and model. Conduct baseline assessments for PFR and DMI.
2. Quarter 2 – Vendor Waves: Tier vendors and distribute SIG/CAIQ questionnaires. Require Tier 1 vendors to produce current SOC 2 or ISO 27701 attestations. Build a vendor dashboard showing VAC status. Amend contracts to include purpose binding, minimisation, audit rights and breach notification.
3. Quarter 3 – PETs Pilots: Identify high‑impact use cases for PETs (e.g., fraud detection, cross‑marketing). Pilot synthetic data, federated learning or homomorphic encryption. Measure PETs Adoption Rate and adjust risk appetites accordingly.
4. Quarter 4 – Assurance and Reporting: Commission independent audits of SLOs. Update the board risk register. Align programmes with EU AI Act deadlines; ensure high‑risk AI systems have undergone conformity assessments. Publish a digital‑trust report to share progress with investors, regulators and customers.
Risk register: Preparing for the worst while building for the best
Risk | Indicator | Mitigation | Owner
Purpose creep | PFR < 98 % | Implement purpose gates in code and contracts; require change control via DPIA for new uses | Chief Privacy Officer
Excess attributes | DMI < 0.85 | Establish an attribute review board; implement schema caps; perform regular audits | Chief Data Officer
Vendor breach | VAC < target; ITL > 72 h | Tier vendors; require SOC 2/ISO 27701; perform breach drills and tabletop exercises | Chief Information Security Officer
Re‑identification of “anon” data | RRR < 95 % | Employ differential privacy and k‑anonymity; commission external red‑team tests; invest in PETs | Head of Data Science
EU AI Act non‑readiness | Missed milestones | Develop a compliance timeline; update contracts; monitor vendor readiness; run AI literacy programmes | General Counsel
This risk register should be reviewed quarterly. Each risk has a clear indicator, mitigation and owner, tying actions to accountability. Boards should ensure that risk owners have the budget and authority to implement mitigations and that progress is reported transparently.
Critical Voices and Societal Considerations
Privacy programmes risk becoming compliance exercises that ignore the broader societal impacts of data practices. To avoid complacency, boards should engage with critical voices. Shoshana Zuboff characterises surveillance capitalism as a “titanic struggle between capital and each one of us” and an “assault on human autonomy”[32]. Her critique reminds us that privacy is fundamentally about human dignity and not just risk management. Timnit Gebru and colleagues propose datasheets for datasets, urging organisations to document the motivation, composition, collection process and recommended uses of datasets[28]. Such documentation increases transparency and enables stakeholders to understand potential biases[29]. Kate Crawford warns that AI systems often embed historical prejudices and that the classification practices that underlie AI reflect sociopolitical hierarchies[33]. Boards should therefore ensure that privacy initiatives intersect with broader work on fairness, accountability and transparency. Listening to these voices not only mitigates reputational risk but also strengthens the organisation’s social licence to operate.
Conclusion: Capturing the Trust Dividend
Privacy at scale is not optional; it is the foundation of a trustworthy digital economy. Boards that adopt an operating system of trust—anchored in purpose limitation, data minimisation, vendor accountability, PETs adoption and board‑level metrics—will enjoy a trust dividend that translates into faster growth, higher customer loyalty and lower cost of capital[1]. Those that delay will face escalating regulatory fines, rising insurance premiums and erosion of brand value. By weaving together global standards, evidence‑driven SLOs, rigorous vendor assurance and critical perspectives, this paper offers a blueprint for achieving privacy excellence at scale. The path is challenging, but the rewards are significant. In an age where data is the lifeblood of commerce and society, privacy at scale is the only sustainable path to growth.
[1] [2] Digital Trust as the New Competitive Advantage
https://www.breakthroughpursuit.com/digital-trust-as-the-new-competitive-advantage/
[3] [4] Principle (b): Purpose limitation | ICO
[5] [6] Principle (c): Data minimisation | ICO
[7] [27] SIG: Third Party Risk Management Standard | Shared Assessments
https://sharedassessments.org/sig/
[8] Consensus Assessments Initiative Questionnaire (CAIQ) | Cloud Security Alliance
[9] [17] [18] ISO/IEC 27701 - Information security, cybersecurity and privacy protection — Privacy information management systems — Requirements and guidance
https://www.iso.org/standard/85819.html
[10] [11] [12] [13] [14] Implementation Timeline | EU Artificial Intelligence Act
https://artificialintelligenceact.eu/implementation-timeline/
[15] CSWP 40, NIST Privacy Framework 1.1 | CSRC
https://csrc.nist.gov/pubs/cswp/40/nist-privacy-framework-11/ipd
[16] NIST Updates Privacy Framework, Tying It to Recent Cybersecurity Guidelines | NIST
[19] [20] BS 10012 Client Guide | BSI
[21] Standards - IEEE Digital Privacy
https://digitalprivacy.ieee.org/standards/
[22] Privacy Principles
https://www.w3.org/TR/privacy-principles/
[23] [24] [25] Report: Using Synthetic Data in Financial Services
https://www.fca.org.uk/publication/corporate/report-using-synthetic-data-in-financial-services.pdf
[26] Privacy Enhancing Technologies | Royal Society
https://royalsociety.org/news-resources/projects/privacy-enhancing-technologies/
[28] [29] Datasheets for Datasets
https://arxiv.org/pdf/1803.09010
[30] [31] Privacy-Preserving Attribution: Level 1
https://www.w3.org/TR/privacy-preserving-attribution/
[32] Shoshana Zuboff: ‘Surveillance capitalism is an assault on human autonomy’ | Society books | The Guardian
[33] Review: Kate Crawford’s “Atlas of AI”, Chapter 4: Classification