AI Governance Series: Part 2

The Black Box Problem: Why “Explainable AI” Is Usually Just More Theatre

Consider an illustrative example: a high-street bank rejects thousands of loan applications using an automated credit model. When applicants exercise their GDPR Article 15 rights and ask for explanations, the bank provides a three-page technical document outlining the model’s architecture: neural‑network layers, activation functions, training methodology. That may satisfy an engineer, but it does not satisfy the law. GDPR requires meaningful, intelligible, and decision-specific information about the logic used and its consequences for the data subject; a model schematic alone is practically useless for someone seeking to know why they were refused or how to improve. Regulators are increasingly intolerant of this kind of transparency theatre – witness recent major fines for transparency and lawful-basis failures (LinkedIn: €310m; OpenAI: €15m).

The Legal Contradiction at the Heart of Explainability

The OECD AI Principles recommend that organisations provide information enabling those adversely affected to challenge decisions. The UK’s Ethics, Transparency and Accountability Framework for Automated Decision-Making offers practical steps for monitoring performance and documenting key decisions. Meanwhile, GDPR Article 22 grants individuals the right not to be subject to solely automated decisions that produce legal or similarly significant effects, with limited exceptions (where the individual has given explicit consent, where the decision is necessary for a contract, or where EU or Member State law authorises it with appropriate safeguards).

Where the frameworks fracture

Legal scholars debate whether GDPR’s Recital 71 creates a “right to explanation” for automated decisions. Articles 13–15 require “meaningful information about the logic involved,” but “meaningful” remains undefined. The UK’s context-based approach leaves much of that interpretation to sectoral regulators, creating inconsistency across industries. The EU AI Act mandates transparency for high-risk systems yet permits trade-secret protections for proprietary algorithms, creating a loophole large enough for most organisations to avoid genuine disclosure.

Organisations operating across jurisdictions face irreconcilable obligations: explain enough to satisfy regulators without revealing enough to lose competitive advantage. Most resolve this by producing explanations that satisfy neither goal – generic statements that provide legal cover while offering individuals nothing actionable.

An illustrative financial services example

A mortgage applicant receives: “Your application was assessed using 150+ factors including credit history, income stability, and regional economic indicators. The model determined your risk profile exceeded our threshold.” This tells the applicant nothing: it does not identify which factors were decisive in their case, whether the underlying data was accurate, or what a meaningful remedial step would look like. That is the heart of the transparency problem: disclosures that are technically factual but not intelligible or contestable by the affected person.

What genuine transparency requires (not more theatre)

By “theatre” I mean symbolic compliance: disclosures and procedures designed to signal diligence without meaningfully constraining risk or empowering affected individuals. Avoid the theatre by adopting these three concrete practices:

Decision logs that survive scrutiny. Record each automated decision with inputs, model version, score/confidence measures, and override history. Ensure logs are secure, auditable, and searchable so they can support regulatory reviews and subject access requests (Article 15).
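
As a minimal sketch of what such a record might contain – the field names and storage choices here are illustrative, not a prescribed schema – a Python structure could look like this:

    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone
    from typing import Any
    import json
    import uuid

    @dataclass
    class DecisionRecord:
        """One automated decision, captured for audit and subject access requests."""
        model_version: str                 # exact model build that produced the decision
        inputs: dict[str, Any]             # feature values as seen by the model
        score: float                       # raw model output
        threshold: float                   # decision threshold in force at the time
        outcome: str                       # e.g. "declined" / "approved"
        overrides: list[dict[str, Any]] = field(default_factory=list)  # human interventions, if any
        decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
        timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

        def to_json(self) -> str:
            return json.dumps(asdict(self))

    # Example: record a declined application so it can be retrieved later
    record = DecisionRecord(
        model_version="credit-risk-2.3.1",
        inputs={"verified_income": 28_000, "loan_amount": 250_000},
        score=0.81,
        threshold=0.75,
        outcome="declined",
    )
    print(record.to_json())  # in practice, write to an append-only, access-controlled store

The point of capturing the model version and threshold alongside the inputs is that the decision can be reproduced exactly during a regulatory review or an Article 15 access request, even after the model has been retrained.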

Layered explanations for different audiences. One size fits none. Provide the following (a code sketch of the first layer appears after the list):

  • a plain-language summary for individuals (e.g., “Your application was declined primarily because your verified income was below the acceptable threshold for the requested loan amount”);
  • a regulator-facing dossier showing compliance evidence;
  • a technical model card for data scientists with performance metrics and subgroup analyses.
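
To make the first layer concrete, here is a minimal sketch of turning per-feature contributions into a plain-language reason. It assumes a simple additive model whose contributions can be read off directly; the feature names, coefficients, and wording are all hypothetical, and a production system would use a proper attribution method (such as SHAP) rather than raw coefficients:

    # Sketch: a decision-specific, plain-language explanation from an additive model.
    def top_factors(coefficients: dict[str, float],
                    applicant: dict[str, float],
                    n: int = 3) -> list[tuple[str, float]]:
        """Return the n features pushing this decision hardest toward rejection."""
        contributions = {name: coefficients[name] * applicant.get(name, 0.0)
                         for name in coefficients}
        return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)[:n]

    PLAIN_LANGUAGE = {  # illustrative mapping from feature names to readable reasons
        "debt_to_income": "your debt is high relative to your verified income",
        "missed_payments_12m": "you missed payments in the last 12 months",
        "loan_to_value": "the loan is large relative to the property value",
    }

    def explain(coefficients: dict[str, float], applicant: dict[str, float]) -> str:
        reasons = [PLAIN_LANGUAGE.get(name, name)
                   for name, _ in top_factors(coefficients, applicant)]
        return "Your application was declined primarily because " + "; ".join(reasons) + "."

    coeffs = {"debt_to_income": 2.0, "missed_payments_12m": 1.5, "loan_to_value": 1.0}
    print(explain(coeffs, {"debt_to_income": 0.9, "missed_payments_12m": 2, "loan_to_value": 0.7}))

Note what makes this contestable: the explanation names the decisive factors for this applicant, so they can check whether the underlying data was accurate and see what a remedial step would look like.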

Algorithmic impact assessments before deployment. Document the system’s purpose, data sources, known limitations, and potential harms. Consult affected communities, not just internal stakeholders. Publish assessments unless genuine commercial sensitivity applies, and be ready to justify that claim to regulators.
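
As an illustration only – the headings below loosely mirror common impact-assessment templates rather than any mandated format, and every value is hypothetical – a minimal machine-readable skeleton might look like:

    # Illustrative skeleton for an algorithmic impact assessment record.
    impact_assessment = {
        "system": "mortgage pre-screening model",
        "purpose": "rank applications for manual underwriting",
        "data_sources": ["credit bureau files", "application form", "regional economic indicators"],
        "known_limitations": [
            "thin-file applicants score poorly regardless of actual risk",
            "regional indicators lag the economy by roughly a quarter",
        ],
        "potential_harms": ["disparate impact on young applicants", "financial exclusion"],
        "consultation": {"affected_groups": ["debt-advice charities"], "date": "2025-01-15"},
        "publication": {"published": True, "redactions": [], "justification": None},
    }

Keeping the assessment in a structured, version-controlled form makes it harder for it to become the kind of document no one reads: it can be diffed, reviewed, and checked against the deployed system.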

The privacy paradox: why data minimisation matters here too

Even transparent systems create privacy risks when they consume vast datasets. The challenge is not just unauthorised access; it is the creeping normalisation of surveillance. UK and EU GDPR establish lawful bases for processing, but applying “legitimate interests” to large-scale training data remains contested. Guidance and codes of practice (including the EU’s General-Purpose AI Code of Practice and ICO guidance) help, but they do not eliminate the fundamental choice organisations make when they collect more data than strictly necessary.

Then there is the problem of regulatory arbitrage: organisations may seek the least restrictive jurisdiction in which to operate while serving users globally. Individuals therefore have little practical recourse when their data is collected from public sources and reused for model training without meaningful consent.

Three practical controls that move beyond theatre

  1. Data minimisation by design. Collect what you need, not what you can. Use synthetic data or federated learning where possible to reduce personal data exposure. If your model requires demographic data for fairness testing but not for predictions, segregate and protect that data accordingly.
  2. Strong encryption and access controls. Basic security hygiene (encryption at rest and in transit, multi-factor authentication (MFA), role-based access control, regular vulnerability scanning) prevents many of the breaches that lead to high-impact enforcement actions – the ICO’s significant fines for security failures are a reminder that governance is more than paperwork.
  3. Lawful-basis documentation up front. Establish and document your lawful basis before processing begins, and make sure it withstands scrutiny under GDPR Article 6. If relying on consent for training data, make it granular and revocable. If relying on legitimate interests, complete and retain a robust balancing test (a sketch of such a record follows this list).
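
For instance, a legitimate-interests record might be structured as below. The three-part test (purpose, necessity, balancing) follows ICO guidance, but the field names and contents are hypothetical:

    # Hypothetical structure for documenting a lawful basis before processing begins.
    lawful_basis_record = {
        "processing_activity": "training credit-risk model on applicant history",
        "lawful_basis": "legitimate_interests",  # GDPR Article 6(1)(f)
        "purpose_test": "accurate risk pricing reduces defaults and the cost of credit",
        "necessity_test": "aggregated repayment history is required; names and addresses are not",
        "balancing_test": {
            "individual_impact": "affects access to credit; decisions remain contestable",
            "safeguards": ["pseudonymisation", "24-month retention limit", "opt-out honoured"],
            "outcome": "individuals' interests not overridden; basis approved",
        },
        "approved_by": "DPO",
        "date": "2025-02-01",
    }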

The pattern repeats: frameworks without enforcement

Transparency and privacy failures follow the same trajectory as bias: organisations mistake documentation for accountability. They produce impact assessments that no one reads, explanations no one understands, and privacy policies no one can meaningfully consent to. This is governance theatre – performance for regulators rather than protection for people.

In Part 3, we’ll examine the accountability crisis at the heart of AI governance: who answers when systems fail? We’ll explore why “human oversight” has become the most dangerous lie in AI deployment, and what separates genuine accountability from the theatre we’ve exposed in Parts 1 and 2. Because the real question isn’t whether your AI can explain itself, it’s whether anyone with power to intervene actually understands what it’s doing.

 
