White Label Consultancy | 18th December 2024

EDPB Publishes Guidance on AI Models and Data Protection

On 17 December 2024, the European Data Protection Board (EDPB) issued detailed guidance to help Supervisory Authorities (SAs) assess GDPR compliance in the development and deployment of Artificial Intelligence (AI) models. The guidance was requested by the Irish Data Protection Authority, which highlighted that many organisations now use AI models, including large language models (LLMs), and that their training, operation, and use raise ‘a number of wide-ranging data protection concerns’ significantly affecting data subjects across the EU/EEA. In response, the EDPB addresses the following key questions:

  • When and how AI models can be considered anonymous,
  • Whether legitimate interest can serve as a legal basis for data processing,
  • What happens if personal data is unlawfully processed during AI model development.

Anonymity of AI Models

Determining whether an AI model is anonymous requires a case-by-case evaluation. SAs are tasked with assessing whether it is highly unlikely, using all means reasonably likely to be used, that individuals can be identified directly or indirectly, whether by extracting personal data from the model or by eliciting it through queries.

A non-exhaustive list of methods is provided to assist SAs, including:

  • Reviewing risk assessments and other documentation from controllers,
  • Testing for vulnerabilities such as membership inference and model inversion attacks,
  • Verifying the implementation of privacy-preserving measures like pseudonymisation.
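To illustrate the second point, the sketch below shows the intuition behind a confidence-threshold membership inference test: a model that is markedly more confident on records it was trained on can leak the fact of their membership. The toy "model" and record values are hypothetical; real audits use the actual system and established attack frameworks such as shadow-model attacks.

```python
def confidence(model_scores, record):
    """Return the model's confidence score for a record (0.0 if unseen)."""
    return model_scores.get(record, 0.0)

def membership_inference(model_scores, record, threshold=0.9):
    """Guess that a record was in the training set if the model is
    unusually confident about it."""
    return confidence(model_scores, record) >= threshold

# Toy 'model' that is over-confident on its training records --
# exactly the behaviour that makes membership inference succeed.
train_scores = {"alice@example.com": 0.98, "bob@example.com": 0.95}

assert membership_inference(train_scores, "alice@example.com")      # training record leaks
assert not membership_inference(train_scores, "carol@example.com")  # unseen record
```

A model whose confidence is indistinguishable between seen and unseen records would pass this particular test, which is one signal (among several) that personal data cannot be extracted from it.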

Legitimate Interest

When controllers rely on legitimate interest, SAs are advised to apply a three-step test:

  • The processing must pursue an interest that is lawful, clearly articulated, and real (not speculative),
  • It must be strictly necessary, with no less intrusive alternatives available,
  • The interests, rights, and freedoms of individuals must not override the controller’s legitimate interest (the balancing test).

Practical measures to mitigate risks include pseudonymisation, enhanced transparency, and opt-out options. These steps help address situations where individuals’ rights might otherwise be at risk.
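As a minimal sketch of the first of those measures, keyed-hash pseudonymisation replaces a direct identifier with a stable token that permits linkage but reveals nothing without the key. The field names and key handling here are hypothetical; in practice the secret key would be stored separately from the data, for example in a key vault.

```python
import hashlib
import hmac

# Hypothetical key -- in a real deployment this would be held in a key
# vault or HSM, separate from the training pipeline.
SECRET_KEY = b"stored-separately-from-the-data"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "purchase": "subscription"}
safe_record = {**record, "email": pseudonymise(record["email"])}

# The token is deterministic (supports linkage across records) but the
# original identifier is no longer present in the data.
assert safe_record["email"] == pseudonymise("alice@example.com")
assert safe_record["email"] != "alice@example.com"
```

Because pseudonymised data remains personal data under the GDPR, this is a mitigating measure for the balancing test, not a route to anonymity on its own.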

Unlawfully Processed Data

The guidance equips SAs with tools to address scenarios in which personal data was unlawfully processed during the development of an AI model:

  • Corrective measures may include deleting the data or retraining the AI model,
  • If the model has since been anonymised, its further operation may fall outside the scope of the GDPR, provided the anonymisation is fully substantiated.

Practical Tools for Supervisory Authorities

The guidance offers a framework for case-by-case assessments, recognising the complexity and rapid evolution of AI technologies. SAs are encouraged to:

  • Review Data Protection Impact Assessments (DPIAs) and relevant technical documentation,
  • Evaluate lifecycle safeguards, including transparency measures and risk mitigation techniques,
  • Test AI models for vulnerabilities and verify compliance measures.

Value for Companies

The guidance gives companies a roadmap for preparing their AI models for GDPR compliance, with steps such as:

  • Conducting DPIAs and compiling the necessary documentation (the guidance specifies what SAs will expect to see),
  • Testing AI models for risks such as re-identification or data leakage,
  • Implementing robust anonymisation measures,
  • Performing Legitimate Interest Assessments (LIAs) to validate legal bases,
  • Planning corrective actions, such as retraining models, if the training data was unlawfully processed.

Although the guidance is addressed to SAs, it can help companies proactively identify and fix issues, such as gaps in transparency or vulnerabilities in their AI models. It can also serve as a practical guide to regulatory expectations, supporting smoother interactions with SAs.

The EDPB also plans to provide additional guidance on specific topics, including web scraping, to further support SAs in handling AI-related data protection challenges.

Read the full guidance here