How to Deploy Generative AI: CNIL Provides Initial Clarifications

18 July 2024

Are you looking to deploy a generative artificial intelligence system within your organization but are unsure about the applicable framework? The CNIL offers initial guidance for a responsible and data-protection-compliant deployment.

What Is Generative AI?

"Generative" artificial intelligence refers to systems capable of creating content (such as text, computer code, images, music, audio, videos, etc.). When such systems can perform a wide range of tasks , they can be classified as general-purpose AI systems. An example of this is systems incorporating large language models (LLMs).

Their use generally aims to increase the creativity and productivity of their users by allowing them to generate new content, but also to analyze or modify existing content (e.g., producing summaries, corrections, or machine translations).

However, due to their probabilistic nature, such systems are likely to produce inaccurate results that might still appear plausible.

Additionally, developing these systems requires training on large volumes of data, which often include information about individuals, i.e. personal data. The same applies to the data provided when these systems are used.

Therefore, several precautions should be taken to respect individuals' rights over their data.

How Can Such Systems Be Deployed?

Many stakeholders are asking the CNIL about how to deploy generative AI systems, particularly concerning the measures and governance needed to comply with the applicable rules, especially regarding the protection of personal data.

The publication of these questions and answers aims to guide organizations planning to deploy these systems by offering a responsible and secure approach.

In summary, the CNIL recommends:

  • Starting from a Concrete Need: Avoid deploying a generative AI system without a specific purpose; instead, ensure it addresses uses that have already been identified.
     
  • Framing Uses: Define a list of authorized and prohibited uses based on the associated risks (e.g., not providing personal data to the system, or not entrusting it with decision-making); see the sketch after this list.
     
  • Acknowledging the Limitations of These Systems: Be aware of the system's limitations, particularly the risks it may pose to the interests and rights of individuals.
     
  • Choosing a Robust System and a Secure Deployment Mode: For example, favor local, secure, and specialized (fine-tuned) systems. Otherwise, if using a third-party provider, determine to what extent it may reuse the data provided to the AI system, and adapt usage accordingly.
     
  • Training and Raising Awareness: Educate end users about both prohibited uses and the risks involved in authorized uses.
     
  • Implementing Appropriate Governance: Ensure compliance with the GDPR and these recommendations, in particular by involving all stakeholders from the outset (data protection officer, information systems officer, CISO, business managers, etc.).

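As a purely illustrative complement to the "framing uses" recommendation above, the sketch below shows one possible technical guardrail: redacting common personal-data patterns from a prompt before it is sent to a generative AI system. This is a minimal example, not part of the CNIL's guidance: the patterns, the function name redact_personal_data, and the sample text are all hypothetical, and simple pattern matching is by no means sufficient on its own to guarantee GDPR compliance.

```python
import re

# Hypothetical, illustrative patterns for two common personal-data formats.
# Real detection of personal data (names, addresses, context-dependent
# identifiers) is far harder and cannot rely on regular expressions alone.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d .-]{7,}\d"),
}

def redact_personal_data(prompt: str) -> str:
    """Replace each match with a labeled placeholder before the prompt
    leaves the organization."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = ("Summarize the complaint sent by jean.dupont@example.com "
           "(tel. +33 6 12 34 56 78).")
    print(redact_personal_data(raw))
    # Summarize the complaint sent by [EMAIL REDACTED] (tel. [PHONE REDACTED]).
```

In practice, such a filter would only be one layer among the organizational measures (authorized-use policies, training, and governance) that the CNIL recommends.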

How to Ensure the Compliance of a Particular Generative AI System?

These initial responses pertain only to the deployment or use of generative AI systems.

Designing, fine-tuning, or improving these models or systems presents complex compliance challenges, as these processes typically require vast amounts of data from various sources (e.g., the Internet, licensed third-party sources, or user interactions).

In this regard, the CNIL has published its first recommendations on the development of AI systems. Recently, it has also submitted new recommendations for public consultation.

In line with its AI Action Plan, the CNIL plans to issue additional recommendations on generative AI systems in the near future.