15/06/2023 - The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) has raised concerns about the handling of personal data at organisations using so-called 'General Purpose AI'. The AP has therefore asked OpenAI, the company developing the chatbot ChatGPT, for clarification on how the chatbot handles personal data.

ChatGPT and General Purpose AI 

Earlier in our blog series on AI, we set out the risks and opportunities of using General Purpose AI and highlighted the ethical risks of AI models such as ChatGPT. Examples of these risks include the lack of transparency (both in the training method and the output process) and the possibility that unintended and potentially unwanted applications are developed.

ChatGPT (in full: Chat Generative Pre-trained Transformer) is a so-called 'Large Language Model' (LLM). In short, this means it is an AI tool that generates text-based content. An LLM falls under the broader development of 'General Purpose AI'. General Purpose AI can be trained on large unlabelled datasets for general purposes and can therefore be used for different types of applications. In the upcoming AI Act, General Purpose AI is currently defined as: 'an AI system that can be used in and adapted to a wide range of applications for which it is not intentionally and specifically designed'. ChatGPT also trains itself on the questions people ask in the chatbot by storing and using them. That data may contain (very) personal information.

Request for clarification from the AP

The AP's request for clarification to OpenAI focuses on the personal data ChatGPT collects directly from its users, primarily because the model trains itself with personal data and it is not clear how that data is used: 'The generated content may be inaccurate, outdated, inappropriate, offensive, or objectionable and may take on a life of its own. Whether, and if so how, OpenAI can rectify or delete that data is unclear,' the AP said. The AP further mentioned that it will take several actions regarding ChatGPT, starting with this request for clarification.

ChatGPT more frequently criticised for privacy risks 

ChatGPT's processing of personal data has also come to the attention of other privacy regulators in Europe. On 31 March, the use of ChatGPT in Italy was temporarily blocked by the Italian regulator because of similar privacy issues. There, the chatbot was blocked because:

  • no information was provided to users whose personal data is processed by OpenAI; 
  • no appropriate legal bases were present for the processing of personal data; 
  • inaccurate information was provided; and 
  • verification of ChatGPT users' age was missing.  

Following a visit to the Italian regulator, OpenAI took measures to resolve these privacy issues. On the basis of these measures, the Italian regulator later granted OpenAI permission to offer ChatGPT in Italy once again.

Broader development 

As other privacy regulators in Europe are also concerned about ChatGPT, a ChatGPT task force has been set up within the European Data Protection Board (EDPB) to share information and coordinate actions.

We expect the discussion around the lack of clarity in ChatGPT's processing of personal data to continue internationally, and national regulators, such as the AP, to take further actions like this one.

Do you have questions about using ChatGPT within your company or AI in general? Are you looking for practical advice for your organisation's responsible use of AI? Contact Considerati for our specialised advice and tailored support.  

Want to read more about AI and algorithms? Check out our blog series on our website.

Stefan Boss, Legal Consultant
