19/05/2023 - As thought leaders in privacy, data protection and the regulation of Artificial Intelligence (AI), Considerati is launching a blog series centred on the intersection of AI with other disciplines, such as ethics and privacy. We are launching this series ahead of the conference Artificial Intelligence, Impact & Update 2023, which Considerati is organising in collaboration with Outvie on 16 June. On that day, prof. mr. dr. Bart Schermer, founder of Considerati, will speak on the status of and developments in AI regulation. In addition, Responsible Tech consultants Marianne Schoenmakers and Jord Goudsmit will host a workshop on how to measure and monitor Responsible AI governance within your organisation.

If you would like to attend, you can order your tickets here. We encourage you to use our code 70940_Considerati in the 'Voucher' section at checkout. Using our code automatically enters you into a giveaway in which you can win one of three copies of the book 'De democratie crasht' by Kees Verhoeven.


This blog series aims to demystify and explore some of the many realms of AI, especially now that the EU AI Act is moving into the trilogue phase. The blogs will delve into the nuances of AI, its implications for privacy, human rights and ethics, and the legal questions that surround its use.

What better topic to kickstart the series than the much-hyped ChatGPT, which has caused considerable commotion in the (popular) media, sparked a new rat race in Silicon Valley and prompted amendments to the proposed AI Act specifically targeted at it. Many organisations are now asking us: what should we do with this?

In this blog, we discuss the trend of 'Large Language Models' (LLMs), such as ChatGPT, the opportunities offered by such models and the necessary ethical considerations for organisations using them. 

The developments around ChatGPT and other LLMs are part of a broader trend known as 'general purpose AI'. General purpose AI models are trained on large, unlabelled data sets and can be deployed for many different applications. In other words, these underlying models are used by a chain of downstream developers in a wide variety of applications, hence the term 'general purpose AI'.

Whereas ChatGPT and GPT-4 are used for text generation, there are also applications for text-based image generation (DALL-E, Midjourney), as well as text-based generation of video, audio, 3D models and code. In practice, such models can be used for automated yet personalised customer service, as programming assistants, for transcribing and summarising meetings, for personal therapy bots, for supporting clinical decision-making and for job creation, but also for generating mis- and disinformation, and much more.

Ethical considerations 

Although the potential use cases of general purpose AI are legion, the characteristics of these models raise concerns. These include their enormous size, a lack of transparency (both in how they are trained and in how their outputs are produced), and the possibility of unintended and potentially unwanted applications being developed.

LLMs have been shown to discriminate (e.g. responding with more understanding to a man than to a woman returning from maternity leave), to pose risks to personal and sensitive information, and to generate misinformation.

General purpose AI systems are trained by collecting, analysing and processing information from the internet. This raises several (privacy) issues: does generating text based on someone else's work constitute plagiarism, for example? And was there consent, and a legal basis, for processing this data? Owing to the lack of clarity on the latter, ChatGPT was temporarily banned by the Italian data protection authority.

Responsibly using ChatGPT and other general purpose AI  

General purpose AI has many potential applications and can be used by organisations in various ways. When using ChatGPT or other general purpose AI, make sure this use is backed by (ethical) reflection and deliberation.

For example, always involve a subject matter expert. This expert can weigh ChatGPT's outputs for misinformation and potential bias, and must assess whether an output risks plagiarism. Consider carefully which sensitive and/or personal data you are willing to feed into the AI system: it is unclear what happens to this data and whether, and in what way, it will be used for training. And lastly, always be transparent towards those affected about how you are using ChatGPT or other general purpose AI.


As we wrap up this brief analysis of ChatGPT in our first blog, we hope to leave you with the thought that while general purpose AI, including LLMs such as ChatGPT, offers a variety of opportunities, the responsibilities associated with its use cannot be ignored. Proper (ethical) reflection on the use of general purpose AI is therefore necessary.

In our next blog, we will be venturing into the world of AI and Human Rights. The blog will be published on Wednesday, 31st May 2023. Stay tuned! 

Jord Goudsmit, Consultant Responsible Tech

Do you want to know more?

Do you have questions about ChatGPT or AI in general? Are you looking for practical advice to ensure responsible use of AI within your organisation? Contact Considerati for specialised advice and tailor-made support.