AI ethics guidelines landscape

In recent years, responsible artificial intelligence (AI) has received growing attention. The potential of AI is undeniable, and interest in applying it has grown across many industries. However, applying AI comes with risks, and ethical frameworks and guidelines have emerged to regulate and limit those risks. AlgorithmWatch (2020) created an AI ethics guidelines inventory that collects over 160 guidelines. Most were published after 2018, reflecting the increased attention to making AI more responsible.

However, as these AI ethics guidelines multiply, they turn out to be difficult to apply: they tend to be too abstract and generic to be readily actionable. More practical guidelines are needed to help navigate the complexities of AI implementation. My thesis here at SURF investigates what is necessary to create a standards framework that supports the responsible use of AI and offers more practical guidance. This blog post explores the problem of the applicability of these guidelines and how my thesis aims to help bridge the gap between abstract guidelines and practical application in higher education.

What is the problem with the current AI ethics guidelines?

The problem I just introduced has also been addressed in academic research on AI. Aline Franzke (2022) performed a systematic qualitative analysis of AI ethics guidelines and found that most do not define what they mean by ethics, which leaves their ethical starting point vague for readers. As a result, it is unclear what the guidelines consider the right and wrong thing to do.

Jobin et al. (2019) conducted a literature review of the most common principles in AI ethics guidelines and showed the lack of agreement among the guidelines they examined. Firstly, there is debate about which principles are most important. Secondly, there is debate about what exactly the principles mean and how they should be put into practice.

The abstractness of AI ethics principles becomes visible when trying to translate them into practice. Most guidelines describe principles that should be upheld but rarely specify the practices needed to uphold them, which can lead to misinterpretation and uncertainty. For example, the ethics guidelines for trustworthy AI from the High-Level Expert Group on AI name transparency as a key requirement for trustworthy AI. They then list technical and non-technical methods that could be used to meet this requirement. But these methods remain abstract: they offer no concrete advice on how exactly the transparency requirement should be satisfied or how the methods should be applied.

Another reason these guidelines are difficult to apply in practice is that educational institutions may already hold values they want to maintain, and these values do not always overlap with the principles presented in the guidelines. The guidelines then offer little direction on how such institution-specific values should be implemented. This further underlines the need for a framework that bridges the gap between abstract guidelines and practical application.

What now?

Recognizing the need for support in implementing responsible AI, there is a demand for concrete guidance when procuring or developing AI services. This is what I am working on in my master's thesis, which aims to answer the research question: "How can organizations like SURF help responsibly apply AI within the higher education sector?"

My thesis will delve into the responsible application of artificial intelligence in the context of Dutch higher education. To do this, I will examine the landscape of AI ethics guidelines and look at the problems SURF is already running into when implementing responsible AI. For example, which AI services does SURF already offer to its members, and how does SURF recommend they be used? Drawing on SURF's role as an organization facilitating IT infrastructure for higher education institutions, we can gain valuable insights into practical strategies for fostering responsible AI practices within educational settings.

For more information, check out Responsible AI at SURF and the SURF Sounds podcast on responsible AI in education and research.


Sources mentioned

AlgorithmWatch. (2020). AI Ethics Guidelines Global Inventory.

Franzke, A. S. (2022). An exploratory qualitative analysis of AI ethics guidelines. Journal of Information, Communication and Ethics in Society, 20(4), 401–423.

High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. In Shaping Europe’s Digital Future.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
