What is the problem with the current AI ethics guidelines?
The problem I just introduced has also been addressed in academic research on AI. Aline Franzke (2022) conducted a systematic qualitative analysis of AI ethics guidelines and found that most of them do not define ethics at all, leaving their ethical starting point vague for readers. As a result, it is unclear what the guidelines actually consider the right and wrong thing to do.
Jobin et al. (2019) conducted a literature review of the most common principles in AI ethics guidelines and showed a lack of agreement among the guidelines they examined. Firstly, there is debate about which principles are the most important. Secondly, there is debate about what exactly the principles mean and how they should be put into practice.
The abstractness of AI ethics principles becomes visible when trying to translate them into practice. Most guidelines describe principles that should be upheld, but they often lack concrete practices for doing so, which can lead to misinterpretation and uncertainty. For example, the Ethics Guidelines for Trustworthy AI from the High-Level Expert Group on AI mention transparency as a key requirement for trustworthy AI. They then name technical and non-technical methods that could be used to meet this requirement. But these methods remain abstract: the guidelines give no concrete advice on how exactly transparency should be achieved or how these methods should be applied.
Another reason these guidelines are difficult to apply in practice is that educational institutions may already have values of their own, and these values do not always overlap with the principles presented in the guidelines. The guidelines offer little guidance on how such institutional values should be weighed or implemented alongside their principles. This further shows the need for a framework that bridges the gap between abstract guidelines and practical application.
Recognizing the need for support in implementing responsible AI, there is a demand for concrete guidance when procuring or developing AI services. This is what I am working on in my master's thesis, which aims to answer the research question: "How can organizations like SURF help responsibly apply AI within the higher education sector?"
My thesis will examine the responsible application of artificial intelligence in the context of Dutch higher education. To do this, I will map the landscape of AI ethics guidelines and look at the problems SURF is already running into when implementing responsible AI. For example, which AI services does SURF already offer to its members, and how does SURF recommend they be used? SURF's role as an organization facilitating IT infrastructure for higher education institutions provides valuable insight into practical strategies for fostering responsible AI practices within educational settings.
Check out https://www.surf.nl/responsible-ai for more information on Responsible AI at SURF, and listen to the SURF Sounds podcast episode on Responsible AI in education and research: https://open.spotify.com/episode/3EofvGdhOeNrnyBd1iPUpc?si=0548edfdb7504224&nd=1
AlgorithmWatch. (2020). AI Ethics Guidelines Global Inventory by AlgorithmWatch. AI Ethics Guidelines Global Inventory. https://inventory.algorithmwatch.org/
Franzke, A. S. (2022). An exploratory qualitative analysis of AI ethics guidelines. Journal of Information, Communication and Ethics in Society, 20(4), 401–423. https://doi.org/10.1108/jices-12-2020-0125
High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. In Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2