Preliminary insights into responsible AI practices in research

In a rapidly evolving world, artificial intelligence is emerging as a transformative force across research domains. While its potential is undeniable, the challenges it presents cannot be overlooked. In a short survey, we aimed to gauge the current state of the responsible application of AI in research. Preliminary findings indicate that promising steps are being taken, but that significant gaps remain in institutional guidance and support, as well as in the practical implementation of responsible AI practices.

Authors: Matthijs Moed, John Walker, and Duuk Baten

The challenge of responsible AI

For a few years now, we have seen how machine learning is changing existing research practices, replacing traditional statistics and numerical analysis with deep learning and neural networks [1]. AI has also enabled new research practices and impacted fields such as weather simulation, the life sciences, and astronomy [2]. However, with the increasing potential of AI technologies come new challenges. As the European Union puts it, ‘trustworthy AI’ needs to be lawful, ethical, and robust [3]. So how do we guarantee compliance with laws and regulations? How do we deal with the impact of possibly biased models once they are applied? How reliable are our models, and can we accurately assess their biases and inaccuracies? How do the ‘black box’ properties of AI models affect the scientific need for explainability and reproducibility?

Taking the temperature

To gain better insight into the needs and challenges faced by researchers, we launched a short survey on responsible AI in research. These insights can help us formulate and prioritise activities. To structure the survey, we used two existing frameworks: the European Commission's guidelines for trustworthy AI and the AI ethics maturity framework by the Erasmus School of Philosophy.

Note that our findings are based on a relatively small sample size (n = 21) and that our approach to data gathering and analysis was not systematic. Nevertheless, our aim is to offer a 'thermometer' of sorts: to get a sense of where our members currently stand and to identify possible needs. Respondents are a mix of researchers, research supporters, and managers, drawn from more than a dozen different institutions, including universities, universities of applied sciences, and university medical centres.

EU Ethics guidelines for trustworthy AI

In 2019 the European Union’s High-Level Expert Group on Artificial Intelligence published its Ethics Guidelines for Trustworthy Artificial Intelligence [3]. According to these guidelines, trustworthy AI should be lawful, ethical, and robust.

They set out seven key requirements that AI systems should meet in order to be deemed trustworthy:

  1. Human agency and oversight: AI systems should empower human beings, but also have proper oversight mechanisms.
  2. Technical robustness and safety: AI systems should be resilient and secure, as well as accurate, reliable and reproducible.
  3. Privacy and data governance: AI systems should fully respect privacy and data protection, with adequate data governance mechanisms in place.
  4. Transparency: the data, systems and AI business models should be transparent, with traceability mechanisms in place.
  5. Diversity, non-discrimination and fairness: unfair bias must be avoided, and diversity and accessibility fostered.
  6. Societal and environmental well-being: AI systems should benefit all human beings, including future generations, and should be sustainable and environmentally friendly.
  7. Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The requirements are accompanied by a specific assessment list to guide developers and deployers of AI in implementing the requirements in practice [4].

Findings

The first finding of our ‘thermometer’ is that most respondents indicate they have a good understanding of what responsible AI (RAI) means in the context of their work and that they already use RAI methods and approaches. These approaches primarily concern privacy and security compliance, the implementation of data management practices, and the use of fairness and explainability toolkits. Correspondingly, respondents identify privacy, security, bias, and transparency as the major risks.
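
As an illustration of what such a fairness check can look like in practice, the sketch below uses the open-source Fairlearn toolkit to compare a classifier's behaviour across groups. It is a minimal, hypothetical example: the toy predictions and the 'group' attribute are made up, and the same idea applies to other fairness toolkits.

    # Minimal, hypothetical sketch of a fairness check with Fairlearn.
    import numpy as np
    from fairlearn.metrics import MetricFrame, demographic_parity_difference
    from sklearn.metrics import accuracy_score

    # Toy outputs of a binary classifier, plus a sensitive attribute (made up).
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
    group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    # Accuracy broken down per group: large gaps can point to biased behaviour.
    frame = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                        sensitive_features=group)
    print(frame.by_group)

    # Demographic parity difference: 0 means both groups receive positive
    # predictions at the same rate; larger values indicate disparity.
    print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))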

There is broad recognition of the relevance of all key requirements for trustworthy AI as set out by the European Commission. In particular, respondents recognise the relevance of privacy and data governance, transparency, technical robustness, and human oversight. This suggests that our respondents generally see responsible AI as relevant to their work.

Figure: how respondents judged the relevance of the seven key requirements for their work (results described in the text).

A third of respondents indicate it is unclear to them what their organisation expects of them when it comes to implementing RAI in practice. Only one respondent had received training related to responsible AI, which was mostly focused on data privacy and research ethics. Lastly, about a third of respondents noted that their institution provides guiding policies or frameworks for responsible AI practices. Respondents also regularly mentioned the role of ethics committees in their research practices (if the role of ethics committees in computer science research interests you, consider subscribing to the ethics committee network of IPN).

These results suggest that a more general approach to responsible AI, beyond compliance and legal checks, is not yet widespread. This is echoed by the self-reported maturity levels.

AI ethics maturity model

In 2022 researchers from the Erasmus School of Philosophy and School of Management published a model for assessing the maturity of AI ethics in organisations [5]. The authors recognise the need for organisations to go from the ‘what’ of AI ethics to the ‘how’ of governance and operationalisation.

They propose a holistic approach along multiple dimensions:

  1. Organisational governance and internal oversight, which concerns the strategy and leadership responsibility around ethical data practices.
  2. Skills and knowledge, which highlights the steps required to create a culture where ethical data practices are embedded by identifying the knowledge sharing, training, and learning required within an organization.
  3. Data management risk processes, which seeks to identify key business processes that underpin ethical collection, use and sharing of data, to identify and assess risks of harm.
  4. Funding and procurement, which focuses on investing in ethical data practices and developing requirements for procurement.
  5. Stakeholder and staff engagement, which highlights the engagement with internal and external stakeholders.
  6. Legal standing and compliance, which addresses compliance with relevant laws, regulations and social norms.

The level of maturity reached by an organisation is assessed with the following scale:

  1. Initial: the baseline level
  2. Repeatable: refined and repeatable in individual teams and projects
  3. Defined: processes are standardised though not widely adopted
  4. Managed: widely adopted and monitored
  5. Optimising: processes are continuously optimised and refined

By assessing their maturity along the six dimensions, organisations can identify gaps and formulate a strategy to advance their AI ethics procedures.

Maturity

Overall, the reported maturity level varies between ‘repeatable’ and ‘defined’, but with significant differences among the dimensions of awareness & culture, policy, governance, communication & training, development processes, and tooling. A significant group responded ‘I don’t know’ to the questions about maturity, which suggests the actual results might be lower.

Figure: a spider diagram of the results of the survey.

There is significant awareness of responsible AI practices at the individual and, to some extent, the organisational level. In general, organisational policy has been defined with standardised processes and mostly concerns compliance with privacy and security regulations. Consequently, governance is implemented through legally mandated checks and is refined and made repeatable by individual teams and projects.

Respondents indicate that communication and training are generally lacking. There are small-scale initiatives for training and communication, but a third of respondents state there is minimal to no communication at all in their organisation.

As for the more technical implementation of responsible AI practices, there is some use of tooling and a clear demand for tools and services to investigate bias and to improve explainability and reproducibility.
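
On the reproducibility side, some of the low-hanging fruit requires no specialised services at all. The sketch below shows one common practice: fixing random seeds and recording the software environment alongside the results. It is illustrative only; the libraries and file name are placeholders to adapt to your own pipeline.

    # Illustrative sketch: fix random seeds and record the environment of a run.
    import json
    import platform
    import random
    import sys

    import numpy as np

    SEED = 42
    random.seed(SEED)      # Python's built-in RNG
    np.random.seed(SEED)   # NumPy's global RNG (or use np.random.default_rng(SEED))

    # Store run metadata next to the experiment output so results can be traced back.
    metadata = {
        "python": sys.version,
        "platform": platform.platform(),
        "numpy": np.__version__,
        "seed": SEED,
    }
    with open("run_metadata.json", "w") as f:   # placeholder file name
        json.dump(metadata, f, indent=2)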

Get in touch

Lastly, respondents indicate three main areas in which SURF could play a role: tools and services, (national) policy frameworks or best practices, and community activities and knowledge sharing.

We recognise that SURF can play a role in these activities and developments. However, for this we need your help.

  • Share the tools and methods you have had good experiences with, either here on SURF communities or directly with us.
  • If you work on these questions at the institutional level, we would love to hear more about your experiences and needs, especially if you have successfully implemented policy frameworks that go beyond privacy and security compliance.
  • And to keep up to date, be sure to follow the Artificial Intelligence community: log in using your institutional identity and click ‘Volgen’ (‘Follow’).

Whether you are interested in knowledge sharing about responsible AI, work on the practical implementation of responsible AI practices, or are looking to develop policy instruments, we would like to collaborate. Please get in touch with duuk.baten@surf.nl.

Sources
