Embedding ethics in artificial intelligence research

The AI community has been challenged to embed ethics in AI research by considering the societal and ethical impacts of its work at every stage.

The ‘Looking before we Leap’ project is funded by an Arts and Humanities Research Council (AHRC) research programme. Ahead of the publication of its findings, the project has already received interest from:

  • DeepMind
  • Google
  • OpenAI
  • Facebook Research.

About the project

Artificial intelligence (AI) technologies already have a significant impact on our day-to-day lives. While many of these impacts are positive, AI also has unseen and often unsettling aspects, including data sharing without consent and risks to privacy, which can negatively affect us all.

Looking before we Leap explores the kinds of issues research ethics committees face when reviewing AI research. The project proposes steps that could be taken to address these issues in the future.

The project team put together a series of recommendations for AI research institutions by:

  • analysing existing research
  • running a series of interviews and workshops with members of corporate and academic AI ethics committees.

These recommendations centre around a number of steps that can be taken to improve ethics review processes, including:

  • multi-stage ethics reviews
  • engaging in broader societal impact evaluations
  • prioritising training of staff and researchers in ethical risks.

Image: nine schematic representations of differently shaped neural networks, each with a human hand making a different gesture behind it.

Credit: Alexa Steinbrück, Better Images of AI, Explainable AI, CC-BY 4.0

Impacts of the project

By analysing the issues that research ethics committees face, the project has provided guidance and advice to corporate and academic labs on introducing ethics reviews at the product development and implementation stages.

To help people put their recommendations into practice, the team will provide:

  • training and guidance
  • six interactive case studies that explore common ethical challenges in AI and data science research.

These will be released with their report in June 2022.

The project’s findings have already reached an international audience, with the Chinese Academy of Sciences expressing interest. The UK Research Integrity Office and the Centre for Data Ethics and Innovation are currently exploring how policymakers and public funding bodies can incentivise engagement with broader societal impact questions.

The recommendations from Looking before we Leap have also been incorporated into AHRC’s new Responsible AI Programme. See the programme director funding opportunity for more information.

Looking before we Leap was delivered by a research team from:

  • Ada Lovelace Institute
  • The Alan Turing Institute
  • University of Exeter: Institute for Data Science and AI.

Top image credit: gorodenkoff, iStock, Getty Images Plus via Getty Images