Nine new artificial intelligence (AI) research hubs that will deliver next-generation innovations and technologies have been announced today.
The hubs will provide focused investment that will enable AI to evolve and tackle complex problems across applications from healthcare treatments to power-efficient electronics.
The Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI), has invested £80 million in the new hubs that will propel the UK to the forefront of advanced AI research.
From combating cyber threats, to supporting better health treatments and delivering faster development of electronic devices and microchips, the research aims to transform the way we develop and use AI.
Increasing our understanding of AI systems
Three of the hubs will address mathematical and computational research foundational to AI, playing a pivotal role in increasing our understanding of new, efficient AI systems.
Six of the hubs will explore AI for science, engineering and real-world data, which will provide the tools needed to accelerate future AI innovations and advance its application in key areas such as healthcare.
A further ten scoping studies have been funded by the Arts and Humanities Research Council (AHRC), also part of UKRI. The studies will help to define responsible AI across education, policing and the creative industries.
Today’s announcement comes on the same day that the government has published its AI regulation white paper consultation response, which carves out the UK’s own approach to regulation.
Next wave of brilliant AI innovation
Minister for AI, Viscount Camrose, said:
The investment we’re pouring into these new projects is only possible as a result of our pro-innovation approach to AI. The AI Regulation White Paper consultation response we’ve set out today will see us forging ahead with that plan, driving forward the next wave of brilliant AI innovations.
These hubs will nurture new, cutting-edge breakthroughs, from healthcare treatments and more power-efficient electronics to machine learning and chemical discovery.
New projects being delivered by BRAID will also help to define responsible AI in key sectors such as education, policing, and the creative industries, ensuring public trust in the technology as we continue to harness its transformative capabilities.
Accelerating the adoption of trusted and responsible AI
Professor Dame Ottoline Leyser, Chief Executive of UKRI, said:
UKRI is supporting researchers and innovators to develop the next generation of AI technologies that will transform our economy and society. The investments announced today will help to deliver the capability the UK needs to realise the opportunities of this transformative technology.
Through our £1bn portfolio of investments in AI research and innovation, we are supporting the development of new technologies, boosting skills, and accelerating the adoption of trusted and responsible AI.
The hubs, led by eight universities but working across the whole of the UK, underline the UK’s commitment to maintaining a leadership position in AI research, innovation and ethical deployment.
Delivering revolutionary AI innovations
Professor Charlotte Deane, Executive Chair of EPSRC, said:
Artificial intelligence is already transforming our world. EPSRC supports world-leading research to unlock its potential and ensure it is developed and used in an ethical and responsible way. Long-term research funding has led to revolutionary advancements that have made AI a powerful tool for many applications.
These hubs will deliver revolutionary AI innovations and tools in sectors from healthcare to energy, smart cities and the environment. They will achieve this by solving key challenges and improving our understanding of AI, helping to drive the increased productivity and economic growth promised by this technology.
Today's investment follows the announcement in autumn 2023 of 12 UKRI AI Centres for Doctoral Training (CDTs). The UKRI AI CDTs will ensure that the UK has the skills needed to:
- seize the potential of the AI era
- nurture the British tech talent that will push the AI revolution forwards
Responsible AI across education, policing and the creative industries
Also announced today are 10 six-month scoping projects that will define what responsible AI is across sectors such as education, policing and the creative industries.
The projects are supported with £2 million AHRC funding through the Bridging Responsible AI Divides (BRAID) programme.
They will produce early-stage research and recommendations to inform future work in this area. They illustrate how the UK is at the forefront of defining responsible AI and exploring how it can be embedded across key sectors.
In addition to the scoping projects, AHRC is confirming a further £7.6 million to fund a second phase of the BRAID programme, extending activities into 2027 and 2028.
The next phase will include a new cohort of large-scale demonstrator projects, further rounds of BRAID fellowships, and new professional AI skills provisions, co-developed with industry and other partners.
Also announced today is a £9 million investment delivered by EPSRC through the International Science Partnerships Fund.
This new investment will bring together researchers and innovators in bilateral research partnerships with the US focused on developing safer, responsible, and trustworthy AI as well as AI for scientific uses.
The research will examine new methodologies for responsible AI development and use.
Developing a common understanding of technology development between nations will strengthen international governance of AI and help shape research inputs to domestic policymakers and regulators.
Providing lasting contributions
Professor Christopher Smith, Executive Chair of AHRC and UKRI International Champion, said:
The impact of AI can already be felt in many areas of our lives. It will transform our jobs and livelihoods, and affect areas as diverse as education, policing and the creative industries, and much more besides. UKRI’s research will be at the heart of understanding this new world.
The research which AHRC announced today will provide lasting contributions to the definition and practice of responsible AI, informing the practice and tools that are crucial to ensure this transformative technology provides benefits for all of society.
The new bilateral EPSRC partnership programme between the UK and US, also announced today, highlights the vital role of international collaboration in all areas of research and innovation, not least AI. It ensures we share expertise and learn from each other to develop ways to harness the extraordinary potential of AI safely and fairly for citizens around the world.
These projects are vital and timely interventions from across the research ecosystem to support responsible, safe and beneficial uses of the transformative power of AI.
EPSRC AI hubs
University of Bristol
Information theory for distributed AI (INFORMED-AI)
Led by Professor Sidharth Jaggi
The INFORMED-AI hub is developing theoretical foundations and algorithmic approaches for intelligent distributed systems. These systems aim to be effective, resilient and trustworthy in their operations.
AI for collective intelligence (AI4CI)
Led by Professor Seth Bullock
The AI4CI hub will develop new machine learning and smart agent technologies, fuelled by real-time data streams, to achieve collective intelligence for individuals and national agencies.
The University of Edinburgh
CHAI-EPSRC AI hub for causality in healthcare AI with real data
Led by Professor Sotirios Tsaftaris
Edinburgh’s CHAI hub will improve healthcare using AI by predicting outcomes and personalising treatments. This hub will develop novel methods to unravel complex causal relationships within healthcare data.
AI for productive research and Innovation in eLectronics (APRIL) hub
Led by Professor Themis Prodromakis
This hub will develop AI tools to transform the time it takes to develop a range of new products, from fundamental materials for electronic devices to complex microchip designs and system architectures.
The use of these AI tools will lead to faster, cheaper, greener and more power-efficient electronics.
ProbAI: a hub for the mathematical and computational foundations of probabilistic AI
Led by Professor Paul Fearnhead
The ProbAI hub in Lancaster is exploring ways to embed probability models, probabilistic reasoning and measures of uncertainty within AI methods.
University of Liverpool and Imperial College London
AI for Chemistry: aIchemy
Led by Professor Andrew Cooper and Professor Kim Jelfs (co-directors)
The joint Liverpool-Imperial hub will study foundational AI methods, experimental and computational chemistry, and autonomous, closed-loop robotics for chemical discovery.
University College London
AI hub in generative models
Led by Professor David Barber
Generative AI is a key technology that will continue to affect our lives. The hub will develop tools that industry, science and government can use to build responsible generative models to benefit the economy and society.
University of Oxford
Mathematical foundations of intelligence: an ‘Erlangen Programme’ for AI
Led by Professor Michael Bronstein
Focusing on using mathematical principles, this hub will use geometry, topology and probability to enhance AI methods.
National edge AI hub for real data: edge intelligence for cyber-disturbances and data quality
Led by Professor Rajiv Ranjan and supported by expertise from:
- Durham University
- University of Hull
- Imperial College London
- University of Southampton
- Swansea University
- Cardiff University
- University of Warwick
- Lancaster University
- University of the West of Scotland
- University of St Andrews
- Queen's University Belfast
The hub studies how cyber disturbances affect the effectiveness and resilience of edge AI, with a particular focus on cyber threats and on making edge AI systems more secure and robust.
Edge AI research is the study of how to apply AI techniques near the source of the data instead of sending it to the cloud or a central server.
AHRC BRAID programme project summaries
The University of Edinburgh
Towards embedding responsible AI in the school system: co-creation with young people
Led by Professor Judy Robertson
This project will investigate what generative AI could look like in secondary education. It involves working with young people as stakeholders whose right to be consulted and engaged with on this issue is a key tenet of responsible AI.
Shared post-human imagination: human-AI collaboration in media creation
Led by Dr Szilvia Rusvev
The project will investigate responsible AI in the context of media creation, focusing on collaboration, creativity and representation. This includes concerns about copyright, job security and other ethical and legal challenges.
The University of Sheffield
Museum visitor experience and the responsible use of AI to communicate colonial collections
Led by Dr Joanna Tidy
This project will work with the Royal Armouries to investigate the use of AI to enhance museum visitor experience, specifically in relation to biases in AI, which stem from the colonial history of museum collections.
Ethical review to support responsible AI in policing: a preliminary study of West Midlands Police’s specialist data ethics review committee
Led by Dr Marion Oswald
This project focuses on how ethical scrutiny can improve the responsibility and legitimacy of AI deployed by the police. It involves working with the West Midlands Police and Crime Commissioner and West Midlands Police data ethics committee.
University of Nottingham
Creating a dynamic archive of responsible ecosystems in the context of creative AI
Led by Professor Lydia Farina
This project seeks to develop insight into what might constitute responsible AI in the context of creative AI. It involves examining the ethical and moral tensions arising between the concepts of creativity, authenticity and responsibility.
University of Glasgow
iREAL: inclusive requirements elicitation for AI in libraries to support respectful management of indigenous knowledges
Led by Dr Paul Gooding
iREAL will develop a model for responsible AI systems development in libraries seeking to include knowledge from indigenous communities, specifically Aboriginal and Torres Strait Islander communities in Australia.
University of Warwick
AI in the street: scoping everyday observatories for public engagement with connected and automated urban environments
Led by Professor Noortje Marres
This project will explore divergences between principles of responsible AI and the messy reality of AI as encountered in the street, in the form of automated vehicles and surveillance infrastructure. The aim is to ground understandings of AI in lived experiences.
Queen Mary University of London
‘CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures’
Led by Professor David Leslie
This project engages with creative workers to co-develop impact assessments that address fundamental rights and working conditions in the context of generative AI. It ensures that workers have a voice in the development of these technologies and corresponding labour policy.
The University of Sheffield
FRAIM: framing responsible AI implementation and management
Led by Dr Denis Newman-Griffiths
This project will work with four partner organisations across public, private, and third sectors to build shared learning, values and principles for responsible AI. This will enable best practice development, help organise information and support decision making.
The Alan Turing Institute
Trustworthy and ethical assurance of digital twins (TEA-DT)
Led by Dr Christopher Burr
This project will conduct scoping research and engagement to develop the trustworthy and ethical assurance platform into an open-source and community-driven tool. This helps developers of digital twins or AI systems to address ethical challenges and establish trust with stakeholders.
Top image: Credit: koto_feja, E+ via Getty Images