
Oxford researchers help shape new UK strategy for AI resilience

The UK’s AI Safety Institute (AISI) has published a new national strategy, Strengthening AI Resilience, outlining steps to build long-term resilience into the UK’s AI ecosystem. The strategy highlights priorities for ensuring AI technologies remain safe, secure, and beneficial for society and includes key contributions from researchers at the University of Oxford, including several from the Department of Engineering Science.

Dr Adel Bibi (left) and Professor Nick Hawes (right) are part of Oxford’s efforts to make AI systems more resilient, transparent and trustworthy — work that’s helping shape the future of AI and robotics at the University and beyond.

Alongside the strategy, the AISI has announced 20 projects that have been awarded seed grants of up to £200,000 to carry out independent research focused on safeguarding the societal systems and critical infrastructure into which AI is being deployed.

Among the projects supported through the AISI's initial funding programme is a grant led by Dr Adel Bibi, Senior Researcher in Machine Learning at the Department of Engineering Science. His project, The Safety of Operating Systems AI Agents: Formulations, Evaluations, and Certification, aims to establish rigorous methods for evaluating and certifying the safety of AI agents operating in complex, dynamic environments. Dr Bibi is working in collaboration with Professor Philip Torr and Dr Adam Mahdi of the Oxford Internet Institute. He says of the project:

"Agentic AI systems are rapidly spreading, with many companies already using them at scale. These agents can perform tasks like sending emails, scheduling meetings, making banking transactions, and handling complex actions such as booking travel, managing enterprise workflows, or running customer support.
Our project focuses on assessing the safety of these systems, creating new evaluation methods, identifying risks and attack surfaces, and working towards their certification and defence. This Systemic AI Safety grant will be a major step forward in advancing this important line of research."

Another Oxford-led project explores how AI systems can help humans make good decisions in high-stakes scenarios. Professor Nick Hawes, Professor of Artificial Intelligence and Robotics and Director of the Oxford Robotics Institute, and Professor Ruth Chang (Faculty of Law) are co-leading AI and Hard Choices: The Parity Model, which investigates how AI can engage with complex decisions in a way that is philosophically robust and practically aligned with human values. Professor Hawes said of the project:

"An increasing number of us are engaging with AI systems to help make life-changing decisions, but these systems are not designed to reflect, or engage actively with, the nuances of the questions, we there are no guarantees they will help us make good decisions aligned with our own values. Our project is taking the first steps to address this, by building decision-support AI that can understand hard choices and work actively to structure decisions in ways that represent human priorities."

The full AISI strategy reflects growing national and international efforts to ensure that AI systems are not only powerful but also trustworthy and aligned with the public interest.

Read the full strategy on the AISI website