
How to make AI more trustworthy

58 experts from a range of fields have written a report proposing ways to improve AI development


Artificial Intelligence (AI) pervades our online lives, from the adverts we are shown on social media to Google Maps identifying traffic hot spots from our mobile phones. Much of the AI we encounter, often without realising it, is yet to be regulated by codes of conduct. Consequently, the world has seen situations where the public’s trust has been abused, such as the well-known case of Cambridge Analytica using AI and Facebook user data for targeted political advertising. There have also been concerns about disinformation, criminal risk assessment, and social harms linked with facial recognition, such as loss of privacy.

To address these concerns, 58 experts from a wide range of communities, including AI, policy, and systems, have written a report suggesting steps that AI developers can take to ensure that “AI development is conducted in a trustworthy fashion”.

“Artificial intelligence has the potential to transform society in ways both beneficial and harmful. Beneficial applications are more likely to be realized, and risks more likely to be avoided, if AI developers earn rather than assume the trust of society and of one another,” the report states.

Ruth Fong, Engineering Science DPhil student and co-author, explains: “This report surveys mechanisms for increasing our ability to trust the development and deployment of artificial intelligence. In other engineering disciplines, such mechanisms are already widespread. For example, consider the role played by building codes in civil engineering.”

“When a structural fault is found in a bridge, there are clear ways to understand what went wrong, how to fix the problem, and who is responsible for the failure. But if a self-driving car is involved in an accident, we currently lack the [technical and policy] tools to answer the same questions.”

The report outlines mechanisms that will allow users, regulators, academics and AI developers to address questions they might face in the development process. For example, it asks:

  • Can I (as a user) verify the claims made about the level of privacy protection guaranteed by a new AI system I’d like to use for machine translation of sensitive documents?
  • Can I (as a regulator) trace the steps that led to an accident caused by an autonomous vehicle? Against what standards should an autonomous vehicle company’s safety claims be compared?
  • Can I (as an academic) conduct impartial research on the impacts associated with large-scale AI systems when I lack the computing resources of industry?
  • Can I (as an AI developer) verify that my competitors in a given area of AI development will follow best practices rather than cut corners to gain an advantage?

Professor Noa Zilberman, co-author and academic at the Department of Engineering Science, says, “Today, more than ever before, people and governments need to be able to trust AI systems. The recommended mechanisms will enable AI developers to make verifiable claims, to which they can be held accountable.”

The report lists ten recommendations for institutions and for software and hardware developers to help verify claims about AI development. These are presented as incremental steps towards a better future for AI.

Fong adds, “Researchers, industry and policy makers are profoundly aware of the impact of artificial intelligence (AI), in terms of its benefits but also its risks. This report includes a summary of research by leading international research groups, including work done at Oxford and in this department, that address the safe and responsible development of novel AI technology.”

Engineering Science DPhil students Emma Bluemke and Logan Graham also co-authored the report, alongside researchers from the University of Oxford’s Future of Humanity Institute, OpenAI, Google Brain, the University of Cambridge, Intel and many other institutions.

The full report is available to read at towardtrustworthyai.com.