Institute for Ethical AI

Promoting the ethical development and deployment of AI technology.

A role for validation

Researcher: Kevin Maynard

Oxford Brookes University is building networks of companies in the HR and legal sectors to identify which tests and kitemarks a purchaser can ask AI developers to run on their systems, so that companies understand how a system works. The tests should be able to assess accuracy, explainability and bias. The ambition is to give purchasers a clearer understanding of what they are buying.

Funder: Research England
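To make the idea of such tests concrete, the sketch below shows two checks a validation kitemark might plausibly include: predictive accuracy against ground-truth labels, and a simple group-fairness measure (the demographic parity gap in positive-prediction rates). All function names, data and thresholds here are illustrative assumptions, not part of any published test suite from the project.

```python
# Illustrative sketch of two checks a validation kitemark might include.
# These are assumptions for illustration, not the project's actual tests.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates across groups.

    A gap near 0 means the system shortlists each group at a similar rate.
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical example: a screening model's decisions (1 = shortlist)
# for candidates from two demographic groups, "A" and "B".
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(y_true, y_pred))               # 0.75 (6 of 8 correct)
print(demographic_parity_gap(y_pred, groups))  # 0.0 (both groups at rate 0.5)
```

Explainability is harder to reduce to a single score, which is part of why the project consults sector networks on what purchasers actually need to see.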
