Founder and Co-Director
Nigel is Associate Dean for Research and Knowledge Exchange and Professor of AI and Robotics at Oxford Brookes University. His research interests include machine learning, embodied conversational agents, social robotics and autonomous moral machines. He has over 30 years of experience as a lecturer and a researcher in AI. Nigel graduated from Lancaster University with a BSc (Hons) in Computing and Philosophy in 1982 and was awarded his PhD at Oxford Brookes University (CNAA) in explainable AI in 1991.
Kevin has developed expertise in risk, fairness and interpretability, and applies this knowledge to the development of reasonable and balanced regulation of the AI and social media industries. His background is in pharmaceutical research and in bringing therapeutics and medical devices into clinical use, developing needle-based devices for major companies such as AstraZeneca and Celltech. Many of the principles of medical product regulation are directly applicable to AI, as both situations can be modelled as managing black boxes, i.e. systems whose inner workings are difficult to understand. Kevin applies this knowledge directly to his work on AI and social media.
Chara is a Principal Lecturer in Law at Oxford Brookes University. She is a leading law researcher in the area of online hate speech and hate crime. She has been involved in multiple law commission bodies developing new approaches to digital law, and specifically cyberhate law. In particular, she is interested in the interaction between technology, criminal law and hate crime/speech. Her work has been cited by the UK Parliament, the Scottish Government and the Law Commission for England and Wales. She is currently a member of the Northern Irish Core Expert Group, led by Judge Desmond Marinnan, which is conducting an independent review into legal reform of hate crime provisions. Chara leads teams considering how AI-based systems deal with minorities, and specifically with people with particular disabilities. She completed an undergraduate degree and Bachelor of Civil Law (BCL) at Merton College, Oxford.
Professor of Artificial Intelligence
Fabio is the founder and head of the Visual Artificial Intelligence Laboratory within the School of Engineering, Computing and Mathematics. He is a recognised leader in the field of uncertainty theory and belief functions, and has been conducting work at the current boundaries of human action recognition. His group is now shifting towards the frontiers of visual AI, including the design of new deep learning architectures able to regress whole action tubes in real time; structured-output deep networks with part-based discriminative models as output; deep neural video captioning incorporating attention models and prior logical knowledge; and the creation of a theory of mind for visual AIs.
Paul is a faculty member of the Oxford Brookes Business School. His areas of expertise include digital strategy and leadership, with a particular focus on the management of innovation involving AI and machine learning. During his time outside of academia, Paul worked as a consultant and trainer for the Chartered Institute of Public Finance and Accountancy, where he led much of the Institute’s contributions to national e-Government projects, including authoring official guidance on subjects related to new technology and public sector change. Paul has a PhD in Management Studies from the University of Cambridge and a Master’s in Information Management from Lancaster University.
Jintao is a natural language processing (NLP) AI developer with experience in machine learning, data mining, bioinformatics and software engineering. He is working for the Institute alongside the law firm Moorcrofts to develop a disruptive natural language-based contract review and negotiation tool for solicitors and in-house legal teams, making this process more efficient. His academic research includes the application of biostatistics and machine learning to the discovery of biomarkers for diseases such as Alzheimer’s and Parkinson’s, and to simulations of neuronal activity. Jintao has worked as a software developer in fintech, B2B2C payments and e-commerce, as well as in business intelligence for online gaming, marketing and banking.
Section Leader (Validation and Regulatory Affairs)
Arijit is an experienced enterprise software architect working in artificial intelligence and big data analytics. He has over 25 years of consulting experience gained working in Europe, the US and Asia. Arijit has worked for multinational blue-chip companies across diverse industry sectors, including finance, legal, energy, biopharma and government. He has also been a serial entrepreneur, founding companies that provide artificial intelligence and cloud computing solutions within the biopharma and healthcare sectors. His work at the institute is focused on building a validation and regulatory systems platform for artificial intelligence applications. Arijit received his MSc in 1997 from the University of Kent, where he researched the development and application of artificial neural network models to medical diagnosis challenges.
Selin E. Nugent
Assistant Director (Social Science Research)
Selin is an anthropologist interested in human behavioural and cognitive modelling as it relates to cooperation and societal impacts of major technological transitions. Her work integrates computational methods and an evolutionary perspective to tackle questions on the social roles of AI systems. Her current work explores the impacts of machine agency and human-computer interaction on social interactions in the workplace. She will be leading our Future of Work Coalition. Selin holds a BA in Anthropology from Emory University (2012), an MA (2013) and PhD (2017) in Biological Anthropology from The Ohio State University. She previously held a position as a postdoctoral researcher at the University of Oxford’s Centre for the Study of Social Cohesion.
Rebecca is a senior consultant within the institute, specialising in the risks of artificial intelligence, and is developing a Risk Classification Framework. She is a keen advocate of fairness in AI, particularly with regard to disability fairness. She is also undertaking a PhD in ethical AI, specifically researching the topic ‘Autonomous Moral AI’ – building morals into robots. She enjoys interdisciplinary pursuits, and her academic interests span computer science, philosophy and psychology. She has previous experience working as an analyst in financial services and the UK civil service, and has been working with the institute since it began.
Senior Lecturer in AI and Mathematics
Matthias joins us from the School of Engineering, Computing and Mathematics at Oxford Brookes University. He has published over 40 peer-reviewed articles on AI and machine learning. His most significant, award-winning research investigates infant-learning-inspired algorithms for robot learning. Matthias regularly writes with Nigel Crook on processes that can control the ethics and morals of machines driven by AI. He is currently working on mathematical approaches for the control of AI systems placed under multiple simultaneous constraints or objectives for accuracy and ethical behaviour; autonomous goal-system development; explorative learning of inverse models; autonomous systems that create novel goals and autonomous ethics based on social value systems; and audiovisual signal-level synchrony. Matthias previously held a position as Specially Appointed Assistant Professor at Osaka University, Japan. He received his PhD, with highest honours, from Bielefeld University, Germany, in 2012.
Tjeerd Olde Scheper
Tjeerd joins us from the School of Engineering, Computing and Mathematics at Oxford Brookes University. He has published over 24 papers concerning how biosystems control their dynamic behaviour and how to apply this to challenges in engineering and health. He has developed and patented sector-leading technology around methods of controlling dynamic physical systems that exhibit chaotic behaviour.
Senior Consultant (Social Sciences Research)
Alex is a Senior Consultant Researcher working on the Social Science Research Team. He is currently helping spearhead the Future of Work Coalition. His research interests include machine learning use cases for defence, defence against adversarial attacks for machine vision and Lethal Autonomous Weapons Systems. He is also an MPA candidate in Digital Technologies and Policy at UCL’s Department of Science, Technology, Engineering and Public Policy.
Nicola leads on strategic partnerships. In 2016, she initiated the first AI and social robotics conference at Oxford Brookes University. Since then she has presented on the practical and human issues of implementing AI, with a particular focus on applications in Human Resources. She has a keen interest in facilitating useful conversations about the ethical application of AI systems within organisations, as well as lively debates on the future relationship between humans and intelligent machines. Nicola’s current research projects include chatbot design for protected characteristics such as disability, and how to imbue acts of kindness in autonomous systems.