Chara Bakalis is the research lead for the Institute for Ethical AI and a Principal Lecturer in Law at Oxford Brookes University. She is a leading law researcher in the area of online hate speech and hate crime. She has been involved in multiple law commission bodies developing new approaches to digital law, and specifically cyberhate law. In particular, she is interested in the interaction between technology, criminal law and hate crime/speech. Her work has been cited by the UK Parliament, the Scottish Government and the Law Commission for England and Wales. She is currently a member of the Northern Irish Core Expert Group led by Judge Desmond Marinnan, which is conducting an independent review into legal reform of hate crime provisions. Chara is leading teams considering how AI-based systems deal with minorities, and specifically people with disabilities. She completed an undergraduate degree and Bachelor of Civil Law (BCL) at Merton College, Oxford.
Nigel Crook, founder and co-director of the Institute for Ethical AI, is Associate Dean for Research and Knowledge Exchange and Professor of AI and Robotics at Oxford Brookes University. His research interests include machine learning, embodied conversational agents, social robotics and autonomous moral machines. He has over 30 years of experience as a lecturer and researcher in AI. Nigel graduated from Lancaster University with a BSc (Hons) in Computing and Philosophy in 1982 and was awarded his PhD in explainable AI at Oxford Brookes University (CNAA) in 1991.
Fabio Cuzzolin is the founder and head of the Visual Artificial Intelligence Laboratory within the School of Engineering, Computing and Mathematics. The team is projected to include 15 to 20 people in 2019: three faculty members, five postdocs, a KTP associate, four PhD students, two MSc and final-year students, and a number of visitors.
Fabio has been conducting work at the current boundaries of human action recognition. In just a few years, the group has built a leading position in the field of deep learning for real-time action detection, localisation and recognition. This has led to the best detection accuracy to date and the only system able to localise multiple actions on the image plane in better than real time.
His group is now shifting towards work at the current boundaries of visual AI, such as the design of new deep learning architectures able to regress whole action tubes in real time; structured-output deep networks that output part-based discriminative models; deep neural video captioning incorporating attention models and prior logical knowledge; and the creation of a theory of mind for visual AIs.
Fabio is a recognised leader in the field of uncertainty theory and belief functions. His reputation comes from the formulation of a geometric approach to uncertainty in which probabilities, possibilities, belief measures and random sets are represented and analysed by geometric means.
Paul Jackson is a principal consultant at the Institute for Ethical AI and a faculty member of the Oxford Brookes Business School. His areas of expertise include digital strategy and leadership, with a particular focus on the management of innovation involving AI and machine learning. During his time outside academia, Paul worked as a consultant and trainer for the Chartered Institute of Public Finance and Accountancy, where he led much of the Institute’s contributions to national e-Government projects, including authoring official guidance on subjects related to new technology and public sector change. Paul has a PhD in Management Studies from Cambridge University and a Master’s in Information Management from Lancaster University.
Jintao Long is a natural language processing (NLP) AI developer with experience in machine learning, data mining, bioinformatics and software engineering. He is working for the Institute alongside the law firm Moorcrofts to develop a disruptive natural language-based contract review and negotiation tool for solicitors and in-house legal teams, to make this process more efficient. His academic research includes the application of biostatistics and machine learning to the discovery of biomarkers for diseases such as Alzheimer’s disease and Parkinson’s disease, and simulations of neuronal activity. Jintao has worked as a software developer in Fintech, B2B2C payments and e-commerce, as well as business intelligence for online gaming, marketing and banking.
Kevin Maynard is a director of the Institute for Ethical AI. He has developed expertise in risk, fairness and interpretability and applies this knowledge to the development of reasonable and balanced regulation of the AI and social media industries. His background is in pharmaceutical research and bringing therapeutics and medical devices into clinical use, developing needle-based devices for major companies such as AstraZeneca and Celltech. Many of the principles of medical product regulation are directly applicable to AI, as both situations can be modelled as the management of black boxes, i.e. systems whose inner workings are difficult to understand. Kevin thus applies this knowledge directly to his work on AI and social media.
Selin Nugent is an assistant director of Social Science research at the Institute for Ethical AI. She recently joined the Institute from Oxford University, where she applied computational methods to analyse diachronic, cross-cultural and cognitive perspectives on the functions of ritual and cohesion in cultural evolution. Selin will be leading the delivery of HR and Legal AI consortia, drawing on the skills of her colleagues. She earned her BA and BSc in Anthropology & Human Biology from Emory University (2012) and her MA in Biological Anthropology from The Ohio State University (2013). She was awarded her PhD in Biological Anthropology, with a focus in Bioarchaeology, from The Ohio State University (2017).
Rebecca Raper is a senior consultant within the Institute for Ethical AI. Involved at an early stage of the Institute’s development, Rebecca undertook the initial market research into how AI impacts industry, specifically within the human resources sector. She has a keen interest in understanding bias in AI systems and fairness in recruitment, and has more recently been looking at the impact of AI systems on protected characteristics, such as disability. Rebecca is a PhD candidate in ethical artificial intelligence; her thesis looks at how we can create autonomous moral artificial intelligence.
Matthias Rolf is Senior Lecturer in AI and Mathematics at the School of Engineering, Computing and Mathematics at Oxford Brookes University. He has published over 40 peer-reviewed articles on AI and machine learning. His most significant, and award-winning, research investigates infant-learning-inspired algorithms for robot learning. Matthias regularly writes with Nigel Crook on processes for controlling the ethics and morals of AI-driven machines. He is currently working on mathematical approaches for the control of AI systems placed under multiple simultaneous constraints or objectives for accuracy and ethical behaviour; autonomous goal-system development; explorative learning of inverse models; autonomous systems creating novel goals and autonomous ethics based on social value systems; and audiovisual signal-level synchrony. Matthias previously held a position as Specially Appointed Assistant Professor at Osaka University, Japan. He received his PhD, with highest honours, from Bielefeld University, Germany, in 2012.
Tjeerd Olde Scheper is a Senior Lecturer in the School of Engineering, Computing and Mathematics at Oxford Brookes University. He has published over 24 papers on how biosystems control their dynamic behaviour and how to apply this to challenges in engineering and health. He has developed and patented sector-leading technology around methods of controlling dynamic physical systems that exhibit chaotic behaviour.
Nicola Strong is a senior consultant for the Institute for Ethical AI, leading on strategic partnerships. In 2016, Nicola initiated the first AI and social robotics conference at Oxford Brookes University. Since then she has presented on the practical and human issues of implementing AI, with a particular focus on applications in Human Resources. She has a keen interest in facilitating useful conversations about the ethical application of AI systems within organisations, as well as lively debates on the future relationship between humans and intelligent machines. Nicola’s current research projects include chatbot design for protected characteristics such as disability, and how to imbue acts of kindness in autonomous systems.