
Institute for Ethical AI

Promoting the ethical development and deployment of AI technology.


Tackling online hate

Researcher: Chara Bakalis

One of the questions Chara Bakalis has explored in her research is how the law should approach the regulation of online hate. A view often expressed is that the law needs to ensure that ‘what is a crime offline, should be a crime online’. This view assumes that the existing legal provisions for offline hate can simply be transposed to the online world. However, Chara’s work has shown that this is not the correct approach. She has argued that online hate manifests itself in a number of unique ways that make offline provisions inappropriate: existing provisions are out of date and do not take account of the latest technological developments or the rapidly changing landscape in which cyberhate occurs. Instead, her work offers a fresh approach to the regulation of online hate by creating a framework for identifying the different types of harm caused by online hate, and by demonstrating how each of these harms requires a different legislative solution. Chara is currently working with the Law Commission. Her next project will look at the responsibility of internet companies and the liability they may have for content shared on their platforms.

