
Can AI be sued?

The UK Jurisdiction Taskforce (UKJT) is consulting on a Legal Statement exploring how English private law might deal with harm caused by AI systems. 

The UKJT says that uncertainty about legal risk may slow adoption of AI, hinder its innovation and leave businesses unsure about the nature and extent of risk. The purpose of the statement is therefore to seek to provide as much legal certainty to the business and tech community as possible by explaining how the law is likely to deal with problems arising from this rapidly developing technology. The UKJT intends to publish a final version of the Legal Statement once it has considered responses.

Background

It is difficult to get through a full day - or even an hour - without being exposed to reporting and commentary on artificial intelligence. The coverage is often feverish, reflecting excitement about accelerating developments in the field, the exponential growth in the sector and the transformative potential of AI across all sectors and society in general.

One aspect of the growth of AI is a growing awareness of its potential to cause harm. The question arises: how will English law determine liability where a person alleges they have been harmed by AI? There are no AI-specific liability regimes or legal principles in the UK, but the English common law is well developed and flexible. While there is therefore good reason to think it can be applied and/or developed incrementally to address such novelties, the uniquely autonomous nature of AI gives rise to uncertainty, or might be perceived to do so. 

The UKJT has therefore stepped into the breach. The UKJT is a group of experts in law, technology and digital innovation, and includes among its members the Master of the Rolls, Sir Geoffrey Vos. It was established by the Lawtech UK Panel in 2020 and is backed by the Ministry of Justice. It aims to encourage the legal sector, traditionally conservative by nature, to embrace the use of ‘lawtech’ and other digital technology. Its work to date has been focussed on crypto assets and associated technologies.

The purpose of the UKJT’s work is not to propose any new laws. Rather, its aim is to clarify how existing legal principles might apply when AI is involved, recognising that courts will adapt the law as genuinely novel situations arise.

Scope of the Legal Statement

The consultation focuses on non-deliberate AI harms under the private law of England and Wales. This encompasses the law of negligence, product liability, professional duties, vicarious liability, and responsibility for false statements generated by chatbots. Criminal law, public law, IP, competition, tax and wider regulatory requirements are not within the scope of the statement. 

In addition, the consultation acknowledges that in many cases liability will be determined by reference to a contract, since most commercial relationships involving AI already rely on contracts to manage risk. Where the alleged wrongdoer and victim are actors in an AI supply chain (e.g. data providers, foundation model developers, application developers, etc.), these questions will primarily be a function of the express terms of the contract (particularly warranties, disclaimers and indemnities).

Since a liability analysis where there is a relevant contract will depend so heavily on the contract(s) in question, and the scope to identify more generalised principles is limited, the consultation focuses on non-contractual duties. These arise where, instead of having voluntarily taken on responsibility to another (e.g. by contract), the law imposes responsibility, for example in the law of negligence.

Premises and key assumptions 

The consultation takes the approach that the analysis is most usefully performed by anchoring to the ‘autonomous’ characteristic of AI, which it suggests is the most salient of AI’s novel characteristics and the one most closely connected with legal uncertainty (real or perceived). The UKJT deemed this to be the best approach in circumstances where there is no universally agreed definition of AI, and the technology takes many forms. It also aligns with the definition put forward by the UK Government in its White Paper on AI in 2023.

‘Autonomy’ in this context is meant to capture (i) an unpredictable relationship between input and output, (ii) opacity of reasoning, and (iii) limited user control over output. The UKJT considered whether to approach the question from the perspective of AI’s ‘adaptability’, but decided that ultimately any legal issues arising from that characteristic would already arise by virtue of ‘autonomy’. 

The other key premise is that AI systems do not have legal personality in English law. This means legal responsibility for harm cannot be attributed to the AI, and must instead be attributed to natural or legal persons.

The draft Legal Statement

The statement’s main propositions as regards liability for harm are as follows:

  • Whether the loss suffered is physical or economic, the negligence analysis is essentially the same. The usual principles of negligence apply: duty of care, standard of care, breach, causation and foreseeability.
     
  • In relation to physical harm:
    • Whether a person involved in the development, supply or deployment of AI might be liable in negligence for physical harms is highly fact sensitive. However, the law of negligence is flexible, having adapted and evolved through the courts’ decisions over many years. There should be no reason why its principles cannot be applied to harms caused by AI failures.
    • In many scenarios, AI is treated as a tool used by someone who already owes a duty—such as a doctor using AI to support a diagnosis. Duties can also arise further up the supply chain, depending on what harm was reasonably foreseeable and the practical ability of users to detect or prevent errors.
    • Courts will judge the required standard of care using expert evidence and relevant industry guidance. They will consider issues such as data quality, development and testing processes, the design of guardrails, and whether AI should have been used at all for the task concerned.
    • Causation may be complex when models operate opaquely. However, where gaps in evidence arise (perhaps because a system does not log key inputs), courts can adopt practical approaches similar to those used in other technically complex cases.
    • In particular, the Statement suggests that foundation model developers (those who design, develop and train foundation models) are unlikely to owe a duty to protect against harms arising from their model being used for unforeseeable purposes where there has not been sufficient testing by an actor further down the AI supply chain, such as an AI developer.
       
  • Liability for economic harm under English law generally depends on there being a “special relationship” between the parties in which one has voluntarily assumed responsibility to the other. The circumstances in which economic loss caused by AI can give rise to liability for negligence will generally either involve professional negligence or statements made by AI (e.g. by a chatbot).
     
  • In relation to professional negligence:
    • The scope and nature of the services that the relevant professional has contracted to provide will define the scope and nature of their duty. The contract will likely contain an express or implied term that the services be carried out with reasonable care and skill (which will likely be mirrored by a common law duty of care to exercise reasonable care and skill, unless that is inconsistent with the terms of the contract). The scope of the duty will usually fall to be assessed by reference to the contract.
    • A professional will be found to have acted with reasonable care and skill if they have acted in a way in which a responsible body of members of the profession would also have acted. The standard of care they are required to exercise is that ordinarily exercised by reasonably competent members of the profession of the same rank and specialisation. A professional may be negligent if they:
      • use AI without proper understanding, testing or supervision
      • fail to protect confidentiality
      • rely on AI inappropriately
      • fail to explain material use of AI to clients when needed
      • in some cases, fail to use AI where a competent practitioner would have done so.

Causation

The statement addresses factual and legal causation at length, given this is one of the areas in which the autonomous nature of AI systems may lead to difficulties.

  • Given the opacity of the relationship between input and output, it may not be possible to evidence why an AI system produced a particular outcome after it has made a decision.
     
  • Courts may approach factual causation differently where there are difficulties with evidence; specifically, they may take a ‘benevolent’ approach to the claimant’s evidence and a more critical approach to the defendant’s evidence by, for example, shifting the burden of proof on particular factual issues.
     
  • Where there is scientific uncertainty, the courts might take an alternative approach to causation, such as by applying the “material increase in risk” principle, rather than the “but for” test (i.e., the Fairchild exception).
     
  • The statement considers that though it is not impossible to imagine an actor in the AI supply chain being held responsible for “creating a source of danger by bringing into existence without suitable safeguards or otherwise failing to control (when it has special powers to do so) an AI model or system which it knows to be capable of certain types of harm if misused”, such a case is likely to be rare. The more “general purpose” an AI system, the less likely liability is to arise.
     
  • The authors consider it to be generally unlikely that an “upstream” party in an AI supply chain (e.g. a foundation model developer) would be held responsible for a third party’s misuse of an AI system it created. However, if the upstream party created the source of danger or otherwise assumed responsibility for it (e.g., by contract), there might be the possibility of liability.
     
  • Overall, the UKJT’s view is that the English common law is capable of evolving and/or developing limited exceptions to meet issues with evidence and causation that cannot be addressed by applying existing principles.

Vicarious liability

Because AI cannot commit a legal wrong itself, vicarious liability cannot attach to the system. However, employers may be liable if an employee uses AI negligently in the course of their work, applying conventional principles. The question remains whether the employee was negligent in carrying out actions which were within the scope of their employment.

Liability for AI‑generated statements

The Statement also addresses false or harmful statements generated by chatbots.

Although there is no existing English legal authority for the proposition that a chatbot may make a statement on behalf of a legal person (for the purposes of negligent misstatement), a Canadian court recently held an airline liable for false statements made by its chatbot.

The UKJT’s view appears to be that, subject to the facts, there is no reason in principle why English common law principles applicable to negligent misstatement cannot be applied to the requisite elements of false statement, duty of care, reliance and loss.

The draft Legal Statement also addresses potential liability in defamation, but we do not address that in this note.

Strict product liability

Finally, the statement notes that the Consumer Protection Act 1987 may apply if AI is embedded in a tangible product and causes death, personal injury or certain property damage. The key question is whether the product was defective, meaning not as safe as people were entitled to expect.

This regime does not usually cover standalone software or cloud‑based AI services. Claimants must prove both defect and causation, though they do not need to identify the specific technical flaw.

