Published on 21 May 2025

The AI Patent Problem: Why UK Courts Are Getting It Wrong

A lack of tech understanding may be undermining innovation in artificial intelligence

Why It Matters

As AI innovation grows, courts play a critical role in deciding whether these technologies receive legal protection. But when judges misunderstand how AI works, they risk rejecting genuine inventions for the wrong reasons.

Key Takeaways

• UK courts have invented their own definitions of “computer program” under patent law, leading to incorrect rulings on AI inventions.

• A recent Court of Appeal case shows how poor understanding of artificial neural networks can result in faulty legal definitions.

• The lack of technical expertise in the judiciary poses serious risks to innovation, especially for businesses working with emerging technologies.

 

When the Law Misreads the Technology

In the UK, patent protection is governed by the Patents Act 1977, which excludes “a program for a computer” from being patentable — unless the invention goes beyond just being a program. But the Act doesn’t define what a “program for a computer” actually is. That leaves the courts to interpret the term, often without sufficient understanding of the underlying technology.

A striking example is the recent Court of Appeal decision in Comptroller-General of Patents, Designs and Trade Marks v Emotional Perception AI Limited. In this case, the court rejected a patent application for an invention involving artificial neural networks (ANNs), a key AI technique. The court’s ruling hinged on its own invented definitions of “computer” and “computer program”, but both definitions, according to legal scholar Prof. Hannah Yee-Fen Lim, bore little resemblance to the scientific and factual reality of computers and computer programs.

The court consulted multiple dictionary definitions but, instead of choosing one, invented its own: a computer program is “a set of instructions for a computer to do something.” This broad and vague definition fails to capture the logic-based structure of traditional programming and misrepresents how AI systems, particularly ANNs, operate.
 

Why AI Is Not Just Another Program

Artificial neural networks do not work like conventional software. In traditional programming, instructions follow an “if-then” logic. But in ANNs, the system learns and adjusts itself through weights and biases — numerical values that evolve as the AI is trained. These elements are not written instructions; they are dynamic, autonomous components that adapt based on data inputs.
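To make the contrast concrete, the short Python sketch below (our illustration, not drawn from the judgment or the original article) places a hand-written “if-then” rule beside a single artificial neuron whose weight and bias are plain numbers, nudged automatically as training examples arrive:

# Illustrative sketch only: a hand-written "if-then" rule versus a single
# artificial neuron whose weight and bias are learned from data.

# 1. Conventional program: the behaviour is a fixed, human-written instruction.
def rule_based_classifier(x: float) -> int:
    if x > 0.5:          # the threshold is an explicit instruction chosen by a programmer
        return 1
    return 0

# 2. Minimal neural unit: the weight and bias start as arbitrary numbers and
#    shift automatically as training examples are seen (a basic perceptron update).
weight, bias = 0.0, 0.0                                    # just numbers, not written instructions
learning_rate = 0.1
training_data = [(0.1, 0), (0.3, 0), (0.7, 1), (0.9, 1)]   # (input, label) pairs

for _ in range(20):                                        # pass over the data a few times
    for x, target in training_data:
        prediction = 1 if (weight * x + bias) > 0 else 0
        error = target - prediction
        weight += learning_rate * error * x                # the "learning": numbers adapt to the data
        bias += learning_rate * error

print("learned weight:", weight, "learned bias:", bias)
print("prediction for 0.8:", rule_based_classifier(0.8), "(rule)",
      1 if (weight * 0.8 + bias) > 0 else 0, "(learned)")

The learned weight and bias end up encoding a similar decision boundary, but nobody wrote them: they emerged from the training data. That is the distinction the court’s definition glosses over.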

The Court of Appeal ruled that these weights and biases were themselves computer programs, simply because they fell within the court’s overly broad and inaccurate definition. But from a technical standpoint, weights and biases are just numbers, not sets of instructions. Worse still, the court failed to recognise the autonomous nature of these AI systems, which was central to the application’s claimed inventive step.

In doing so, the court misunderstood a core aspect of AI innovation and incorrectly classified a novel technology as something legally unpatentable. This raises broader concerns about how UK courts assess emerging technologies, especially when they invent and apply their own flawed or oversimplified definitions.
 

What This Means for Lawmakers, Courts, and Businesses

This case is a warning signal for how technological misunderstanding can lead to bad legal outcomes — with serious consequences for innovation and business competitiveness.

For lawmakers, the message is clear: it’s time to define core technology terms like “computer program” in legislation. Leaving this task to judges — who often lack technical training — can result in inconsistent and inaccurate interpretations.

For courts, the solution lies in seeking expert input. Whether through amicus briefs (submissions by non-parties with expertise), court-appointed experts, or more robust training of judges in tech-related areas, judges need better tools to understand what they’re ruling on — particularly in complex areas like AI.

For businesses and patent professionals, the takeaway is to draft patent applications with clarity, assuming the reader has no deep technical knowledge. This means explaining how AI components function and highlighting their inventive aspects in accessible terms.

Without better alignment between legal systems and emerging technologies, groundbreaking innovations may continue to fall through the cracks and fail to obtain legal protection.

Authors & Sources

Authors: Hannah Yee-Fen Lim (Nanyang Technological University)

Original publication:

This material was first published by Thomson Reuters, trading as Sweet & Maxwell, 5 Canada Square, Canary Wharf, London, E14 5AQ, in European Intellectual Property Review, Hannah Yee-Fen LIM “AI and computer programs in UK patent law: a messy mix” (2025) 47(4) E.I.P.R. 185-191 and is reproduced by agreement with the publishers. For further details, please see the publishers’ website.