When Courts Misunderstand AI in the Age of Neural Networks
Why It Matters
As artificial intelligence becomes embedded in everything from music recommendations to medical imaging, courts must understand how AI works to apply the law properly. Misunderstanding how the technology functions could have negative consequences for innovation and intellectual property rights.
Key Takeaways
- A UK court’s misunderstanding of how neural networks function led to a flawed judgment on AI patentability.
- Artificial Neural Networks (ANNs) are not “machines” and the weights and biases are not “computer programs” in themselves.
- Courts need deeper technical literacy to ensure that patent law keeps pace with AI advances.
When a Court Got AI Wrong
In 2024, the UK Court of Appeal ruled in Comptroller-General of Patents, Designs and Trade Marks v Emotional Perception AI Ltd on whether an Artificial Neural Network (ANN) could be patented. The invention, developed by Emotional Perception AI Ltd, used an ANN to recommend music tracks based on human emotional responses, effectively learning how people perceive similarity in sound.
The Court decided that such an AI system was not patentable. It reasoned that an ANN is a “machine” and that its components, the mathematical weights and biases that determine how the network learns, were essentially “computer programs.” As patent law excludes “computer programs as such” from patent protection, the Court concluded that the AI invention did not qualify.
However, this reasoning, argues Assoc Prof Hannah Yee-Fen Lim, was technically flawed. It reflected a fundamental misunderstanding of how AI, and neural networks in particular, actually function.
Neural Networks Are Not Machines
Artificial Neural Networks are mathematical models inspired by the human brain. They consist of “nodes” that process input data through weighted connections, adjusting these weights over time to “learn” patterns. Crucially, however, the model itself cannot operate independently. It only functions when implemented on hardware, such as a computer processor or a dedicated chip.
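To see why an ANN is a mathematical model rather than a machine, consider a minimal sketch of a single artificial “neuron” in Python. This is purely illustrative (it is not the network at issue in the case): the model is nothing more than numbers and arithmetic, and it “processes” nothing until hardware executes it.

```python
def neuron(inputs, weights, bias):
    # A neuron computes a weighted sum of its inputs plus a bias,
    # then applies an activation (here, a simple threshold).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

# "Learning" means adjusting these numbers (the data), not the code:
weights = [0.4, -0.2, 0.7]   # values tuned during training
bias = 0.1

output = neuron([1.0, 0.5, -0.3], weights, bias)
```

Stored on a USB drive, this is inert text and numbers; only a physical computer running it performs any information processing.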
Holding an ANN to be a “machine” is therefore scientifically incorrect. The Court’s logic that every ANN is a computer confuses the computing model with the device it runs on. Code stored on a USB drive and never executed performs no processing; only when executed on a physical computer does it become operational.
By overlooking this distinction, the Court misapplied its own definition of “computer” as a “machine that processes information.” The ANN itself cannot process anything without the computing hardware that hosts it.
Weights and Biases Are Data, Not Instructions
A second error lay in the Court’s decision that the “weights and biases” within a neural network are themselves computer programs. In computing, there is a vital distinction between instructions (which tell the machine what to do) and data (the information those instructions act upon).
Weights and biases are values: numbers adjusted during training to fine-tune how the network interprets inputs. They are not commands; they are the results of computation.
In some hardware-based neural networks, these weights may even be represented physically by resistors or electrical currents. Clearly, these are not lines of code or step-by-step instructions, reinforcing that they cannot be treated as computer programs under patent law.
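The instructions-versus-data distinction can be made concrete with a short sketch (illustrative only, using Python’s standard `json` module). Trained weights can be serialized and inspected as plain numbers; nothing in them tells a machine what to do, and a separate program is needed to make any use of them.

```python
import json

# Hypothetical trained parameters: just numbers, no instructions.
weights = {"layer1": [[0.42, -0.17], [0.03, 0.91]], "bias1": [0.1, -0.2]}

serialized = json.dumps(weights)   # stored form: text containing numbers
restored = json.loads(serialized)  # still data; nothing here is executable
```

The same values could equally be represented as resistances or currents in analog hardware, underscoring that they are data acted upon by a program, not a program themselves.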
Why Technical Literacy in Law Matters
Assoc Prof Lim warns that such misunderstandings pose serious risks in patent adjudication. Patent law is one of the most technically complex areas of the legal system. Legal counsel and judges must grasp enough of the underlying technology to properly argue and assess whether an invention meets the legislative requirements.
Such misunderstandings can distort legal reasoning and erode confidence in the justice system, particularly when dealing with fast-moving technologies like AI. To apply the law correctly and fairly, adjudicators must engage deeply with the technology, not just its descriptions in legal filings.
Business Implications
This case underscores a broader issue: as AI systems become more common in commercial products and services, their legal treatment will directly affect innovation. If courts misclassify AI inventions as mere “computer programs,” firms may lose incentives to invest in advanced AI development.
Conversely, stronger judicial understanding could promote balanced outcomes, protecting genuine innovation while ensuring that overly broad patents do not stifle competition. For technology companies, this means that technical clarity in patent filings is essential. For policymakers and the judiciary, it highlights the urgent need for ongoing education in digital technologies, especially artificial intelligence.
AI is no longer confined to computer science labs; it shapes business models, consumer products, and even cultural content. Ensuring that courts keep pace with these advances is critical for fair and future-ready legal systems.
Authors & Sources
Author: Hannah Yee-Fen Lim (Nanyang Technological University)
Original Article: Complexities of AI and Artificial Neural Networks in Patent Law Decisions. Published in European Intellectual Property Review (2025), Vol. 47, Issue 8.
This material was first published by Thomson Reuters, trading as Sweet & Maxwell, 5 Canada Square, Canary Wharf, London, E14 5AQ, in European Intellectual Property Review “Complexities of AI and Artificial Neural Networks in Patent Law Decisions” (2025) 47(8) European Intellectual Property Review, pp 447-452 and is reproduced by agreement with the publishers. For further details, please see the publishers’ website.