ISSUE 53 Expert Witness Journal - Journal - Page 39
An Expert’s View on the Three
Main Copyright Cases being Brought
Against AIs in the United States
The cases referenced in this article can be found in these three links:
https://shorturl.at/noGO1
https://shorturl.at/fpFSU
https://shorturl.at/cvxEW
It should be noted that this article addresses these USA-based cases because the author is unaware of any similar cases being brought in the UK. Whilst there may be legal distinctions between US copyright law and UK copyright law, the underpinning facts and methods would be the same, and thus the evidence likely to be given by an expert would be unchanged.
Understanding Large Language Models
Large Language Models (LLMs), such as OpenAI's GPT models, have revolutionized the field of artificial intelligence by their ability to generate human-like text. Understanding how these models learn is crucial to discussing their implications in the realm of copyright law.

The learning process of LLMs can be compared, in certain respects, to human learning. Just as a child learns a language by being exposed to numerous examples of speech and text, absorbing patterns and rules implicitly, LLMs learn language patterns from the text data they are exposed to. However, there are fundamental differences. Human learning is deeply interactive, contextual, and often involves a conscious understanding of rules and abstract concepts. LLMs, on the other hand, operate on statistical correlations and do not possess consciousness or understanding.

LLMs are trained using a process known as unsupervised learning. They are fed vast amounts of text data, from which they learn to predict the next word in a sentence. This method does not involve explicit programming of rules or logic by human engineers; instead, the model discerns patterns and relationships within the data itself. The training process involves adjusting internal parameters (weights) to minimize the difference between the model's predictions and the actual outcomes. This iterative process, involving millions or even billions of parameters, enables the model to generate coherent and contextually appropriate text.
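The next-word prediction described above can be illustrated with a deliberately simplified sketch. The toy model below learns next-word statistics merely by counting word pairs; a real LLM instead adjusts billions of weights by gradient descent. But the training signal is the same in both cases: the raw text itself, with no rules programmed in by an engineer. The corpus and function names here are purely illustrative.

```python
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    # "Unsupervised" training: the only input is raw text.
    # The model tallies how often each word follows each other word.
    model = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    # Predict the statistically most likely next word,
    # or None if the word was never seen during training.
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# A tiny illustrative "training corpus".
corpus = "the cat sat on the mat the cat ran"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

Scaled up from word-pair counts over a nine-word sentence to billions of weighted parameters over terabytes of text, this is, in essence, the mechanism whose inputs the copyright cases put in issue.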
EXPERT WITNESS JOURNAL
February 2024