I am excited to announce that Prof. Thomas R. Lee of BYU Law, a former Justice of the Utah Supreme Court, and Prof. Jesse Egbert, a professor of applied linguistics at Northern Arizona University, will be guest-blogging this week about their latest draft article, Artificial Meaning?. The article explores artificial intelligence and corpus linguistics; Prof. Lee has been a trailblazer in applying corpus linguistics to the law. Here is a summary of the article:
The turn toward textualism has become increasingly empirical, focusing on ordinary meaning as it is commonly understood by users of the language. That inquiry calls for empirical evidence derived from transparent methods, rather than reliance solely on human intuition or subjective dictionary definitions.
Scholars and judges have begun to embrace the tools of corpus linguistics, which analyze patterns of language usage in large databases of authentic language samples.
A new proposal, however, challenges this approach, suggesting instead the use of AI-driven large language models (LLMs) like ChatGPT. The proposal gained attention through Eleventh Circuit Judge Kevin Newsom's concurring opinion in Snell v. United Specialty Insurance Co., which advocated using AI to determine the ordinary meaning of terms like "landscaping." Despite their allure, current AI tools may not be equipped to provide reliable empirical data.
We address the arguments presented in Snell and related articles, highlighting the limitations of current AI tools and the strengths of corpus linguistics for empirical analysis. We propose a transparent, replicable method for gathering relevant linguistic data, and we explore a potential future in which AI-driven LLMs and corpus analysis complement each other in linguistic inquiry.