Modern textualism rests on three key premises. Textualists interpret the language of the law according to its ordinary meaning because: (1) that meaning can be determined through transparent, replicable methods; (2) it is the meaning attributable to the public that must follow the law; and (3) it constrains judges from injecting their own policy preferences into their decisions.
Textualist judges are expected to provide evidence to support their interpretations of legal language. Relying solely on intuition or subjective views does not meet the standards of transparent textual analysis and fails to protect public reliance on the law.
The Snell concurrence acknowledges these demands but proposes consulting AI LLMs, which fall short of the empirical rigor that corpus linguistic tools provide.
Our draft article demonstrates how corpus linguistics can supply this kind of rigorous, data-driven analysis. By examining how the term “landscaping” is used in public language, we found that it can encompass both botanical and non-botanical improvements.
In a second search, we asked whether “landscaping” is commonly used to describe the installation of in-ground trampolines. Our results from the iWeb corpus show that “landscaping” is indeed a common label for this type of work.
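To illustrate the kind of evidence such a search produces, here is a minimal sketch of a keyword-in-context (KWIC) and collocation-frequency analysis, the basic moves a corpus search involves. It is illustrative only: it assumes a hypothetical local text sample (sample_corpus.txt), whereas the iWeb corpus is actually queried through its own web interface at english-corpora.org.

```python
from collections import Counter
import re

# Minimal KWIC and collocation sketch. The corpus file is a
# hypothetical stand-in for a real corpus sample.
WINDOW = 5  # words of context on each side of the keyword

with open("sample_corpus.txt", encoding="utf-8") as f:
    tokens = re.findall(r"[a-z']+", f.read().lower())

collocates = Counter()
for i, tok in enumerate(tokens):
    if tok.startswith("landscap"):  # landscaping, landscape, landscaper, ...
        left = tokens[max(0, i - WINDOW):i]
        right = tokens[i + 1:i + 1 + WINDOW]
        collocates.update(left + right)
        # Print a concordance line so each hit can be read in context,
        # e.g. to ask whether the use is botanical or non-botanical.
        print(" ".join(left), f"[{tok}]", " ".join(right))

# Most frequent words near the keyword; hits on terms like
# "trampoline", "pavers", or "retaining" suggest non-botanical uses.
print(collocates.most_common(20))
```

Reading concordance lines and collocate counts like these is what allows an analyst to say, with evidence rather than intuition, how often “landscaping” covers non-botanical work such as trampoline installation.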
Complicating matters, textualists have no settled account of what constitutes “ordinary meaning”: some equate it with the most common usage, while others accept less common but still attested meanings.
AI LLMs do not offer this level of detail. They tend to deliver conclusory answers without supporting evidence, and in Snell, the chatbots’ responses lacked the depth and nuance that a corpus linguistic analysis provides. For a fuller discussion, see Part I.B.1 of our article.
The chatbot openly admits that it cannot generate empirical evidence, since doing so would require new observations or experiments it cannot conduct. That is a significant flaw: the premises of textualism demand more than a chatbot’s bare conclusions about “landscaping” or in-ground trampolines.
The chatbot’s oversimplified responses also fail to engage deeper questions of legal theory, such as what counts as an “ordinary” application in a given context. And without access to the underlying data, a judge who relies on a chatbot’s say-so is not conducting a transparent analysis of ordinary meaning, risking both the public’s reliance interests and the injection of the judge’s own biases into the ruling.
This is just one example of how current AI tools like LLMs fall short of corpus linguistic methods. Our forthcoming article explores other limitations of existing AI systems and proposes ways to leverage AI to enhance corpus linguistic analysis while minimizing its inherent risks. Stay tuned for more insights in future blog posts.