How Reliable Is the Use of Artificial Intelligence in Legal Practice?

Nicole Stander

If you haven’t already heard, ChatGPT is a language model created by the company OpenAI and released in November 2022. It has quickly gained popularity for its ability to generate fast, detailed, and human-like text in response to questions and prompts entered by users.

Many professionals are embracing artificial intelligence (“AI”) and finding ways to integrate ChatGPT into their respective industries. Some have gone so far as to suggest that platforms such as ChatGPT could eventually replace certain professionals entirely.

In the legal industry, ChatGPT is already being used for research, drafting contract clauses, explaining legal concepts, summarizing judgments, and more. The capabilities of this AI tool, when tested, can be quite surprising. I once came across a LinkedIn post by an amused lawyer who had asked ChatGPT to summarize a particular judgment in the style of the well-known children’s author Dr Seuss, and the results were eerily accurate!

However, as impressive as the results can be, the tool has limitations, and its responses are not always correct. ChatGPT was trained on a fixed body of historical data, so its knowledge has a cutoff date and it cannot reliably account for more recent developments in the law. It is certainly not capable of analyzing unique and complex legal scenarios to the same degree as a lawyer with years of training and experience.

Legal practitioners must also bear in mind that ChatGPT was not designed specifically for use in the legal field; the data it was trained on spans a vast range of topics. And because the law differs considerably from jurisdiction to jurisdiction, the answers generated can easily be inaccurate or, in some cases, completely fictitious.

Recently, at the Johannesburg Regional Court in South Africa, lawyers were criticized for relying on case authorities sourced from ChatGPT in arguing a defamation claim. On investigation, it emerged that the lawyers had obtained the judgments from ChatGPT and that, while the cases and citations did exist, they did not relate to the facts of the matter at hand.

Magistrate Arvin Chaitram, who presided over the case, ruled that while the lawyers did not appear to have intended to mislead the court, they had nonetheless been “overzealous and careless”, and their conduct was met with a punitive costs order. Magistrate Chaitram further commented: “When it comes to legal research, the efficiency of modern technology still needs to be infused with a dose of good old-fashioned independent reading.”

In a similar incident in the United States, lawyers were fined after it was discovered that they had submitted to the court fake judicial opinions and quotations generated by ChatGPT. The lawyers involved stood by the fabricated opinions even after the court questioned their existence.

These incidents should serve as a reminder not to overestimate the accuracy of information sourced from AI tools, and to always verify any such information against reliable source materials.

It remains to be seen how AI technology will continue to develop and change the legal profession as we know it. Undoubtedly, ChatGPT – when used appropriately – can increase efficiency and productivity by reducing the time spent on routine tasks. However, the information it provides should be treated merely as a starting point, to be verified and expanded upon using credible sources.

Similarly, members of the public are advised to be cautious when seeking solutions to legal problems from ChatGPT, and to always consult an experienced lawyer for a professional opinion on how the law applies to the specific facts of their case.