CHATGPT: WHAT DOES IT MEAN FOR OUR LAW FACULTY?
ChatGPT stands for ‘Chat Generative Pre-Trained Transformer’, a generative artificial intelligence language model that produces answers based on the language used in the questions it receives. Within the first five days of its launch at the end of 2022, ChatGPT had more than 1 million users. The chatbot has since attracted 25 million daily visitors and a total of 619 million website visits. With this sudden surge in use and its undeniable use cases in higher education and research, the iNtaka Centre for Law and Technology recently commenced a series of discussions on the implications, uses and challenges related to the use of ChatGPT in the UCT Law Faculty.
The hybrid event proved popular with academics. Attendees came ready to take notes, ask questions (of both the presenters and ChatGPT) and share their own experiences with ChatGPT so far – good and bad. After a general welcome and some broad framing for the seminar by the iNtaka Centre’s director A/Prof Tobias Schonwetter, the seminar continued with a general introduction to ChatGPT by PhD candidate and iNtaka researcher Hanani Hlomani. In his presentation, Hanani explained the software behind ChatGPT and provided practical examples of the tool’s capabilities, limitations, and challenges.
Hanani Hlomani emphasized that ChatGPT uses natural language processing to generate human-like answers in response to prompts submitted by users. The three main uses of ChatGPT are language generation (producing human-like text), language translation (for example, translating from another language into English) and question answering. Key limitations of ChatGPT include that it was trained largely on English-language texts and only on data up to 2021, that it is not creative, and that where little material has been published on a subject, its answers may be inaccurate. However, the more input it receives from users, the better the chatbot will become. Users are encouraged to correct ChatGPT, and at this point it often still apologises for being incorrect! ChatGPT is extremely well-structured and provides convincing-sounding answers to our questions. Hanani explained that the underlying algorithm seeks to approximate human-like responses as closely as possible, and it generates these answers by simply predicting the most likely and appropriate words given the question posed and the words used in it.
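The predict-the-next-word idea Hanani described can be illustrated at a minuscule scale. The following toy sketch is emphatically not ChatGPT’s actual algorithm (which uses a large neural network trained on vast text corpora); it is a simple bigram counter over an invented mini-corpus, shown only to make the prediction mechanism concrete:

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that "generates" text by choosing
# the word that most frequently followed the current word in its tiny,
# invented training corpus.
corpus = (
    "the court held that the provision was unconstitutional and "
    "the court held that the appeal must succeed"
).split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most likely next word after `word` in the toy corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))    # -> "court" (seen twice, vs. one each for others)
print(predict_next("court"))  # -> "held"
```

ChatGPT does essentially this at vastly greater scale and sophistication: it has no database of facts to consult, only learned patterns of which words plausibly follow which – hence answers that sound convincing without necessarily being correct.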
Researchers, scholars, and academics are currently mainly focused on the question-answering function of ChatGPT, as it raises many concerns within the academic sphere. Important questions in this context include:
How would students be able to use ChatGPT in their assignments?
How can we detect whether students have used the ChatGPT function in their submissions?
Will assignments and writing skills remain as important in the future as they were for previous generations of law students?
Will ChatGPT be a challenge to developing legal writing and research skills?
How can ChatGPT be used to aid our teaching and research enterprise?
Through a practical example of how ChatGPT responds to legal questions, attendees were able to make the following preliminary observations:
ChatGPT is seemingly able to provide useful, well-phrased and convincing-sounding information on legal provisions, well-known case law and practical legal questions. The chatbot was able to produce a satisfactory case summary of the internationally well-known case S v Makwanyane and could accurately pick up that, while there was no dissenting judgement, the judges nevertheless had different reasons for their judgements. ChatGPT was able to identify the main legal issues in the case and the ratio of the decision.
However, it was evident that ChatGPT struggled to provide accurate sources for its answers. This appears to be a major weakness in the academic context. We found that often a listed source simply didn’t exist (in other words, ChatGPT made it up) or the title of the source was inaccurate.
An example of how ChatGPT answered a question (excerpt):
Our next speaker, Mariya Badeva-Bright (co-founder of Laws.Africa and Director of AfricanLII), focused on how ChatGPT can be used to publish legal texts more efficiently and make law more understandable to the legal community. Mariya showed how ChatGPT was able to simplify tasks which would usually take a number of days, e.g. working through large data sets. However, she highlighted that when one researches recent case law or legal information, ChatGPT yet again provided inaccurate information – arguably because ChatGPT’s training data wasn’t recent. Disturbingly, however, the information, although inaccurate, sounded legally correct and professional. This is another potential issue when considering the use of ChatGPT by students and lecturers, as ChatGPT may provide inaccurate information and deceive students (and potentially lecturers too) into believing that the generated response is correct. On the other hand, ChatGPT’s summaries and breakdowns are of good quality and could be useful in writing and research contexts.
Our final speaker, LLM student and iNtaka researcher Kyle Janse, shared with the audience how he built his own AI chatbot using the ChatGPT API, with a view to obtaining more accurate and detailed answers to practical legal questions for study purposes. To do this, Kyle used tools such as Python and Notepad++; running the code via the Windows command line enables users to ask practical questions and get accurate legal responses similar to ChatGPT’s. Kyle provided a fascinating example by asking both ChatGPT and his own AI chatbot similar questions and comparing their responses.
ChatGPT generated the following response:
Kyle’s custom chatbot generated the following response:
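Kyle’s actual code was not shared at the seminar, but a custom chatbot of this kind might be sketched roughly as follows, assuming the OpenAI Python library’s chat completions interface. The system prompt, model choice and function names here are illustrative assumptions, not Kyle’s implementation:

```python
# Minimal sketch of a custom chatbot built on the ChatGPT API.
# The system prompt, model name and helper names are illustrative
# assumptions; an API key is required for the actual call.
import os

def build_messages(question: str) -> list:
    """Prepend a legal-assistant system prompt to the user's question."""
    system_prompt = (
        "You are a legal research assistant. Answer practical legal "
        "questions precisely, and say so when you are unsure of a source."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

def ask(question: str) -> str:
    """Send the question to the chat completions endpoint (needs network + key)."""
    import openai  # pip install openai (pre-1.0 interface shown here)
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_messages(question),
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("What is the ratio decidendi of S v Makwanyane?"))
```

The key design idea is the system prompt: by steering every request with domain-specific instructions before the user’s question, the custom bot can be nudged towards more careful, legally focused answers than the general-purpose interface.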
As expected, participants engaged in a lively discussion towards the end of the seminar. Some lecturers shared valuable insights on how they have already started to respond to the emergence of AI technologies such as ChatGPT. For instance, a few lecturers had used ChatGPT-generated responses to encourage their students to assess those responses and improve the answers provided. Others told the audience how, in their view, ChatGPT may be useful in producing quizzes and other question-type exercises for students. Yet some faculty members seemed more concerned and sceptical. But there appeared to be some consensus at least that this conversation needs to be continued and that we cannot simply ignore these developments; nor should we merely focus on strategies for preventing the use of ChatGPT in our context as a response to the more problematic implications of such AI tools in tertiary education. Ultimately, these ever-improving tools also hold great promise for adapting and improving legal education and research, and our students as well as their future employers rightly expect us to properly prepare them for these new realities. In addition, tools like ChatGPT raise a myriad of fascinating policy- and law-related research questions that faculty members may wish to engage with in the future.
Stay tuned for our next ChatGPT seminar to further unpack some of the issues we face as a result of this disruptive technology.