The Dilemma of Artificial Intelligence and a Challenge for Justice

By Lucas de Venezia

Recent advances in computing related to the development of new and improved Artificial Intelligence (AI), together with its sudden arrival and its direct implications for the Argentine legal system and its practices, invite us to rethink scenarios that present themselves as uncharted territory.

Long-term coexistence with new computerized systems that bring significant, and sometimes questionable, improvements to digital knowledge tools is a serious prospect that deserves analysis. We are discussing cutting-edge technology that has a direct impact on the legal system.

Are we taking proactive steps to ensure that the effects of this new technology on the legal system and academic institutions are equitable and just for everyone? How does this relate to the existing rules and principles?

The application of AI in the legal sector must be carefully thought out and planned, with a focus on ethics and transparency. To ensure that AI is used properly, it must be designed not only to prevent discrimination and guarantee justice, but also to include ongoing review and evaluation mechanisms.

ChatGPT is a revolutionary technology.

Everything changed with the release of ChatGPT, created by the artificial intelligence research laboratory OpenAI. It is a type of AI that lets users communicate directly with the computer through a chatbot in a way that is practical, simple, remarkably fast, and versatile.

The system incorporates and records every interaction it receives, from questions and concerns to affirmations or corrections made by the user, all through a fluid, responsive, and engaging dialogue. ChatGPT was designed primarily to replicate the patterns of human speech. It is both highly useful and controversial.

These chatbots are powerful enough to multitask: they can produce academic, legal, or journalistic texts in the form of essays, papers, or theses, as well as songs, poetry, laboratory formulas, and school assignments.

Its reach has no discernible bounds, and we are only looking at a preliminary prototype that is under constant development and will certainly continue to advance.

The same question asked of ChatGPT by two distinct users is likely to elicit similar but not identical responses. The model’s capacity to provide distinctive and creative answers grows as it is trained on new data and receives additional information.
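One reason two users get different answers to the same question is that language models of this kind typically do not pick the single most likely next word; they sample from a probability distribution over possible continuations. The following is a minimal, purely illustrative sketch of that idea in Python: the vocabulary, the probabilities, and the "temperature" rescaling are invented for the example, not taken from ChatGPT itself.

```python
import random

# Toy next-word distribution a model might assign after a given prompt
# (words and probabilities are invented for illustration).
next_word_probs = {"rule": 0.4, "decide": 0.3, "adjourn": 0.2, "recess": 0.1}

def sample_next_word(probs, temperature=1.0, rng=random):
    """Sample one word from the distribution.

    Higher temperature flattens the distribution, so less likely words
    appear more often and answers vary more between users; a very low
    temperature makes the most probable word nearly certain.
    """
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    total = sum(weights)
    weights = [w / total for w in weights]
    return rng.choices(words, weights=weights, k=1)[0]

# Two "users" asking the same question can receive different continuations,
# because each draw from the distribution is random.
rng_a, rng_b = random.Random(1), random.Random(2)
print(sample_next_word(next_word_probs, rng=rng_a))
print(sample_next_word(next_word_probs, rng=rng_b))
```

Because each response is a fresh sequence of such random draws, similarity between answers is expected but exact repetition is not.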

It’s critical to emphasize that ChatGPT lacks a conscience and intentions.

The responses produced are not creative in the traditional sense of the word: the language model is purely mathematical, and its outputs are recombinations of phrases and words it has already seen in the text it was trained on.

In addition, it lacks the ability to apply moral or ethical standards in its reasoning, since it has no canons with which to discern good from evil.

ChatGPT: Ethical conundrums

Controversies emerge in several academic disciplines as a result of this organized and unrestricted access to all forms of information.

In orthodox epistemology, for instance, the debate frames these tools as a direct threat to people’s ability to engage with books, or any other complex written argument, in a thoughtful, reflective, and in-depth manner while maintaining intellectual honesty.

The emergence of this intelligence may significantly alter instructional strategies. Perhaps the routines and outdated mnemonic rules that surround the process of teaching and learning will gradually fade away as this technology advances.

As blunt as it may sound, the way we interact with these study habits will change. Knowing how to accomplish something will matter less than knowing where to find it and how to give it computational value.

It will be essential to focus on machine learning. AI is advancing it, and teaching strategies can be adapted to show students how to use these methods for data analysis and decision-making. Because this breakthrough can produce automated conclusions, it is crucial to teach students to evaluate and challenge the logic behind AI decisions in order to foster critical thinking.

ChatGPT: Using AI in the Criminal Justice System

AI is increasingly being employed in the legal system to improve its effectiveness and accuracy. The issue is real, ranging from electronic representation in court proceedings to the automated analysis of legal documents. A large portion of constitutional procedural law is dedicated to ensuring that the rights of the most vulnerable are not infringed and that inequities between the parties to a proceeding do not grow.

The legal industry in Argentina is increasingly adopting AI technology, with a focus on chatbots like ChatGPT. These natural language systems enable lawyers to automate repetitive tasks, increase productivity, and provide better client service.

When asked if ChatGPT could be used in the Argentine legal system, the chatbot itself gave the following response:

Depending on the exact goal, implementing ChatGPT in the Argentine judiciary could serve a variety of purposes. Some examples include:

Legal Document Analysis: ChatGPT can be used to evaluate a variety of legal documents, including statutes and judgments, to assist judges in making decisions.
Legal Document Generation: ChatGPT can be used to produce automated legal documents, such as orders and judgments, efficiently and accurately.
Case Outcome Prediction: Based on patterns and connections found in the data, ChatGPT could be used to anticipate the likely outcome of a case.
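The last point, predicting an outcome from patterns in past cases, can be illustrated with a deliberately simplified sketch. The cases, features, and outcomes below are entirely invented, and the "model" is just a majority vote among the most similar past cases; real outcome-prediction systems are far more complex, but the underlying idea of generalizing from historical patterns is the same.

```python
from collections import Counter

# Invented toy dataset: (features of a past case, its outcome).
past_cases = [
    ({"has_precedent": True,  "strong_evidence": True},  "plaintiff"),
    ({"has_precedent": True,  "strong_evidence": False}, "plaintiff"),
    ({"has_precedent": False, "strong_evidence": True},  "plaintiff"),
    ({"has_precedent": False, "strong_evidence": False}, "defendant"),
    ({"has_precedent": False, "strong_evidence": False}, "defendant"),
]

def predict_outcome(new_case, history, k=3):
    """Predict by majority vote among the k most similar past cases,
    where similarity is the number of matching feature values."""
    def similarity(case):
        features, _ = case
        return sum(features[f] == new_case[f] for f in new_case)
    neighbors = sorted(history, key=similarity, reverse=True)[:k]
    votes = Counter(outcome for _, outcome in neighbors)
    return votes.most_common(1)[0][0]

# A new case resembling past plaintiff wins is predicted the same way.
print(predict_outcome({"has_precedent": True, "strong_evidence": True},
                      past_cases))
```

The sketch also makes the limitation visible: the prediction can only reflect whatever regularities, good or bad, the historical record contains.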

In general, AI can swiftly and reliably handle enormous volumes of data and documents, which can help speed up the judicial decision-making process. It can also find patterns and links that humans might miss, and it can drastically reduce costs by automating operations and processes.

In the end, we must also weigh the risks.

The chatbot’s potential for discrimination is the most apparent: it could replicate the disparities or biases present in the data used for training. This is a critical concern that could lead to flawed conclusions.
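How a system replicates bias from its training data can be shown with a toy example. The "historical decisions" below are invented, with a skew deliberately built in against one group, and the "model" is the simplest possible one: it records the most frequent outcome per group. The point is that even this faithful learning procedure reproduces the inequity, because the inequity is in the data, not in the algorithm.

```python
from collections import defaultdict

# Invented historical decisions: for otherwise identical cases, the
# record is skewed against "group_b" (the bias is built into the data).
training_data = [
    ("group_a", "released"), ("group_a", "released"),
    ("group_a", "released"), ("group_a", "detained"),
    ("group_b", "detained"), ("group_b", "detained"),
    ("group_b", "detained"), ("group_b", "released"),
]

def fit_majority_rule(data):
    """'Train' by recording the most frequent outcome for each group.

    This is the simplest possible model, and it copies whatever bias
    the historical record contains straight into its future decisions.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for group, outcome in data:
        counts[group][outcome] += 1
    return {g: max(c, key=c.get) for g, c in counts.items()}

model = fit_majority_rule(training_data)
print(model)  # the learned rule mirrors the skew in the data
```

This is why auditing training data, and the decisions derived from it, matters as much as auditing the model itself.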

The “transparency” issue is another concern to take into account: it can be difficult to understand how an AI reaches its decisions, which makes them hard to review and evaluate, in addition to the ethical considerations noted above.

Although we are surrounded by cutting-edge technology, it is still under development. It is important to note that when this AI has been asked legal questions, it has occasionally given incorrect or incomplete answers. If that were to recur, it could lead judges and/or attorneys to the wrong conclusions. Care must therefore be taken when assessing these new AI tools; human supervision remains a crucial safeguard.

We have only just begun to gauge some of the potential changes brought on by the GPT chatbot. The initial response observed in study centers in the United States (New York, Seattle) has been to block it from those institutions’ Wi-Fi networks, a naive and ineffective measure, since the tool remains accessible over mobile data.

A tentative conclusion points to case-by-case research in any scenario of interaction between user and machine, with honesty about what each one contributes and transparency about responsibilities. Teaching will become more demanding, with more engaged students in class and targeted, differentiated argumentation. The bot tends to respond in an automated, linear way; human intellect must be agile and multifaceted in order to thrive in the contemporary world.

*Lucas de Venezia is a lawyer (UCA), a law PhD candidate (UNLZ), and a professor at both undergraduate and graduate levels of education (UCES).
