Artificial intelligence (AI) has penetrated many areas of our daily lives, providing significant help to people. But what happens when AI is used in high-stakes matters such as justice? Using an algorithm to determine the terms of a child's custody is not the same as having Netflix suggest which movie to watch.
AI and automation already play an important role in the American legal system. Last week, a conference in Portland gathered legal professionals from across the country to look at how they can modernize their systems. One topic of discussion was to what extent AI should be used.
Alan Carlson, a retired judge, said we should not use something just because we can. Before adopting new technologies, it is worth asking: "Do we really need AI?"
Automation is already used in courts and can be very effective. For example, it can make the justice system more accessible to people who cannot afford to hire a lawyer.
For example, the Los Angeles Superior Court has Gina, an online assistant that helps residents handle traffic citations. Gina speaks five languages and serves more than 5,000 users a month. Gina is not exactly what we would call AI: it is scripted to process specific data. Nevertheless, it laid the groundwork for more sophisticated automation in the courts.
According to reports, Los Angeles is preparing a new AI project, the Jury Chat Bot. It is built on the Microsoft Cognitive Services platform and takes advantage of capabilities such as natural language understanding, translation services, and more.
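The difference the article draws — a scripted assistant like Gina versus an AI bot that understands natural language — can be sketched in a few lines. This is a hypothetical illustration only, not the actual implementation of either system; the keywords and canned answers are invented for the example.

```python
# Hypothetical sketch of a scripted (Gina-style) assistant: it routes
# requests with fixed keyword rules and canned answers. An NLU-based bot
# (in the spirit of the Jury Chat Bot) would instead classify free-form
# text with a trained language model rather than match keywords.

RULES = {
    "citation": "To handle a traffic citation, choose: pay, contest, or request an extension.",
    "payment": "Payments can be made online, by mail, or in person.",
    "extension": "Extensions may be requested once per citation.",
}

def scripted_reply(user_text: str) -> str:
    """Return the first canned answer whose keyword appears in the input."""
    text = user_text.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I can only help with traffic citations."

print(scripted_reply("I got a traffic citation, what now?"))
```

A system like this never generalizes beyond its script, which is why the article stops short of calling it AI; it does, however, establish the self-service workflow that a smarter bot can later take over.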
Initially, the bot will be "trained" on specific features and tasks, with more added later.
Meanwhile, other courts in the US are adopting online dispute resolution (ODR) initiatives to handle various conflicts. These systems help in cases where the two opposing sides are at a deadlock and cannot find a solution.
This sounds like a "digital judge." However, experts say that "it's not about creating a digital brain."
Judges say they don't want to use technology for its own sake; they want to see how technology can be used to help.
For it to be applied to the judicial system, there must be trust in the relevant technology. Objections have already been raised to the use of AI in determining sentences or release dates.
Implementing ODR took time. The system progressed gradually, starting as an "observer" and learning how best to resolve disputes. Its gradual development helped build a quality system that inspires users' confidence.
Conference participants said that it is relatively easy to create an AI system that delivers better results than humans. People often have biases that influence, to some extent, the decisions they make.
In a few years, when technology is an essential element of justice, the "human period" of the courts may seem to us not to have been very objective.
However, it would be ideal to combine the human and technological elements. Judges should not be replaced by AI, nor should technology be left out.
Judge Wendy Chang of the Los Angeles Superior Court expressed some reservations about automated systems.
"If an automated system makes its decision based on the information it receives, how do we train it to take other factors into account? For me, some things are very subjective; they have to do with the moment."
For example, Chang said of people appearing before her, "If they are embarrassed or scared, I will start asking them questions, and we may reach a completely different outcome."
If courts decide to incorporate AI, they should proceed methodically and take into account all the risks that may exist. First, there must be specific objectives: AI should be used only for the purpose for which it was created. For example, a tool that predicts whether someone will appear in court should not be used to predict whether someone will commit a crime.
Most judges and experts conclude that there must be a balance, and that the boundaries between technology and humans must be understood.