AI: Tool or obstacle for delivering justice?

The judicial system needs to know how algorithms work before incorporating the use of crime predictors in countries such as Mexico.

In 2013, Eric Loomis was sentenced to six years in prison and five more years of parole after criminal risk prediction software rated his profile as being at high risk of recidivism.

This happened despite the fact that he was charged with misdemeanors: trying to flee from a traffic officer and driving a car without the owner’s consent.

Three years later, the case went to the Wisconsin Supreme Court for review of the sentence.

Juliana Vivar Vera, a professor at Tec de Monterrey’s School of Social Sciences and Government, explains that the need to perfect this type of software and to train judges to use it as a support tool is what has held back its implementation in other countries.

Loomis’ defense argued that the software had violated the defendant’s “due process” by using a predictive algorithm that calls into question a person’s presumption of innocence.

Moreover, the use of a risk predictor biases a person’s sentencing without giving the defendant an opportunity to rebut the assessment.

There was something else: the methodology the software uses to predict risk is “a trade secret,” and the system is only required to report its recidivism estimates to the judge.

That lack of transparency would be sufficient to challenge the sentence, the defense argued.

Justice with artificial intelligence

The Loomis case also brought up the fact that these machines were more likely to give higher risk ratings to offenders of color than to white offenders.

These programs are built with machine learning methodology: using the data and statistics fed into the system, the predictor cross-references information with increasing accuracy and, in theory, should help produce sentences appropriate to the crime committed.

An analysis of “State v. Loomis” from Harvard Law School warns that judges should act as a bias check on an artificial intelligence tool designed, precisely, “to correct for judges’ biases.”

In an interview with TecScience, Vivar Vera explains that training the judicial system in this area is one of the considerations that needs to be taken into account when using this type of technology in countries such as Mexico.

“Judges couldn’t make a decision with the help of an algorithmic predictor that they don’t understand,” warns the author of the article “The Criminal Sentence, the Judge, and the Algorithm: Will New Technologies be Our New Judges?”

“There needs to be an understanding and an assurance from the state that judges will be trained and will be familiar with such expert machines and know that they can truly be an auxiliary or assistant,” she says.

The researcher adds that the use of this type of predictor was seen as an “aid,” but in the cases where it has been used, judges issued sentences without questioning the results of artificial intelligence, which led to errors that need to be corrected.

“(The judge) trusted the machine more than his or her own human judgment,” the professor says.


How legal informatics works

Decision prediction machines were first used in 1960 for file classification and notification.

In the 1980s, they were introduced in appellate courts in the United States, that is, in instances where a judge’s decision was being reviewed.

At the European Commission and the Inter-American Commission on Human Rights, these machines are only used to issue notifications. According to Vivar Vera, the technology is “efficient and astonishing”; however, its limitations must also be recognized.

The researcher began to review how private law predictions were made in other countries, especially in civil and commercial matters.

“I wanted to make the comparison between the characteristics of a human judge and the characteristics of a machine,” says Vivar Vera.

The platforms she refers to are Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), used in the United States in criminal matters, and the Prometea system, used in Argentina in civil matters.

Her analysis refers specifically to the implications of this type of artificial intelligence in criminal matters, where a judicial decision will determine a person’s life.

Machines that continue learning

Criminal risk predictors are statistical systems that calculate a score based on a person’s background, the incidence of crime in the population group he or she belongs to, and the sentences issued in similar cases.

These parameters have been criticized in criminal matters because they could lead to discriminatory decisions based on gender and social status, known as “algorithmic bias.”
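To make this concrete, here is a minimal, purely illustrative Python sketch of how a statistical predictor might combine the three inputs described above (background, group crime incidence, and outcomes in similar cases) into a risk label. Every feature name, weight, and threshold is invented for this example; the formula actually used by commercial tools such as COMPAS is a trade secret.

```python
# Illustrative only: a toy recidivism risk score built from the three kinds of
# inputs described above. The feature names, weights, and thresholds are
# invented; the method actually used by COMPAS is a trade secret.

def toy_risk_score(prior_offenses: int,
                   group_crime_rate: float,
                   similar_case_recidivism: float) -> str:
    """Combine the three inputs into a 0-10 score and map it to a risk label.

    group_crime_rate: offenses per 1,000 people in the defendant's demographic group.
    similar_case_recidivism: share (0.0-1.0) of similar past cases that reoffended.
    """
    score = (0.4 * min(prior_offenses, 10)
             + 0.3 * min(group_crime_rate / 10, 10)
             + 0.3 * similar_case_recidivism * 10)
    if score >= 7:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Example: one prior offense, but a high crime rate in the defendant's group
# pushes the label to "medium": the score depends on group membership,
# not only on individual conduct.
print(toy_risk_score(prior_offenses=1, group_crime_rate=85.0, similar_case_recidivism=0.35))
```

Note that the group crime-rate term ties the output to the defendant’s group rather than to individual conduct, which is precisely the kind of design choice behind the bias criticism.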

In 2016, the nonprofit news organization ProPublica published the investigation “Machine Bias,” which could also be understood as “algorithmic bias.”

In that investigation, the journalist Julia Angwin found that black defendants were much more likely than white defendants to be rated as being at high risk of recidivism.

Moreover, white offenders with proven criminal records were rated as having a low risk of recidivism, in contrast to black defendants with no prior record, whom the algorithm rated as highly dangerous.
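The kind of disparity ProPublica measured can be expressed as a difference in false positive rates: how often people who did not reoffend were nonetheless labeled high risk, broken down by group. The sketch below uses invented records (it is not ProPublica’s data or code) simply to show the calculation.

```python
# Illustrative bias audit: compare false positive rates, the share of people who
# did NOT reoffend but were still labeled "high risk", across two groups.
# The records below are invented; ProPublica's analysis used real court data
# from Broward County, Florida.

records = [
    # (group, predicted_high_risk, reoffended)
    ("A", True,  False),
    ("A", True,  False),
    ("A", False, False),
    ("A", True,  True),
    ("B", True,  False),
    ("B", False, False),
    ("B", False, False),
    ("B", False, True),
]

def false_positive_rate(group: str) -> float:
    # Among members of the group who did not reoffend, how many were flagged high risk?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))  # A: 0.67, B: 0.33 with this toy data
```

A model can look similarly accurate overall while making very different kinds of errors for different groups, which is what the “Machine Bias” investigation documented.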

What algorithmic biases are

The Tec specialist explains that, over the course of her research, she realized that the problem with criminal predictors is the information they are fed; in other words, they are given statistics from judicial systems that have themselves committed discriminatory errors.

Moreover, the cross-referencing of statistical data does not reflect one of the main characteristics of a human judge: applying the law with an understanding of the entire context in which the crime was committed.

“The assistance of risk prediction algorithms has other considerations in judicial decisions linked to criminal matters. The analysis of the evidence needs to be weighed in relation to a person’s suffering or why a crime was committed and in what context,” explains Vivar Vera.

So-called machine learning seeks to make systems increasingly intelligent so that they provide more accurate recommendations based on “layered” learning, meaning they can cross-reference information involving gender and human rights considerations that even a human judge might lose sight of.
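As a rough intuition for what “layered” learning means, the sketch below stacks two transformations so that later layers combine signals produced by earlier ones. The weights are random placeholders; nothing here resembles a real sentencing model.

```python
# Rough intuition for "layered" learning: each layer transforms the output of the
# previous one, so later layers can combine lower-level signals into higher-level
# patterns. Weights are random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, n_out: int) -> np.ndarray:
    # One fully connected layer followed by a ReLU nonlinearity.
    weights = rng.normal(size=(x.shape[0], n_out))
    return np.maximum(0, x @ weights)

features = np.array([0.2, 1.0, 0.5])   # hypothetical, already-encoded case features
hidden = layer(features, 4)            # first layer: low-level combinations of features
output = layer(hidden, 1)              # second layer: a single aggregated signal
print(float(output[0]))
```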

However, the specialist warns, these platforms are still being developed.

She cautions that deep learning is a human characteristic that has not been transferred to machines. “I don’t know if this can be achieved. I’m somewhat skeptical about it.”

Criminal risk algorithms

COMPAS is an algorithm created by the private company Northpointe that calculates the probability of someone committing another crime and suggests the type of supervision they should receive in prison.

The company has also published articles defending its program’s accuracy. Corrections were made, and it is still in use in New York, Wisconsin, California, and Florida.

“It’s not the fault of the software but of the data input processed by the courts,” the interviewee insists.

The algorithm the software is programmed with, which draws on statistics, defendant data, and a 137-question pre-interview, is known only to Northpointe. It is considered a trade secret, so only the company knows the criteria under which COMPAS operates.

Vivar Vera explains that one of the greatest risks in using this type of mechanism is that justice, which in principle should be administered by the State, is left in the hands of private companies.

“The transparency that criminal systems try to enforce is undermined by black box systems in which there’s a loss of control of the processing technique,” writes the specialist in her article, which has also been published in the Chilean Journal of Law and Technology.

Privatizing the control of violence

In her interview with TecScience, Vivar Vera elaborates on why the control of violence is the responsibility of the State.

“Control of violence is a part of judicial criminal decisions. If a machine starts making those decisions, and the judges aren’t trained and don’t understand the machine, then the state would no longer be in charge. Private companies would be in charge of violence control,” she says.

“Violence control is a very complicated and complex issue for states. If this is left up to companies, the responsibility would seem to shift and (the State’s) inability to deal with the situation would be revealed. What should be taken seriously is that this responsibility should not be left in the hands of private companies,” she says.

(With information supplied by Daniel Melchor)







