Many institutions have incorporated artificial intelligence (AI) algorithms into their decision-making. One such tool, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), has become a cornerstone of the US justice system, but it has not been without controversy due to the data used in the algorithm. The purpose of COMPAS is to assess the recidivism risk of defendants and help judges make sentencing, bail, and probation decisions. However, several analyses have revealed a worrying racial bias that could affect the impartiality of court decisions.

In 2016, an investigation by the nonprofit news organization ProPublica revealed that COMPAS tends to overestimate the recidivism risk of Black defendants while underestimating it for white defendants. The study showed that Black defendants categorized as “high risk” were nearly twice as likely as similarly categorized white defendants to not reoffend. Conversely, white defendants categorized as “low risk” were more likely to reoffend than Black defendants with the same categorization.
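
In concrete terms, the disparity ProPublica reported is a gap in error rates between groups: the false positive rate (labeled high risk but did not reoffend) and the false negative rate (labeled low risk but did reoffend). Below is a minimal sketch of that calculation; the column names and inline rows are illustrative assumptions, not the actual Broward County data ProPublica released.

```python
import pandas as pd

# Illustrative stand-in for the kind of table ProPublica analyzed;
# these rows and column names are hypothetical, not the real dataset.
df = pd.DataFrame({
    "race":       ["Black", "Black", "Black", "White", "White", "White"],
    "high_risk":  [True,    True,    False,   False,   False,   True],   # COMPAS label
    "reoffended": [False,   True,    True,    False,   True,    True],   # 2-year outcome
})

for race, group in df.groupby("race"):
    # False positive rate: labeled high risk among those who did NOT reoffend
    fpr = (group["high_risk"] & ~group["reoffended"]).sum() / (~group["reoffended"]).sum()
    # False negative rate: labeled low risk among those who DID reoffend
    fnr = (~group["high_risk"] & group["reoffended"]).sum() / group["reoffended"].sum()
    print(f"{race}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

On ProPublica’s real data, this kind of group-wise comparison is what surfaced the nearly two-to-one disparity in false positive rates described above.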

This result sparked a heated debate about the use of algorithms in the justice system. Proponents of COMPAS argued that the tool was valuable to judges because it provided a data-based assessment; ProPublica’s work, however, made it clear that the algorithm was not impartial.


Origin of racial bias

The problem of bias in COMPAS has its origins in the historical data used to train the algorithm. The system was trained on data reflecting racial discrimination and historical inequalities in the American justice system, and this inherently biased data resulted in an algorithm that perpetuates the very inequalities it was meant to help correct. Moreover, COMPAS is a closed, proprietary system: how its scores are calculated is confidential and not available to the public.

An article in the MIT Technology Review supports this view, confirming that the algorithmic bias in COMPAS is deeply rooted in the unfair data used to develop the system, a situation that has made it virtually impossible to redesign the instrument. The influence of COMPAS is not limited to the academic or theoretical sphere; its scores affect real judicial decisions. A person wrongly classified as high risk by the algorithm, for example, can expect a harsher sentence, higher bail, or even a denial of probation.
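
The mechanism described here (“bias in, bias out”) can be demonstrated with a toy experiment: when historical labels are skewed against one group, a classifier trained on them reproduces the skew even if group membership is never given as an input, because correlated proxy features carry the signal. The sketch below is a hypothetical illustration of that effect, not a reconstruction of COMPAS, whose internals are proprietary; every variable name and coefficient in it is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: a proxy feature (e.g., zip code) correlates
# with group membership, and historical labels over-police group 1.
group = rng.integers(0, 2, n)                   # group 0 or group 1
proxy = group + rng.normal(0, 0.5, n)           # proxy leaks group membership
true_risk = rng.normal(0, 1, n)                 # actual propensity, independent of group
# Historical "reoffense" labels: driven by true risk, but group 1 was
# recorded as reoffending more often due to heavier enforcement.
label = (true_risk + 0.8 * group + rng.normal(0, 1, n)) > 0.5

# Train WITHOUT the group column -- only the proxy and true risk.
X = np.column_stack([proxy, true_risk])
scores = LogisticRegression().fit(X, label).predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted risk = {scores[group == g].mean():.2f}")
# Despite identical true-risk distributions, group 1 scores higher,
# because the model learned the enforcement bias through the proxy.
```

The design point is that simply removing the protected attribute from the inputs does not remove the bias, which is why auditing a system’s outputs matters more than inspecting its feature list.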


Reforming COMPAS

The controversy surrounding COMPAS has made it clear that the use of algorithms in the justice system urgently needs to be reformed. More transparency is needed so that independent experts can review the system's decisions and suggest changes to improve its accuracy and reduce bias.

Legislators must also create clear regulations that take into account the impact of AI on court decisions. This includes creating frameworks for the continuous review, correction and evaluation of the performance of these algorithms to ensure that their decisions are fair.
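
One way such a review framework could work in practice is a recurring automated audit that recomputes group-wise error rates on recent decisions and flags the system when the gap crosses an agreed threshold. The function below is a hypothetical sketch of that idea; the record schema, the choice of false positive rate as the metric, and the 0.2 threshold are all assumptions rather than any established standard.

```python
from collections import defaultdict

def audit_disparity(records, threshold=0.2):
    """Flag the system if false positive rates diverge across groups.

    `records`: iterable of (group, predicted_high_risk, reoffended)
    tuples -- a hypothetical schema for logged decisions and outcomes.
    Returns (passed, per-group false positive rates).
    """
    fp = defaultdict(int)    # labeled high risk but did not reoffend
    neg = defaultdict(int)   # all who did not reoffend (the denominator)
    for group, high_risk, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if high_risk:
                fp[group] += 1

    rates = {g: fp[g] / n for g, n in neg.items() if n > 0}
    disparity = max(rates.values()) - min(rates.values())
    return disparity <= threshold, rates

# Example: an auditor re-runs the check on each quarter's closed cases.
passed, rates = audit_disparity([
    ("A", True, False), ("A", False, False),   # group A: FPR 0.5
    ("B", True, False), ("B", True, False),    # group B: FPR 1.0
])
print(passed, rates)   # False {'A': 0.5, 'B': 1.0} -- the 0.5 gap is flagged
```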

The COMPAS case is just one of many examples that show how the use of artificial intelligence in the judiciary can lead to biased decisions. As judicial institutions around the world seek to modernize and incorporate AI into their processes, they must proceed with caution.

Technology can improve the efficiency and accuracy of judicial decision-making. However, to be a truly fair tool, it must be free of bias and based on fair data. COMPAS has shown that without proper oversight, algorithms can perpetuate the biases they are designed to eliminate.