The impact of artificial intelligence (AI)-based technologies, such as automated decision-making, on the legal system is hotly debated. Some argue that AI would benefit the judicial system by making case law more transparent, predictable and uniform, while others argue that AI may adopt or even magnify existing biases and challenge the primacy of law by operating in a prescriptive manner, thereby creating a new kind of normativity.
In the Snow White fairy tale, the Evil Queen asks her Magic Mirror every morning: ‘Magic Mirror in my hand, who is the fairest in this land?’ Translating this question into the world of algorithms and justice, can we really expect answers to questions such as which party will win in a litigation process, which court should be chosen in a particular case, how much compensation should a particular plaintiff seek, or what is the likelihood that the accused person will reoffend?
As court databases have become easier to access electronically in a number of countries, an increasing number of companies have sought to benefit from the scientific analysis and systematisation of judicial decisions. Data – especially open data – are essential inputs for legal tech companies offering predictive justice services. In France a new legal tech marketplace called the ‘Legal Tech Store’ was recently launched, while companies such as ‘Lex Machina’ in the US offer a range of services to both lawyers and non-professionals. The algorithms developed by these companies, built on the collection and processing of millions of raw records, assist customers in a number of ways. For example, they can help customers decide whether to go to trial or engage in online dispute resolution, predict the possible outcome of a lawsuit, and choose the most appropriate court for a proceeding (forum shopping).
In certain cases courts themselves can also make use of algorithms and the automation of decision-making processes. However, it is important to bear in mind that the judge's sovereignty over any automated decision must be guaranteed.
The European Commission for the Efficiency of Justice (CEPEJ) adopted the ‘European Ethical Charter on the use of AI in judicial systems and their environment’ (Charter) at the end of 2018, in which it emphasised that AI has the potential to improve the efficiency and quality of jurisprudence. Nevertheless, according to the Charter, AI must be applied in a responsible manner that respects human rights. The Charter states that technologies based on machine learning cannot truly predict courts’ decisions, but can support certain decision-making processes.
It must be highlighted that the use of AI tools should differ depending on the area of law to which they are applied. The application of AI in the criminal justice system must be accompanied by proper safeguards, given that the sanctions imposed in criminal proceedings can severely restrict the fundamental rights of the accused. In the criminal justice system AI is used in predictive policing (the prediction of potential criminal activity in the pre-judicial phase) and to weigh the risk of recidivism when determining the length of sentences. However, the Charter expresses concern that relying too heavily on AI technologies could amplify discrimination or undermine the doctrine of the individualisation of punishment. In the field of civil and administrative justice, according to the Charter, AI can be applied in online dispute resolution (ODR) or in the creation of scales (e.g. for financial compensation or redundancy payments).
It must be borne in mind that machine learning algorithms are always based on historical data, and their quality depends on the data that is put into the system. Essential human rights guarantees – such as transparency and the right to a fair trial – must prevail over new technologies. In particular, in accordance with the principle of the equality of arms, and in order to strike the right balance between the use of new technologies and the protection of fundamental rights, parties should be provided with access to the algorithms used in their cases. Equipped with such information, parties may be in a better position to challenge the decisions before the courts.
While the fierce debates surrounding predictive justice are still ongoing, what is certain is that although AI is a helpful tool to support certain decision-making processes, judicial decisions themselves cannot be replaced by automated ones. I look forward to exploring this topic, and the regulation of AI in general, during the Internet Generation Day at ITU Telecom World this September.