In the previous two articles, you read what we mean when we speak of artificial intelligence (AI). Mike has done the hard work; in this article, it is my honour to give an example of how to apply that knowledge in a debate, based on the following motion:
This House would ban the use of statistical risk assessments for determining penalties in criminal proceedings.
The crux of this debate will likely be the characterization of the algorithm. After all, if it produces accurate and fair predictions, using it is probably a good idea. The proposition will therefore aim to show that the algorithm either does not predict correctly at all or does so in a deeply unethical way.
The proposition can explain that AI algorithms train on the data they are supplied with, as Mike explained: an algorithm trained on biased data will produce biased results. Many of these algorithms include variables such as education level, number of arrests, or gender. An individual either cannot change those factors or should not be judged on them; weighing them to predict recidivism is unfair because they often correlate with privilege.
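This point can be sketched in a few lines of Python. The toy "risk model" below simply learns per-group re-arrest rates from historical records; because group A in the invented data was over-policed, the model scores it as riskier, reproducing the bias rather than correcting it. All groups and numbers here are hypothetical.

```python
# Hypothetical illustration: a naive risk model trained on biased
# historical data reproduces that bias. All data below is invented.

# Each record: (group, was_rearrested). Group "A" was historically
# over-policed, so more re-arrests were recorded for it.
historical_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train_risk_model(records):
    """Learn per-group re-arrest rates from the supplied records."""
    counts, positives = {}, {}
    for group, rearrested in records:
        counts[group] = counts.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(rearrested)
    return {g: positives[g] / counts[g] for g in counts}

model = train_risk_model(historical_data)
print(model)  # group A is scored as far riskier, purely because of the biased data
```

Nothing in the model is malicious; the skew comes entirely from the data it was given, which is exactly the proposition's point.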
In contrast, the opposition paints a picture of what happens in place of these algorithms: the scenario in which judges make the assessment themselves. The context in which these algorithms are used is one in which societal pressure favours punishment over rehabilitation. Judges' own assessments would probably be more racist, given, among other things, the privileged background of many judges and the fact that they base their judgments on experience within an already racist legal system.
After the debate about the algorithm itself, there are arguments to be made about the "checks and balances" in each system. The proposition can explain that judges can shift responsibility onto the 'black box' of the algorithm, which makes the legal system very difficult to change because no one is accountable. The opposition can counter that accountability may actually improve, because the algorithm weighs explicitly defined variables that can be debated. If there is social criticism of the fact that, for example, education level is included, that factor can be removed from the calculation. Judges need not be defensive in this discussion, because they did not design the algorithm themselves and are therefore more open to change.
Gigi is a debater at the Leiden Debating Union. She served as the association's training and development officer in 2013-2014, and as a general board member (with a focus on training and development) of the Nederlandse Debatbond in 2019-2021.