Motherboard knows best?

If decisions in future courtrooms are made by algorithms, how do people know they are fair?

by Tebello Quotsokoane

If there’s anything that history has taught us, it is that people are not impartial. Racism, sexism and hetero-patriarchy were at one point or another codified in our institutions, organisations and communities. To quote one social activist, “prejudice seems to be in the air we breathe.” It is because of these realities that people, especially marginalised people, remain critical of institutions and structures. We know that human decisions are affected by implicit and explicit biases, not just in history, but even today.  

Studies have shown over and over again that humans make judgements based on faulty intuition. People are dissuaded from voicing dissent for fear of upsetting group cohesion; the Challenger disaster is a well-known example of the poor decisions that such groupthink can produce. Human judgement is also plagued by stereotyping and prejudice. The fields of evidence-based decision making, behavioural economics and behavioural psychology were born as a direct answer to these faults of human judgement.

The rise of algorithms may seem to offer a compelling way to avoid the suboptimal decisions that humans sometimes make. Algorithmic decision-making tools use large amounts of data to draw conclusions and inform recommendations. In 2010, Andrew McAfee heralded these tools as the future of decision making in a Harvard Business Review article. Applications of algorithmic decision-making in radiology and predictive medicine are already bringing better patient outcomes. Algorithms have also gained wide traction in marketing, policing and fraud detection, with mixed results.

One might expect, then, that algorithms would be seen as more objective arbiters in disputes, or more impartial recruiters for jobs. But they are not. Researchers have observed that people prefer to rely on human judgement, even when informed about the shortcomings of human decision making. These observations will be critical for governments as the public sector looks to use artificial intelligence (AI) to improve public service delivery. People are not particularly alarmed by the application of technology in the form of online platforms like TurboTax or dashboards. The problem usually arises when algorithms are used to inform judgements that have a significant impact on citizens' lives, such as sentencing criminals or deciding who should receive welfare payments. Machines are becoming increasingly intelligent and capable of making informed and often accurate judgements, but it would appear that humans still trust fellow humans for some realms of decision-making.

We may not always understand the inner thought processes of a human, but when a person makes an error we can ask them why, and perhaps even challenge the decision in a formal appeal process. Machines, by contrast, make decisions in ways that are often hidden from both government workers and ordinary citizens. This opacity can create mistrust in the outcomes of algorithmic decision-making. Human involvement in decision-making can feel fairer, because people are given the right to participate in the process and to understand the basis on which a decision is made.

Governments should think about how to ensure that developers of algorithmic systems remain open about how these tools operate, and that developers are held accountable for the outcomes of their algorithmically informed decisions. This is especially true where an AI system is so complex that even those with full visibility into it cannot explain its failures and successes. Whether and how governments address these questions is likely to determine how successful AI implementation in government will be, and whether it is perceived as legitimate by the citizens it affects.

Tebello Quotsokoane worked as a research intern at Oxford Insights in summer 2017.