What were you thinking?

| August 26, 2018

Imagine a person on trial for a murder they did not commit.

During sentencing, the judge invites an expert witness to advise on the defendant’s likelihood of re-offending. The expert claims to be 90 per cent certain the defendant will re-offend and is an ongoing threat to the community.

But the expert refuses to explain their reasoning due to intellectual property concerns. The judge accepts the expert’s argument and the defendant is jailed for life.

Case closed.

This scenario is not too far removed from what is already unfolding in US courtrooms. In 2013, Eric Loomis of La Crosse, Wisconsin, was accused of driving a car that had been used in a shooting. Loomis pleaded guilty to evading an officer and no contest to operating a vehicle without the owner’s consent.

He was sentenced to six years in prison, a decision influenced by his score on the COMPAS scale: the Correctional Offender Management Profiling for Alternative Sanctions.

COMPAS is a machine-learning algorithm that estimates the likelihood of someone committing a crime, based on information about their past conduct. COMPAS considered Loomis a risk – but we don’t know why.

No comment

The algorithms that generate COMPAS decisions are considered intellectual property (IP) and so are not disclosed in court. Defendants and their lawyers cannot determine precisely why a judgement is made, and so cannot argue its validity.

Several studies have raised concerns that COMPAS assessments correlate race with the risk of repeat criminal behaviour, indicating a higher likelihood of re-offending amongst African-American defendants.

In Australia, New South Wales police use a similar system to proactively identify and target potential suspects before they commit a serious crime. In Pennsylvania in the US, another application aims to predict the likelihood of a child being neglected by its parents.

We’re now using Artificial Intelligence (AI) to predict people’s future behaviour in the real world.

Blind trust

A University of Melbourne research project, Biometric Mirror, illustrates how AI data could be captured and provided to third parties, like employers, government or social circles, irrespective of its flaws.

Biometric Mirror takes a photo of your face and holds up a foggy mirror to how society sees you. It rates a person’s psychometrics, such as perceived level of aggression, emotional stability and responsibility, and physical attributes, like age and attractiveness.

The mirror offers no explanation for these ratings – it’s simply based on how others judged a public data set of photos and how closely your face resembles those photos.
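
For illustration, the similarity-based rating described above can be sketched in a few lines: a new face simply inherits the averaged crowd ratings of the photos it most resembles. This is a hypothetical reconstruction, not the project’s actual code; the embeddings, trait list and dimensions below are all assumptions.

```python
import numpy as np

# Hypothetical sketch of the similarity-based rating described above: photos in
# a public data set have been rated by crowd workers, and a new face inherits
# the averaged ratings of the photos it most resembles. The embedding size,
# traits and data are invented for illustration only.

def rate_face(face_embedding, dataset_embeddings, crowd_ratings, k=5):
    """Average the crowd ratings of the k dataset photos closest to this face."""
    distances = np.linalg.norm(dataset_embeddings - face_embedding, axis=1)
    nearest = np.argsort(distances)[:k]          # indices of the k most similar photos
    return crowd_ratings[nearest].mean(axis=0)   # one score per perceived trait

# Toy data: 100 photos, 128-dimensional face embeddings, 3 perceived traits
rng = np.random.default_rng(0)
dataset_embeddings = rng.normal(size=(100, 128))
crowd_ratings = rng.uniform(1, 7, size=(100, 3))  # e.g. aggression, stability, responsibility

print(rate_face(rng.normal(size=128), dataset_embeddings, crowd_ratings))
```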

And it highlights the potentially disastrous consequences of placing blind trust in AI.

Answering ‘why?’

As AI becomes increasingly ubiquitous, understanding why its decisions are made, and whether those decisions are correct, fair and transparent, becomes more urgent.

Explainable AI (XAI) aims to explain why algorithms make the decisions they do, so that the reasoning can be understood by humans. Society needs to trust these decisions, and the legitimacy of the actions that arise from them.

The EU’s recently announced General Data Protection Regulation appears to support this, and has been interpreted by researchers as mandating a right to explanation for algorithmic decisions.

Achieving this faces complex hurdles, including understandability, causality and the creation of human-centred XAI.

Understandability

XAI is currently driven by experts wanting to understand their own systems – it’s not motivated by trust or ethical dilemmas.

In the 1980s and 1990s, most AI applications encoded expert knowledge, such as how to perform a medical diagnosis from symptoms or advise on mortgage loan suitability, in a format understandable to other experts.

The researcher could provide a reasonable prediction about the output and trace the reasoning – like a software engineer debugging a standard computer program.

However, during the past five to six years, deep neural networks have dominated AI. These networks are so complex that, given a specific input and output, even an AI expert could not explain why an output occurred. So AI practitioners are interested in explainability to help them understand their own systems.

Causality

Causation indicates an event was the result of another event. Correlation is a statistical measure that describes the size and direction of a relationship between two or more variables.

For example, smoking is correlated with, but does not cause, alcoholism.

In AI, correlations can help forecast future purchasing behaviour or the likelihood of committing a crime, but they do not explain causes and their effects. That makes it hard for us to make sense of the outcomes, because our brains are hard-wired to understand cause and effect.
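
A small simulation makes the distinction concrete: two variables driven by a hidden common cause end up strongly correlated even though neither causes the other. The variable names and coefficients below are purely illustrative.

```python
import numpy as np

# Minimal simulation of correlation without causation: a hidden common cause
# ("stress" here, purely illustrative) drives both observed variables, so they
# look strongly correlated even though neither causes the other.

rng = np.random.default_rng(42)
n = 10_000

stress = rng.normal(size=n)                         # unobserved confounder
smoking = 0.7 * stress + rng.normal(scale=0.7, size=n)
drinking = 0.7 * stress + rng.normal(scale=0.7, size=n)

r = np.corrcoef(smoking, drinking)[0, 1]            # Pearson correlation
print(f"correlation(smoking, drinking) = {r:.2f}")  # about 0.5, with no causal link between them
```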

Human-centred explainable AI

Most Explainable AI research looks at how to extract causes of decisions from specific models to provide insights for experts in artificial intelligence. This doesn’t help people understand why they are going to prison or why Biometric Mirror called them irresponsible or unattractive.
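
As a rough illustration of what that expert-facing insight typically looks like, the sketch below trains a simple model on invented data and prints its feature weights: the sort of output a researcher might read as an ‘explanation’, but which answers little for anyone else. The features, data and use of scikit-learn are assumptions made for this example; no real risk-assessment tool is reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of a typical expert-facing "explanation": train a model on invented
# data, then print its feature weights. Features, data and library choice are
# assumptions for illustration; no real risk-assessment tool is shown.

rng = np.random.default_rng(1)
feature_names = ["prior_offences", "age", "employment_years"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.2 * rng.normal(size=500) > 0).astype(int)  # outcome driven mostly by feature 0

model = LogisticRegression().fit(X, y)

# The kind of output an AI researcher reads as insight into the model
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:>18}: {weight:+.2f}")
# A table of weights says which features the model leans on, but it does not
# answer the question a defendant actually asks: "why me?"
```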

In a human exchange, a highly-skilled communicator will infer exactly the point of misunderstanding using the context of the conversation, the structure of the question, the mental model of the person and their level of expertise, and social conventions. They then come to a shared understanding of the reasons via a dialogue.

Most Explainable AI research neglects this. It leaves AI researchers to determine what counts as a ‘good’ explanation, but experts cannot judge what makes a good explanation for a non-expert.

Our research into Explainable AI takes a human-centred approach. This collaborative research involves computer science, cognitive science, social psychology and human-computer interaction, and treats Explainable AI as an interaction between person and machine.

Truly Explainable AI integrates technical and human challenges. We need a sophisticated understanding of what constitutes Explainable AI: one not limited to articulating the software’s processes, but one that also considers the people on the receiving end of its explanations.

XAI is not a panacea for all ethical concerns regarding AI. But explanatory systems that we can understand and interact with naturally are a minimum requirement for establishing widespread confidence in these algorithmic decisions.

Then maybe jailers in the US will be able to provide answers to the likes of unfortunate Eric Loomis.

This article was written by Dr Mor Vered and Associate Professor Tim Miller of the University of Melbourne. It was published by Pursuit.

One Comment

Alan Stevenson | August 26, 2018 at 11:11 am

    This subject has been raised before by myself, Max Thomas and others, from the point of view of both software and hardware. While I was with Honeywell, we had a problem with one of our clients’ computers, which appeared to be making a mistake. Analysis of the machine showed that we did not really know how it operated, since it had been designed and basically built by other computers.

    This was over twenty years ago. When the world champion was beaten at ‘Go’ a couple of years ago, at least one of the moves was described as one no human would have thought of. It is difficult to understand how, if at all, a computer can be taught ethics or morality. In order to teach either of these concepts, each must be defined logically and I do not believe this is possible – they are human characteristics.

    Our reliance on AI is increasing daily as is the prospect of immoral or corrupt interference by individuals, groups or even countries as shown in recent elections.

    Many intelligent people subscribe to this forum, but to be a forum, we should expect intelligent responses to the articles presented. I believe that this AI problem should be explored far more deeply and publicly than appears to be the case. Surely there are some readers, more up to date with technology than myself who can make a contribution here?