The Pursuit of Reasons in the Age of Algorithmic Authority

August 3, 2017

Today, an ever-increasing number of autonomous devices and algorithmic models, from personal A.I. assistants and chatbots to smart appliances that learn our preferences, make decisions that affect us on a day-to-day basis. Yet many offer little or no transparency, and their decision-making processes are shielded from scrutiny.


Pasquale has argued that we are entering what he terms a “black box society”: a culture that largely accepts being in the dark (and thus blind) about crucial decisions that have direct consequences for us. [i] Algorithmic models are opaque in the sense that the recipient of a model’s output rarely has any understanding of why or how the model reached its evaluation or decision. When people are singled out by such opaque systems, for instance when an algorithmic model scores someone as an “unreliable” worker and he or she is dismissed as a result, it is often difficult or impossible to object to the decision. The worker has no right to see how the model reached its conclusion, and no information about the model’s validity. In some cases, the aggrieved worker may not even know that an algorithmic model was used to make the decision at all.

Secrecy prevails to some degree in all large institutions, but opacity seems to lie at the heart of algorithmic models. Burrell suggests three distinct forms of opacity in algorithms. [ii] First, there may be intentional concealment by the users of algorithmic models. Algorithmic models are often regarded as intellectual property: in the case of web giants like Google, Amazon or Facebook, their precisely tailored algorithms alone are worth hundreds of billions. [iii] To maintain their competitive advantage, it is not surprising that most model-users set up safeguards to keep their algorithms secret and away from public scrutiny. Algorithmic models are, by design, mystical black boxes.

Secondly, there are gaps in technical literacy across society. For laypeople, simply having access to the underlying code of an algorithmic model is not enough to understand the logic behind it. To comprehend the code, one must know how to program and then read hundreds, if not thousands, of pages of that specific code. Furthermore, some model-users build their algorithms (the Google search engine, for example) by engaging different teams of programmers, producing layers of code that even ‘insider’ programmers may not fully understand. [iv]

Thirdly, algorithmic models rely on Big Data analysis. Unlike traditional computer code, which we can access and inspect, models trained on Big Data are often too intricate to understand. Burrell suggests there is a problem of ‘interpretability’, which she defines as “a mismatch between the mathematical optimisation in high-dimensionality characteristic of machine learning and the demands of human-scale reasoning and styles of interpretation”. Even computer scientists and those with specialised training can neither fully understand and interpret machine learning algorithms [v] nor trace and comprehend the basis for their decisions. [vi] This is because, when an algorithmic model is built, it constructs its own representations and classifications without regard to human comprehensibility. [vii] As Lisboa notes, “machine learning approaches are alone in the spectrum in their lack of interpretability.” [viii]
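To make this interpretability problem concrete, consider the minimal, hypothetical sketch below. It is not drawn from any system discussed in this article: it simply trains a standard random forest (assuming Python with numpy and scikit-learn installed) on invented “worker” data, and the framing of its output as a reliability score is purely illustrative. The point is that even with complete access to the model’s internals, what one finds is thousands of numerical split thresholds rather than human-scale reasons.

```python
# A minimal sketch of Burrell's interpretability mismatch.
# Assumes numpy and scikit-learn; the data and the "unreliable worker"
# framing are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                      # 1,000 synthetic "workers", 20 features
y = (X @ rng.normal(size=20) + rng.normal(size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100).fit(X, y)

worker = X[:1]
print("Score:", model.predict(worker)[0])            # e.g. 0 = "unreliable"

# Full access to the model reveals only its internal representation:
# thousands of decision nodes spread across 100 trees -- technically
# transparent, but not a reason a person could follow or contest.
n_nodes = sum(est.tree_.node_count for est in model.estimators_)
print("Internal decision nodes:", n_nodes)
```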

For these reasons, algorithmic models lack transparency, and individuals cannot be guaranteed any reliability, accountability, fairness or legality. Opacity can create a feeling of unfairness even when the model-users have in fact done nothing wrong. For example, if you paid double the price that the person next to you paid for the same flight, you might find it unreasonable. But if it were explained that your neighbour’s fare was reduced because he or she is a student, you might see it differently. Transparency matters.
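Continuing the flight-ticket example, the short sketch below (with hypothetical names and a made-up discount rule, not any airline’s actual pricing logic) shows how little it takes for a system to return a reason alongside its decision; it is the absence of such a reason, rather than the price difference itself, that breeds the sense of unfairness.

```python
# Hypothetical pricing sketch: the decision is returned together with
# a human-readable reason. Names and discount rules are illustrative.
def quote_fare(base_fare, is_student):
    """Return a fare and the reason behind it."""
    if is_student:
        return base_fare * 0.5, "student discount applied (50% off the base fare)"
    return base_fare, "standard fare (no discount applies)"

for passenger, student in [("you", False), ("the person next to you", True)]:
    fare, reason = quote_fare(400.0, student)
    print(f"{passenger}: ${fare:.0f} -- {reason}")
```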

[i] Pasquale, F., 2015, ‘The Black Box Society: The Secret Algorithms That Control Money and Information’, p. 4

[ii] Burrell, J., 2016, ‘How the machine “thinks”: Understanding opacity in machine learning algorithms’

[iii] O’Neil, C., 2016, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’, p. 29

[iv] Sandvig, C., et al., 2014, ‘Auditing algorithms: Research methods for detecting discrimination on internet platforms’

[v] Id 1 at 10-13

[vi] Cukier, K. and Mayer-Schönberger, V., 2013, ‘Big Data: A Revolution That Will Transform How We Live, Work, and Think’, pp. 11-14, at p. 178

[vii] Id 1 at 13

[viii] Lisboa, P.J.G., 2013, ‘Interpretability in Machine Learning Principles and Practice’, pp. 15-21


One Comment

  1. Alan Douglas

    August 27, 2017 at 10:29 am

    In the mid ’70s IBM produced a little device they called an “Executive Decision Maker”: a battery-operated oblong box with two lights and a switch. When turned on, the lights flickered rapidly, one then the other. The idea was to frame a decision in two parts (essentially Yes or No), then hit the switch again. The concept was meant to push the idea that it did not matter what decision an executive made, but that, once made, it had to be carried through. Research had found that the executives who failed were those who changed their minds after making a decision.
    Recent research has found that our brains tend to come to the correct decision most of the time, but that these decisions are often overridden by our consciousness. If we could only accept our ‘gut feelings’ we would lead better lives.
    Trying to write a program to make these decisions for us cannot succeed because no program could possibly have access to all the available data, however insignificant it appears to be. Our brains are very complex and error-prone but are still more capable than any computer created so far.
    Tests carried out by Cambridge University some time back showed that people who played the stock market generally had more success when they had less information than when they had access to as much data and commentary as they wanted.