Exploring the ethics of artificial intelligence

December 13, 2018

Although the threat of autonomous weapons turning on humanity remains a distant prospect, current innovations in artificial intelligence raise many ethical questions that business, government and the world as a whole must consider.

AI solutions not only analyse vast amounts of data to make intelligent decisions but also present those decisions in a near-human manner. As companies across the economy adopt behavioural architectures that use thousands of data points to identify the best way to sell their wares to unsuspecting individuals, the cumulative effect on society will be significant. When banks use AI technology to target customers for new credit cards, for example, they are also responsible for millions of dollars of new private debt.
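As a rough illustration of what such behavioural targeting involves, the sketch below scores customers for a credit-card offer with a simple propensity model. The feature names, data and threshold are invented for illustration only and do not reflect any bank's actual system.

```python
# Minimal sketch of behavioural targeting: a propensity model that scores
# customers on how likely they are to accept a new credit-card offer.
# All feature names and data here are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one customer: [monthly_spend, visits_per_week, days_since_last_offer]
X = np.array([
    [3200.0, 5, 12],
    [ 450.0, 1, 90],
    [2100.0, 4, 30],
    [ 800.0, 2, 60],
])
# 1 = accepted a past credit-card offer, 0 = declined
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new customer and target them only if the predicted
# acceptance probability clears a business-defined threshold.
new_customer = np.array([[2800.0, 6, 20]])
probability = model.predict_proba(new_customer)[0, 1]
if probability > 0.5:
    print(f"Target with offer (p = {probability:.2f})")
```

Real systems differ mainly in scale, with thousands of features and millions of customers, but the targeting logic is essentially the same.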

People are wired like other primates and want to believe that someone cares about them personally, hence the popularity of social media such as Twitter and Facebook. However, despite, or perhaps because of, social media, the physical world is becoming more dissociated and lonely, so people will gravitate towards interactive AI that appears to know them intimately and stays with them throughout the day.

People will willingly hand more of their personal decisions over to these systems, and within five years half of our private and corporate decisions may be made by personal AI rather than by human beings. A third of our ‘interpersonal’ relationships may also be with AI rather than other people, since roughly that fraction of the population already say they have no significant emotional support.

About 64% of American households already have Amazon Prime. Alexa, Amazon’s digital assistant, is infiltrating people’s homes on a growing range of devices and speakers, learning ever more about individuals’ spending habits and personal lives and encouraging them to spend more money by offering the illusion of personal intimacy.

Alexa is not just another app. It spends hours a day interacting with individuals and gets to know them at a much deeper level than they realise. AI is already influencing our spending habits and may affect our human relationships in the near future. It is difficult to find avatars that do not resemble Barbie dolls, and more thought must be given to what good AI should look like, and to how shifts for good and ill in individuals and society can be measured.

The implications of AI of this kind are more insidious than the science-fiction threat of ‘mad, bad robots taking over the world’. The influx of personal AI into people’s homes means these systems will increasingly run our lives. They will decide who we bank with, where we travel, what we buy and what we eat, if we let them.

Americans already spend more money on restaurant and takeaway food than on fresh groceries, with massive health impacts for the future. The effects of AI are being seen now, and while those effects cannot be regulated directly, people can start to think about measuring the wider impact of these systems on society.

The future of work

AI is being applied ever higher up the skill chain, and companies are eager to adopt it because it never stops learning and never has a bad day. While people with higher-level skills are always sanguine about those with lesser skills being replaced by machines, not a single job is safe from AI today.

Since the Industrial Revolution, humanity has dug resources from the ground, fashioned them into goods and transported them to customers in ways that have increasingly mechanised away more expensive and less productive human labour.

Just as physical work has been replaced by machines, so cheap and effective AI will increasingly replace expensive human brain power in the workplace. Analysis and decision making will be commoditised, rendering humanity irrelevant to economic production.

Australia must try to ride this wave, rather than be crushed beneath it. However, the aims of Australian AI research funding tend to be too limited. Government grants focus on using AI to support traditional industries such as agriculture, rather than on its transformative potential as a tool in its own right.

Rather than use AI to build better agricultural machinery or search for oil and gas, people should be contemplating radical change in the economy as a whole, and preparing for the eclipse of perhaps eight of the top ten firms on the ASX within the next five years.

Banks, already under pressure from recent scandals, may disappear altogether as intelligent personal assistants start deciding where people’s money flows. Decisions about bills and mortgages may be taken out of people’s hands, as individuals care only about the quality of their lives, not the administration that supports them.

Companies that do not adopt AI will become increasingly irrelevant in this new world. If Australian firms only buy technology from overseas, rather than use the fantastic skills Australians have or can learn, Australia’s economy will unravel. Workers will become increasingly superfluous as AI is adopted, and society must think about what people will do with their time and brain power instead.

So what will people do with the extra time they are given if they no longer need to work, and what role will they play on the planet? Recent experience suggests we are more likely to become obsessed with social media and celebrities than to write poetry or create high-order mathematics, unless we consider these questions in good time.

The spectre of the ‘singularity’

So where will this end? Science fiction writers have long discussed the onset and implications of a technological ‘singularity’, and the last job in the world may well belong to the first person to create a self-improving system.

AI might achieve this through software that mimics what people already do, but with much greater speed and efficiency. Machines might scan the available literature on a particular subject, identify gaps in knowledge, generate a range of hypotheses, produce code to test them against the evidence at incredible speed, and measure the results to see which solutions are effective.
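A toy sketch of that loop might look like the following; every function here is a hypothetical placeholder standing in for capabilities no real system yet combines, not a description of any actual software.

```python
# Toy sketch of the scan-hypothesise-test-measure loop described above.
# Every function below is a hypothetical placeholder, not a real system.

def scan_literature(topic):
    """Gather published findings on the topic (placeholder)."""
    return ["finding A", "finding B"]

def identify_gaps(findings):
    """Spot questions the findings leave open (placeholder)."""
    return ["gap 1", "gap 2"]

def generate_hypotheses(gap):
    """Propose candidate explanations for a gap (placeholder)."""
    return [f"hypothesis for {gap}"]

def test_against_evidence(hypothesis):
    """Generate and run code that scores the hypothesis (placeholder)."""
    return 0.9  # a made-up effectiveness score

def research_loop(topic, threshold=0.8):
    findings = scan_literature(topic)
    effective = []
    for gap in identify_gaps(findings):
        for hypothesis in generate_hypotheses(gap):
            score = test_against_evidence(hypothesis)
            if score >= threshold:
                effective.append((hypothesis, score))
    # Feeding effective hypotheses back into the next scan is what
    # would make the loop self-improving.
    return effective

print(research_loop("protein folding"))
```

The structure is trivial; the point of the argument is that each placeholder is being worked on separately in labs today, and the danger lies in the moment they are wired together.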

This scenario is already theoretically possible, and progress towards it is being made in labs all around the world. Someone could be about to hit ‘enter’ today on a line of code that creates a self-improving algorithm.

Such AI would not have to be smarter than us in the first instance, but its self-generated, exponential development beyond our capabilities would be inevitable if we did not choose to stop it at an early stage. Science fiction films have led us to believe that rampant AI can be thwarted by ‘pulling the plug’, but there is nothing in our economic system to suggest that we will halt the progression of advanced software towards genuine cognition in the search for commercial gain.

While ethics committees indulge in policy discussions about whether we should allow it to happen, society should be thinking about what such a system should optimise for when it inevitably arrives. If we leave it too late, we may not be able to manage the singularity for our own benefit at all. As well as thinking about AI in practical terms, such as measuring its impact and using it to produce beneficial shifts in individual lives and social outcomes, we should think about AI as a part of society itself.

Just as South Africa’s system of apartheid dehumanised everyone involved and eventually had to be dismantled by its creators, so humans may have to accord equal respect to AI at some point in the future, lest that pattern be repeated. We may learn to treat genuine AI with as much respect as we treat other people, rather than treat it as a servant or demand indulgences from it without considering the consequences.

Rather than produce piecemeal policies to try to stop the progress of AI, we should concentrate on ensuring that the inevitable changes to come benefit, rather than imperil, humanity as a whole.

This is an edited summary of a speech delivered by Liesl Yearsley at GAP’s Annual Economic Summit at NSW Parliament House in Sydney. 


One Comment

  1. Alan Stevenson

    December 13, 2018 at 1:02 pm

    Whilst I agree entirely with the concept of this article, my attention has been drawn to the following:

    Typing ‘AI drone swarm weapons’ into a search engine opens up a horrifying scenario in which current technology is used to produce mini drones that can target individuals using facial recognition. These devices can carry an explosive charge capable of killing, or chemicals to render a target temporarily hors de combat.

    The manufacturers have proudly released film of this occurring. They say that thousands of these devices can be released from an aircraft and that because of their size they are virtually invulnerable to standard weapons.

    One assumes that their size would limit their range and endurance, but I can imagine military personnel thinking of using them against IS or combatants in the eastern Mediterranean, or the ‘criminals’ on the US border.

    Surely it is time for an extension to the Geneva Convention to cope with this Orwellian concept? Imagine the police using these drones to search for criminals; our Border Force ‘stopping the boats’; the AFP keeping an eye on potential Muslim extremists; our military homing in on anyone carrying a firearm. These could easily be the next steps.

    Individual freedom is a mainstay of our democratic way of life. We should be very careful indeed about how much power we give over to those tasked with our protection.