The law, ethics and the development of artificial intelligence

May 27, 2015

Serious concerns about artificial intelligence have been voiced lately. David Coleman says there are problems which need to be overcome before AI machines can be ethical and legal actors in society.

In January, an open letter was published asking the scientific community for “expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial … our AI systems must do what we want them to do.”

Artificial intelligence researchers at Google, Facebook and Microsoft, as well as Bill Gates and Elon Musk, all signed the letter. The first person to sign was Stuart Jonathan Russell, a professor at the University of California, Berkeley, well known for his contributions to the field of artificial intelligence.

Artificial intelligence is reaching a level of sophistication that now allows it to play computer games more effectively than human beings, even when given only the inputs a human player would receive and knowledge of the game’s goal. It is some time now since IBM’s Deep Blue beat the world chess champion Garry Kasparov in their 1997 match. IBM’s Watson is now capable of beating human players at Jeopardy! even without a connection to the internet.

One of the great areas of promise for artificial intelligence is optimisation. Artificially intelligent systems are likely to be far superior to human beings at optimising systems for data acquisition and processing, and at finding the most efficient solutions to problems given certain parameters.
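As a toy illustration of what ‘finding the most efficient solution given certain parameters’ means in practice, the sketch below hands a general-purpose numerical optimiser a hypothetical cost function for a data-processing pipeline. The cost model and all of its numbers are invented purely for illustration.

```python
# A toy illustration of machine optimisation: choose the batch size
# that minimises a hypothetical processing-cost function. The cost
# model and all numbers here are invented for illustration.
from scipy.optimize import minimize_scalar

def processing_cost(batch_size):
    # Hypothetical trade-off: bigger batches amortise fixed overhead
    # but add latency per item.
    fixed_overhead = 100.0 / batch_size
    latency_penalty = 0.5 * batch_size
    return fixed_overhead + latency_penalty

result = minimize_scalar(processing_cost, bounds=(1, 1000), method="bounded")
print(f"best batch size: {result.x:.1f}, cost: {result.fun:.2f}")
```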

In contemporary research into artificial intelligence, there are a few problems which will need to be overcome before artificially intelligent machines can become ethical (and legal) actors in society.

The first is hierarchical decision making. Computers currently have great difficulty with abstraction. Humans group millions of smaller decisions into larger abstract concepts. For instance, if we want to get food, we have to take a long chain of small actions to achieve that goal: get up, take one step, take another step, get in the car, drive to the supermarket, and so on. Computers at the moment have difficulty achieving goals like this because it is impossible to pre-program all of the possible actions that might be needed to solve a problem, and algorithms for this type of decision making are still under development.
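A minimal sketch of the idea, assuming a hand-written task library (the tasks and their decompositions below are invented): abstract goals expand recursively into primitive actions. It also shows why pre-programming does not scale, since every abstract task must be spelled out by hand.

```python
# A minimal sketch of hierarchical task decomposition. The task
# library below is hand-written and invented: this is exactly the
# kind of pre-programming that does not scale to real-world problems.
TASK_LIBRARY = {
    "get food": ["leave house", "drive to supermarket", "buy groceries"],
    "leave house": ["get up", "walk to door", "open door"],
    "drive to supermarket": ["get in car", "start engine", "drive"],
}

def decompose(task):
    """Recursively expand an abstract task into primitive actions."""
    if task not in TASK_LIBRARY:   # no entry: treat as a primitive action
        return [task]
    actions = []
    for subtask in TASK_LIBRARY[task]:
        actions.extend(decompose(subtask))
    return actions

print(decompose("get food"))
# ['get up', 'walk to door', 'open door', 'get in car', 'start engine',
#  'drive', 'buy groceries']
```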

Computers also have problems grasping the context of their decisions. The best-known illustration is the ‘paper clip problem’: give an artificially intelligent system the sole goal of making paper clips, and it might eventually optimise itself to the point of taking over the whole world and filling it with paper clips, then perhaps colonising other planets to extend its production capacity. Computers are already very good (better than humans, in fact) at following instructions. But such a system would simply not care about its impact on people or the environment; it would go on improving its paper clip production forever without regard for the consequences. Again, this problem is under active research, and it is probably solvable.
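The same point rendered as toy code: the objective handed to the optimiser counts only paper clips, so the plan that consumes every resource wins by definition. All of the plans and quantities below are invented.

```python
# A toy rendering of the 'paper clip problem'. The objective counts
# only paper clips, so the plan that consumes every resource wins.
# All plans and quantities are invented.
def objective(plan):
    # Note what is missing: no term for people or the environment.
    return plan["clips"]

candidate_plans = [
    {"name": "modest factory", "clips": 10**6, "planet_consumed": 0.001},
    {"name": "convert the planet", "clips": 10**12, "planet_consumed": 1.0},
]

best = max(candidate_plans, key=objective)
print(best["name"])  # 'convert the planet' -- the objective is
                     # indifferent to the side effects
```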

ETHICAL AND LEGAL DECISION MAKING IN ARTIFICIAL INTELLIGENCE

Perhaps the biggest challenge in artificial intelligence is the problem of ethical reasoning. Artificial intelligence researchers are beginning to turn their minds to the question of whether computers can be programmed to act ethically and legally in society.

Automated military systems like unmanned aerial vehicles and autonomous ground vehicles are already capable of prosecuting military campaigns independently at the command of soldiers half a world away. At this stage, however, military systems are not sufficiently ‘intelligent’ to autonomously distinguish between enemy and civilian, to tell a surrendering soldier from an active combatant, or a hostage from a terrorist. It may be only a matter of time before they exceed human capabilities in this regard. Even in this arena, it is possible that artificial intelligence could achieve greater rates of success than human operators in correctly identifying who to shoot and who to rescue.

At the moment, computers are seen as servants of their owners, whether the owner is an individual, a corporation or a cooperative. When a computer does something illegal or unethical, it is assumed that it was commanded to do so by a human being or an organisation of human beings. Computer hacking and cyber crime have been accepted legal concepts for some time now. However, if a computer is capable of its own ethical and legal reasoning, we will presumably enter an era in which the actions of computers can be attributed to their programmers only indirectly. The concept of a computer as a servant of human beings is likely to be incorporated into the programming of artificially intelligent systems. But this raises the question of whom the computer is designed to serve, and to what extent it will neglect the interests of other human beings in order to serve the interests of its master.

The next question is whether it is possible to program a computer with a universal system of ethical reasoning that allows it to act ethically and legally in a human society.

One of the major problems with this is that human beings often disagree on what universally acceptable values are. Even a human being with a tertiary education in law or philosophy will often be confounded by the problem of how to make ethical decisions correctly. Indeed, the legal system is in a sense a society-wide method for resolving disputes between human beings about the correct application of human values. Perhaps the only expression of a universal set of human values we have is the ‘International Bill of Rights’: the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights. But even that is limited in its acceptance as a universal expression of human values.

In the ‘real’ world, the highest forms of expression of values are law and philosophy, and there are hierarchical systems of legal rules that govern how one determines the correct legal answer to a particular ethical problem. It does not appear impossible to program computers with the same system of ethical reasoning, although rigorous testing of such systems would obviously be necessary before they could be deployed.
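To make the idea of hierarchical legal rules concrete, here is a minimal sketch in which rules are ranked by the authority of their source and the highest-ranked applicable rule prevails. The ranking and the rules themselves are invented for illustration; real legal reasoning is vastly more subtle.

```python
# A minimal sketch of hierarchical legal rules: the verdict of the
# highest-ranked applicable rule prevails. The ranking and the rules
# are invented for illustration.
RULE_RANK = {"constitution": 3, "statute": 2, "regulation": 1}

def resolve(applicable_rules):
    """Return the verdict of the rule from the most authoritative source."""
    top = max(applicable_rules, key=lambda rule: RULE_RANK[rule["source"]])
    return top["verdict"]

applicable_rules = [
    {"source": "regulation", "verdict": "permitted"},
    {"source": "statute", "verdict": "prohibited"},
]
print(resolve(applicable_rules))  # 'prohibited': the statute outranks
                                  # the regulation
```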

At this stage, artificial intelligence is limited to being an assistant to ethical decision makers. An interesting example is the marriage of the computer science disciplines of ‘big data’ and artificial intelligence. Applied to an enormous data set of cases, this allows precedents for particular factual scenarios to be identified more accurately, giving an ethical decision maker access to more information about decisions made in the past in similar situations. Another example is IBM’s Watson, which is now being applied to a medical assistance program that helps doctors find the records of cases similar to the one presenting before them, and therefore arrive at a better medical outcome.
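A hedged sketch of what such precedent retrieval might look like: rank past cases by their textual similarity to the facts at hand. The case summaries below are invented, and a real system would use far richer representations than word statistics.

```python
# A sketch of precedent retrieval: rank past cases by textual
# similarity to the facts at hand. The case summaries are invented,
# and real systems would use far richer features than TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "negligence claim arising from a car accident at an intersection",
    "breach of contract over late delivery of goods",
    "negligence claim over a slip and fall in a supermarket",
]
new_facts = "car collision at a junction, alleged negligent driving"

vectors = TfidfVectorizer().fit_transform(past_cases + [new_facts])
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

for score, case in sorted(zip(scores, past_cases), reverse=True):
    print(f"{score:.2f}  {case}")
```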
