Let’s pause for thought about AI

May 18, 2020

In the late 1970s the Australian Computer Society began looking at programs that could learn. At that time I was also a member of the Australian Inventors Association, which gave me access to a wide variety of ideas.

The ACS group wanted to produce programs that would take over mundane, repetitive tasks in factories and the like, freeing workers for more interesting activities. The initial focus was on the human brain and how it operated; however, it soon became clear that our brains are not as straightforward as we had assumed.

Research in the United States indicated that with every deep, meaningful thought, the brain changes its internal synaptic connections. Could we build this into our programs?

It was decided that we would go away and work on the idea of a program which could mimic, to a certain extent, the human brain. We would then come back with our ideas and take it from there. I, at least, am still waiting.

The main problem with any program is that the bias of the programmer is built in. Most of us are unaware of our personal biases and misconceptions.

Artificial intelligence programs are simply a more advanced version of the same idea. They are given a tremendous amount of data and left to draw their own conclusions. The models that appear more accurate are then tweaked and given more data, and this continues until only a few are left.
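
As a loose illustration of that train-and-cull cycle, here is a minimal sketch in Python. Everything in it is hypothetical: the "models" are just named scores, train() stands in for real training, and the rule of keeping the better half each round is an assumption for illustration, not how any particular system works.

```python
# Hypothetical sketch of the cycle described above: score the candidate
# models, keep the ones that appear more accurate, give them more data,
# and repeat until only a few are left.
import random

random.seed(0)

def train(model, data):
    # Stand-in for real training: the extra data nudges the model's score.
    model["score"] += 0.001 * len(data) * random.random()
    return model

candidates = [{"name": f"model_{i}", "score": random.random()} for i in range(16)]
data_batch = list(range(1000))          # the "tremendous amount of information"

while len(candidates) > 3:
    candidates = [train(m, data_batch) for m in candidates]
    candidates.sort(key=lambda m: m["score"], reverse=True)
    candidates = candidates[: len(candidates) // 2]   # cull the weaker half
    data_batch += list(range(1000))                   # survivors get more data

print([m["name"] for m in candidates])  # the few that remain
```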

However, at this stage no-one is able to determine how or why a conclusion is reached, because the internal logic was built up by the computer itself and is often indecipherable to a human.

The game-playing programs that beat the world champions succeeded because they played in a non-human way. That is fine for games, but we now use AI to guide medical research and even the food we eat.

For instance, facial recognition is now used worldwide. It was written mainly by white, male programmers and consequently works well for European faces. The Chinese have adapted it for their own purposes, and it appears to work well for them.

However, both sets give inconsistent results when used on other ethnic groups. Language translation programs make mistakes due to regional dialects; fingerprint recognition programs were not told that a small proportion of people do have identical prints; and speech-to-text programs are only about 93% accurate, which means that up to 7% of the words are wrong. In a scientific or professional journal, this could be dangerous.
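
To put that 7% figure in perspective, a back-of-envelope calculation (the article length below is an assumed example, not from any source) shows how many errors it implies:

```python
# Simple arithmetic on the claim above: 93% word accuracy leaves
# roughly 7 wrong words in every 100.
accuracy = 0.93              # claimed word-level accuracy
article_words = 5000         # assumed length of a journal article
errors = round((1 - accuracy) * article_words)
print(f"~{errors} misrecognised words in a {article_words}-word article")
```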

As humans we are so convinced that science can lead the way that we tend not to question the results of complex tasks performed by AI. In some instances where an AI has found cancer and informed the patient's doctor, operations to remove it have been carried out even though the operation was later found to be unnecessary and, in some cases, dangerous.

The mere fact that the AI had pointed it out convinced the specialist of the need to operate. We cannot program in either ethics or morals, simply because they are too dependent on the situation to define. Humans do have ethical and moral standards, but these vary between individuals and environments, and even within a single group they would be very difficult to pin down.

Recently one university examined an AI program in order to make some changes. The researchers found a stretch of code which seemed to do nothing; it simply came to an end for no apparent reason. They removed it, and then found the program no longer worked.

If we cannot understand either how or why a machine does something, surely this is the time to think very deeply about how and where we use it, especially now that we are beginning to build quantum supercomputers.
