Robots and the law

December 12, 2020

The militaries of a few countries, notably the US, Russia and China, have developed robotic aircraft that can fly to a target, search for suitable quarry and destroy it without human intervention. The United States has recently showcased a ship which sailed the Atlantic and navigated its way through the Panama Canal without human input.

These things are already with us; they are being introduced into our lives, first through Siri and Cortana, and in the near future via autonomous cars, medical procedures, household cleaning equipment, toys and personal assistants for the sick and elderly.

These are all robots, driven by artificial intelligence which, by its very nature, cannot possess either ethics or morals: they do what they are programmed to do. When they leave the factory they act in accordance with their various programs.

However, during normal usage they will ‘develop’ by absorbing information via the internet, through interaction with people who are often not programmers and may know little about computers, and through communication with each other.

We tend to be anthropomorphic in our attitudes to objects both animate and inanimate, such as pets or teddy bears. This is a normal human characteristic, and it transfers easily to an object with artificial intelligence, to the extent that we can, and probably do, tell it more than we should about ourselves, laying ourselves open to misinterpretation or abuse by or through the AI concerned.

In the early 1940s Isaac Asimov formulated what he considered should be the three laws of robotics, designed to keep humans safe. He then proceeded to write tales showing how they failed in this regard, finishing up with the Foundation series.

Now seems the time to seriously consider a set of laws of robotics, for both present and future generations. For instance, if an elderly person became so depressed as to ask his or her robotic companion to end his or her life, and the robot did so, who would be to blame? If a child asked his robotic companion to build a box kite capable of carrying him aloft, and he then had an accident, who would be to blame?

We are able to foresee the consequences of certain activities and judge what is safe without interfering too much with our requirements. The balance between safety and excitement or need varies with each of us, but we are generally able to accept the outcome. I very much doubt that this would be the case if a robot made the final decision.

 
