Can driverless vehicles replicate the subtleties of the human brain?

| March 28, 2018

The death of a pedestrian during a test drive of a driverless vehicle (even as a backup human sat in the driver’s seat) calls into question not just the technology—which didn’t seem to detect the pedestrian crossing a busy roadway and therefore didn’t brake or swerve—but also the notion that driving is nothing more than a set of instructions that can be carried out by a machine.

The surprised backup driver evidently had confidence in the inventors of driverless cars: he was looking down at a screen just before impact.

Certainly, a real human driver might have hit this pedestrian who was crossing a busy street at night with her bicycle. But, of course, as a friend of mine pointed out, there is a big difference in the public mind between a human driver hitting and killing a pedestrian and a robot killing one. If the incident had involved a human driver in a regular car, it would probably only have been reported locally.

But the real story is “robot kills human.” Even worse, it happened as a seemingly helpless human backup driver looked on. The optics are the absolute worst imaginable for the driverless car industry.

It makes sense to me that a world of exclusively driverless cars with a limited but known repertoire of moves might indeed be safer than our current world of human drivers. But trying to anticipate all the permutations of human behavior in the control systems of driverless cars seems like a fool’s errand.

I’m skeptical that the broad public will readily accept a mixed human/robot system of drivers on the roads. You can be courteous or rude to other drivers on the road or to pedestrians on the curb. But how can you make your intentions known to a robot? How could a pedestrian communicate with a robot car in the way that approaches the simplicity of a nod or a wave to acknowledge the courteous offer from a driver to let the pedestrian cross the street?

The idea that we can capture the complexities of human cognition, decision-making and even personality well enough to mimic them finds gruesome company in another idea that made the news recently: a startup firm that offers to preserve your brain in a chemical solution in the hope that the brain’s content can be uploaded into some future advanced technological matrix where you can live again.

The catch is that the brain must be harvested before death. In other words, you must consent to be euthanized.

Even if one gets past this fatal fact, there is the problem of semantics. The founders say on their company website that they are “[c]ommitted to the goal of archiving your mind.” But the brain is a specific, identifiable organ in the body with an agreed-upon location and morphology.

The mind is a much vaguer concept. We say the mind is the part of us that thinks, but also, according to the dictionary meaning, the part that feels, judges, reasons, wills, or even just perceives.

It is hard to imagine the mind doing all these things without the body or, for that matter, the world around us. Mind posits vast ongoing connections of which the brain is only one aspect. Instead of conceiving of the brain as a command center, we might also think of it as a switchboard. Can we expect to get all the properties of a network by merely preserving the switchboard?

Beyond this we forget that other cultures have put what we call mind within the confines of the heart. Even today, the link between consciousness and the heart warrants study. And, there is considerable evidence for a brain of sorts in the human gut, one that profoundly influences our moods.

So much for our feeling lives being merely in our brains. Once again, it is hard to imagine the life of feelings divorced from our bodies. We say we feel joy in our hearts or a tingling in our spines or butterflies in our stomachs. It turns out we do!

Just as we use the metaphor of the body to discuss the structure of organizations—we say so and so is the head of the company—we must recognize that the head as the seat of command for an individual is merely a metaphor.

A captain barks orders from the mouth in his head. It seems that his head is in charge. But the orders themselves require coordination of the lungs and the vocal cords and ultimately use information coming in through all the senses from the environment.

Moreover, identical orders take on different meanings depending on the body language and tone of voice of the captain. He or she might be telling a joke in the form of a ridiculously inappropriate order, something one cannot understand from the words alone.

The idea that we might duplicate ourselves on the machine model has fascinated humans at least since Mary Shelley wrote her famous novel Frankenstein. The machine, of course, is just another metaphor for conceiving how humans work, and it’s one far less sophisticated than the models offered by biology and ecology.

The most sophisticated attempts today to mimic human cognition and action go by the name artificial intelligence (AI). It is important to emphasize the word “artificial” in this description. No doubt AI can automate many functions in society that were not amenable to our previous, cruder methods.

But it is doubtful that AI will ever equal human intelligence and action. For in order to reproduce human intelligence and action, we would need to reproduce the body and its access to all interactions with the social and physical environment.

Even if we were intelligent enough to perform this monumental feat, there is no way for us to get outside the system in which we live to observe all the aspects we want to reproduce. And, of course, our actions and those of countless other beings, plants and physical processes keep altering the system we seek to reproduce.

The belief that machines will someday be “smart like humans” is based on one imperfect metaphor on top of another: brain as a command center in a body modeled as an electrochemical machine. Machines are already “smarter” than humans in some ways.

They can calculate answers to vastly complex equations that model weather and climate, for example. They can manage and monitor complex systems such as oil refinery operations or bank transactions with unparalleled speed and accuracy.

But, human cognition is not a thing. It cannot be reproduced without reproducing the entire system within which it operates. Human cognition emerges out of the system we live within rather than merely being embedded in it. Cognition is a process rather than a result. But so are the whole host of other processes we attribute to humans: feeling, judging, willing, and perceiving.

To reiterate, we humans cannot get outside the system in which we live. We are forever participant/observers, doomed to know the universe in which we live in only a partial manner dictated by our limited senses and the extensions we’ve been able to design for them.

We are indeed the human/tool hybrid described by William Catton as homo colossus in his pathbreaking book Overshoot. But we are still human, which, as it turns out, is a complex construction involving us and everything around us, something that no amount of AI or other computer intelligence will ever be able to describe or reproduce completely.

This piece was published on Resource Insights.

3 Comments

  1. Amelia

    March 29, 2018 at 12:40 pm

    The capabilities of AI are improving a lot faster than those of the human brain, and given the entirely brainless antics of car drivers I see every day going through red lights, speeding past schools and parking on pavements, the robots can’t come soon enough for me. The definition of what constitutes intelligence keeps getting raised to keep humans ahead, but really they can do all this and more, or will be able to very soon, as more and more data is collected, collated and analysed. Driverless cars will know accident blackspots, they’ll even know the number plates of poor human drivers and know to expect trouble ahead.

    It was said that computers would never beat humans at chess, or land a plane, but they do all this and much more, and it’s silly to think driving a car – something even the stupidest members of society are free to do – will be beyond them. Maybe pedestrians will have to amend their behaviour too, but surely driverless cars will be much more predictable than human drivers, some of whom seem to ignore the existence of everyone else and seem hell bent on killing you if it’ll cut five seconds from their journey to the next red light.

  2. Alan Stevenson

    April 1, 2018 at 10:31 am

We are dealing here with two aspects of the word ‘intelligence’ – human and machine. AI has evolved from ingesting phenomenal amounts of information and averaging over it. This means that we can never fully understand the reasoning behind any decision an AI device takes. We might recall that the machine which won the game of Go made at least one move which a human would never think of. When placing our lives in the hands of a machine we are taking a risk which cannot be foreseen; whether the risk is rational or not is immaterial.

To date, the programmers have failed to get a machine to learn about its environment in the same way a baby does. There have been many efforts but all have failed. The concept of getting a machine to learn by feeding it masses of information works, but only in a narrow context – the biases of the programmers are always involved, viz. the program which incorrectly determined that two black faces were those of monkeys.

  3. Don

    April 6, 2018 at 10:03 am

I don’t think AI cars will text while driving, or talk, or be late for work and have to forget road rules, or even tailgate; it won’t take much to be safer than a rushed or impatient driver. Yes, if every driver gave 100% of their attention to the road and followed the rules 100% of the time it would be hard to better that, but at 5 am or in peak-hour traffic I don’t see much of that.