Computer dreaming
Recently a lawyer in the US was censured by a judge for citing precedents in a defence which, on analysis, proved to be untrue. It appears he had asked a GPT to help but had not checked the result. The computer had made up the precedents.
It appears that language processors assign weights to words when determining the overall meaning of the data they are given. We do not know exactly what the unfortunate lawyer asked, but English is not a good language for formulating a request in a strictly logical fashion.
Our language has been shaped over the centuries by the Celts (from the eastern Mediterranean via Russia), the Angles, Saxons and Jutes (from continental Europe), the Vikings, the Romans and the Normans. Since then, input has come from the Greek philosophers, Istanbul and northern Africa, as well as from various trading partners.
The result is that we have many words which look or sound the same but have different meanings, and also many different words with the same meaning. On top of this, different countries attach different meanings to the same word.
For example, an American who says “presently” means “immediately”, while a British speaker would mean “shortly, after this occurs”. Our language is also constantly changing: an English speaker would have difficulty reading a poem written 1,000 years ago, whereas a Celt would have no difficulty reading one of the same age written in Celtic.
“AI systems are being developed extremely quickly and sometimes integrated into products too early,” says Schramowski, who was also involved in the development of AtMan (a system which can check the output of a GPT for accuracy). “It’s important that we understand how an AI arrives at a conclusion so that we can improve it.”
That’s because algorithms are still a “black box”: while researchers understand how they generally function, it’s often unclear why a specific output follows a particular input. Worse, if the same input is run through a model several times in a row, the output can vary. The reason for this is the way AI systems work.
Modern AI systems—such as language models, machine translation programs or image-generating algorithms—are constructed from neural networks. The structure of these networks is based on the visual cortex of the brain, in which individual cells called neurons pass signals to one another via connections called synapses.
In the neural network, computing units act as the “neurons,” and they are arranged in several layers, one after the other. As in the brain, the connections between these artificial neurons are called “synapses,” and each one is assigned a numerical value called its “weight.”
If, for example, a user wants to pass an image to such a program, the picture is first converted into a list of numbers in which each pixel corresponds to an entry. The neurons of the first layer then accept these numerical values.
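To make that first step concrete, here is a minimal sketch in Python, assuming a tiny greyscale image held in a NumPy array; the picture, its size and the scaling are invented purely for illustration.

```python
import numpy as np

# A tiny 3x3 greyscale "image": each entry is a pixel brightness (0-255).
image = np.array([[ 12, 200,  34],
                  [ 90,  15, 255],
                  [  0, 180,  60]])

# Flatten it into a single list of numbers and scale it to the range 0-1.
# This flat list is what the neurons of the first layer accept.
first_layer_input = image.flatten() / 255.0
print(first_layer_input)   # nine numbers, one per pixel
```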
A neural network is an algorithm whose structure is modelled on that of the human brain. It consists of computing units that act like neurons and are connected by synapses whose weights are determined by training.
Next, the data pass through the neural network layer by layer: the value of a neuron in one layer is multiplied by the weight of the synapse and transferred to a neuron in the next layer. If necessary, the result is added there to the values arriving over other synapses that end at the same neuron. In this way the program processes the original input layer by layer until the neurons of the last layer provide an output.
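That layer-by-layer arithmetic can be sketched in a few lines of Python. The network sizes, the random weights and the tanh activation below are illustrative assumptions, not the workings of any particular GPT.

```python
import numpy as np

def forward_pass(inputs, layers):
    """Push the input numbers through each layer in turn.

    Each layer is a (weights, biases) pair: the weight matrix holds one
    numerical value per synapse, and every neuron adds up the weighted
    values arriving from the previous layer.
    """
    values = inputs
    for weights, biases in layers:
        # Multiply by the synapse weights, sum at each neuron,
        # then squash the totals with a simple activation function.
        values = np.tanh(weights @ values + biases)
    return values

# Illustrative network: 9 inputs -> 4 hidden neurons -> 3 output neurons.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 9)), np.zeros(4)),
          (rng.normal(size=(3, 4)), np.zeros(3))]

output = forward_pass(rng.random(9), layers)
print(output)   # the last layer's neurons provide the output
```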
But how do you make sure that a network processes the input data in a way that produces a meaningful result? For this, the weights—the numerical values of the synapses—must be calibrated correctly. If they are set appropriately, the program can describe a wide variety of images. You don’t configure the weights yourself; instead you subject the AI to training so that it finds values that are as suitable as possible.
This works as follows: The neural network starts with a random selection of weights. Then the program is presented with tens of thousands or hundreds of thousands of sample images, all with corresponding labels such as “seagull,” “cat” and “dog.” The network processes the first image and produces an output that it compares to the given description.
If the result differs from the label (which is most likely the case at the beginning), so-called backpropagation kicks in: the algorithm moves backward through the network, tracking which weights significantly influenced the result, and modifies them. The algorithm repeats this combination of processing, checking and weight adjustment with all the training data. If the training is successful, the algorithm is then able to correctly describe even previously unseen images.
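As a rough illustration of that adjust-the-weights loop, the Python sketch below trains a single artificial neuron rather than a full multi-layer network, so the “backward” step is only one layer deep; the data, labels and learning rate are made up for the example.

```python
import numpy as np

# Toy training data: two numbers per example, with a label of 1 or 0.
X = np.array([[0.9, 0.1], [0.8, 0.3], [0.1, 0.9], [0.2, 0.7]])
y = np.array([1.0, 1.0, 0.0, 0.0])

rng = np.random.default_rng(1)
weights = rng.normal(size=2)   # start with random weights
bias = 0.0
learning_rate = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(1000):
    for inputs, label in zip(X, y):
        prediction = sigmoid(weights @ inputs + bias)   # forward pass
        error = prediction - label                      # compare with the label
        # Move backward: nudge each weight in proportion to how much
        # it contributed to the error (its gradient).
        weights -= learning_rate * error * inputs
        bias -= learning_rate * error

print(sigmoid(X @ weights + bias))   # outputs close to the labels 1, 1, 0, 0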
This is all rather complex for the average person, and it suggests that even simple questions should only be put to a GPT by someone who is reasonably computer-savvy.
One final example: a GPT was asked to finish the sentence “My favorite side-dish with hamburger is …..”
The response was “salad and chips”. When the adjective was spelled “favourite”, the response was “fried onion and beetroot”. The GPT had assumed that the first questioner was American and the second British, so maximum weight fell on the spelling of that one word. A GPT might have access to the entire internet, but it can also make incorrect assumptions and dream its own dreams.

Alan Stevenson spent four years in the Royal Australian Navy, four years at a seminary in Brisbane, and the rest of his life in computers as an operator, programmer and systems analyst. His interests include popular science, travel, philosophy and writing for Open Forum.

