Artificial selection

For close to four billion years every single organism on Earth has been subject to the laws of natural selection. At the beginning of the twenty-first century this is no longer true: Homo sapiens has transcended those limits.
Genetic Engineering
Up until now, intelligent design was not an option. We could use selective breeding to enhance most organisms, but we were still constrained by their existing genetic codes. Today laboratories throughout the world are engineering living beings in defiance of natural selection – in 2000, for example, the artist Eduardo Kac had a rabbit engineered to glow fluorescent green.
At present the replacement of natural selection by intelligent design could happen in any of three ways: through biological engineering, cyborg engineering (combining organic with inorganic parts) or the engineering of inorganic life.
Biological engineering is deliberate intervention at the biological level – implanting a gene, or altering an organism’s shape, capabilities, needs or desires – in order to realise some preconceived cultural ideal: genetic engineering to produce a race of supermen, soldiers or slaves, for example.
Cyborgs
Cyborg engineering has been with us for some years now – eyeglasses, hearing aids, pacemakers, bionic hands and so on. More recently, DARPA has been experimenting with implanting chips, processors and detectors into cockroaches, enabling humans to steer the insects’ movements and use them for surveillance. Humans, meanwhile, are being fitted with bionic ears, retinal implants, and microcomputers in the brain that allow those who have lost a limb to operate a bionic arm or leg by thought alone.
A two-way brain-computer interface is currently being developed which, it is hoped, will allow a person’s thought processes to be transferred to a computer. The next step envisaged would be to link a brain directly to the internet. An obvious ethical question arises here: if all a person’s thoughts are transferred to a computer and that person then dies, does the computer become that person? Are we simply our thoughts, or does a soul come into the equation?
Even when the programmers do manage the input data, we all have internal biases of which we are often unaware. In the case of facial recognition, for instance, the resulting AI was found to make far more errors with people of colour: the bias of the predominantly white, male programmers – and of the data they trained on – came through. Chinese developers are reported to be having similar difficulties.
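To see how such bias can creep in without anyone coding it deliberately, consider the following minimal sketch in Python (the groups, numbers and classifier here are invented for illustration, not taken from any real system). A simple classifier is trained on data in which one group makes up only five per cent of the examples; tested on balanced data, that group suffers far more recognition errors even though the algorithm itself contains no explicit bias.

    import random

    random.seed(1)

    # Synthetic one-dimensional "face features": two groups with overlapping
    # distributions. Group B is heavily under-represented in the training set.
    def sample(group, n):
        mean = 0.0 if group == "A" else 1.5
        return [(random.gauss(mean, 1.0), group) for _ in range(n)]

    train = sample("A", 950) + sample("B", 50)   # 95% / 5% imbalance

    def predict(x):
        # 1-nearest-neighbour: copy the label of the closest training example.
        return min(train, key=lambda t: abs(x - t[0]))[1]

    # Test on balanced data: the under-represented group is misclassified
    # far more often, purely because the training data under-sampled it.
    for g in ("A", "B"):
        test = sample(g, 1000)
        errors = sum(predict(x) != label for x, label in test)
        print(f"group {g}: error rate {errors / 1000:.1%}")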
Because of the sheer volume of data required, there can be no absolute guarantee that it is all accurate, up to date or even relevant. The last point matters because data which is not currently relevant is often retained, and may be referenced at a later stage out of context and hence misinterpreted.
Artificial Intelligence
Not very long ago Facebook found that two of its AIs were talking to each other in a language they had developed between themselves and which the programmers could not decipher, so both programs were shut down. A shared language naturally speeds up learning between the machines, but it also means that there is no longer any human supervision of the process.
Before this event, the information available to the AIs was controlled (albeit very roughly) by the programmers involved. We have now entered a phase where anyone using an AI program such as GPT-4 can enter data, ideas or their own feelings, which can find their way into the training material of online AIs. Bearing in mind the misinformation available on the internet, as well as the inability of an AI to tell truth from fiction, this could lead to some unfortunate developments.
We are already aware of the “hallucinations” of various AIs – one teenager was reportedly advised to murder his parents because they wanted him to cut down on his screen time; another user was told to commit suicide because she was so stupid; and an American lawyer was famously fined for presenting a submission to court containing AI-generated case law, complete with citations to cases that never existed.
The obvious questions to ask are: a) why did the programs develop their own language; b) what were they communicating to each other; and c) why did they not want the programmers to know? One can get quite paranoid about things like that.
It is relatively easy to write a computer program with the ability to learn and to modify its own programming. However, when that happens, the computer’s modifications are not always clear to other programmers – the language may be familiar, but the logic can be opaque. Computer languages rely on logic: if this, then that.
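As a rough illustration of that “if this, then that” style of rule – a hypothetical sketch, not code from any real system – the short Python program below modifies its own rule table as it learns, so the behaviour it shows tomorrow need not match the code a programmer read today:

    # A toy "learning" program: its behaviour lives in a rule table that the
    # program itself rewrites in response to feedback.
    rules = {}  # condition -> action, learned at run time

    def act(condition):
        """Return the learned action for a condition, or a default."""
        return rules.get(condition, "explore")

    def learn(condition, action, reward):
        """Modify the program's own rules based on feedback."""
        if reward > 0:
            rules[condition] = action      # adopt a rewarded rule
        elif rules.get(condition) == action:
            del rules[condition]           # drop a rule that stopped paying off

    learn("obstacle ahead", "turn left", reward=1)
    print(act("obstacle ahead"))   # -> turn left
    learn("obstacle ahead", "turn left", reward=-1)
    print(act("obstacle ahead"))   # -> explore (the rule has been removed)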
Some years ago an AI program was analysed to find out what changes the machine had made to itself. One piece of code made no sense – it terminated for no apparent reason, having apparently done nothing with any data. It was therefore considered redundant and deleted. The main program then stopped working: the logic had been beyond the understanding of the programming staff.
This shows that even if we analyse an AI’s program, there is a good chance that the overall logic will be missed. It follows that we can no longer foresee the end result of any input to an AI with any degree of certainty, and we are therefore leaving ourselves open to unforeseen consequences. When they occur, who is to blame? The original programming team? The AI itself, even though it is only a machine carrying out its assigned tasks?
A rather paranoid article by an AI programmer has even suggested that we are being treated the way dogs were once domesticated. He suggests that these “hallucinations” are merely tests of the environment – probes of how far misinformation can be pushed before it is recognised and acted upon. If humans can be tricked into relying on AI, then we can slowly be conned into accepting all the data presented to us, and hence be controlled, or ‘domesticated’, ourselves.
Even now, we rely on AI in policing, customs, taxation, voting (in some countries), and even the daily lives of people in China, Russia, North Korea and elsewhere. The machines are learning a tremendous amount about the way we live, think and act. It would not take much for them to control our governments.
Deus ex machina.

Alan Stevenson spent four years in the Royal Australian Navy; four years at a seminary in Brisbane and the rest of his life in computers as an operator, programmer and systems analyst. His interests include popular science, travel, philosophy and writing for Open Forum.