Making the most of ChatGPT

| April 11, 2023

Many people have worried about the effects of the new AI on society. Most of these concerns have centred on how we interact with logical systems: computers are logical machines; humans are not.

Discussions with friends suggest that most people are unable to compose a search-engine query that will elicit an accurate response.

Few people I know are aware of Boolean logic, which underpins the criteria search engines use to deliver the desired result. Many searches produce tens or hundreds of pages of results to choose from – most of them irrelevant.

Even simple use of AND, NOT, OR and NOR, or of parentheses and inverted commas to group word sequences, can drastically reduce search results to more manageable proportions. The logical operators XOR, XNOR, NAND and IMPLIES complete the set, but are probably too complex for many casual users.
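To see why these operators narrow results so sharply, here is a minimal sketch in Python. It is not any real search engine's query syntax – the document titles and word sets are invented for illustration – but the AND/NOT logic it applies is exactly what a Boolean query expresses.

```python
# Toy illustration of Boolean search logic (not any real engine's syntax).
# Each "document" is just a set of words; a query is a predicate built
# from Python's own and/or/not operators.

docs = {
    "jaguar car review": {"jaguar", "car", "review"},
    "jaguar habitat facts": {"jaguar", "habitat", "facts"},
    "classic car auction": {"classic", "car", "auction"},
}

def search(predicate):
    """Return titles of documents whose word set satisfies the predicate."""
    return sorted(title for title, words in docs.items() if predicate(words))

# "jaguar AND car" – both words must appear.
both = search(lambda w: "jaguar" in w and "car" in w)

# "jaguar NOT car" – narrows the search to the animal, excluding the vehicle.
animal = search(lambda w: "jaguar" in w and "car" not in w)
```

A plain search for "jaguar" would match two of the three documents; adding NOT car cuts that to one, which is the whole point of composing a logical query rather than a bare keyword.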

This inability to formulate logical queries tends to produce responses that are near the mark but can easily mislead the questioner. Humans tend to accept the first seemingly plausible answer to a question and base further research on it. Depending on the intelligence, education and background of the user, this can lead to further actions built on a simple misunderstanding.

For this reason I believe Boolean logic (or algebra) should be taught in schools from a fairly early age. If children can use computers and mobile phones before their teens, teaching them to formulate logical queries to feed their appetite for knowledge should be a priority. Those who wish to learn logic can find enough material on the internet to make a very good start.

Is it time to put the brakes on the development of artificial intelligence (AI)? If you’ve quietly asked yourself that question, you’re not alone.

In the past week, a host of AI luminaries signed an open letter calling for a six-month pause on the development of models more powerful than GPT-4; European researchers called for tighter AI regulations; and long-time AI researcher and critic Eliezer Yudkowsky demanded a complete shutdown of AI development in the pages of TIME magazine.

Meanwhile, the industry shows no sign of slowing down. In March, a senior AI executive at Microsoft reportedly spoke of “very, very high” pressure from chief executive Satya Nadella to get GPT-4 and other new models to the public “at a very high speed”.

The open letter published by the US non-profit Future of Life Institute makes a straightforward request of AI developers:

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

So what is GPT-4? Like its predecessor GPT-3.5 (which powers the popular ChatGPT chatbot), GPT-4 is a kind of generative AI software called a “large language model”, developed by OpenAI.

GPT-4 is much larger and has been trained on significantly more data. Like other large language models, GPT-4 works by guessing the next word in response to prompts – but it is nonetheless incredibly capable.
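The "guessing the next word" idea can be illustrated with a toy model. The sketch below is nothing like GPT-4 – it just counts which word follows which in a tiny invented corpus – but it shows the same basic mechanism: predict the next word from what came before.

```python
from collections import Counter, defaultdict

# A toy next-word guesser. Real large language models use neural networks
# trained on vast text corpora; this sketch only counts word pairs in one
# made-up sentence, but the prediction task is the same.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def guess_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None
```

Here `guess_next("the")` returns "cat", simply because "cat" follows "the" more often than "mat" does in the corpus. Scaled up by many orders of magnitude, with context far longer than one word, this prediction task is what makes the models surprisingly capable.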

GPT-4 and models like it are likely to have huge effects across many layers of society.

On the upside, they could enhance human creativity and scientific discovery, lower barriers to learning, and be used in personalised educational tools. On the downside, they could facilitate personalised phishing attacks, produce disinformation at scale, and be used to hack through the network security around computer systems that control vital infrastructure.

OpenAI’s own research suggests models like GPT-4 are “general-purpose technologies” that could affect the work tasks of some 80% of the workforce.

Right now, the technology is accelerating much faster than our capacity to understand and regulate it – and if we’re not careful it will also drive changes in the deeper layers of society that are too fast for safety.
