Australians don’t trust Artificial Intelligence
Trust is an issue when it comes to artificial intelligence (AI), according to a University of Queensland study that found 72 per cent of people don’t trust it, with Australians leading the pack.
Trust experts from UQ Business School, Professor Nicole Gillespie, Dr Steve Lockey and Dr Caitlin Curtis, led the study in partnership with KPMG, surveying more than 6,000 people in Australia, the US, Canada, Germany and the UK to unearth attitudes about AI.
Professor Gillespie said trust in AI was low across the five countries, with one nation particularly concerned about its effect on employment.
“Australians are especially mistrusting of AI when it comes to its impact on jobs, with 61 per cent believing AI will eliminate more jobs than it creates, versus 47 per cent overall,” Professor Gillespie said.
The research identified critical areas needed to build trust and acceptance of AI, including strengthening current regulations and laws, increasing understanding of AI, and embedding the principles of trustworthy AI in practice.
The survey also revealed that people believe most organisations use AI for financial reasons – to cut labour costs rather than to benefit society.
It found that while people are comfortable with AI for task automation, only one in five believe it will create more jobs than it eliminates.
One positive finding was that people have more confidence in universities and research institutions to develop, use and govern AI in the public’s best interests.
Professor Gillespie said the research showed that distrust came from low awareness and understanding of when and how AI technology was used across all five countries.
“For example, our study found while 76 per cent of people report using social media, 59 per cent were unaware that social media uses AI,” she said.
Professor Gillespie said despite the gap in understanding, 95 per cent of those surveyed across all countries expected organisations to uphold ethical principles of AI.
“For people to embrace AI more openly, organisations must build trust with ethical AI practices, including increased data privacy, human oversight, transparency, fairness and accountability,” she said.
“Putting in place mechanisms that reassure the community that AI is being developed and used responsibly, such as AI ethical review boards, and openly discussing how AI technologies impact the community, is vital in building trust.”
Emma Pryor is the Communications Manager at the School of Business at the University of Queensland.
Alan Stevenson
June 24, 2021 at 11:39 am
The research reported by Emma Pryor shows that we are not happy about AI in general. The reason for this is that AI is still an unknown quantity. The engineers who build the computers powerful enough to handle the concept do not understand it; the software programmers who design it also do not understand it. Consequently, the resulting data cannot be trusted.
It is logical but not verifiable, because in order to obtain a result a tremendous amount of data has to be fed into the system. These data are all produced by experts with their own biases, most of whom are male and white. While each individual entry can be regarded as honest and peer reviewed, it still carries the bias of the researcher involved. These biases add up and don’t always cancel each other out. We all have biases, whether we like it or not – from our parents, teachers, friends and our own research.
The main problem with AI is that so much data has been fed into the program that it is almost impossible to determine the logic behind the end product. There have been many instances where AI has shown researchers new insights and guided them to new understandings. However, as in the case of facial recognition, major mistakes have been made simply because of errors in the original data.