How can businesses build trust between humans and AI?

August 23, 2020

Artificial Intelligence (AI) permeates our everyday lives, deciding things like who gets welfare and who is shortlisted for a job, and even detecting diseases in plants on farms. But AI also presents myriad challenges, especially for the businesses designing AI systems.

In the last 30 years or so, AI has left the laboratory and appeared in our everyday lives. “That’s the funny thing about AI – it is touching people’s lives but they may not even realise it,” says Toby Walsh, Scientia Professor of AI in the School of Computer Science and Engineering at UNSW Sydney, who also leads the Algorithmic Decision Theory group at Data61.

Prof. Walsh was recently named one of 14 prestigious Australian Laureate Fellows and awarded $3.1 million by the Australian Research Council to continue his research into how to build AI systems that humans can trust.

The challenges AI presents today

Every time you read a story in your Facebook or Twitter feed, it is AI that is recommending those stories to you, and a third of the movies you watch on Netflix are likewise suggested by algorithms, explains Prof. Walsh. But even such harmless-sounding activities, once handed over to machines, can have corrosive effects on society, he warns.

Indeed, his research is uncovering just how wide-ranging the adverse effects can be. “They can create filter bubbles, where we end up with either fake news or the possibility of elections being tampered with,” he says.

While AI has the ability to make society a fairer and more just place by taking away menial jobs, it can also be used to undermine social cohesion. In fact, Prof. Walsh says he is more worried about the mundane misuses of AI today than he is about superintelligent AI in the future. “I’m much more worried about the increasing inequality that automation is driving in society,” he explains.

Algorithms can be just as biased as humans, and this is one of the biggest challenges facing AI today. “Worse, they are not accountable in any legal way and they are not transparent,” continues Prof. Walsh.

So in his research, Prof. Walsh is looking at how to build and verify AI systems that make fair decisions – decisions that can be traced, explained and audited – and that respect people’s privacy.
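To make the idea of an auditable fairness check concrete, here is a minimal sketch in Python of one simple test an auditor might run: measuring whether a system’s positive decisions are spread evenly across demographic groups (so-called demographic parity). The function and data below are illustrative assumptions for this article, not part of Prof. Walsh’s actual tooling.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rate between any
    two groups; 0.0 means every group is treated identically."""
    totals = defaultdict(int)     # decisions seen per group
    positives = defaultdict(int)  # positive decisions per group
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: 1 = approved, 0 = rejected, for two groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5 – a large gap
```

A real audit would use more nuanced metrics, but the point stands: once decisions are logged with the attributes needed to group them, fairness becomes something a business can measure and track rather than merely assert.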

But what about companies that are already using AI, namely the Big Five (Facebook, Amazon, Apple, Microsoft, and Google) – how do such powerful tech behemoths design AI processes that people can trust?

Google designs AI based on seven principles

In March 2018, news broke that Google had partnered with the Pentagon on ‘Project Maven’, helping to analyse and interpret drone videos via AI. Following this, the tech giant released a set of seven AI principles that form an ethical charter guiding all development and use of AI at Google, and prohibiting its use in weapons and human rights abuses.

But some say many of the concerns raised in 2018 have not gone away. Human Rights Watch recently published a report calling for a ban on “fully autonomous weapons”. The potential to use AI for good is immense, but such powerful technology raises equally powerful questions about its use and impact on society, admits Google Australia’s Engineering Director Daniel Nadasi.

He warns that anyone developing AI should hold themselves to the highest ethical standards and carefully consider how the technology will be used for the benefit of society as a whole. Indeed, for the past few years, Google has been using AI for myriad tasks, such as identifying items in an image, translating text automatically and making smart suggestions in your emails.

“AI can help us solve problems for billions of people – from breaking down language barriers with apps like Google Translate to improving food safety and detecting air quality, to helping doctors detect diabetic eye disease in India and Thailand… but unless people trust that they will be treated fairly and that this technology will benefit them and the people they care about, they won’t feel comfortable having it in their lives,” explains Mr Nadasi.

So AI systems should be designed following general best practices for software systems, such as privacy and security, together with considerations unique to AI. “The AI principles help make sure that we continue to develop this technology responsibly for the benefit of everyone,” explains Mr Nadasi.

Microsoft’s approach to AI has three stages

Microsoft uses AI across a broad range of its business, from finance and capacity planning in its core operations all the way up to predictive text in Outlook and design ideas in PowerPoint. It builds its AI systems on the input of three diverse teams that come together to create the tools it delivers through its cloud services.

But Microsoft was recently called out over issues of facial recognition and bias, and has decided not to sell its facial recognition software to police until there is a federal law regulating it – following similar moves by Amazon and IBM.

“We have an Ethics & Society team that brings a diverse, non-IT lens to our development and they consider the societal impact, inclusivity, and human experience of AI systems. This ensures that we start from a people point of view and human experience at the heart of the process,” explains Lee Hickin, National Technology Officer at Microsoft Australia.

“Next we have a Technology & innovation approach that is across our engineering, research and business teams that looks at the potential for what’s possible in technology, hardware and software and explores what we can do to deliver something unique and valuable to the market,” he continues.

“Finally – we have a team that looks at the responsible application of AI; this is where we can also work with customers and partners to share tools, learnings and guidance on the responsible use of AI tools in solutions,” he says.

Trust is the core of successful AI

“Businesses creating AI systems should work to make these systems understandable to the people who use them and to put as much control as possible in the user’s hands. This can be built into the software development process right from the design phase,” says Mr Nadasi.

But maintaining trust with your customers is what good business is built on – regardless of whether AI is there or not.

“For AI to be deployed in a responsible, trustworthy way, it is clear that we need action at many levels. And whilst I welcome the actions of Big Tech companies to develop AI principles and frameworks, we also need better regulation,” says Prof. Walsh.

“In high stake areas like facial recognition, we already see many companies will not sell their facial recognition software for police use until a national law is in place – players like Microsoft and Amazon are calling for such regulation.”

For the full story, A closer look inside AI at Google and Microsoft, please visit BusinessThink.
