New Australian institute to explore ethical Artificial Intelligence

| December 14, 2018

The social, moral and business implications of artificial intelligence are increasingly discussed, not least on Open Forum. Riding this trend, IAG, CSIRO's Data61 and the University of Sydney have announced the creation of the Gradient Institute, an independent not-for-profit organisation that will research the ethics of artificial intelligence (AI) and develop ethical AI-based systems to improve outcomes for individuals and society as a whole.

The Institute will aim to help create a ‘world where all systems behave ethically’. This will be done not just through research, but also through practice, policy advocacy, public awareness and training people in ethical development and use of AI.

The Institute will use research findings to create open source ethical AI tools that can be adopted and adapted by business and government.

A solutions-focused approach to ethical artificial intelligence

The Institute’s CEO will be Bill Simpson-Young, who will transfer from his post as Director of Engineering and Design at CSIRO’s Data61. He will work with Dr Tiberio Caetano, Co-founder and Chief Scientist at Ambiata, a wholly owned subsidiary of IAG, who will direct the Institute’s research into ethical AI as Chief Scientist.

Mr Simpson-Young says issues around artificial intelligence pose a challenge but also open an opportunity to discover which design choices for AI will lead to positive outcomes for people and society.

“Artificial Intelligence learns from data and data reflects the past – at the Gradient Institute we want the future to be better than the past. By embedding ethics into AI, we believe we will be able to choose ways to avoid the mistakes of the past by creating better outcomes through ethically-aware machine learning.

“For example, in recruitment when automated systems use historical data to guide decision making they can bias against subgroups who have historically been underrepresented in certain occupations.

“By embedding ethics in the creation of AI we can mitigate these biases which are evident today in industries like retail, telecommunications and financial services,” Mr Simpson-Young said.
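The mechanism Mr Simpson-Young describes, a model trained on historical decisions reproducing historical under-representation, can be made concrete with a toy demographic-parity check. This is an illustrative sketch only, not Gradient Institute code; the groups, records and rates below are invented:

```python
# Toy illustration: a "model" that simply replays historical hiring
# rates inherits whatever bias those rates contain. All data invented.

historical = [
    # (group, hired) -- group "B" was historically under-selected
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rate(records, group):
    """Fraction of applicants in `group` who were selected."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(historical, "A")
rate_b = selection_rate(historical, "B")
# The demographic-parity gap a naive model would reproduce:
parity_gap = rate_a - rate_b
print(f"rate A={rate_a:.2f}, rate B={rate_b:.2f}, gap={parity_gap:.2f}")
```

Auditing a metric like this gap, and constraining a model to shrink it, is one simple form of the "ethically-aware machine learning" the quote alludes to.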

Government, industry and academia collaborate as founding partners

Julie Batch, Chief Customer Officer at IAG, Australia's largest general insurer, said that being lead partner of the Gradient Institute reflects IAG's focus on embracing innovation to create better customer experiences.

“Leaning into the challenges and opportunities of AI requires considered thinking about fairness and equality. No government or business can do this alone. We need to work together across sectors, and we need to do this with urgency, which is why we’re proud to be founding partners with two of Australia’s strongest science and academic leaders – Data61 and the University of Sydney.

“Ethical AI will improve trust in how automated machines make decisions. IAG hopes to be an early adopter of the techniques and tools the Institute develops so we can provide better experiences for our customers,” Ms Batch said.

“Establishing the Gradient Institute as an independent not-for-profit organisation is critical in bringing its purpose to life and we hope that other organisations will join us to contribute to this research.”

Adrian Turner, CEO of CSIRO’s Data61, the digital innovation arm of Australia’s national science agency, says the founding of the Gradient Institute is an important step, as AI and machine learning will have an impact on Australian society and every sector of the country’s economy.

“As AI becomes more widely adopted, it’s critical to ensure technologies are developed with ethical considerations in mind. We need to get this right as a country, to reap the benefits of AI from productivity gains to new-to-the-world value.

“We are pleased to be a founding partner of Gradient Institute, which combines some of the country’s greatest minds in AI. This is a great example of Data61’s national network model, operating with porous organisational boundaries to help bring coherence and accelerated, national-scale outcomes for data-related research challenges, for the benefit of Australia,” Mr Turner said.

Professor Duncan Ivison, Deputy Vice Chancellor, The University of Sydney said there is a need to build an ethical framework for AI that combines deep knowledge of the technological possibilities and limits of artificial intelligence, but also ensures it is primarily shaped by human needs and interests.

“Research intensive universities like the University of Sydney – who are able to draw on their deep intellectual resources across the sciences, engineering, humanities and social sciences – are well placed to work collaboratively with government, industry and community groups to achieve this approach.

“We need to collaborate, critique each other and engage the community to tackle what is emerging as one of the great ethical challenges of our time,” Professor Ivison said.

The Gradient Institute’s work will span five areas:

Research — The Institute will collaborate with research institutions to help develop scientific ethics for AI, and share these findings across the academic community through publications and presentations.

Practice — The Gradient Institute will collaborate with public and private sector organisations in finance, education, justice, health, and social services to design ethical AI software systems for use in these organisations.

Policy advocacy — The new body will work with public and private sector organisations to translate its research findings into policy proposals to achieve positive outcomes.

Public awareness — The Institute will contribute to the public debate with the aim of building awareness and knowledge of the ethics of AI.

People — The Institute will attract the best researchers and practitioners in ethical AI and help train future leaders in the field. It will also provide training and education to people responsible for the technical, managerial, policy and decision-making aspects of AI-based decision systems.

One Comment

  1. Alan Stevenson

    December 14, 2018 at 9:05 am

    When we think of ethics and morals we generally refer to those known and accepted by ourselves. However, those of the average Australian are not necessarily the same as those of Chinese, Russian or Middle Eastern peoples. The concept of ethics is fluid by its very nature.

    Whilst I applaud the idea of creating ethical standards for AI, I find it difficult to visualise a coherent program which could control its operations. Everyone has biases of which we are generally unaware, and these biases will be programmed into the system. We have already seen this with facial recognition failing for communities of colour, and with predictive policing initiatives which have targeted areas with high migrant populations.

    Introducing the idea of morals and/or ethics into AI would allow authorities to rely to a dangerous extent on devices which, by their nature, have no human compassion, empathy or concern for the outcome. The final decision must come from a trained human.