Experts react to the Federal government’s AI discussion papers

June 7, 2023

The Albanese Government recently released two papers considering a range of steps to ensure the growth of artificial intelligence technologies (AI) in Australia is “safe and responsible”.

The Government’s Safe and Responsible AI in Australia Discussion Paper canvasses existing regulatory and governance responses in Australia and overseas, identifies potential gaps and proposes several options to strengthen the framework governing the safe and responsible use of AI.

The National Science and Technology Council’s Rapid Response Information Report: Generative AI assesses potential risks and opportunities in relation to AI, providing a scientific basis for discussions about the way forward.

While Australia already has some safeguards in place in relation to AI, Minister Ed Husic argues that it’s appropriate that Australia consider whether these regulatory and governance mechanisms are fit for purpose.

Australia was one of the first countries in the world to adopt AI Ethics Principles and the Labor government promised to invest $41 million in the most recent Budget for the responsible development of AI through the National AI Centre and a new Responsible AI Adopt program for small and medium enterprises.

A range of Australian experts in the fast-developing field of artificial intelligence reacted to the government’s stance. Whether the Federal government can understand, predict or control the short-term impact of generative AI on the economy and society – let alone the potential opportunities, and existential threat, of artificial general intelligence in the future – remains to be seen.

You can offer your own submission to the Government’s consultation process here. 

Dr Jonathan Kummerfeld, of the School of Computer Science at The University of Sydney, notes that

“The government’s discussion papers strike a good balance between banning a handful of extreme use cases, setting risk management requirements for medium risk cases, and allowing many uses to continue unhindered. Determining the boundaries of these categories will be difficult, particularly since the potential applications of AI expand every day. It is excellent that the initial ideas were drafted with expert guidance and that feedback is being sought for policy refinement.

The recent jump in capabilities of Large Language Models (LLMs) demonstrated by ChatGPT may make legislating feel urgent, but it is very unclear what trajectory development will follow in the coming years. With GPT-4 training on essentially all publicly available English text, it will be hard to improve the core model further by scaling up its size. This suggests a cautious approach to legislation is needed, with broad consultation, allowing time for the research community to refine our understanding and for the industry to explore possible applications. In the meantime, as the report points out, existing laws can be applied to many cases in which people unwisely use AI today.

In parallel to contemplating restrictions, we need to further educate people about the limitations of AI models and the potential risks they create. Specifically, anyone who is going to use AI in a product or service needs to understand common types of errors and biases, as well as strategies to test for and mitigate those issues. That includes everyone from expert programmers to non-programmers who use off-the-shelf AI systems.”

Professor Michael Blumenstein, the Acting Dean of the Faculty of Engineering and IT at The University of Technology Sydney, is sanguine about the impact of AI, arguing that

“The field of AI has been around for decades, and the latest public interest and media reporting has primarily been generated by a fundamental misunderstanding of the most recent iteration of the technology, which to this day still relies on algorithms that are well understood and have been under development for years by researchers and expert practitioners. The fear in relation to these technologies is misplaced, and the panic that has caused the government to consider banning “high risk” AI is unfounded. A knee-jerk ban, if it comes to fruition, will stifle Australia’s healthy AI research and industry innovation ecosystem.”

Dr Sandra Peter, the Director of Sydney Executive Plus from the Business School at The University of Sydney, is more wary of this new technology.

“The government’s consideration of a ban on “high-risk” uses of AI is a good first step, but not enough. We need business and community leaders who are fluent in AI, who can responsibly take advantage of the promise and potential benefits of these technologies for our economy and society. This should be done while ensuring that its use is acceptable and protects the safety and fundamental rights of people in our communities. It is imperative that we build widespread understanding of AI technology among regulators, executives and the general public.”

Professor Chennupati Jagadish, the President of the Australian Academy of Science, agrees that

“Generative AI is changing the way we live and work. The Artificial Intelligence discussion paper released by the Government and its focus on governance mechanisms to ensure AI is developed and used safely and responsibly in Australia is timely and important.

The Rapid Response Information Report (RRIR) highlights that while Australia has capability in AI-related areas like computer vision and robotics, and the social and governance aspects of AI, our fundamental capacity in large language models (LLMs) and related areas is relatively weak.

The RRIR also highlights that while specific regulatory frameworks to address generative AI—including LLMs and multimodal foundation models (MFMs)—are currently being developed, they have not yet been deployed in Australia or overseas. There is a growing recognition that a range of institutional measures and policies are required to mitigate public risks.”

Professor Lisa Given, the Enabling Impact Platform Director at RMIT University, dismisses warnings about the existential threat of AI in favour of focusing on its immediate benefits.

“The risk of extinction from AI is highly speculative and does not compare to the real and immediate global risks humanity faces, such as climate change and the COVID-19 pandemic – real and tangible concerns that governments need to address globally.

When institutions issue warning statements like this, they risk creating unnecessary panic about future, potential technologies that may never materialise.

They also focus people’s attention away from the real risks posed by AI tools today (such as misinformation, bias, lack of transparency, potential for abuse), which are causing real harm.

The public has been exploring the usefulness of AI tools daily, yet is often unaware of the real limitations of these systems and the risks of adopting these emerging technologies.

Tools that use copyrighted materials without consent, that present false information using a convincing and empathetic tone of voice, that pre-screen job applicants against biased datasets, and that enable image-based abuse – are just some of the real harms we are seeing today. This is where regulation, transparency, and scrutiny are needed urgently.

AI tools have many benefits to offer humanity, but we need to be critical and careful about how these tools are used for the betterment of society.

This requires us to question who has control of these tools, what people and companies may gain (or lose) from how they are used, and what steps we need to take, as a society, to ensure people and companies use these tools appropriately.”

Professor Geoff Webb, of the Department of Data Science and Artificial Intelligence at Monash University, backs calls for action to ensure most people benefit from the technology, rather than losing out.

“The Government’s move to update Australia’s guardrails around AI is timely. These technologies are going to transform the ways in which we live and work, and generate untold wealth. It is critical that we focus on ensuring that the benefits of these technologies are shared by Australians rather than just serving the interests of the nations currently investing in their development.”

Dr Daswin De Silva, the Deputy Director of the Centre for Data Analytics and Cognition (CDAC) at La Trobe University, also encourages government oversight.

“This is a much-needed, high-priority regulation following the recent transformative and disruptive developments in the generative AI space. The EU is leading this globally, with the “AI Act” already ratified in the EU Parliament; in some ways, the GDPR [General Data Protection Regulation] also makes generative AI tools like ChatGPT illegal in the EU. More to the point, for the first time in human history, we have created a “type” of intelligence that is equivalent to human intelligence, and able to rival the most advanced species on Earth.

While generative AI gets it wrong occasionally, it demonstrates superior intelligence across a breadth of topics that is not humanly possible. This is due to the prevalence of large volumes of digital data to learn from and high-end computational capabilities that dramatically speed up that learning process. A single human cannot achieve this level of intelligence, which is the primary reason to declare a threat and work towards the regulation of unsafe, risky and harmful AI.

All AI applications must be classified in terms of their level of risk, and then regulated based on how harmful – physically, psychologically and socially – this risk presents itself. At the same time, we must not lose the opportunity to leverage AI to solve or address the most challenging problems of our times, such as climate action, global poverty, industrial waste, and equitable universal healthcare and education. Regulation is necessary, but it should not prevent us from capitalising on this new technology for the advancement of humanity.”

Professor Jeannie Paterson, the Co-director Centre for AI and Digital Ethics at the University of Melbourne, is in favour of government regulation.

“We should focus on the corporate governance of those developing AI applications and not be swept away by hysteria about the technology itself. We need a systematic and robust regulatory response to AI. This includes measures for transparency and accountability. But just as the use cases for AI are different, so too the regulatory response must be targeted and proportionate to have an impact. Importantly, strong, well-resourced regulators are key; otherwise, law reform will be meaningless.”

By way of contrast, Professor Matt Duckham, a researcher in Geospatial Sciences at RMIT University, is scornful about the most extreme fears of AI, but acknowledges the potential for its misuse in the hands of criminals and propagandists.

“The statement could be viewed as an astonishing overreaction, and something I expect its signatories will look back on sheepishly in the coming months and years.

To compare this technology with the truly existential threats we face today such as climate change, war, pandemics, is simply absurd.  

No matter how surprising or remarkable the new AI capabilities are, this technology is just statistical models of word frequencies. 

The technology nevertheless marks a remarkable and exciting milestone in AI. It is surely causing big changes and disruptions in many industries and sectors of society, which are only set to grow over the next few years.  

Many of those disruptions will be negative, but hopefully many more will be positive. None will be apocalyptic though.  

The real harm of this technology lies in its subtle amplification of discrimination, inequity, exploitation, and entrenched advantage – harms that have already been evident and growing in the use of AI across industry, institutions, and society for some time.”
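
Duckham’s characterisation of LLMs as “statistical models of word frequencies” can be illustrated with a toy bigram model – a minimal sketch, vastly simpler than any real LLM, in which the next word is predicted purely from co-occurrence counts (the corpus and function names here are illustrative only):

```python
from collections import Counter, defaultdict

# Tiny training corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Modern LLMs replace these raw counts with neural networks over vast corpora, but the underlying objective – predict the next token from observed statistics – is the same.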

Professor Matthew Warren, the Director of the Centre for Cyber Security Research and Innovation at RMIT University, agrees that

“Doomsday scenarios on AI are nothing new. Remember in 2014, when Stephen Hawking warned AI could end mankind? Or, that same year, when Elon Musk warned that, with artificial intelligence, we are summoning the demon?

Now the Center for AI Safety claims AI systems could lead to human extinction. It is all hype that distracts from the real challenges of global warming, pollution, famine, and more.

Will these generative AI models end humanity? No.  

AI systems should be embraced, as they will help improve society in many ways – including ways we do not even understand now.

Where they will have a negative impact on society is in widespread job losses, disinformation, and the creation of deepfakes.

The biggest AI risks the world faces are how authoritarian countries such as China and Russia will develop and apply AI systems, potentially including military applications with autonomous weapons. This is where pressure and controls should be applied.

Western countries, such as Australia, must develop AI frameworks that will guide the development of AI and identify areas where AI systems will not be developed.   

The Australian government has now outlined its intention to regulate artificial intelligence, saying there are gaps in existing law and new forms of AI technology will need “safeguards” to protect society based upon a risk approach.  

This move should be focused and supported. We shouldn’t focus on speculative doomsday scenarios.”

Fan Yang, a Research Associate from the School of Media and Communication at RMIT University, argues that

“the extent to which AI can be risky and threaten human extinction depends on how AI is designed, programmed, supervised, and used.

Technologies can be very different if humanity, care, or environmental sustainability is centred, as opposed to productivity or efficiency, as with ChatGPT.

The problem lies in social prejudices, inequalities and injustice that have been embedded and inscribed in the long history of science, technology, and society (which many of us are not aware of until we experience our keyboard auto-correcting our name to an English word, Alexa failing to recognise our accents, or Instagram’s filter reassigning us another race/ethnicity).

The headline that AI can cause human extinction is eye-catching, but the risks of these technologies are more likely to be disproportionately distributed to groups of people who are already socially disadvantaged – women, minorities, people of colour, etc.

The global pandemic and nuclear wars tell us the same story.  

AI is part and parcel of capitalism where disposable labour has been historically used for the financial gain of the capitalist and intensifies exploitation and alienation among the groups of people who are already disadvantaged.”

Dr Erica Mealy, a Lecturer in Computer Science at the School of Science, Technology and Engineering at The University of the Sunshine Coast, acknowledges that

“There are a number of big risks with AI use becoming mainstream – I am not certain that they are extinction-level yet, but certainly proactive legislation to prevent abuse is a good step. The kinds of activities to be legislated include its use in any sort of safety-critical setting – medical or mechanical.
Recent news has discussed how Tesla cars weren’t stopping for pedestrians – an AI agent deciding when a multiple-tonne vehicle will stop is a matter of concern.
The text produced by ChatGPT-style tools should be treated with scepticism. In terms of quality, it is worse than a tertiary source. Not only can these tools hallucinate (that is, create information that was not in their training set), they cannot explain how their answers were reached, and, because they are trained on the internet at large, extra care must be taken to exclude inaccuracies.
One of my biggest concerns about the use of Generative AI tools like ChatGPT lies in their potential to further reinforce biases. The output of these models is based on their input – in the truest sense, “you get out what you put in”, or my favourite, “garbage in, garbage out”. So, for instance, a model will never say the Pope is a Buddhist, because that is not in its training data – but it will faithfully reproduce any biases that are. For example, there are only a small number of immunology experts the world over, but due to the democratising of information on the internet, everyone gets a voice, and those non-expert voices can and do drown out the experts. We saw this with the COVID-19 pandemic.

This problem is further compounded by the fact that the statistical models used in AI and machine learning are based on how often a piece of data appears in its training set, so the more times something appears, the more likely it is to be given as an answer.
It is absolutely appropriate that some sectors, such as transportation and medicine, do not use these tools as a point of truth.”
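
Mealy’s “garbage in, garbage out” point – that the frequency of a claim in the training data drives how often a model reproduces it – can be sketched with a toy example (purely illustrative; real models are far more complex than frequency sampling):

```python
import random
from collections import Counter

# Hypothetical "training data" in which one claim is over-represented:
# 8 copies of claim A versus 2 of claim B (an 80/20 imbalance).
training_data = ["claim A"] * 8 + ["claim B"] * 2

def answer():
    """Answer by sampling in proportion to training frequency."""
    return random.choice(training_data)

random.seed(0)  # fixed seed so the demonstration is repeatable
outputs = Counter(answer() for _ in range(1000))
print(outputs)  # claim A dominates, roughly 4:1, mirroring the input bias
```

The model never corrects the imbalance; it faithfully echoes it, which is exactly why biased or non-expert voices that dominate the training set also dominate the output.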

Dr Vinh Bui, an IT and Cybersecurity expert from Southern Cross University, accepts that

“Generative AI, a powerful technology with vast potential, has sparked debates regarding the need for regulation. So in place of an outright ban, it is important to have a concise assessment of the benefits, risks, and mitigation strategies associated with generative AI.
Generative AI offers valuable benefits across domains such as healthcare and creativity. For example, Stanford University’s synthetic MRI images aid in medical diagnostics, while OpenAI’s MuseNet enhances musical creativity. However, concerns arise regarding the spread of misinformation and algorithmic bias. UC Berkeley’s deepfake video experiment demonstrated the potential impact on public trust and election integrity.
Rather than a ban, proponents suggest regulating generative AI through transparency, accountability, and mitigation strategies. OpenAI’s efforts to highlight limitations and biases in its models showcase transparency. External audits and certifications can ensure adherence to ethical guidelines. Technical solutions, including deepfake detection and authentication mechanisms, require collaborative research and industry participation.
Public engagement is crucial for informed decision-making. Policymakers must involve experts, industry representatives, and the public to shape regulations reflecting societal values.
In conclusion, regulating generative AI calls for a comprehensive approach. While benefiting various fields, addressing risks and ethical concerns is essential. Through responsible regulation, transparency, and technical advancements, we can harness the potential of generative AI while safeguarding against potential harms. Public engagement remains central to this process, ensuring a collective and responsible future.”

Dr. David Tuffley, a Senior Lecturer in Applied Ethics and Socio-Technical studies at Griffith University in Australia, observes that

“ChatGPT and its cousins are very good at telling stories and creating narratives, and this would make them a potent weapon in an election campaign arsenal. For this reason, we do need to have a way of knowing if it is an AI that is talking to us or if it is a bona fide election candidate. These battles will be fought on social media platforms.

The government needs to oblige the platforms to take responsibility for what is published on them, and not hide behind the “we’re just a channel for others to exercise free speech” defence. The same rules should apply to them as apply to established media organisations.”

Toby Walsh, a Scientia Professor of AI at The University of New South Wales (UNSW) and Adjunct Fellow at Data 61, is an AI advocate, arguing that

“The biggest risk is the risk of not investing in all the opportunities that AI offers. The iPhone, Siri, and WiFi were all the result of government investment in people and basic research. When the UK made a similar announcement a few weeks back to look at risk of generative AI, they also announced an additional GBP 1 billion investment in AI. Australia risks missing this boat without such similar ambition.”

Professor Robert Sparrow, from the Department of Philosophy at Monash University, was

“…pleased to see the government being open to regulating AI. If this technology is as revolutionary as its advocates suggest then it is entirely appropriate that the public gets a say in what sort of world it might bring about.

Banning election-hacking via deepfakes and ChatGPT sock puppets, and trying to combat bias, would be a good start but, as recent statements by leading scientists suggest, the pursuit of AI itself is high risk for us all.”

Professor Paul Salmon, the Co-Director, Centre for Human Factors and Sociotechnical Systems at The University of the Sunshine Coast, believes that

“Though advanced artificial intelligence (AI) could bring significant and widespread benefits, there are also many risks for which we currently do not have adequate controls. It is important to note that these risks do not relate only to the malicious use of AI, rather, there are also many risks associated with the creation and use of well-intentioned advanced AI. Some of these risks are even existential in nature.  

Given that early testing of GPT-4 suggests it exhibits elements of general intelligence, it is critical that we halt the development and use of advanced AI so that adequate controls can be developed. These include appropriate governance structures, such as an AI regulator, laws around the use of AI in different sectors, an agreed-upon ethical framework, and design standards to name only a few.

If we continue on the current trajectory without the necessary controls to ensure safe, ethical, and usable AI, we will likely see catastrophic outcomes across society.”

Kathy Reid, a PhD Candidate from the School of Cybernetics at The Australian National University, notes that

“Recent advances in speech synthesis technology are just as impressive as those seen in image and text generation, but present different challenges for regulators.

While it may take many hours of footage, or several hundred images of a celebrity to create visual deep fakes, it is now possible to clone voices with only a few seconds of speech [e.g. reference]. This could be recorded on a phone call, while ordering food, or harvested from podcast or video recordings.
Generative AI technology significantly challenges how we present and preserve our personal identities in a datafied world.
A national regulatory approach to ensuring these technologies yield financial and societal benefits while addressing harms is timely, necessary and a marker of Australia’s growing AI maturity.”

Dr Zulqarnain Gilani, a Senior Lecturer and Raine Robson Fellow from Edith Cowan University, also calls for more regulation to protect society.

“Will the law allow a 5-year-old to drive a car? The simple answer is no, because the 5-year-old driving a car is a risk to themselves as well as the community. I think that the same goes for the unrestricted use of AI, especially by individuals who have no idea about the repercussions of the synthetic or “fake” data created. Since the generative AI models have now evolved to a stage where even a person with very little coding skills can apply and exploit them, strict regulations around their use are warranted now more than ever before.

Generative AI is very beneficial for humanity in many domains, especially medicine and health. However, its use in the public sphere by industry and private companies must be regulated by the State. This would protect citizens from its very harmful effects, like deepfake images, videos, and audio.”

Professor Enrico Coiera, the Director of the Centre for Health Informatics in the Australian Institute of Health Innovation at Macquarie University and Founder of the Australian Alliance for Artificial Intelligence in Healthcare, wants controls on the medical use of consumer AI.

“We do need to put a pause on the use of non-medical grade AI in clinical practice. Systems such as ChatGPT are not designed for use in clinical settings, and have not been tested for safe use in any aspect of patient care.

Australia already regulates medical-grade AI that is used in devices through the TGA, and we need to communicate very clearly to consumers and clinicians that any AI that has not gone through rigorous testing is not considered fit for use in patient-facing settings.”

Dr Luke Balcombe, a Researcher from the School of Applied Psychology and Australian Institute for Suicide Research and Prevention at Griffith University, supports

“…international collaboration in AI, as well as taking a risk-based approach with safety at its core. However, the Australian Government should also consider how the EU’s draft AI Act proposes prohibition of “subliminal techniques” when there isn’t yet evidence that AI can or will cause technology-enabled mind control.

From the perspective of a mental health researcher with an interest in informatics, I see that regulation is one piece of the puzzle. I see some alarmist media, but I think that the likes of Elon Musk and OpenAI’s CEO, Sam Altman, want there to be progress in AI. There are limitations, though, and thus a need to explore the potential for problematic AI use. There are various advantages and disadvantages of AI; it is also worth considering a mixed version: human-AI solutions.

However, subliminal advertising has been going on since Coca-Cola and popcorn images in theatres were made to boost sales. I looked into YouTube in a recent study because it is an eminent, useful platform that has mental health as part of its mission and regulations. It is apparent that YouTube promotes mental health awareness, education, and support. It has various channels for information seeking and sharing about mental health. But it was also found to have more of a negative impact on loneliness and mental health. I have since been looking into AI more in-depth and discovered a range of issues, such as AI hallucinations, sentience, sentiment, and subliminal systems.

YouTube is the most used streaming platform in the world, with over 2.6 billion monthly active users. However, a recent integrative review by Emeritus Prof. Diego De Leo and me, synthesising 32 papers, has shown that regular YouTube users, particularly those under 29 years of age or who watched content about other people’s lives, were most negatively affected by YouTube. They reported higher levels of loneliness, anxiety, and depression than non-users or occasional users. Although the underlying cause is unknown and further studies are required, it was outlined that high to saturated use of YouTube (2 to 5+ hours a day) may indicate a problem.

There could also be errors and bias in the recommendation algorithm, as well as a narrow range of viewed content which may exacerbate pre-existing psychological symptoms. The lack of transparency in how YouTube’s system works stems from its recommendation algorithms being focused on marketing strategies. YouTube’s user satisfaction optimization has a diminishing value effect and opportunity cost for the users. In other words, at what point should YouTube intervene in terms of a duty of care for mental health when there are indications of problematic use? 

The use of AI has been going on for years and is only now becoming more scrutinised because of the opportunities and threats that AI poses. Meanwhile, platforms like YouTube have been using machine learning to their advantage for years. The issues of AI hallucinations, sentiment, sentience and subliminal systems need more research, but it is an underfunded area. I have been looking into GPT and its potential to help answer tough questions about the complex relationship between psychiatric disorders and suicide. The need for faster, better solutions means global preventive interventions should consider AI-powered solutions in research methods.

We found GPT to be a promising research tool, although prompt engineering is needed to effectively train the AI to produce more insightful and factual responses. It appears that while ChatGPT is the better writer, it struggles at times with facts. Bing is better with facts and shows actual improvement in its responses. So there is potential to use AI safely, although human factors in human-computer interaction need attention to get humans and AI working well together.”

Nataliya Ilyushina, a Research Fellow from the Blockchain Innovation Hub and ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University, places AI within broader social parameters.

“Algorithmic bias and the proliferation of deep fakes are commonly cited as reasons to advocate for AI regulation. However, these issues are not unique to AI, as other technologies like social media have caused significant harm through the spread of misinformation.

It is important to recognise that algorithmic bias stems from the training data used by algorithms, and it is not an isolated problem exclusive to AI. Therefore, a comprehensive review of existing laws and regulations should be conducted to assess the extent of user protection already in place.

Completely banning AI in Australia could have disastrous consequences for productivity, especially considering the country’s declining productivity growth over the years. In fact, one of the key factors contributing to this decline, as reported by the Parliament itself, is the slow rate of innovation and adoption of modern technologies by firms.

The government discussion appears to overlook the crucial role of technology in the Australian economy. Furthermore, it fails to clearly define who should be considered the decision-maker in the context of AI. For instance, in the case of deep fakes, it remains unclear who should bear legal responsibility if regulations are implemented. Therefore, it is essential to address these challenges and establish a clear framework that identifies the accountable parties for AI-related activities.

Overall, the discussion on banning AI appears to be disconnected from the consequences for Australian productivity and lacks an assessment of the existing legislation to prevent overregulation.”

Alex Jenkins, the Director for the WA Data Science Innovation Hub at Curtin University, is gung ho about AI’s potential.

“Blanket bans on AI techniques will stifle research and innovation in Australia. The potential for AI to drive immense economic growth and scientific discovery needs to be balanced against the immediate and longer term dangers of AI technology. We can do this in a risk proportionate manner without over-regulation.”

Julian Vido, a Lawyer and PhD Student from the School of Cybernetics at The Australian National University, wants a transparent process.

“It is important to draft legislation that is open to scrutiny, and capable of general application. This is essential given what we know about the speed of technological innovation and the potential for unforeseen impacts when these move into new fields. Accordingly, it would be ill advised for any new regulations to define risk categories according to overly narrow or exhaustive definitions of specific technologies or use cases. This very issue was encountered by the European Union, when it realised that ChatGPT was not appropriately regulated under its draft AI legislation.

The success of any new AI legislation will also depend on it being enforceable. While the drafting of the legislation itself is important, attention should also be paid to the structure of any regulator and the capacity of everyday Australians to pursue breaches impacting them through the justice system. The lack of redress available to the millions of Australians impacted by recent privacy and data breaches is an example of the problems that might lie ahead.”

Dr Dana McKay, a Senior Lecturer in Innovative Interactive Technologies and Data Science from the School of Computing Technologies at RMIT University, backs government scrutiny of the sector.

“It is reassuring to see the Australian government taking a stance on the role of AI in society. The current proposed approach may address some of the worst harms of AI; however, there are some gaps. The current proposal aims to ban ‘automated decision making’ as a high-risk activity, but promotes the use of AI for medical image analysis.

While faster and more accurate medical image analysis is beneficial to society, whether AI in its current or near-future form offers this is an open question. We know, for example, that AI is poor at detecting dermatological conditions on non-white skin: what would AI medical image analysis mean for skin cancer diagnosis for the many Australians who are not white? We also know that people perceive computers to be impartial, and that the outputs of such algorithms are taken at face value.

A further risk to Australian society, arising from the rise of Q&A-based search models, is a possible loss of capacity for innovation. We know that serendipity, driven by diverse information sources and presentations, is key to innovation. Will a single, blandly presented textual answer to every question offer the possibility of the novel insights afforded even by a set of individual search results, each representing an individual perspective? We simply do not know, but we do know that skill in generating responses from tools such as ChatGPT is likely to play a significant role.”

Professor Nicholas Davis, the Co-director of the Human Technology Institute at The University of Technology Sydney, says law makers are already falling behind.

“Laws in Australia and around the world have failed to keep pace with the rise of AI. This is true in three ways. First, regulators have not been enforcing – and in many cases have not had the resources to enforce – existing legal obligations that apply to AI systems. Second, critical laws such as Australia’s Privacy Act are two decades out of date, and in urgent need of reform to appropriately capture the harms and risks posed by AI systems. Third, high risk AI systems require special treatment to keep Australians safe. 

We have a generational opportunity to shape the guardrails for AI systems. We should engage across sectors and with our international trading partners to create thoughtful, nuanced laws that put human needs at the centre of AI systems.”

Rebecca Johnson, a PhD candidate from the School of History and Philosophy of Science at The University of Sydney, also backs legislative oversight.

“The two reports signal that the Australian Government is prepared to take AI risks seriously, but it needs to respond more quickly than it has in the past. 

A government AI Ethics framework was released in 2019, but as noted by CSIRO-Data61 director Jon Whittle in March this year, little had been done to operationalise the principles. Principles alone cannot guarantee ethical AI and have no impact on malicious actors.

The reports provide a summary of other responses around the globe. The EU is taking a strong regulatory stance as indicated by its demand for OpenAI and similar companies to be more transparent about the inner workings of their products. 

In the US, the Senate “launched a proposed regulatory framework to deliver transparent, responsible AI while not stifling critical and cutting-edge innovation”. 

The short of it is, we have a choice to make: are we going to swing to the US approach, which is more strongly economically focused, or the EU approach, which prioritises the good of society and the rights of the individual? This is our AI acid test.”

Professor Uri Gal, of the Business School at The University of Sydney, agrees that

“The move towards a regulatory response to advanced AI technologies, such as generative AI, is a step in the right direction, particularly in the context of high-risk scenarios. However, we should put in place regulations that are reasonable. By that I mean three things:

We shouldn’t over-regulate by enforcing wide bans that could harm individual freedoms. While it’s essential to protect against AI misuse, regulations must balance this against respect for privacy and freedom of speech. Rules that are too stringent could infringe these liberties, leading to potential misuse of power and surveillance issues.

We should also be mindful that too much regulation can stifle innovation. AI technology is a dynamic field, and strict regulations might hinder its development. Start-ups, in particular, could struggle under heavy regulatory burdens, limiting competition and innovation.

We should seek to put in place regulations through collaboration with other nations. Given the global nature of AI development, any regulation would require international cooperation to be truly effective. Without a global consensus, companies might just migrate their AI development activities to less regulated jurisdictions, essentially leading to a regulatory race to the bottom.”

Kylie Walker, the CEO of the Australian Academy of Technological Sciences and Engineering (ATSE), believes that

“Australia has an opportunity to be a global leader in responsible AI, backed by our world-class research, existing regulatory frameworks and early adoption of AI Ethics Principles. 

This is a critical national conversation, and we welcome the Government’s leadership in facilitating it. We must focus on both the opportunities and the risks of widespread adoption; the scope and adequacy of national planning and policies; the fitness of legal and regulatory approaches; and the implications of increasing geopolitical competition and geo-specific regulation in AI-related technologies and industries.

We welcome a common-sense approach to regulation that recognises the importance of these technologies and the role they can play in assisting economic productivity, health and social wellbeing. 

It’s also critical that we examine the significant environmental cost of these technologies, especially the huge power and water requirements, and resources consumed by upgrading hardware. 

And we need to ensure data is used ethically and follows the principles of privacy and security, as well as being mindful of Indigenous data sovereignty.

To inform the Discussion Paper, ATSE, in collaboration with the Australian Council of Learned Academies, the Australian Academy of Humanities, and the Australian Academy of Science, authored a Rapid Response Information Report for Australia’s Office of the Chief Scientist.  

Our report demonstrated that generative AI is fundamentally and rapidly reshaping business, government and the community, and highlighted the need for better understanding, integration and design of AI technology.”

Professor Karin Verspoor, the Dean of the School of Computing Technologies at RMIT University, also backs some measure of government control to protect the public interest.

“We should absolutely be looking at regulation of the use of AI technologies, and seek evidence on both the efficacy and potential harms of AI applications. However, we should be careful not to throw the baby out with the bath water. There are many positive uses of AI that we can benefit from as a society.

We must invest in the development of a sovereign capability in AI and in research to evaluate AI carefully. We must establish a national policy framework for an AI-ready workforce and for regulation of the safety of AI applications. The pace of AI innovation will continue to accelerate. We must engage with it, rather than bury it.”

Professor Mary-Anne Williams, the Michael J Crouch Chair in Innovation and Director of the Business AI Lab at The University of New South Wales, says

“It stands to reason that any form of artificial intelligence (AI) deemed unsafe should not be used or deployed. The challenge, however, lies in defining and deciding what constitutes ‘unsafe’ AI. The question of responsibility and regulatory oversight remains largely unanswered, with ambiguities persisting within scientific, engineering, educational, and legal spheres.

AI, much like electricity in its nascent stages over a century ago, is a revolutionary, general-purpose technology with the potential to overhaul every industry. Just as we didn’t outlaw electricity due to its inherent risks, banning ‘unsafe’ AI is problematic. Instead, we implemented rigorous safety measures such as the use of insulation, circuit breakers, and other measures, coupled with robust regulations and standards.

Australia’s current investment in AI research, education, societal adaptation, innovation, employment opportunities and job creation seems insufficient as we face the magnitude of the impending transformations. Prioritising safeguards and facilitating smooth societal transitions are vital, and we need an integrated effort across all fronts to advance our understanding and preparedness to lead in a post-AI world.”

Yee Wei Law, a Senior Lecturer from the University of South Australia, argues that

“There are ample examples of how AI, and even ML (a subset of AI), can lead to major problems (e.g., accidents involving autonomous vehicles, defamation) even without being misused (e.g., deepfakes for fake news) or attacked (through techniques of adversarial machine learning). However, the associated risks need to be clearly identified and assessed before any ban is imposed on any application. Furthermore, risk-mitigating measures need to be sought immediately and proactively to avoid hampering Australia’s progress in AI/ML.”

Professor Jill Slay AM, the SmartSat Professorial Chair in Cyber Security at the University of South Australia, rounds off the debate by noting

“There seems to be a lack of understanding in Australia of the AI issues that do need management and regulation.
As an early adopter of AI and machine learning for developing defensive cyber security controls, I support the proposed ban on the use of particular forms of ‘weaponised’ AI. These currently allow fake information to be spread, cloned images and false ‘truth’ to be propagated, and can be used to support the development of cyber or kinetic warfare if left unregulated. This can happen through deliberate misuse of the algorithms (or mathematical approaches) that underpin the technology. However, controlled use of these techniques supports the development of useful technology and is beneficial to the Australian economy.
In the same way, the controlled use of applications such as ChatGPT and other generative AI is very useful in the hands of those trained and educated in their use. There appears to be a great deal of fear in the educational sector as to how these should be applied in, say, school or university coursework. My opinion is that educators can experiment with and train themselves in the technologies so as to train and assess students. These tools do not necessarily pose more risk than any other digital technology, and to ban them appears to be an over-reaction.”