Controlling AI

March 5, 2025

Technology can and does shape society, but society can also shape technology. We can still chart the path AI should take to produce the best outcome for humanity.

First, we can hold the platforms more accountable for the content they serve. Digital platforms should no longer hide behind the argument that they are mere intermediaries, especially now that they are increasingly delivering AI-generated content of their own creation.

Second, we can protect those who are easily manipulated or harmed, especially young minds in their formative years, which are not yet able to defend themselves against sophisticated manipulation. The recent decision to prohibit deepfake pornography is a good example of protecting those being harmed. But it is just one of the most obvious harms that need to be addressed. There are many others, such as doxing and cyberbullying. And sadly, unless we do something about it, AI will likely only make such harms more prevalent.

Third, we can regulate to ensure truth in political advertising. We know that sophisticated algorithms already tailor content to the tastes of individual internet users, so each browser can be served a slightly different truth, which not only creates confusion but also makes intelligent debate difficult.

Fourth, we can regulate against deepfakes. Fake content will undermine our faith in many of our institutions. Consider another area where fake content is dangerous: we rightly worry about counterfeit money undermining our confidence in the financial system, so we impose strong penalties for counterfeiting. We need to do the same with deepfakes.

Unfortunately, it is not as simple as banning fake content. We have to balance this against maintaining freedom of speech. For instance, our ability to parody politicians is an important part of the political process. We must walk the delicate tightrope between reducing fake content and supporting satirical debate.

Fifth, we need to embrace technological measures like digital watermarking to reduce the impact of deepfakes and protect intellectual property rights. This is, in fact, a perfect application for the blockchain: we may finally have found something good to do with it!

Sixth, we can restore financial support to the news media for its role in shining a bright light on our democratic institutions. The News Media Bargaining Code was an attempt to do precisely this. We can double down on such initiatives, taxing the technology companies to help protect democracy by uncovering half-truths, lies and corruption.

Seventh, we can use digital technologies to increase transparency and protect whistle-blowers like Julian Assange, thereby helping to preserve democracy by shining a light on wrongdoing. The internet was, and remains, a powerful force for democratic transparency.

Eighth, we can strengthen our digital privacy. In many respects, we are at the technological low point in terms of our privacy. To do anything interesting, we need to share our data with the tech giants and their AI algorithms. But advances like federated learning, where AI models are trained without sharing our personal data with the tech giants, promise to give us back our privacy. Indeed, AI will increasingly be smart and small enough so that our data doesn’t leave our devices.

Ninth, we can ensure that access to digital technologies is a fundamental human right. If we do not, the world will divide into the digital haves and have-nots. Access to the internet is becoming as important as other basic rights, like freedom of speech. We learnt during COVID lockdowns how many children in Australia did not have access to a single device at home on which to access the web. This cannot continue.

Ultimately, digital technologies like AI have the potential to increase trust. But we need to make some good choices to ensure that they do. Our children are set to inherit a worse world than we were born into due to a raft of problems, from the climate emergency to global insecurity, and a distrust in the very institutions that we now need most.

The future requires us to be careful, smart, and committed to using AI to build, not break, our trust in society. And if we use technologies like AI wisely, we might look back at this time as the start of a new era of increasing, and not decreasing, trust in our democratic institutions.

So, do we trust AI? Maybe not yet, but with the right choices, we could.

Data for this article extracted from Age of Doubt: Building Trust in a World of Misinformation, edited by Tracey Kirkland and Gavin Fang, released in March by Monash University Publishing.
