Australia needs to move beyond simply pleading with internet platforms for better content moderation and instead implement new legal frameworks that empower citizens directly. For a model of how to achieve this, policymakers should look to the innovative legal thinking emerging from Denmark.
Australia’s modern, multicultural society is built on high trust and social cohesion. This quiet asset now faces a profound challenge: the rise of generative AI and deepfakes.
The fundamental threat is not the technology itself, but rather its unchecked proliferation as technology platforms fail to self-regulate. After a decade of broken promises, we can’t keep waiting for the tech industry to solve problems it created. Instead, the responsibility falls to democratic governments to pioneer an effective policy solution.
The danger of deepfake technology is its capacity to dissolve the shared factual foundation on which society functions. The business models of our largest platforms, optimised for engagement, have created the perfect incubator for digital pollution.
Their attempts at content moderation have proven an endless task, always one step behind the next viral falsehood. This environment of non-interference enables harassment, political disinformation and fraud on an unprecedented scale. Existing legal remedies, such as defamation law, are ill-suited to this fight.
Denmark’s proposition could be an essential way forward. The Danes are exploring a legal framework that shifts the focus from policing fake content to empowering the authentic original by granting citizens intellectual property rights over their unique biometric identities—their faces and voices.
The genius of this approach lies in its reframing. Rather than being a wholly novel idea, it repurposes one of our most established legal tools—intellectual property—for a new and urgent purpose.
Under current copyright laws, the most people can do is claim copyright infringement over the specific images used to create a deepfake. This can be difficult to prove given the opacity of the underlying AI processes. It also relies on the individual owning the original image, which may not be the case, even when the image is of that individual. Under the Danish model, however, individuals would own the rights to their likenesses as though they were copyrighted works, affording them far greater control over their digital identities.
Before this model can be adopted, policymakers must assess its feasibility. The legal hurdles are significant: copyright law protects original expressions, and the human face does not fit neatly into this box. The most prudent path would likely be to create a new right that borrows principles from copyright but is explicitly designed to protect biometric identity. This would require robust exceptions to protect news reporting, satire and artistic expression, modelled on the existing fair-dealing provisions in Australian copyright law.
Enforcement is equally challenging, but the true value of a biometric copyright framework is its capacity to impose clear legal liability on platforms that have so far evaded meaningful responsibility. Clear, property-based rights would provide a mechanism for individuals or the eSafety Commissioner to issue takedown notices for deepfakes with the same legal force as a major film studio issuing a notice for a pirated movie. A platform's failure to act would expose it to significant statutory damages, incentivising proactive removal rather than passive hosting.
The federal government’s power over copyright provides a basis for a national law. Biometric rights would complement, not replace, existing laws. The Privacy Act governs how organisations handle data, while biometric rights would govern how our identities are represented and used. These changes would strengthen the mandate of the eSafety Commissioner, creating an unambiguous category of illicit content and a powerful new tool to protect Australians online.
The primary appeal of this model is its potential to reinforce social cohesion. It offers a tool to protect vulnerable communities disproportionately targeted by digital harassment, helps restore trust in public institutions by making it easier to delegitimise fabrications and fosters a healthier public sphere where individuals can engage without fear of having their identities exploited.
Given the clear shortcomings of our current model, the time is right to consider a new strategy. There is a clear willingness within the Australian government to address this threat: just this month, Arts Minister Tony Burke discussed the threat of deepfakes, saying, ‘I don’t fear technology. I just know we need to be able to respond to technology.’
The government is well positioned to lead this effort by establishing a parliamentary working group to assess the feasibility of a biometric copyright framework. This should involve experts from the Attorney-General's Department, the eSafety Commissioner and civil society to create a balanced and effective Australian model.
Liberal democracies now have to fill the regulatory vacuum that technology platforms have created. Australia’s greatest strength is its social fabric, and by learning from Denmark, we can reinvest in the shared, verifiable reality that binds our nation together.

