Freedom of speech v freedom from deception

The rapid pace of technological advancement, coupled with attention overload driven by content algorithms, targeted marketing, memes, clickbait, and sensationalism on social media, has created digital silos that reinforce biases and provide fertile ground for mis- and disinformation. As generative artificial intelligence becomes increasingly sophisticated, manipulated media is set to escalate, challenging governments and tech platforms alike to navigate its risks and opportunities.
However, legislative solutions targeting peddlers of disinformation face considerable obstacles. The recent failure of the Combatting Misinformation and Disinformation Bill to secure Senate approval highlights this. Both the Liberal-National Coalition and the Greens rejected the bill, citing concerns over censorship and a lack of clarity, and the debate over how to balance free speech with protection from deception continues.
Disinformation is nothing new; it’s an age-old tactic that has been weaponised in cyberspace to polarise, provoke, and profit. Despite policy updates on platforms like X (formerly Twitter), efforts to curb disinformation have fallen woefully short. Social media platforms allow complex issues such as international conflicts, the rising cost of living, and immigration to be reduced to simplistic formulas, encouraging quick reactions and confirmation bias while discouraging critical thinking.
But the stakes are high, as seen when a deepfake audio of Kamala Harris calling Joe Biden “senile” and herself a “diversity hire” went viral ahead of the US elections. Originally labelled parody, the clip circulated widely—including by Elon Musk—without a disclaimer identifying it as AI-generated.
Such incidents spotlight the alarming potential of unlabelled AI-generated content to distort reality and sway politics. While satire and parody sit outside strict definitions of mis- and disinformation, failing to label manipulated media violates the policies of platforms such as Meta and TikTok and risks misleading audiences.
The Australian Context
Misinformation has increasingly permeated public discourse and digital media in Australia, with the Bondi Junction stabbings serving as a telling example. On 13 April 2024, at the Westfield shopping centre in Sydney’s eastern suburbs, Joel Cauchi, a 40-year-old man diagnosed with schizophrenia who was off his medication, fatally stabbed six people and injured 10 others. Initial false claims that the attack was a “religious terrorist act” triggered anti-Muslim and anti-immigration rhetoric on social media. Influencers and accounts with large followings amplified the unverified claims, deepening societal divides.
As the story unfolded, opposing narratives claimed the perpetrator was a Zionist named Benjamin Cohen. In the chaos, an innocent University of Sydney student became the target of baseless allegations. These claims were amplified by Channel Seven’s Sunrise program, which later retracted them and issued a public apology, acknowledging that the allegations were “entirely baseless and without basis.” This misinformation frenzy, linked to the Israel-Gaza conflict, illustrates how various groups have weaponised that conflict to push political agendas in Australia. The Bondi case demonstrates how unverified social media content can rapidly spiral, crowding out accurate information.
Comedian David Baddiel summed up the role of social media in fuelling misinformation in a post on X: “The crazed rush to insist that the Bondi killer was Muslim by some and Jewish by others — when in fact he was neither — is another example of how much this platform is just somewhere people come to have their confirmation biases further confirmed.”
Unverified claims labelled “breaking news” are circulated unchecked, with news outlets often amplifying these stories in a race to outpace competitors. A report by the Columbia Journalism Review highlights that digital media frequently disseminates misinformation in an effort to drive traffic and social engagement, with a key issue being the lack of proper verification and the misrepresentation of rumours as facts.
Similarly, an article by The Journalist’s Resource examines “homegrown” deceptive or inaccurate information produced by news outlets themselves to attract a wider audience. In this scramble for attention, the public’s right to accurate information, and to protection from manipulation, is sidelined. The result? A society increasingly vulnerable to deception.
Hits and Misses
Media manipulation has become pervasive on social media platforms, posing challenges for those who rely on them for news. The Oxford Internet Institute reported organised social media manipulation campaigns in all 81 surveyed countries, highlighting the growing threat such tactics pose to democracies worldwide. Similarly, the Australian Electoral Commission (AEC) has warned of its limited capacity to counter AI-driven manipulation of electoral processes, underscoring the urgency of legislative action.
The Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 aimed to empower the Australian Communications and Media Authority (ACMA) to hold digital platforms accountable for the spread of misinformation that harms Australians while also preserving free speech. However, the bill’s failure to pass the Senate leaves Australia at a crossroads.
The bill sought to compel digital platforms to manage content responsibly and remove inauthentic activity such as bot-generated content and coordinated disinformation campaigns. Transparency requirements would hold platforms accountable, as exemplified by TikTok, a signatory to the Australian Code of Practice on Disinformation and Misinformation, which in September 2024 reported disrupting seven covert influence operations and removing 9,743 accounts associated with previously disrupted networks. By focusing on platforms popular among young users, the bill attempted to shield vulnerable audiences from echo chambers of manipulated information.
By exempting professional journalism that adheres to industry standards, the bill recognised the importance of independent reporting. Yet concerns lingered that whistleblowing risked being censored under its provisions. Moreover, despite the exemption, professional journalists themselves can become conduits for misinformation, whether through industry pressure to publish ahead of rival outlets or through the sophisticated strategies employed by bad actors.
For instance, during the Boston Marathon bombings on 15 April 2013, two improvised bombs exploded near the finish line, triggering a surge of social media activity from bystanders, journalists, and law enforcement. False rumours, such as a Reddit thread misidentifying a suspect and CNN’s premature report of an arrest, spread rapidly, forcing corrections. The episode shows how the rush to publish breaking news often puts unverified claims in front of the public.
The Human Rights Law Centre (HRLC) backed the bill, emphasising the need for industry-developed misinformation standards consistent with human rights, along with stronger accountability measures. Self-regulation, as past reports have revealed, has often fallen short because of the commercial interests of digital platforms. Despite its merits, however, the bill faced substantial backlash.
Critics pointed to vague definitions of “serious harm” and the unchecked power granted to fact-checkers, which could lead to bias or misuse. Concerns about government overreach in regulating speech and the potential stifling of free expression further fuelled opposition.
Moreover, the bill’s focus on younger audiences overlooked older demographics, who are equally vulnerable to misinformation. It also risked driving mis- and disinformation onto less-regulated platforms and private messaging applications such as WhatsApp and Telegram, creating isolated echo chambers where misinformation thrives unchecked and where detection and intervention are even more difficult.
Media literacy campaigns are often proposed to combat misinformation, but alone they are insufficient. Teaching individuals to identify false content assumes a rational audience—one that has access to all the information, engages with it critically, and makes an informed judgement—an ideal often undermined by confirmation bias and entrenched beliefs.
A 2021 study by the MIT Media Lab found that false news spreads more widely in polarised networks as individuals work harder to convince others of their views. Without systemic reforms, this dynamic will persist regardless of individual education efforts. Platforms also struggle to manage the sheer volume of flagged content, leaving many reports unresolved.
AI-Empowered Misinformation
The Global Risks Report 2024 identified AI-generated mis- and disinformation as a significant threat, alongside growing societal and political polarisation. With the bill’s failure, the government’s focus has shifted to AI regulation under the Minister for Industry and Science. However, this does not address the core issues surrounding mis- and disinformation, and abandoning the effort in the face of impasse is not the solution. The urgency of addressing disinformation cannot be overstated, especially with Australia gearing up for the 2025 elections.
In the absence of robust legislative action, democratic processes and public trust remain vulnerable to a worsening mis- and disinformation ecosystem. Though insufficient on their own, educating Australians about these threats and implementing effective safeguards are essential next steps. Ultimately, policymakers must rise above partisanship to craft and implement solutions that serve Australian citizens rather than party interests.

Tanisha Shah is an undergraduate student at Macquarie University studying Security Studies and Arts. She is a content writer for The New Global Order, an editor for Young Diplomats Society, and a Women in Strategic Policy program participant. She welcomes LinkedIn connections.