Bots, buzzers and AI-driven campaigning

July 31, 2025

With up to 64.3 per cent of Southeast Asians actively using social media, both small and large political parties are increasingly turning to online platforms to raise their public profiles and garner votes. But campaigning on social media has also opened a Pandora's box of problems that threaten regional electoral integrity. These risks are best mitigated through a multi-stakeholder approach.

Candidates, political parties and voters have benefited from enhanced political communication and citizen-led campaigning through social media engagement in Southeast Asia. But online disinformation, the intentional dissemination of false information, plays a growing role in elections.

Disinformation is difficult to fact-check during short campaigns, enabling malicious state and non-state actors to influence elections and public opinion. Freedom House found electoral disruption in 24 countries in 2020. In 2019, the Oxford Internet Institute reported state-sponsored online harassment of activists and opposition members in 47 of the 70 countries surveyed, including through the use of unlawfully gathered personal data. As the Cambridge Analytica data scandal showed, improperly obtained personal data can be used for micro-targeting, a practice amplified by artificial bot campaigns.

Social media manipulation is a well-organised industry in Southeast Asia. Public relations and digital consultancy firms offer digital campaign manipulation services to paying political clients. These companies run targeted campaigns using 'buzzers' (bots, celebrity influencers, cyber-troopers and cyber-trolls) to generate 'buzz' around candidates and undermine the opposition through historical revisionism, hate speech, personal abuse, memes and parody accounts online.

Thousands of bots flooded Twitter with anti-opposition messages during Malaysia's 2018 general election. During Indonesia's 2019 general election, both Joko Widodo's and Prabowo Subianto's campaigns employed teams of 'buzzers' to spread fake news and undermine each other.

In the Philippines, former president Rodrigo Duterte’s 2016 campaign saw the rise of a consultancy industry for online public relations, where an organised disinformation campaign was used to ridicule the opposition. Influencers with small but loyal follower bases were hired by parties and candidates to insert political content into their non-political posts on social media. In the Philippines’ 2022 presidential election, falsehoods disseminated through live streams were nearly impossible to fact-check in a timely way.

Disinformation is exacerbated by artificial intelligence (AI) tools that enable anyone to create lifelike images and videos known as deepfakes. Indonesian political campaigns hire creators to use text-to-art tools such as Midjourney and Pika Labs to cause confusion and sow mistrust. In the 2024 presidential election, the Prabowo campaign used AI to generate wholesome, chubby-cheeked avatar images of the candidate to soften perceptions of the former general's poor human rights record. Prabowo's AI image was emblazoned on billboards, sweatshirts and stickers, and featured on '#Prabowo'-tagged posts that received 19 billion views on TikTok.

Other AI innovations include PrabowoGibran.ai, an AI image-generation platform that let voters insert themselves into photos of activities such as hikes or safaris with Prabowo, while enabling his roughly 15,000 online volunteers to track online sentiment and engage with voters.

AI-powered disinformation tools and bots raise concerns about ethics and fairness. Such content is difficult to identify and fact-check, and its creators are hard to deter. Encrypted messaging platforms further limit the detection of viral falsehoods. Governments and digital platforms are usually unable to fact-check quickly enough during short campaign periods to prosecute perpetrators and minimise harm.

Effective fact-checking of these coordinated campaigns also depends on the jurisdiction, the nature of the disinformation and the people involved. Though some AI-generated content shared uniformly by bots thousands of times can be detected and refuted, other cases present more difficult challenges. Outdated regulations, exploited by transnational media and data analytics companies, complicate the tracking of malicious internet users.

Southeast Asian governments have responded with legislation that compels data protection, promotes user privacy and requires content moderation from individuals and platforms. Electoral governance frameworks limiting AI-generated political content exist in several Asian countries. The Philippines bans deepfakes but permits AI-generated content if it is disclosed. In Thailand, the Computer-Related Crime Act empowers the government to sanction anyone who disseminates false information.

Yet vaguely worded laws can be misused to suppress opposition and dissent. In Thailand, opposition politician Thanathorn Juangroongruangkit was indicted under the country's lèse-majesté law and the Computer-Related Crime Act after criticising the government's COVID-19 vaccine program during a Facebook Live stream. In Singapore, the Protection from Online Falsehoods and Manipulation Act has been used against journalists and opposition party members, supposedly to correct negative media stories. Such cases show how legal tools may be misused for partisan gain and the repression of democratic participation.

Platform rules provide for monitoring, fact-checking and removing harmful content, but they vary across platforms and may be changed arbitrarily. Soon after acquiring Twitter in 2022, Elon Musk reversed the platform's ban on political advertising.

Western-based platforms do not have the infrastructure to monitor harmful content produced in many of Southeast Asia’s diverse languages. TikTok hired Burmese-language moderators only after the military was found to sponsor hate speech on its platform. X — formerly Twitter — has turned to Community Notes to crowdsource content moderation. Although Community Notes can increase trust in fact-checking, it does not deter serial fabricators.

The rapid spread of false information, AI-driven online manipulation, coordinated bots and third-party manipulators undermines confidence in electoral institutions and processes. The sophistication of AI tools makes it hard to track the creators and sources of malicious deepfakes. Legislation and self-regulation by digital platforms have proved insufficient to address the varied forms of fast-evolving threats on social media. Active and inclusive engagement between governments, social media platforms, voters, election management bodies, fact-checking organisations and civil society is the best way forward.

Governments and electoral management bodies should engage with tech firms to develop codes of conduct and best practices based on international standards. Despite moving to end its partnerships with third-party fact-checking organisations in the United States, Meta maintains its government-approved fact-checking systems in Taiwan and the Philippines. ASEAN has also engaged stakeholders to combat disinformation through digital literacy.

Robust transnational linkages help avoid duplication of fact-checking efforts and support codes of conduct that extend beyond any one country or platform. Public and private stakeholders need to collaborate, enhance ICT capacities and develop media-literacy skills to enable effective and coordinated debunking of online disinformation. Only internationally collaborative efforts will strengthen electoral integrity in the era of digital campaigning.

This article was published by The East Asia Forum. It was written by Aiden McIlvaney, a University Scholar, and Netina Tan, an Associate Professor, in the Department of Political Science at McMaster University in Canada.