Facebook’s stance on human rights

April 29, 2021

The news that Facebook would launch a corporate human rights policy and fund human rights defenders facing online threats is welcome. While it is a step in the right direction, the context of the decision should be examined.

Facebook is touted by Mark Zuckerberg as a “bastion of free expression.” Yet this mega social media platform with 2.8 billion global users has been used by multiple state and non-state actors to facilitate riots, genocide, mass murder, surveillance, and the imprisonment of democracy activists, free speech advocates, and human rights defenders.

In all fairness, the tech giant has also provided a platform for reconnecting with old friends, supporting human rights and free speech campaigns, forming new relationships, boosting revenues for small and medium enterprises, and raising funds for good causes. However, until it came under high-profile scrutiny from the US and EU, the company took a lacklustre approach to human rights and democracy.

For example, in Myanmar, Facebook began censoring Tatmadaw (Burmese military) personnel in 2018, even though the military had been using the platform since 2012 as a tool to systematically promote hate towards the ethnic minority Rohingya population. This hate campaign eventually led to wider popular support among Burmese Buddhists for what the UN has called mass-atrocity crimes against the Rohingyas.

In 2017, against the backdrop of genocidal violence that killed 10,000 Rohingya Muslims and violently uprooted over 700,000 more, the spotlight shifted to Facebook's inaction in curbing the online hate speech and fake news that fed grave human rights violations, as it was widely perceived that "Facebook is the internet" in Myanmar.

In the face of this widespread criticism, Facebook commissioned an independent human rights impact assessment of its role in Myanmar and acknowledged in a 2018 public post that it had been slow to respond and "should do more."

Since then, the company has banned many accounts tied to the Myanmar military, and after the 2021 military coup it banned all personal, media, and business accounts linked to the Tatmadaw. However, the Rohingyas have already experienced catastrophe, and their lives are unlikely to return to normal.

The platform is still used by conspiracy theorists to destabilise peace and undermine science. The radical right in the West, Hindu radicals, and Islamist terrorists have all used it for recruitment, while the memory of the Christchurch mosque shooting, streamed on Facebook Live, still haunts us. However, Facebook has been stepping up its efforts to curb such nefarious activities on its platform, which is commendable.

At a time when a leading Indian expert has warned that India's pluralism is at risk, the Wall Street Journal reported that Facebook had ignored the systematic promotion of hate speech rooted in Hindu nationalism by some members of the ruling Bharatiya Janata Party (BJP).

It was alleged that Facebook did not take effective action for fear of losing its largest market in the world. Facebook's former policy head reportedly told employees that "punishing politicians from Mr Modi's party would damage the company's business prospects in the country."

A recent Guardian report alleged that Facebook harbours a bias towards the ruling party in India. Facebook India has denied this claim, and that denial could also be true: another recent report said that employees of Facebook and Twitter in India were threatened with jail time if they did not share data about the farmers' protests.

Due to its vast reach, Facebook has become the number one choice for social media manipulation, used to promote disinformation, organised trolling, hate speech, and fake news through computational propaganda in 56 countries. Elsewhere, it was among the top two or three favoured platforms.

In 45 democracies, including Germany, Italy, Israel, Australia, Austria, the Netherlands, South Korea, the United States, and the United Kingdom, politicians and political parties were found to use computational propaganda on social media, including Facebook, by "amassing fake followers or spreading manipulated media to garner voter support."

Computational propaganda refers to "the use of algorithms, automation, and big data to shape public life." This is problematic because the technique is used to micro-target voters and mobilise opinion through the presentation of partial facts.

The algorithm predicts users' behaviour based on their likes, shares, and clicks, and computational propaganda then seeks to influence their opinions through carefully curated political content. This practice weakens trust in democracy, opens up space for conspiracy theorists, and lends legitimacy to radical actors.

Further, government agencies in 26 authoritarian countries, including China, Turkey, North Korea, Rwanda, Iran, Zimbabwe, Venezuela, Qatar, Saudi Arabia, and Bahrain, deploy cyber troops or troll troops through computational propaganda “to suppress public opinion, press freedom, discredit criticism and oppositional voices, and drown out political dissent.”

These troll troops create disinformation and manipulate social media by mass-reporting content or accounts, doxing or harassing dissidents and critics, and amplifying content. The essence of such practices is to establish complete hegemony over information and suppress critical information about these regimes.

Perhaps when he created the platform, Mark Zuckerberg did not realise that a tool meant to promote free expression could endanger freedom and rights. However, informed rights activists and human rights researchers have highlighted how repressive governments and their agents use Facebook to violently thwart freedom and rights.

And at the heart of the problem is that Facebook makes money out of our likes, loves, sadness, ha-has, anger, posts, photos, and videos. It makes approximately USD $7-8 per user.

That means those languishing in secret or official prisons, those extrajudicially killed for their activism on Facebook, and the terrified family members who fear their loved ones may be disappeared or arrested for supporting democracy on Facebook under authoritarian regimes are all tied to the company's business model: the reactions to their critical posts and shares have earned Facebook revenue.

That means the online hate towards Rohingyas that has unleashed genocidal violence in Myanmar has also financially benefitted the company.

Nevertheless, Facebook seems to appreciate its mistakes and has taken action. It is increasingly working with journalists, academics, and researchers to build their capacity to track public-insight data through CrowdTangle, and it is disbursing funds to university researchers to study how to tackle hate speech.

Finally, Facebook's recent moves to officially stand up for human rights activists in danger, together with its new corporate human rights policy, suggest that it is redefining its business model, brand, and identity within a human rights framework, and that is truly encouraging news.

This article was published by the Australian Institute of International Affairs.
