Belling the ‘Hate Speech’ Cat

Neel Karnik

In August 2020, Facebook removed a post by the President of the United States, Donald Trump, for spreading misinformation about COVID-19. Such events are not uncommon: social media platforms like Facebook and Twitter routinely pull down posts that violate their rules. This content moderation role is essential to addressing the trolling, hate speech, and misinformation that plague these platforms.

Why is addressing hateful content important?

Hateful content has been around for a long time, as attested by the various defamation and libel laws designed to tackle it. However, hateful content online is different from its offline counterpart. The networked nature of social media gives users a reach that cannot be matched offline. For instance, a topic trending on Twitter in India can reach Twitter's worldwide trends, allowing users from different countries to interact with the trending hashtag. Furthermore, the rapid spread of content on social media makes it easy to disseminate ideas. This potent combination of speed and reach allows harmful content to spread rapidly across platforms. The violence against the Rohingya in Myanmar was fuelled by Facebook pages and groups that spread anti-Rohingya content. Similarly, hate speech shared on Facebook in Ethiopia has fanned ethnic violence and raised fears of genocide.

Furthermore, trolls abuse and harass users, creating an unwelcoming climate that discourages people from using the platform and stifles discussion and the free sharing of ideas. Thus, social media platforms have to police harmful content not only to promote civil discourse but also to protect the larger societal fabric, which harmful content can weaken.

Why is content moderation challenging? 

Although content moderation is needed, it is difficult to enforce, for several reasons. Determining what constitutes hate speech is incredibly subjective, and ultimately a political decision. A particular word may be used as a slur to attack a group of people, yet the same word, used in a different context, can be reclaimed by that group as a means of empowerment. On social media platforms where users may not use their real identities, or may choose not to reveal certain aspects of their identity, it becomes even harder to pinpoint and identify slurs.

A related aspect is the tone and intention of the user. Satire or humour used as social commentary is different from content that incites hatred or promotes prejudice against a group of people, and the same joke may be made in good faith or in bad faith. Video may offer cues to tone and intention, but these are much harder to read in text-based content or in visual content like memes.

This decision-making problem is exacerbated by the sheer number of users and the amount of content created daily on social media platforms. For instance, Facebook had 1.82 billion daily active users on average in September 2020. There is a human element as well: content moderators who deal with hateful content every day have reported mental health issues, and Facebook agreed to pay $52 million to its content moderators as compensation for mental health problems developed on the job.

Social media platforms have tried using Artificial Intelligence (AI) to help moderate content and reduce the load on their human moderators. While AI-based moderation has had success in identifying images depicting violence or nudity, it handles the nuance and variety of video far less effectively. This is particularly evident with live streaming: the 2019 Christchurch attack was streamed live on Facebook, then downloaded and shared across multiple platforms. Such content challenges AI-based moderation because these systems need a strong base of training data to function effectively. The current limitations of both the algorithms and the quality of the data they learn from mean that AI cannot be a one-stop solution for content moderation; human participation remains necessary.
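To make the data-dependence point concrete, the sketch below is a toy illustration, not any platform's actual system. It trains a simple text classifier on a handful of hand-labelled examples; anything the training labels do not cover, such as new slang, coded language, or satire, is effectively invisible to it.

```python
# A minimal sketch of a text-based moderation classifier, showing how it
# depends entirely on its labelled training data. Not a real platform system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, hand-labelled examples: 1 = flag for review, 0 = leave up.
# Real systems learn from millions of such labels, and anything the labels
# do not cover (new slang, coded language, satire) is invisible to the model.
train_texts = [
    "we should hurt those people",       # explicit threat
    "those people do not belong here",   # exclusionary speech
    "great match last night",            # benign
    "the new policy is a disaster",      # strong but legitimate criticism
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A reworded or coded version of the same idea can slip past the model
# because nothing similar appeared in training.
print(model.predict(["such people should simply vanish"]))
```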

The decisions that social media platforms eventually make about content can end up being politicised, with the companies often accused of bias. For instance, Mark Zuckerberg was criticised by civil rights activists for not pulling down a post by US President Donald Trump that advocated violence against protestors. Twitter, on the other hand, restricted the same Trump tweet and found itself on the receiving end of an executive order signed by President Trump that sought to reduce the protections available to platforms against civil claims. This tension stems from the unique role played by social media companies: they are private enterprises that regulate speech. Regulating speech was once the sole domain of governments but is now shared with private companies. Unlike governments, which enjoy a degree of legitimacy, private enterprises do not, especially when the rights of individuals are at stake.

Biased Content Moderation

Social media companies thus have a difficult job on their hands. On one hand, they have to moderate the content on their platforms so that existing users can keep using them and new users find them attractive. On the other hand, decisions about content moderation are often political and may provoke a backlash against the organisations. One way social media platforms have sought to secure legitimacy for their actions is by catering to governments. This can take the form of complying with government orders and requests and, in extreme cases, actively picking sides and moderating content in a biased manner.

The recent Ankhi Das controversy in India over the Facebook India Policy Head’s alleged bias in favour of the Bharatiya Janata Party serves as an example of such biased content moderation. According to the Wall Street Journal, Facebook’s hate speech rules were not applied to Hindu nationalist groups and individuals. Similarly, in the United States, Facebook has been accused of throttling progressive news pages and boosting right-wing pages.

Biased content moderation is dangerous because it favours one side over the other in the free marketplace of ideas, allowing ideas that would normally be rejected to emerge as acceptable. For instance, fascism was widely rejected after the Second World War but is making a resurgence, partly because of social media. The increased polarisation of societies makes it difficult to develop a common societal consensus, which may eventually lead to conflict and violence.

Conclusion

It is clear that content moderation matters not only for social media platforms but also for society at large. To date, content moderation has been performed mainly by private entities, which has raised several questions about their ability to deal with content that is political in nature. An issue like content moderation will not have simple solutions, but the first step towards developing them is transparency. Over the years, several social media platforms, including Facebook, Twitter, and Reddit, have published annual transparency reports with information on the content removed, the categories of violation, and so on. Furthermore, social media platforms should be encouraged to share data with academics, researchers, and policymakers. Twitter shares its data through its developer platform, and a similar model could be adopted by others to promote the better understanding that policymakers need to develop suitable policies. Sunlight is the best disinfectant, and transparency can be a powerful tool for developing an effective model of content moderation and promoting public welfare.
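As a rough illustration of what such data sharing looks like in practice, the sketch below queries Twitter's v2 recent-search endpoint with a developer bearer token. The token, the search query, and the field selection are placeholders, and the endpoints and quotas actually available depend on the developer account tier.

```python
# A minimal sketch of researcher-style data access via the Twitter developer
# platform. The bearer token and query below are placeholders.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # issued through the Twitter developer platform

response = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": "#contentmoderation -is:retweet",  # illustrative search query
        "max_results": 10,
        "tweet.fields": "created_at,lang",
    },
)
response.raise_for_status()

# Each returned tweet carries its text plus the requested metadata fields,
# which researchers can aggregate to study how content spreads.
for tweet in response.json().get("data", []):
    print(tweet["created_at"], tweet["text"][:80])
```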

(Neel Karnik is a student of the Masters in Public Policy program. A graduate in electrical engineering, his areas of interest are Internet and social media, energy, agriculture, international relations, and policy evaluation. He can be reached at karniknitin@nls.ac.in)
