Our favourite app is problematic: TikTok’s content moderation double standards

File picture: Reuters/Dado Ruvic

Published Feb 26, 2023


TikTok has become enormously popular in the past three years. The app is ranked the sixth most popular social media network globally, with a billion users spread across 154 countries.

With such a huge following, the platform is expected to be reliable.

Although social media networks exist primarily for entertainment, people also use them for news and other important information.

However, TikTok's reliability has been questionable, with repeated allegations that it fails to control misinformation on the platform.

NewsGuard’s findings show that at least 20% of the videos on TikTok contain some form of misinformation. At a time when people rely so heavily on social media for information on current affairs and much else, this is extremely reckless.

TikTok’s response to the outcry over misinformation does not reflect what is actually happening on the platform. The company points to its community guidelines, which state that “we do not allow harmful misinformation and will remove it from the platform”.

Furthermore, a TikTok spokesperson has said the company partners with fact-checking organisations to help verify the accuracy of content. However, these responses seem like a way to silence the public while the misinformation continues.

The activity and the algorithm on TikTok do not reflect much fact-checking.

A further study showed that, during the Russia-Ukraine war, the first 20 search results for terms such as “Ukraine”, “Russia” and “war” contained fake and misleading news.

This is not an isolated case; during the peak of the Covid-19 pandemic, TikTok was consistently criticised for the misinformation it carried.

South Africa has also been a victim of the prevalence of misinformation on TikTok. The DA fell prey to deepfake videos: one video attributing statements such as “kill the EFF” to the party reached about 300 000 views before it was taken down. Information like this heightens political tensions and can instigate violence.

The Durban floods of early 2022 were another instance of mass misinformation, which led the KwaZulu-Natal Department of Social Development to plead with citizens to verify the information they find on the platform.

Once again, this demonstrates how quickly misinformation spreads and the implications it has for society.

Worryingly, there are double standards in TikTok's content moderation. It seems to be common knowledge among content creators of colour and from minority groups that TikTok shadow-bans individuals or deletes content related to racism or violence against minority groups. This raises a question: the platform clearly has the ability and the will to single out and silence individuals, yet harmful misinformation is apparently where it draws the line. If the platform can single out individuals and delete their content, why can the same effort not be put into tackling misinformation?

The #BlackLivesMatter episode is a good example of these inconsistent standards. At one point, no #BlackLivesMatter content was visible on the platform, and TikTok responded by blaming a “technical glitch” for the content's disappearance.

Interestingly, this “glitch” happened during protests against systemic anti-black racism, but not during a Proud Boys rally. And how is it that on all the other occasions when the platform carried harmful information about diseases, elections and wars, that content was never hindered? It is alarming that the platform's moderators and artificial intelligence have continued to shadow-ban content on important issues such as race and minority rights even after the #BlackLivesMatter episode, while misinformation remains. It raises questions about the priorities of ByteDance, the Chinese company that owns TikTok.

TikTok needs to be aware of the risk the platform poses to society. By virtue of being a short-video platform, it is convenient, and people are therefore likely to use it regularly, including for important information. Accurate information should thus be a priority for TikTok; it must understand this responsibility and, if anything, become a credible source for its users.

Others might argue that people should fact-check information themselves, but people consume so much information daily, from social media platforms and elsewhere, that it makes no sense to expect them to fact-check everything. An estimated 30.8 million iOS users and 14.43 million Android users worldwide use TikTok daily; it is impossible to expect each of these individuals to fact-check.

Responsibility for the accuracy of information should rest with TikTok, not with its users. Users are unlikely to assume that content on the platform is fake; their first instinct is to treat it as accurate, on the assumption that the platform already has working systems in place to rule out misinformation.

TikTok must honour its end of the relationship by minimising or completely eradicating misinformation.

* Vhonani Petla, Junior Researcher, Digital Africa Research Unit, Institute for Pan African Thought and Conversation at the University of Johannesburg.

** The views expressed do not necessarily reflect the views of IOL or Independent Media.

*** JOIN THE CONVERSATION: Email your submissions to [email protected], and be sure to include a short bio, image of yourself, contact number and physical address (not for publication).