SABC deepfake videos pose a significant concern for the media

Fake videos are currently circulating of SABC anchors promoting an investment product, including a fabricated interview with South African-born tech billionaire, Elon Musk, to entice people to invest in the product. Picture: Screengrab
Published Nov 21, 2023

Internet users may have to adopt a zero-trust approach and exercise caution before placing trust in online content, suggests one cybersecurity expert.

This comes in light of the SABC having to warn its viewers about deepfake videos using the likeness of their reporters to peddle fake news and scams.

Deepfakes use a form of artificial intelligence (AI) to manipulate a video or sound recording, replacing what people say or how they look in a way that appears real.

This is an issue that the SABC has dealt with before, but it has now escalated to the point that the broadcaster had to warn its viewers in an on-air interview last week.

SABC Group Executive for News and Current Affairs, Moshoeshoe Monare, warned people about the fake videos generated using AI, which showed bogus representations of the broadcaster’s journalists and presenters promoting an investment product.

The fake videos currently circulating show SABC anchors promoting an investment product and include a fabricated interview with South African-born tech billionaire, Elon Musk, to entice people to invest in the product.

Monare lashed out against the videos and emphasised that the SABC's editorial policy does not allow any of its employees to advertise products on its broadcast platforms. He reiterated that the videos are fake.

He said the SABC as the public broadcaster is committed to accuracy and protecting its credibility.

“It is scary, but the most concerning aspect of it is that they are using my colleagues’ faces and voices, knowing that because they are from an institution, and a broadcaster with some credibility where there is public trust,” says Monare.

“My worry is that anything that dents that public trust is affecting us as SABC news and affecting the anchors, who are seen as leading journalists.

“The most worrying thing is that we are in an era where we want to see what is the good and the most beneficial thing about artificial intelligence, but the worry is that if artificial intelligence is going to be abused this way, then it really makes people to be sceptical -- especially given the fact that there will be the good and the positive that we need to embrace.”

Monare says it is a worry for the SABC because the first time it happened was with one of the broadcaster's morning live news anchors, and it was difficult to trace those responsible.

He says they are in a difficult situation: previously, when disinformation, scams and fake advertising popped up, the SABC would block them, but now the only thing it can do is go public and run campaigns to educate people.

“We are in the business of news, and what we do is based on credibility and accuracy, on seeking the truth and on trust that we’ve built throughout the years between us and the public.”

Forensic criminologist and director of security consulting firm Cybareti, Laurie Pieters-James, said that the three main fraudulent deepfake trends are pornographic exploitation, business vulnerabilities and financial fraud -- the category the SABC situation falls into.

“The unique threat posed by deepfakes is a significant concern for the media. Many within the media industry remain unaware of the potential risks associated with deepfakes and their detrimental impact.

“However, fostering awareness among employees, management and stakeholders can significantly mitigate the risk of falling victim to deepfake attacks,” Pieters-James said.

“Enhancing defence mechanisms involves imparting extensive training and raising awareness. Empowering employees with the ability to recognise social engineering attempts rooted in deepfake technology is pivotal.

“Comprehensive training should highlight the utilisation of technology in malicious activities and provide tools to identify such manipulations. Establishing a security-conscious culture within media organisations demands a systematic approach.

“Educating employees on security threats and preventive measures equips them to better identify and counter deepfake attempts,” Pieters-James said.

To safeguard against such manipulations, Pieters-James said it’s essential to adopt a zero-trust approach and exercise caution before placing trust in online content.

“This principle should be applied rigorously to news stories, images, and videos. Utilising web tools can aid in discerning and identifying misleading information, empowering individuals to recognise and reject false content.”

[email protected]