Tech giants are getting serious about fake content generated by AI

Taylor Swift poses on the red carpet as she attends the 66th Annual Grammy Awards in Los Angeles, California, US, February 4. File photo: Reuters

Published Feb 13, 2024

Recently, disturbing and sexually explicit fake pictures of Taylor Swift appeared online and were circulated swiftly.

According to the New York Times, one image shared by a user on X was viewed 47 million times before the account was suspended.

The social media platform had to suspend several accounts that posted the fake images of Swift, but the images were shared on other social media platforms and continued to spread despite those companies’ efforts to remove them.

What happened to Swift can happen to anyone online. It has become very difficult to distinguish between authentic and fake content online. Online users have to contend with fake videos, calls and photos.

This has prompted leading technology giants to come up with solutions.

Google announced that it will collaborate with companies such as Adobe, the BBC, Microsoft and Sony to fine-tune technical standards to address the fake-content challenge. Meta, Facebook's parent company, announced a similar commitment.

The social network company wants to promote standardised labels to help detect artificially created photo, video and audio material across its platforms.

If such efforts become a reality in the near future, we will see clearly marked content: an image or video will carry a label showing that it is AI-generated.
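As an illustration of how such a label might work in practice (a minimal sketch, not any platform's actual pipeline): the IPTC "Digital Source Type" vocabulary, used by the C2PA provenance standard that Adobe, Microsoft, the BBC and others back, defines the value `trainedAlgorithmicMedia` for AI-generated media, and that value can be embedded in a file's own metadata. The toy check below simply scans a file's raw bytes for that marker; a real verifier would parse and cryptographically validate the full C2PA manifest.

```python
# Toy provenance check (illustration only): scan a media file's raw
# bytes for the IPTC digital-source-type value that marks AI-generated
# media. Real C2PA tooling parses and verifies a signed manifest.

AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's bytes contain the IPTC
    'trainedAlgorithmicMedia' digital-source-type marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()
```

A platform could run a check like this at upload time and attach the kind of visible "AI-generated" badge the article describes.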

It will probably look like what Samsung has done with its new device, the Galaxy S24: images edited with AI are marked with tiny stars to indicate that AI was involved. These efforts will go a long way towards protecting people from harmful AI content.

This may, however, create a challenge for positive AI content. Not all AI-generated content is fake. Some content creators use AI to assist them in the process of creating content. Such content may be authentic in nature, yet may now need to carry the AI label.

As much as labelling of content will be positive for readers and online content consumers, it will not serve the interests of well-meaning creators. At the same time, efforts to label AI content will elevate the status of original content creators.

The so-called AI threat to writers will be minimised by the process of labelling content. In future, original content will be highly valued compared with AI-generated content, and distinctive, original voices and creative works will occupy a privileged position.

As societies grapple with the deluge of fake content, there is an opportunity for original content publishers. Those who rely on original methods will gain the trust of readers and content consumers.

Major publishers will be able to cut costs by relying on AI, but at the same time they run a risk of losing the trust of readers. On the other hand, independent publishers can grab this opportunity and dedicate themselves to authenticity and quality. This approach may yield positive dividends.

At the end of the day content consumers will benefit from such developments. The process of labelling fake content will create better awareness about AI-generated content. It will also save people from falling for deceitful content.

Content consumers will be able to choose between artificial and authentic content. Software companies have started to fight fake content; now hardware companies such as Apple also need to join the effort to reduce fake content online.

Wesley Diphoko is a technology analyst. Follow him on X via: @WesleyDiphoko.

BUSINESS REPORT