The risks of AI-generated content in PR

There is no doubt that AI is playing an important role in supporting public relations.

Published Apr 7, 2024

“Sports Illustrated” has received a lifeline in the form of a new publisher in Minute Media.

The magazine was dragged through a protracted scandal after it was found that its publisher, The Arena Group, had allowed artificial intelligence (AI)-generated content into the magazine. The subsequent fallout included the firing of its CEO and a missed license payment.

The closure of the episode shines a bright light on the dangers of AI-generated content broadly, and of AI-generated brand and PR content specifically.

There is no doubt that AI is playing an important role in supporting public relations (PR).

However, PR efficacy relies heavily on the trust of the media and its readers. If AI-generated content is passed off as original, it breaks that trust and can have serious consequences. There is no substitute for strong, original content: PR agencies cannot afford to betray journalists and editors with AI-generated content passed off as original, and, just as importantly, they need to protect the reputations of the businesses they represent.

Good PR specialists build long-term trust relationships with journalists and editors, who are inundated daily with emails, WhatsApp messages and phone calls pitching content for publication.

In this environment, where the media is under-resourced and more stretched than ever before, an editor must have full confidence that what they receive is genuinely original, insightful and valuable content for their readers.

While it is unclear whether media desks routinely run content through AI-detection tools, this will probably become the norm, especially as the use of generative AI becomes more widespread.

Even without such tools, a seasoned reader can often spot AI-generated content instantly through its over-reliance on US-centric metaphors, stylistic tropes and jarring sentence structure.

Beyond this, there are serious concerns about the datasets used to train generative AI algorithms. If content generated by AI includes or alludes to copyrighted material, it raises a host of legal and ethical concerns. A PR agency should protect its clients from wading into intellectual property disputes; dragging a publication into such a fallout would destroy trust between the agency and the media.

AI-generated content doesn’t just raise red flags from a media coverage perspective. Businesses are increasingly relying on digital platforms to build an online presence, drive referral website traffic and potentially generate leads. Many of the same concerns apply here.

It might be tempting to go all in on using generative AI tools to create the content needed to enhance organic search rankings, but this can result in content that is generic and lacks the viewpoint of an industry or domain expert.

Google has warned against using AI to boost SEO (search engine optimisation), saying that the “use of automation, including generative AI, is spam if the primary purpose is manipulating ranking in search results”.

Google does not yet explicitly penalise blog posts generated by ChatGPT; however, it does have a policy against using automated content generators to create, duplicate or produce low-quality content. If your blog posts are AI-generated, unoriginal or uninformative, and consequently rank poorly, they negate your SEO efforts. A recent core update by Google aims to reduce low-quality, unoriginal content in search results by 40%.

There is no substitute for original, thoughtful content that adds value to the lives of readers, and it is incumbent on an agency to ensure that all content it places in front of the media, on social media and on businesses’ own platforms lives up to this standard. Similarly, it is crucial to keep clients abreast of best practice and the risks technology may pose.

Judith Middleton is DUO Marketing and Communication CEO

BUSINESS REPORT