AI media threatens to break our strained ability to agree on truth

Disinformation experts, while acknowledging the potential of AI to deceive, remain relatively calm about its ability to sway broad public opinion. Picture: Gerd Altmann/Pixabay.

Published Nov 14, 2023

In a world grappling with the perils of disinformation, the recent arrest of two members of the Indian wrestling team in New Delhi shed light on the concerning potential of AI-generated images.

The two women were arrested after protesting against alleged sexual harassment by the president of India’s national wrestling federation. Afterwards, two photographs of the wrestlers in a police van appeared online – one in which they looked sombre, the other in which they were grinning.

The manipulated photo of the women smiling (probably created with one of the many readily available AI-powered photo-editing apps, such as FaceApp) was widely circulated on social media by supporters of the wrestling federation president.

The incident serves as an omen of a disconcerting future where discerning reality from fiction becomes an increasingly daunting task.

The proliferation of AI-generated images, videos and audio poses a significant threat to the veracity of information, with consequences ranging from political disinformation to the undermining of efforts to authenticate crucial events, from campaign gaffes to war crimes in distant conflicts.

Manipulated images are, of course, not a new phenomenon.

From the first fake photograph, staged by Hippolyte Bayard in 1840, to Stalin airbrushing disgraced officials out of photographs, pre-digital history is rife with instances of visual trickery.

However, the advent of AI tools such as Midjourney, DALL-E 3 and Stable Diffusion takes manipulation to unprecedented levels, offering an accessibility, quality and scale that could flood the digital landscape with convincing fake images.
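To make the point about accessibility concrete, the sketch below shows roughly how little code image generation now involves, using Hugging Face’s open-source diffusers library. The checkpoint name, prompt and output file are illustrative only, and a GPU is assumed.

# A rough sketch of open-source image generation, assuming the
# "diffusers" and "torch" packages are installed and a GPU is available.
# The model ID and prompt are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a widely mirrored public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A single short text prompt is all it takes to produce a convincing image.
image = pipe("a press photograph of a crowded city street at dusk").images[0]
image.save("generated.png")

That one sentence of text can now yield a plausible news-style photograph is precisely what separates this moment from the airbrush era.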

Disinformation experts, while acknowledging the potential of AI to deceive, remain relatively calm about its ability to sway broad public opinion.

They argue that scepticism is prevalent among the public, especially regarding images from untrusted or unknown sources. Yet, the real danger lies in a public conditioned not to trust their own eyes, a consequence of the ever-advancing capabilities of AI to create convincingly realistic content.

The concept of the "liar's dividend", whereby bad actors benefit from a culture in which false information is widespread, emerges as a significant concern.

In an era saturated with fake news and AI, the legitimacy of information becomes increasingly questionable. Authoritarian regimes, in particular, could exploit the atmosphere, dismissing inconvenient truths as lies or Western propaganda.

It also opens the door to criminal defendants claiming that any visual or audio evidence has been doctored or fabricated.

Efforts to address the challenges posed by AI-generated content are under way. Companies like Intel and OpenAI have developed AI-powered tools to detect manipulated media.

Some developers have introduced watermarks to identify synthetic images. Initiatives like the Coalition for Content Provenance and Authenticity, involving major players such as Microsoft, Adobe, the BBC and “The New York Times”, aim to tag authentic images with cryptographically signed provenance information.
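The underlying idea is simple enough to sketch in a few lines. The example below is not the coalition’s actual specification – just a minimal illustration, assuming Python’s third-party cryptography package, of how a publisher can sign an image so that any later edit breaks verification.

# A minimal sketch of content-provenance signing. This is NOT the C2PA
# standard, only an illustration of the idea behind it: the publisher
# signs a hash of the image bytes, and anyone holding the public key
# can later check that the file has not been altered.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_image(image_bytes, private_key):
    # Publisher side: sign a SHA-256 digest of the raw image bytes.
    return private_key.sign(hashlib.sha256(image_bytes).digest())

def verify_image(image_bytes, signature, public_key):
    # Verifier side: recompute the digest and check the signature.
    try:
        public_key.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False  # the image was edited after it was signed

photo = b"raw bytes from the camera"  # stand-in for a real image file
key = Ed25519PrivateKey.generate()    # the newsroom's signing key
sig = sign_image(photo, key)

print(verify_image(photo, sig, key.public_key()))                        # True
print(verify_image(photo + b"one edited pixel", sig, key.public_key()))  # False

In practice the signature and metadata travel inside the image file itself, and the hard part is distributing and trusting the public keys – which is where the worry about who controls such a system comes in.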

However, the deployment of such systems by reputable media outlets could risk creating “knowledge cartels”, perceived as monopolies on truth.

Centralised control over what is considered real can lead to serious abuse of that power. That said, while this kind of centralised “truth” could gain legal status (and thus, significant power), it seems unlikely to be able to capture real public trust.

People across the world, and especially the younger generation, are increasingly sceptical of managerial authority and multinational corporations.

Authenticity itself could easily become a deeply politicised, partisan issue, with individuals trusting only what their preferred political tribe says is true.

We are already feeling the effects of a partisan world in which the sides cannot agree on fundamental reality, and AI-generated media threatens to pour petrol on that fire.

As technology races forward, the future of evidence may shift towards the analogue realm, requiring confirmation from on-the-ground sources to counter the flood of AI-generated content.

Combining open-source intelligence with investigative journalism and robust reporting that earns the public’s trust will be crucial in navigating this evolving landscape.

James Browning is a freelance tech writer and local music journalist.

BUSINESS REPORT