Since the emergence of AI tools in journalism, including ChatGPT, scepticism has grown about the role of journalists and the possibility of their being replaced.
Many AI tools are built to help content creators generate an entire article or essay from data gathered on the internet. Instead of searching for specific keywords around a topic and clicking on each website to extract the desired information, AI programmes can do the work in moments, citing all the sources used. This saves time and can help content writers produce many texts on different topics in a single day, rather than spending days covering all possible angles of one topic.
Alongside these content-generating programmes, language-checking tools such as Grammarly and QuillBot detect grammar mistakes and typos and can paraphrase a whole paragraph. They are designed to help writers proofread their work faster, since journalists racing to submit articles under tight deadlines are prone to mistakes, yet their copy is expected to be flawless.
There are also AI image generators: a few descriptive words typed into a prompt bar produce an image depicting one's imagination.
Such tools can become addictive for accomplishing tasks faster and with fewer mistakes. But nothing good stays free forever: these tools offer a limited number of free trials, after which a monthly or annual subscription must be paid to enjoy the top-notch features.
Though these AI tools are meant to assist, fear has spread through the industry that zealous young journalists eager to join the field will be passed over, since AI now offers a quicker alternative.
Big agencies use AI
So far, AI tools have been used in newsrooms across the world. News reporters use them for video and audio transcription and to spot trends. Investigative journalists use them to sift through floods of data to find or confirm a story. Social media reporters use them to automate content and analyse trends. Newsrooms use AI to generate traffic and weather updates. The news desk can use it not just to help create stories but also to disseminate and curate news (e.g., article recommendations and personalised landing pages). Sports desks use it to provide a results service and write match reports. Finance reporters use it to provide regular market updates. AI tools can also provide translation during live coverage.
One of the most prominent news agencies worldwide, the Associated Press (AP), has long used AI to generate stories. According to The New York Times, AP has used AI to produce financial reports since 2014, though these remain a small fraction of the service's output compared with articles written by journalists. Other outlets, including the Washington Post and Reuters, later developed their own AI writing technology.
AI can produce fake news pieces
Despite all the impressive features of AI that should aid in generating high-quality content, it can also fabricate news and content in different forms, such as videos or images, about any topic from scratch. In other words, it can produce fake news.
In March 2022, two videos appeared on Twitter (now X), one featuring Ukrainian President Volodymyr Zelensky and the other Russian President Vladimir Putin. In Zelensky's video, he urged the Ukrainian people to lay down their weapons and surrender to Russia; in Putin's, he announced a peace declaration with Ukraine. Remarkably, Zelensky's video was published on the hacked website of the Ukrainian TV network Ukrayina 24. When Sky News reported that the two videos were fake, Meta and YouTube took down Zelensky's deepfake video. The BBC reported that the Putin video had been circulating for a few weeks and that Twitter had labelled it as manipulated media.
Figure 1: AI-generated fake photo of Donald Trump being arrested
Another fake image hit social media and drew a flood of reactions. In March 2023, Eliot Higgins, the founder of the open-source investigative outlet Bellingcat, tweeted a fake picture of Donald Trump being arrested. In an interview with the Washington Post, Higgins said he had used an AI art generator, giving it simple prompts such as 'Donald Trump falling down while being arrested'.
“I was just mucking about,” Higgins is quoted as saying. “I thought maybe five people would retweet it.”
“By the time I took this screenshot, it had been viewed 6.4 million times.”
Meanwhile, NewsGuard, a journalism and technology platform that rates the reliability of news and information websites and analyses internet disinformation, identified 49 websites whose material was developed entirely by generative AI. These websites produce text in seven languages: Chinese, Czech, English, French, Portuguese, Tagalog, and Thai. They appear to be entirely or mostly generated by AI language models designed to mimic human communication. Most of their topics revolve around science, history, and how-to questions, drawing on information gathered from various internet sources and combining it into articles.
Ethics to follow while using AI in newsrooms
Since AI is widely employed in newsrooms throughout the world, the Journalism Trust Initiative (JTI) Standard includes two provisions that specifically mention AI. JTI, launched by Reporters Without Borders (RSF), develops ways of helping newsrooms demonstrate the trustworthiness of their journalism, and promotes and rewards compliance with professional norms and ethics. It comprises more than 850 media outlets in 80 countries, including the Global Forum for Media Development, the European Broadcasting Union, and Agence France-Presse.
Each newsroom should set down policies and editorial processes around the use of AI and algorithms in curation, content, and dissemination. To be compliant, newsrooms must ensure audiences are aware when content is created using AI.
AI introduces risk to any aspect of the newsroom it touches, so every user must have processes in place to safeguard accuracy, guard against misinformation, bias, and hate speech, and protect data.
Journalism prevails over AI
Yet even though AI can work at the scale of a human mind, and evolves far more quickly thanks to the data it accumulates over time to update its systems, it still has limitations that keep it from beating human journalism.
AI will always lack the human stories from impoverished areas and regional wars, which demand on-site reporting: observing how people live and what they do to survive their daily struggles. Journalists need to engage with people, read them well, and ask relevant questions to convey their emotions and wishes in their articles, so their voices reach the many decision makers who might feign ignorance of their issues. This skill is beyond any software tool. Living among people and quoting them directly also assures that the story isn't fabricated, since it is all too easy to invent information and present it as true.
In a panel discussion about press freedom conducted by the Journalism and Writers Foundation, Melissa Mahtani highlighted that every news agency needs “diversity, whereas different opinions about a certain topic or news are gathered to produce content that tackles different angles.”
It follows that AI will struggle to take over journalism's essence: capturing people's sighs and moans. Journalists can delve into investigative topics and unveil information and facts that have not previously been pinpointed. Through this uniquely human perspective, journalists remain the most reliable source for breaking news. Additionally, writing styles, idioms, proverbs, and vocabulary in new languages can be shaped by journalism as it keeps pace with people's activities and innovations.
In that regard, AI can lend a hand by facilitating the data collection a journalist is scrambling to complete, from which they can construct a new angle and prepare news bulletins.
To detect AI-generated content, AI detector websites have been developed to check for plagiarism or cheating.
“It has been proven that AI detector tools aren't accurate in verifying whether content was produced by AI or not,” said Arwa Kooli, a journalist and PhD holder specialising in information detection, during an Arabic-language webinar run by Free Press Unlimited.
Kooli explained that many journalists now prefer traditional search tools over AI tools that summarise long texts, in order to avoid being misled by fake content.
Mirna Fahmy is an Egyptian journalist covering investigative topics related to the environment, culture, the economy, and other controversial issues.