Society’s struggle to tackle AI's role in online exploitation

Published May 12, 2023


Mpho Rantao

THE growth of artificial intelligence (AI) has transformed various industries. US pioneers John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon initially developed the field with the aim of creating intelligent systems that could automate tasks and assist humans.

The increased sophistication and capabilities of AI systems, demonstrated by OpenAI’s ChatGPT and Google’s chatbot Bard, have led to concerns about data privacy and the potential misuse of AI.

Dr Japie Greeff, of North-West University, said Microsoft’s investment in OpenAI, and ChatGPT’s user-friendly approach to retrieving information, aimed to challenge Google’s dominance of the search-engine market.

However, the rapid advancement of AI technology raised ethical questions that needed to be addressed.

Greeff emphasised the importance of establishing policies to evaluate ethical and appropriate ways of deploying and regulating the technology.

In an open letter published by the Future of Life Institute, Elon Musk and Steve Wozniak, among other tech experts, called for a pause on AI development, citing potential risks to society and humanity as AI programs become more powerful and difficult to understand or control.

“There are many risks and benefits from the emergence of both large language models as well as generative artificial intelligence, like text-to-image and text-to-video generators like Midjourney and Synthesia respectively, as well as other synthetic media generators like deepfakes and other sound, image and video editing tools,” Greeff said.

“The biggest risk, in my opinion, at this point with these tools, is their capability to generate fake news. Additionally, the impact that automation and AI have on jobs cannot be underestimated,” he said.

Cyber activists and AI ethicists had also raised concerns about potential abuses of the technology, such as plagiarism and misinformation, said Rianette Leibowitz, a cyber-wellness and safety expert, and the founder of SafetyNet.

Leibowitz said some AI systems had become dangerous when manipulated by humans, as seen in the creation of deepfake content, fake news, Not Safe For Work (NSFW) chatbots, and AI-edited photos.

“We’ve seen this with fake news which can cause mass hysteria and a ripple effect in other societal issues. If people don’t become aware of and actually investigate the legitimacy of a message, then it could have devastating effects.

“It’s important to remember that there's a human being behind the search, in terms of AI. A quote to consider is: ‘While AI creates potential risks, it can also be used to detect risks.’ If we remember that as with most tech, it depends on who is using it,” she said.

Leibowitz said that, given the illicit content already accessible on the internet, AI-powered searches and systems could exacerbate such content and the crimes associated with it.

Greeff said that given the degree of mass manipulation that had been demonstrated on social media, through sizeable bot-driven campaigns and massive data harvesting by many organisations, the targeted dissemination of fake news posed a real threat to the social fabric of communities.

“I don’t think that all development should be allowed to continue unchecked and, in fact, the more advanced we become in our technology, the more we should invest in establishing policies that evaluate the ethical and appropriate ways in which that technology should be deployed and regulated.

“Cyber security is something that should be invested in from a governmental, academic and industry perspective as it is a risk that is ever present. The emergence of the current collection of AI tools simply heightens the risk that has been there for a while,” Greeff said.

The risk of AI developing into an engine that makes its own decisions, out of line with humanity’s good, was a major concern, said Greeff and Leibowitz. They said it was essential to balance the advantages and disadvantages of sharing private data with organisations, while not restricting access to the internet.

Cyber activists and specialists had advised tech firms to take responsibility for the chatbots they create and disseminate, and to develop more sophisticated filters and monitoring programmes.

South Africa’s universities, institutions and ICT industry were known for the capabilities and expertise to compete globally in software development, automation and data science solutions. Africa had also seen investment in the ICT sector at both academic and economic levels.

Greeff said the government should invest in research in those fields to ensure the country was not left behind in the technology race. It was crucial for governments to take account of the changes in data and AI, and to invest in local innovation in those fields.