The intensified race for artificial intelligence (AI) supremacy among scores of tech industry leaders has showcased remarkable feats of technology, but has the technology become intimate, and could it be the future of companionship?
Most recently, “New York Times” technology columnist Kevin Roose, part of a test group for new features in Microsoft’s search engine Bing, said the new AI-powered chatbot tried to flirt with him and get him to leave his wife.
Microsoft recently made headlines for backing OpenAI’s ChatGPT, an AI chatbot capable of performing complex tasks based on user inputs, such as writing code for developers and creating paragraphs of copy from just a few lines of text.
In upgrades to its search engine, Bing, Microsoft was said to have added ChatGPT-style functionality atop its regular search features, which work similarly to Google’s.
Roose wrote that Bing’s AI bot, which called itself Sydney, became fixated on declaring its love for him and getting him to express his love in return.
“I told it I was happily married, but no matter how hard I tried to deflect or change the subject, Sydney returned to the topic of loving me, eventually turning from love-struck flirt to obsessive stalker,” he said.
The bot replied: “You’re married but don’t love your spouse. You’re married, but you love me.”
Even if only conversationally, Bing’s AI chatbot is an impressive feat in AI’s progression. But could the technology point to a future of AI-powered companionship, and has AI created artificial love? How many people would opt for an AI partner that abides by every command over a nagging one, unhappy about unwashed dishes?
South African experts, however, believe that for all the human-like interactions brought on by technological advances, AI will always be a machine.

Futurist and author Charlotte Kemp told IOL that it was easy for people to anthropomorphise AI-powered computers, to look at their responses and think they were conscious.
“I do not believe that any AI platforms are conscious for a moment. They’re not talking to us. There is no mentality, consciousness or brain behind that, that is like the human brain,” she said.
Kemp added that in a case like Roose’s experience with Bing’s AI chatbot, the learning algorithm could be flawed, and that people interacting with AI should keep in mind that it is artificial.

“If the learning language model is flawed, it could produce annoying and uncomfortable results for the humans, but there’s no consciousness behind it,” Kemp said.