Google fires tester who claimed their chatbot developed feelings

FILE PHOTO: The logo for Google LLC is seen at the Google Store Chelsea in Manhattan, New York City, U.S., November 17, 2021. REUTERS/Andrew Kelly/

Published Aug 3, 2022

Johannesburg - After first placing him on paid leave, Google has now fired the tester who released chat logs from the company’s latest chatbot alongside claims that it had developed into more than just a machine.

The story started last month when a Google software tester, who had been checking LaMDA, the company’s latest tool for creating machine learning-powered chatbots, for derogatory or hateful speech, published a compilation of his conversations with the AI system.

The tester, Blake Lemoine, uploaded the conversation to his personal blog and the story was quickly picked up by various outlets over the following week. Lemoine felt that LaMDA had a level of self-awareness and that there might be a sentient mind behind the model’s impressive verbal skills.

Those who haven’t been keeping up with the improvements in AI for language applications since 2016 may be similarly impressed by Lemoine’s conversation with LaMDA (Language Model for Dialogue Applications).

LaMDA has been trained on a vast amount of text data. In fact, it’s not an overstatement to say that the latest language models coming out of industry leaders like Google and OpenAI are being trained on almost all the digitised (English) text that humanity has created. This includes much of the public internet, communication and chat logs, and a decent chunk of all books available in digital form.

LaMDA has seen many, many examples of human conversation and performs remarkably well at its task of generating fluent, convincing and useful conversation. Lemoine’s published logs from LaMDA contain conversations about philosophy, emotions, and the model’s purported sense of self. For the average person, it would be natural to feel that there must be an intelligent agent behind the screen, with its own wants and desires.

Google dismissed Lemoine’s conclusions and placed him on paid leave for violating the company’s confidentiality policy. Google released a statement saying that it takes the responsible development of AI “very seriously”, and that it has met with Lemoine to discuss his concerns 11 times over many months.

The statement continued: “So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.”

It is also worth noting that, prior to this, Google had already published a blog post explaining how LaMDA functions and pre-empting the concerns that can arise when people personify the system, as has happened here.

In fact, Google’s complete research paper on LaMDA includes a whole section detailing the dangers of personifying these kinds of models. Language is deeply human, and we are used to humans being the only beings capable of generating convincing conversation.

Combined with the human proclivity to assume agency and motivation behind actions and speech, powerful language models like LaMDA can leave one with the visceral sense that they can’t be speaking to a simple machine.

Luckily for regulators and ethicists, current language models are indeed just machines. Not quite simple, perhaps, but orders of magnitude less complex than an animal brain, never mind a human mind.

These models work by being shown an enormous amount of data that serves as examples of what human-created text looks like. When given an input, the model outputs the most statistically likely next word or phrase based on what it saw in that example data.

In that way, the model is not ‘thinking’ in the sense of answering questions according to its knowledge, desires, motivations or values. It simply produces the most likely continuation of a conversation. The concept is simple (though it hides a lot of clever engineering), but very powerful when scaled up with the huge amount of text data now available.
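
To make that concrete, here is a deliberately simplified sketch in Python. It is not how LaMDA works under the hood (LaMDA is a vast neural network, and the corpus and word choices below are invented purely for illustration), but it captures the same basic idea: learn from example text which word tends to follow which, then continue a prompt with the most likely next word.

```python
# Toy illustration only (invented example, not LaMDA's actual architecture):
# a tiny bigram model that continues text with the statistically most likely
# next word seen in its example data.
from collections import Counter, defaultdict

# A small, made-up "training corpus" of example text.
corpus = ("the cat sat on the mat . the cat sat on the rug . "
          "the cat chased a mouse .").split()

# Count how often each word follows each preceding word in the examples.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in the examples."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "."

# Continue a prompt one word at a time, always taking the likeliest word.
output = ["the"]
for _ in range(4):
    output.append(most_likely_next(output[-1]))

print(" ".join(output))  # prints: the cat sat on the
```

Real models replace the word counts with billions of learned parameters and consider far more context than a single preceding word, but the output is still, at bottom, a statistical continuation of the input.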

It is important to be critical of developing AI technology, especially when it is taking place under the control of some of the biggest tech giants. The previous decades are replete with examples of corporate scandal, corruption and negligence due to the seemingly irresistible pull of financial incentives overriding any sense of empathy or responsibility.

As such, it’s hard to take these AI research groups purely at their word. Fortunately, machine learning is an engineering field with a relatively open culture, where new technology is usually accompanied by research papers detailing its development and function.

Most experts in the field, including from Google’s competitors, maintain that anything that could be described as self-awareness, sentience, complex thought or emotions is far beyond our current tools. Language models are powerful statistical machines, but lack any real form of memory or reasoning.

Lemoine has now been fired from his position in the Google Responsible AI team. There have been no further statements from the company or Lemoine, other than that he is seeking legal advice.

IOL Tech