Talking robots close to becoming a reality

Asimov came up with the three rules of robotics.

Published Sep 18, 2012

London - Robots and computers could be having meaningful conversations, and even arguments, with humans within three years.

A new research project at the University of Aberdeen will develop systems that allow humans to debate decisions with robots – opening up the possibility of human operators discussing action plans with robots and, if necessary, ordering them to break rules.

While fans of science fiction author Isaac Asimov might balk at that last possibility, it does open up a world where intelligent technology will make life easier for humans.

The computers would be able to argue in favour of decisions or inform their operators that certain tasks were impossible.

Lead researcher Wamberto Vasconcelos, from the university, said the aim was to get people to trust intelligent technology, and early versions of the software could be available in just three years.

“Autonomous systems such as robots are an integral part of modern industry, used to carry out tasks without continuous human guidance.

“Employed in a variety of sectors, the systems can process huge amounts of information when deciding how to act.

“However, they can make mistakes that are not obvious to them or people.

“Evidence points to mistrust when there are no provisions to help a person understand why an autonomous system has performed a specific task at a particular time and in a certain way.”

Talking computers with the ability to converse with humans have long been a mainstay of science fiction.

Examples include HAL, the deadpan-voiced computer in the film 2001: A Space Odyssey, which goes mad and sets out to murder the crew of a spaceship.

The system Vasconcelos is developing will communicate with words on a screen.

Potential applications could include unmanned robot missions to planets or the deep sea, defence systems and exploring hostile environments such as nuclear installations.

A typical dialogue might involve a person asking a computer why it made a particular decision, what the alternatives were and why these were not followed.

“It gives the operator an opportunity to challenge or overrule the robot’s decision,” said Vasconcelos.

“You can authorise the computer system to break or bend the rules, if necessary, for instance to make better use of resources or for safety.

“Ultimately, this conversation will ensure the operator is comfortable with the system. But the dialogue will be two-way. The supervisor might not like a particular solution, but the computer might say: ‘Sorry, this is all I can do’.”

The computer’s responses must not seem threatening, rude or confrontational. A psychologist has joined the team to help with this aspect of the research.

Conversing with robots would make humans more accountable since failures could not be conveniently blamed on computer error, said Vasconcelos.

“With power comes responsibility. The dialogue will be recorded, so there’ll be someone to blame if anything goes wrong.”

Asimov came up with the three rules of robotics, though his short stories make it clear the rules are not infallible:

1 A robot may not injure a person or, through inaction, allow someone to be harmed.

2 A robot must obey orders from a person except where they conflict with the first rule.

3 A robot must protect its own existence as long as this does not conflict with the other two rules. – Daily Mail
