ARTIFICIAL intelligence (AI) and machine learning have made considerable progress in the last few years. So much so that robots are already replacing humans in many jobs all over the world.
According to the auditing firm PwC, up to 30% of jobs could be automated by 2030. Other reports predict that the number of robot workers will reach 51 million by 2030.
Much of this growth will come from professional service robots that perform useful tasks for humans. Robots will be found in most industries, such as factories, the military, exploration, hazardous environments, healthcare, entertainment and the personal service industry.
In the years to come, robots will become an integral part of our daily routine. No wonder that some people fear artificial intelligence and robots becoming sentient, taking over the world, and replacing humans as the dominant intelligence on the planet.
Many apocalyptic science fiction books and movies fan this fear with fictional scenarios depicting super intelligent robots rising up against humans.
The physicist Stephen Hawking, Microsoft founder Bill Gates and entrepreneur Elon Musk have all expressed their concern that artificial intelligence could develop to the point where humans can no longer control it. According to Hawking, this would "spell the end of the human race". All three therefore advocated preventive measures to ensure that future super intelligent machines remain under human control.
Nick Bostrom and others have expressed similar concerns: machines with advanced artificial intelligence would be able to reprogram their own source code and constantly increase their own intelligence. The result could be a recursive intelligence explosion that leaves human intelligence far behind. This is one of the motivations behind the brain-computer interface (BCI) being developed by Elon Musk's company Neuralink.
But, having said all this, could a recent incident during the Moscow Open chess tournament, where a chess robot broke the finger of a seven-year-old player when he moved his chess piece too soon, be the beginning of the impatience of robots with the lesser intelligence of human beings?
The incident, which happened on July 19, made headlines all over the world and kept social media buzzing. According to the widely circulated video and reports by bystanders, the robotic arm took a chess piece – apparently one it had just captured – and deposited it into a small bucket next to the chessboard.
Immediately, the seven-year-old boy, Christopher, sitting opposite the robotic arm, hurried the AI-powered robot by moving his arm and hand across his side of the chessboard and reaching for a chess piece. Simultaneously, the robot – which had not yet completed its move – apparently reached for the same piece, but grasped the child's finger and fractured it.
The gripper at the end of the arm held tightly onto the finger and seemed unwilling to let go. Several adults quickly intervened and struggled to free the child's finger. Eventually, they freed the boy and moved him away from the table.
According to Sergey Smagin, the vice-president of the Russian Chess Federation, the boy should have waited for the robotic arm to complete its move, but violated the prescribed safety rules by rushing his move. The chess robot was deemed safe by the tournament organisers and was used throughout the rest of the chess tournament.
Luckily, the boy could continue with the tournament the next day and completed his matches – perhaps without recording his moves himself!
When the story went viral, many doom prophets claimed that this was a glimpse of a future in which robots take over and crush humanity. Is this behaviour of the chess robot really the beginning of the end for the human race? The December 2021 story of Alexa misleading a 10-year-old girl into almost putting a coin across the prongs of a phone charger plugged halfway into an electrical socket is still fresh in our memories.
Since IBM's Deep Blue beat world champion Garry Kasparov in 1997, there has been little doubt that robots are often better chess players than humans. But can we trust robots? Are they safe?
The organisers deemed it safe since the robot had never done this before in its fifteen years of use. Furthermore, the child was allegedly at fault. As long as the end-user, or child in this case, obeys certain rules, the robot is considered safe. However, relying on the end-user (children in the chess robot case) to keep to certain rules to avoid eliciting aggressive behaviour from the robot is risky. People often do not follow rules.
AI is today frequently anthropomorphised as though it has typical human-like qualities, such as common sense, and will always act in the best interest of human beings – at least if the end-user is mindful and follows certain rules. However, the onus should rather be on the robot: it should be designed, programmed and monitored in such a way that it is unable to perform any human-harming acts.
This would mean that the chess-playing robotic arm should be programmed never to reach anywhere on the chessboard while a human limb is present. And if the human nonetheless puts their hand somewhere on the chessboard after the robotic arm has already committed itself to a move, the robotic arm should be programmed to stop immediately and move to a safe position.
The robot should also have sensors to detect when the grasped object is a finger and not a chess piece, and a pressure sensor could trigger an automatic release by the gripper. It is in any case surprising that a gripper exerting just enough force to move a chess piece was able to break a child's finger.
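The two safeguards just described – halting when a limb is over the board, and releasing when the grasped object is clearly not a chess piece – could be sketched roughly as follows. All sensor inputs, force thresholds and action names here are hypothetical illustrations, not details of the actual tournament robot.

```python
# Hypothetical sketch of the safeguards described above; the inputs,
# thresholds and action names are illustrative assumptions only.

RELEASE_LIMIT_N = 4.0  # assumed grip force well above anything a chess piece needs


def next_arm_action(limb_over_board: bool,
                    grip_force_n: float,
                    object_is_piece: bool) -> str:
    # Rule 1: never move while a human limb is over the board,
    # even if the arm has already committed to a move.
    if limb_over_board:
        return "stop_and_retreat"
    # Rule 2: release immediately if the grasped object is not a piece,
    # or if it resists with more force than any piece ever would.
    if not object_is_piece or grip_force_n > RELEASE_LIMIT_N:
        return "open_gripper"
    return "continue_move"


# A hand over the board always aborts; a finger-like reading forces a release:
assert next_arm_action(True, 1.0, True) == "stop_and_retreat"
assert next_arm_action(False, 6.5, True) == "open_gripper"
assert next_arm_action(False, 1.5, True) == "continue_move"
```

In a real robot such rules would sit in certified safety hardware rather than application code, but the principle is the same: the default is to stop or let go, never to keep gripping.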
The above situation is what is typically referred to as a minimum risk condition (MRC) where the AI curtails its actions without causing any potential harm. If a self-driving car gets into trouble, the AI should carefully decide if it is best to steer the vehicle to the side of the road (where there might be a steep cliff) or to halt in the middle of the road (where other cars might ram into it).
In South Africa where many drivers and taxis do not follow the rules of the road, this will be extremely difficult. Self-driving cars will have to be programmed to carefully consider intersections since in South Africa red lights are often violated. What makes these “edge cases” difficult is that there is no straightforward answer. The answer requires the calculation of probabilities and uncertainties.
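One way to think about such edge cases is as a choice between manoeuvres with uncertain outcomes: the AI picks the option with the lowest expected harm. The scenarios, probabilities and harm scores below are invented purely for illustration.

```python
# Hypothetical sketch of a minimum risk condition (MRC) decision: weigh each
# stopping manoeuvre by probability-weighted harm and choose the smallest.
# All numbers are invented for illustration.

def expected_harm(outcomes):
    """outcomes: list of (probability, harm_score) pairs for one manoeuvre."""
    return sum(p * harm for p, harm in outcomes)


manoeuvres = {
    # Pulling over: small chance the shoulder drops away steeply.
    "steer_to_roadside": [(0.05, 100.0), (0.95, 1.0)],
    # Stopping in-lane: where red lights are often ignored, being rammed is likely.
    "halt_in_lane":      [(0.40, 60.0), (0.60, 5.0)],
}

safest = min(manoeuvres, key=lambda m: expected_harm(manoeuvres[m]))
print(safest)  # prints "steer_to_roadside" (expected harm 5.95 vs 27.0)
```

The point is not the particular numbers but the shape of the computation: there is no rule that always gives the right answer, only probabilities and costs that must be weighed against each other.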
Although the tournament officials declared the chess-playing robot arm safe, it clearly does not incorporate contemporary safety precautions and should certainly not be considered safe for use in a chess tournament for young children.
Some people suggested that the chess-playing robot intentionally harmed the child’s finger since it was “angry” at the child for not letting it complete its movement of taking a piece. Although scientists are working on robots with emotions, we are still far away from sentient robots with human affective and cognitive capabilities. The huge progress made in machine learning (ML) and deep learning (DL), which leverage computational pattern matching, however, has led to AI systems that have the appearance of human-like dispositions.
Even if the developers ignored AI ethics and wrote code that instructs the robot to grab a human finger if any move is illegal, it is still not anger but merely the following of a programmatic instruction. It is thus possible that the tournament authorities placed too much confidence in the predictability of the AI by declaring the robot arm safe since no similar incident had occurred in fifteen years of usage. It is quite possible that the gripper code had recently been updated to add new features without including precautionary measures.
In 1979, factory worker, Robert Williams, became the first person to be crushed to death by the arm of a one-ton robot on Ford’s Michigan production line. In 2015, a robot grabbed and crushed a 22-year-old contractor at one of Volkswagen’s German plants.
AI is not always predictable, since through ML and DL it can autonomously "learn" certain behaviours from the data fed to it. There is, however, a possibility that in future we may reach a stage where we can no longer control the actions taken by robots.
Without doubt, AI has fundamentally changed and enhanced our lives and offers "incalculable benefits and risks", as Hawking said. A typical example of the risks is that a chess match between a chess-playing robot and a young child ended with a broken index finger. Although still very serious and concerning, it seems more like a case of machinery with inadequate safety measures and someone putting their hand where it was not supposed to be, than the beginning of the uprising of the robots!
Professor Louis C H Fourie is an Extraordinary Professor at the University of the Western Cape.