JOHANNESBURG – The idea of machines turning into man’s worst enemy is not a new one. Whole movies have been devoted to this theme, from 2001: A Space Odyssey, where the computer HAL “eliminates” humans because it thinks they will sabotage a mission, to The Matrix, where Neo has to rescue the human race from machines that are harvesting people like battery chickens.
Where this doomsday scenario used to make for interesting conversation in the past, it is taken much more seriously today due to the huge advances made in the development of artificial intelligence. Swedish philosopher Nick Bostrom summarised the threat in 2003 with his “paperclip maximiser” thought experiment, in which a superintelligent machine goes awry after being programmed to ensure a company never runs out of paperclips.
The computer turns the command into its sole goal, “eliminating” anyone and anything that stands in its way. While machines like this do not yet exist, cracks are already showing in the form of biases in some of the artificial intelligence programmes already in operation. These biases can take many forms, ranging from race to gender, ethnic group, social class and age.
The problem with the paperclip maximiser scenario, however, is that it rests on a very narrow view of what “superintelligence” really is.