The Infonomist: George Floyd's murder inspires ethical decision on facial recognition tech
CAPE TOWN - The brutal murder of George Floyd has inspired an important decision in the development of facial recognition technology.
IBM has taken a very significant decision aimed at preventing abuse of technology in the hands of the police. In a letter to the US Congress, IBM chief executive Arvind Krishna wrote: “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.”
This decision is important for several reasons, one of which is that the future of security systems will likely depend on facial recognition. It is, therefore, important to get it right now, and unfortunately there’s a lot wrong with facial recognition in its current form.
The technology has been blamed for racial bias. Researchers have found on numerous occasions that systems scrutinising our facial features are significantly less accurate for people with dark skin. In such studies, companies’ algorithms proved near perfect at identifying the gender of men with lighter skin, but frequently erred when analysing images of women with dark skin.
The skewed accuracy appears to be due to under-representation of darker skin tones in the training data used to create the face-analysis algorithms.
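The kind of disaggregated evaluation that exposed this skew can be sketched in a few lines. The function name, group labels, and data below are hypothetical, purely for illustration of how accuracy is broken down per demographic group rather than reported as a single overall figure:

```python
# Illustrative sketch (hypothetical data): auditing a face-analysis model's
# accuracy per demographic group. Reporting one overall accuracy number can
# hide a large gap between groups; disaggregating reveals it.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its classification accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation results illustrating a skewed model:
results = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),  # misclassification
    ("darker-skinned women", "female", "female"),
]
print(accuracy_by_group(results))
# {'lighter-skinned men': 1.0, 'darker-skinned women': 0.5}
```

In this toy example the model looks 75% accurate overall, yet is perfect on one group and no better than a coin flip on the other, which is exactly the pattern researchers reported.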
If this technology is used in its current form, we are likely to see wrongful arrests and further violations by the police.
Although Big Blue’s decision to halt this technology will not itself stop its development by other tech companies, it is still a step in the right direction. Amazon has also followed suit, and chances are that more tech companies will follow.
A more important reason why this decision is critical has little to do with the technology itself and more to do with the principle that the development of technology should be aligned with the ethical values of society.
Technologies that violate human rights and values should not be allowed to exist.
Human beings should not be held hostage by the very technology that they create.
Ethics should be a key consideration as we develop technologies of the future.
This is true of many other technologies that are now being developed to shape the future.
It is up to human beings to develop technologies that do not trample on the rights of individuals.
In addition, to guard against developing technologies that turn against human beings, the education of technologists should include ethics and the humanities, not only computing and commerce.
Now that IBM has acknowledged the real danger of using some of these technologies, focus should shift to others that may negatively impact human lives.
IBM is not alone in developing technologies that are now considered harmful. As long as other technology companies continue to use technologies known to be harmful, the impact of IBM’s decision will be minimal.
For a very long time technology companies have managed to get away with murder in the name of innovation. The recent announcement by IBM is a clear indication that innovation projects should undergo strict scrutiny before they are deployed.
Researchers have for years warned about the problems with facial recognition.
As society reflects on the use of technology by the police, there’s an opportunity to correct harmful features and maintain features that can advance humanity.
Wesley Diphoko is the Editor-In-Chief of Fast Company (SA). He can be reached on Twitter via: @WesleyDiphoko.