Tech News: You cannot reason with a biased AI algorithm

Over the last few years Artificial Intelligence (AI) has been incorporated into more and more devices that are part of our daily life, says Professor Louis Fourie.

Published Oct 9, 2020

By Prof Louis C H Fourie, Futurist and Technology Strategist.

JOHANNESBURG - Over the last few years Artificial Intelligence (AI) has been incorporated into more and more devices that are part of our daily life. AI now makes numerous important decisions on a regular basis and performs countless automated functions.

AI has indeed become an inherent and indispensable part of modern business.

It is therefore understandable that many people are concerned about the future of AI and fear that it might one day destroy humanity. Although it is doubtful whether humanity will be destroyed by AI and intelligent killer robots, the shady effects of AI are unfortunately already present and in many instances impact our lives, often causing a divide amongst people and groups, inadvertently marginalising certain people, ensnaring our attention, and enlarging the gap between rich and poor.

The problem of algorithm bias

About three decades ago, when algorithms were mostly used by computer scientists and did not really have an impact on our lives, algorithm bias was not a problem. But AI has since found its way into more sensitive areas such as loan application processing, the analysis of interviews, hiring decisions, adaptive pricing, credit scoring, facial recognition, health care and housing.

There are many instances of algorithm bias that have been discovered over the past few years.

A few recent examples are:

  • The algorithm of the Amazon AI recruiting tool that discriminated against women. Automation has been key to Amazon’s huge success. An automated AI hiring tool was thus used to score job candidates on a scale of one to five stars. Unfortunately, this very efficient tool did not rate candidates in a gender-neutral way because the algorithms were trained to vet applicants by observing patterns in resumes submitted over the previous ten years. Since the tech industry is dominated by men, these training resumes mostly came from men. The system thus taught itself through machine learning that male candidates were preferable and penalised applicants who attended all-women’s colleges or whose resumes contained the word “women’s”, as in “women’s chess club”.
  • The risk assessment algorithm used by the judicial system (judges, probation and parole officers) in the USA to predict a criminal defendant’s likelihood of reoffending, or to determine the bond amount during bail, was found to be biased, assigning significantly different risk to defendants of different races.
  • Microsoft, IBM and Face++ developed face detection systems that did not perform well with black female faces due to the underrepresentation of darker skin colours in the training data used to create the face-analysis algorithms.
  • Microsoft’s facial expression cloud service fared poorly at analysing the facial expressions of children under a certain age, due to shortcomings in the data used to train the algorithms.
  • The Google photo-organising algorithm grossly mislabelled the images of black people.
  • The Apple credit card lending algorithm discriminated against women in its credit scoring, offering them less credit than men with similar income and circumstances.
  • A health care algorithm used by most health care systems in the USA was found to be biased against black patients, making them less likely to receive important medical treatment. The algorithm screened patients for “high-risk care management” intervention and relied on patient treatment cost data as a proxy for health. However, due to unequal access to health care, black patients spent less on treatments, which led to a racial bias against treatment for black patients.
  • Winterlight Labs developed an algorithm to determine whether a person is developing a neurological disease such as Alzheimer’s, Parkinson’s or multiple sclerosis. After publication it was found that the algorithm only works for English speakers of a particular Canadian dialect. The data used by the company came from native English speakers speaking in their mother tongue. A native French speaker taking the test in English might pause while thinking of the appropriate English word, or pronounce a word with uncertainty, which the system then misconstrues as markers of disease.
  • Facebook was sued by the US Department of Housing and Urban Development, since its ad targeting algorithm unfairly limited which people could see housing ads.

All the above problems were eventually fixed, or the algorithms were discontinued, since algorithmic fairness is critical in the use of AI. One of the problems with algorithmic bias is the severe limitation that you cannot reason with an algorithm. Once the opaque decision has been made by the algorithmic overlord, little can be done. The algorithm is in control.

The importance of the training data

All the above examples clearly illustrate the importance of the data used for machine learning. Biased data sources used to train an algorithm produce biased results in automated systems. Because AI systems learn to make decisions by looking at historical data, they often perpetuate existing biases. Machine and deep learning are especially susceptible to bias. The aim of deep learning is to find patterns in the data it is trained on. Unfortunately, the data may reaffirm false stereotypes: if, for instance, men are associated with doctors and women with nurses in the training data, the algorithm will apply this bias to all future predictions, as the small sketch below illustrates. In the field of medicine, such as the diagnosis of skin cancer or determining the best drug treatment based on biological markers, these biases can mean the difference between life and death.
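To make the mechanism concrete, here is a minimal, hypothetical Python sketch (the records and occupations are invented and not drawn from any of the systems mentioned above). A toy "model" that simply learns the majority pattern in skewed historical data will replay that skew in every future prediction:

# A minimal sketch of how skewed historical data becomes a biased rule.
# The data below is hypothetical and deliberately skewed.
from collections import Counter, defaultdict

# Hypothetical historical records: (occupation, gender) pairs.
history = ([("doctor", "male")] * 80 + [("doctor", "female")] * 20
           + [("nurse", "female")] * 90 + [("nurse", "male")] * 10)

# "Training": count how often each gender appears per occupation.
counts = defaultdict(Counter)
for occupation, gender in history:
    counts[occupation][gender] += 1

def predict_gender(occupation):
    # The model simply returns the majority gender seen in the historical data.
    return counts[occupation].most_common(1)[0][0]

# Every future question is answered with the historical skew.
print(predict_gender("doctor"))  # -> 'male'
print(predict_gender("nurse"))   # -> 'female'

A real deep learning system is vastly more complex, but the underlying dynamic is the same: whatever regularities exist in the training data, including unfair ones, become the rules the system applies going forward.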

Like-minded coders

Unfortunately, technology is never neutral and mostly contains the fingerprints of its creators. This is also true in the case of the coding of algorithms, which often reflect the unconscious biases, preferences, and blind spots of the coders or algorithm creators.

In many of the above examples of AI bias it was found that the algorithms were usually built by a relatively small, insulated group of like-minded young people from more or less similar social backgrounds and ethnic groups. As with any insulated team that works closely together, their unconscious biases and myopia often become new systems of belief and accepted behaviour as time passes, and even find their way into their products.

The like-mindedness is created by the universities that feed the major AI companies. Universities and computer science programmes mostly focus on hard software engineering skills: programming, systems engineering, mathematics, machine learning algorithms, natural language processing, computer vision, and other technical skills. There is almost no time for anthropology, philosophy or psychology, let alone ethics. Where such courses are offered, they are optional. And when these graduates start creating AI systems, they are sometimes blind to their own biases and do not know where to look for problems.

Fixing the bias

Companies and government agencies often introduce automated AI systems to cut costs and handle complex datasets, but unfortunately some of the algorithms are opaque and unregulated and contain biases that were often unintentionally built into their code.

Fortunately, it is possible to fix the bias: with a totally new set of data and careful training of the algorithm, with an improved neural network, or simply by changing the very thing that the algorithm is supposed to predict. In the case of the above-mentioned health care system, researchers found that by focusing on only a subset of health costs, such as emergency room visits, they were able to lower the bias. In fact, they also found that an algorithm that directly predicts health outcomes, rather than costs (as a proxy for patient health), is much more accurate.
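As an illustration of why the prediction target matters, here is a small hypothetical Python sketch (the patients and numbers are invented, not taken from the actual study). Ranking patients by past cost flags the high spender, while predicting health need directly flags the patient who is actually the sickest:

# Hypothetical illustration: the choice of prediction target changes who is
# selected for "high-risk care management".
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    illness_score: float  # true health need (higher = sicker)
    past_cost: float      # past spending, depressed by unequal access to care

# Patient B is sicker than A but has spent less on care.
patients = [
    Patient("A", illness_score=4.0, past_cost=9000),
    Patient("B", illness_score=7.5, past_cost=5000),
    Patient("C", illness_score=2.0, past_cost=3000),
]

def flag_for_care(patients, key, top_n=1):
    # Flag the top_n patients ranked by the chosen target.
    return [p.name for p in sorted(patients, key=key, reverse=True)[:top_n]]

# Using cost as a proxy for health flags the high spender, not the sickest patient.
print(flag_for_care(patients, key=lambda p: p.past_cost))      # -> ['A']
# Predicting health need directly flags the patient who most needs care.
print(flag_for_care(patients, key=lambda p: p.illness_score))  # -> ['B']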

In the longer term, universities will have to rethink their current computer science programmes and at least accommodate ethics as a core part of the curriculum, just as was done in business studies some years ago after the Enron scandal. The only problem is that technology is moving faster than academia, and infinitely faster than the Department of Higher Education and the Council on Higher Education, which take a year or more to approve any new university curriculum. An ethics course without current material or the newest technological thinking just will not make sense.

But whatever it takes, we urgently need an ethical conscience in the field of algorithms and AI decision-making to ensure that in future our lives are not ruled by a biased algorithmic overlord.

Prof Louis C H Fourie is a Futurist and Technology Strategist.

BUSINESS REPORT
