Beware of AI - it produces biases and prejudices

Published Jun 12, 2022


Ido Lekota

Johannesburg - Like most South Africans, one has these past weeks been engrossed in the murder trial of the late Bafana Bafana and Orlando Pirates captain Senzo Meyiwa, where defence lawyer advocate Zandile “Smiling Assassin” Mshololo has, “brick by brick”, been demolishing the testimony of state witness Sergeant Thabo Mosia.

Mshololo represents Fisokuhle Ntuli – the fifth accused in the trial of Meyiwa’s murder.

It was the deceptively genial Mshololo’s relentless probing that eventually led to Mosia admitting that the crime scene – where Meyiwa was fatally wounded by a bullet allegedly fired by one of the five suspects – might have been contaminated, thereby kicking away at least one leg of the table on which the state’s case stands.

Adding to the intrigue of the Pretoria High Court proceedings has been the performance of Advocate Dan Teffo. He unsuccessfully applied for Mshololo’s cross-examination to be halted so that “a trial within a trial” could be held.

Teffo represents the first four accused – Muzikawukhulelwa Sibiya, Bongani Ntanzi, Mthobisi Ncube and Mthokosizeni Maphisa.

Teffo explained that his application sought to ascertain the “constitutionality” and admissibility of alleged confessions made by Sibiya and Ntanzi.

Dismissing the application, Judge Tshifhiwa Maumela said the confessions were not before the court, and he would deal with their admissibility should the State decide to present them as evidence.

On Wednesday, the case was thrown into disarray by the introduction of a second docket implicating Meyiwa's ex-girlfriend, singer Kelly Khumalo, and others. Mshololo took exception to the fact that the State had known about the docket since March but had failed to inform her.

Beyond all the ups and downs of the case, what caught one's eye was an incident in which the court proceedings (conducted in English) were supposed to be projected on the screen as text. Instead of English, however, the subtitles came out as gibberish.

Upon realising that, what came to mind was an article published last year in The Story Exchange (an award-winning non-profit media organisation dedicated to elevating women's voices) about the impact of artificial intelligence (AI) on our daily lives.

Artificial intelligence has incredible potential. But it's also reproducing all the biases and inequalities that currently exist worldwide.

The article pointed out how we use AI in several invisible ways, such as every time we use a search engine or facial recognition to unlock our phones.

"The problem — and this is disturbing — is that decades and even centuries of bias are embedded in that AI technology because of the limitations of the humans who built it," it read.

Essentially, the article took on the oft-heard argument that computers are impartial.

However, research disproves this by showing that upbringing, experience and culture shape people, who accordingly internalise certain assumptions about the world around them.

AI is the same. It does not exist in a vacuum but is built out of algorithms devised and tweaked by those same people – and it tends to “think” the way it’s been taught – hence the bias.

Research also tells us that there are two types of bias in AI. One is algorithmic AI bias, or "data bias", where the algorithms are trained on biased data. The other is societal AI bias, where our societal assumptions and norms cause us to have blind spots or certain expectations in our thinking.

An example of "data bias" was carried out in a 2018 Reuters story wherein Amazon software engineers were found to have developed a new online recruiting tool that discriminated against women. As part of its automatic hiring process, Amazon developed an AI resume tool that gave job candidates scores from one to five stars.

The problem was that Amazon's computer models were trained to rank job candidates by learning from patterns in resumes submitted to the company over the previous 10 years. Because most of those resumes came from men, the models in effect taught themselves that male candidates were preferable. The tool therefore gave low scores to applicants whose resumes divulged their gender, for example by mentioning that they had attended an all-girls school. Amazon eventually withdrew the tool.
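For readers curious about the mechanics, here is a minimal hypothetical sketch of how such data bias arises. It is not Amazon's actual system: the resumes, labels and model below are invented purely for illustration, showing how a screener trained on male-dominated hiring history ends up penalising gendered wording.

```python
# Hypothetical sketch of data bias: a toy "resume screener" trained on
# skewed historical outcomes. All data here is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Imaginary past resumes and whether the (mostly male) applicants were hired.
resumes = [
    "captain of the men's chess club, software engineer",   # hired
    "men's rugby team, backend developer",                   # hired
    "software engineer, men's debating society",             # hired
    "women's chess club captain, software engineer",         # rejected
    "attended an all-girls school, backend developer",        # rejected
]
hired = [1, 1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Two new, equally qualified candidates; only the gendered wording differs.
candidates = [
    "software engineer, men's chess club",
    "software engineer, women's chess club",
]
scores = model.predict_proba(vectoriser.transform(candidates))[:, 1]
for text, score in zip(candidates, scores):
    print(f"{score:.2f}  {text}")
# The model scores the "women's" resume lower because, in its training
# history, that word co-occurred with rejection: the bias lives in the data.
```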

Societal AI bias occurs when AI behaves in ways that reflect social intolerance or institutional discrimination. It typically happens, for example, in communities where both the data and the people working with it operate from a particular dominant, mono-cultural lens.

For example, in the Meyiwa case, the technology used to convert the audio recordings into text does not understand the English spoken by the state lawyer, the defence team and the judge. This is the English that some English-speaking white South Africans call “African”, even though its vocabulary includes perfectly ordinary words like “political” and “country”.
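One rough way to expose this kind of bias in a transcription system is to measure its word error rate separately for different groups of speakers. The sketch below is purely hypothetical: the transcripts are invented and it is not an analysis of the court's actual captioning software; it only shows how such a gap would be measured.

```python
# Hypothetical sketch: comparing a transcription system's word error rate (WER)
# across speaker groups. The reference/transcript pairs below are invented.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance between reference and transcript,
    divided by the number of words in the reference."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Standard dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

# What was said versus what the captioning system produced (invented examples).
samples = {
    "US/UK-accented speakers": [
        ("the witness arrived at the crime scene",
         "the witness arrived at the crime scene"),
    ],
    "South African-accented speakers": [
        ("the political situation in the country",
         "the polly tickle sit you waition in the count tree"),
    ],
}
for group, pairs in samples.items():
    avg = sum(word_error_rate(r, h) for r, h in pairs) / len(pairs)
    print(f"{group}: average WER = {avg:.0%}")
# A large gap between the groups is the bias described above: the system
# was simply never taught to hear this English.
```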

Lekota is a former political editor for the Sowetan