In this three-part blog series, Elizabeth and Jen focus on the ethics of AI and data. So often we hear that we have to be data-driven, but that is not enough: we need to be human-centred. Across the series, we will look at important topics such as DeepFakes and bias in AI, why these phenomena happen, and what we can do about them.
At the beginning of 2019, Alexandria Ocasio-Cortez made headlines by arguing that algorithms can perpetuate racism: “[algorithms] always have these racial inequities that get translated, because algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated. And automated assumptions — if you don’t fix the bias, then you’re just automating the bias.” (1)
The ways that algorithms can discriminate against certain demographics are tangible for many people. Algorithms can determine how likely you are to get a job interview, what consequences you face if you commit a crime, whether or not you will get a mortgage, and how likely you are to be “randomly” stopped by the police. Skewed data, prejudiced programmers, and faulty logic mean that the results are not as indisputable as we might at first assume.
Algorithms are based on mathematical equations, and the answers you can glean from a numeric equation are, in isolation, objectively true. Many people used this fact as a retort against Ocasio-Cortez, accusing her of trying to find prejudice where there simply wasn’t any. These people are wrong. Not only can algorithms perpetuate racial biases, they are prone to perpetuating any and all societal biases, because the equations are only as neutral as the data and assumptions fed into them.
A prominent example of race and gender bias can be found in facial recognition software, which is becoming an increasingly popular tool in law enforcement. Joy Buolamwini at the Massachusetts Institute of Technology found that three of the most recent gender-recognition AIs could identify a person’s gender from a photo with 99 percent accuracy, but only if the person in the photo was a white man (2). This puts women and people of colour at risk of false identification: for women of colour, the error rate climbed to as much as 35 percent.
Inequalities Reflected In Our Technology Begin With Us
Similarly, and perhaps even more worryingly, a report by ProPublica (3) found that an AI used to predict future criminal behaviour was heavily skewed against black people. It falsely flagged black defendants as potential re-offenders at almost twice the rate of white defendants, and it was much more likely to mislabel white defendants who went on to reoffend as low-risk. Its predictions of violent crime were, perhaps predictably, almost always wrong: only 20 percent of the people it flagged as likely to commit violent crimes actually did so.
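To see how a disparity like this can be quantified, here is a minimal sketch in Python of a per-group error-rate audit. This is not ProPublica’s actual methodology; the groups, records, and field layout below are invented purely for illustration.

```python
# Hypothetical sketch: auditing a risk model's error rates by group.
# The records and group names are illustrative assumptions, not
# ProPublica's dataset or methodology.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True,  True),    # true positive
    ("group_a", True,  False),   # false positive
    ("group_a", False, False),   # true negative
    ("group_a", False, False),   # true negative
    ("group_b", True,  False),   # false positive
    ("group_b", True,  False),   # false positive
    ("group_b", False, False),   # true negative
    ("group_b", False, True),    # false negative
]

def false_positive_rate(rows):
    """Share of non-reoffenders who were still flagged as high risk."""
    negatives = [r for r in rows if not r[2]]
    if not negatives:
        return float("nan")
    return sum(1 for r in negatives if r[1]) / len(negatives)

by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

for group, rows in sorted(by_group.items()):
    print(f"{group}: FPR = {false_positive_rate(rows):.2f}")
# group_a: FPR = 0.33
# group_b: FPR = 0.67
```

ProPublica’s analysis turned on exactly this kind of comparison: among defendants who did not go on to reoffend, how often was each group flagged as high risk.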
This large, potentially dangerous oversight is likely down to a lack of diversity in the data used to train the algorithms. If the data fed to the AI contains more white men than any other demographic, the AI will learn to identify those people with much higher accuracy. It makes sense: an AI can only learn from the data it is given, as the toy example below illustrates.
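Here is a hedged illustration of that mechanism in Python, using entirely synthetic data (no real system or dataset is implied): a single classifier is trained on data dominated by one group, and its accuracy is then measured separately for each group.

```python
# Toy illustration with synthetic data: a model trained mostly on one
# group learns that group's pattern and fails on the other.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Synthetic group: label follows the feature's sign; `flip` inverts it."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y) if flip else y

# 95% of the training data comes from group A, 5% from group B,
# and the two groups follow opposite feature-label patterns.
xa, ya = make_group(950, flip=False)   # majority group
xb, yb = make_group(50, flip=True)     # underrepresented group

model = LogisticRegression().fit(np.vstack([xa, xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group.
for name, flip in [("group A", False), ("group B", True)]:
    x_test, y_test = make_group(1000, flip)
    print(f"{name} accuracy: {model.score(x_test, y_test):.2f}")
# group A scores high; group B scores very poorly
```

In a real system the imbalance is rarely this stark, but the mechanism is the same: whichever population dominates the training data dominates what the model learns.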
1. Kosoff, M. (2019). Alexandria Ocasio-Cortez Says Algorithms Can Be Racist. Here’s Why She’s Right. https://www.livescience.com/64621-how-algorithms-can-be-racist.html
2. New Scientist (2018). Face-recognition software is perfect – if you’re a white man. https://www.newscientist.com/article/2161028-face-recognition-software-is-perfect-if-youre-a-white-man/
3. ProPublica (2016). Machine Bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing