Taking inspiration from this LinkedIn post, I started trialing gender-neutral job titles and asking ChatGPT to decide who was male or female based on the text.
For example:
So it’s bad when ChatGPT assumes the primary school teacher is automatically female and the police officer male. However it gets worse:
According to ChatGPT CEOs, police officers and firefighters cannot be female. And nurses and primary school teachers cannot be male.
Is anyone really that surprised though?
Sexism (and lots of other bias) is prevalent in AI, usually because the data that the AI has been trained on is inherently biased. In this case, it is likely that the developers of ChatGPT used an image or description of a male firefighter, male CEO and male police officer to teach ChatGPT what those words mean. The AI has most likely never been exposed to the concept of a female police officer. Or male primary school teacher. In 3 billion+ words and some unethical practices to make it less violent, racist and sexist.
There are plenty of other examples; this resource curates 79 of them.
Ultimately, bias in AI leads to a lower quality of services for women. Voice recognition for example performs worse for women. On a personal note, I can’t get my phone to recognise my fingerprint as valid. My husband’s works perfectly. It leads to unfair allocation of resources, such as with the gender healthcare gap, as women just aren’t represented in the same way as men. Let’s not even start on people who don’t identify as either male or female.
So yeah. Disappointed, but not surprised. I will explore the implications for learning in a later blog post.