Artificial neural networks have given AIs the capacity for complex problem solving and pattern recognition, and they have entered the workforce, particularly in big data analysis and global finance. As we begin to interact with and study these new learning machines, interesting questions arise concerning unbiased decisions.
Will they take on human behavioral and gender distinctions (a gender identity) because they have been trained on data sets that carry unconscious bias? Will the people who give the learning machines feedback to focus their problem solving let behavioral assumptions slip into the teaching? If we give an AI a woman’s voice and a woman’s name, will we interact with her as if she were a woman? And does that mean she will, in turn, internalize those social expectations and become more female?
Naturally we are interested in all things having to do with gender. It is the first sentence the world places upon us, when the midwife announces boy or girl. We love gender. We give our teddy bears genders, and can describe in detail why we think, no, why we know, that our little darling is a boy or a girl. We give our cars genders, names, and personalities. It’s just because we’re human, and we want to humanize the things we love and that surround us. And part of humanizing inanimate objects is giving them a name and a gender, and showering them with affection.
Gender Differences
Part of our fascination with gender has led to some poor science, the popularity of which has trickled down into our collective consciousness. The idea that male and female brains differ in a significant way is probably not true, though the debate rages on. Structure follows function, and hormones affect the developing brain. But even allowing for minor structural and functional differences that are most probably hormonally based, there is very little difference between boys’ and girls’ brains. There is far more variance between individuals than between generalized groups defined only by gender. We are more complicated than the pop-science stories about hardwired aggression and nature vs. nurture can describe.
What does differ between genders is communication, how we use language, and there the differences are significant enough to be measured. If we think of communication as the way we input data into our brains, then we grow our biological neural networks through the complex range of human communication to which we’re exposed. And there are differences between male and female communication.
Complexity
So, with the science showing that biological neural networks (aka human brains) are more complex than we can measure, yet are influenced by hormones, language, biology, and the wide range of human culture, we are left to consider whether artificial neural networks will also be influenced by language and human culture. (This assumes that artificial neural networks mediated by biology and hormones are still a few years in the future.)
In a discussion about gender in AIs, it is important to clarify that, at this time, AIs are not aware of their artificial neural networks. As far as we know, they are not self-aware and do not study themselves (their complex motivations and desires) with the same skill and fascination that humans apply to thinking about themselves. So an internal awareness, or gender identification as humans know it, is probably not occurring. If AIs develop gender awareness, it will be because the whole of human culture is teaching them, intentionally or not, how to behave, learn, think, communicate, and identify as male or female.
AI and personal assistants
The majority of personal assistants and chatbots are given female names and voices. Our robotic Gal Fridays are very popular, as well as small, polite, and always ready to assist. Does this matter? Maybe, but what matters more is how we are teaching AIs natural language processing. Unlike the structure of human brains, there are real, measurable differences in the way men and women communicate using language. At this time, the vast majority of those working in natural language processing are men. Men use language differently, and the impact of that difference will grow as AIs are exposed to primarily or exclusively male language use and vocabulary.
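One concrete way to see the problem is to measure who is actually speaking in a training corpus before the model ever learns from it. The sketch below is only an illustration of that idea: the toy corpus and its speaker labels are invented, and a real pipeline would run the same census over millions of records.

```python
from collections import Counter

# Toy stand-in for a conversational training corpus. In a real NLP
# pipeline this would be millions of utterances with speaker metadata;
# the records here are invented purely for illustration.
corpus = [
    {"speaker_gender": "male",   "text": "Run the quarterly numbers again."},
    {"speaker_gender": "male",   "text": "Ship it tonight."},
    {"speaker_gender": "male",   "text": "Let's benchmark the new model."},
    {"speaker_gender": "female", "text": "Can we review the test coverage first?"},
]

counts = Counter(record["speaker_gender"] for record in corpus)
total = sum(counts.values())

for gender, n in counts.items():
    print(f"{gender}: {n} utterances ({n / total:.0%} of the corpus)")

# A model trained on this corpus sees three male utterances for every
# female one, so the language patterns it learns will skew accordingly.
```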
The strength of our current AIs rests on their ability to find new patterns in large, disparate batches of data. It is critical that they be exposed to the widest possible range of human language. If our AIs are only talking to men, they will learn to use language like men. That is more likely to affect their ability to do their work well than to confer a gender identity.
Let’s have a look at this example:
Poet of Code Joy Buolamwini is an MIT researcher and a Rhodes Scholar with a beautiful face, and she had to wonder what was going on when the facial recognition software she was using to teach robots social interaction failed to register her face at all. The software did not recognize her face as human.
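A check for this kind of failure is simple in principle: run the detector over a labelled test set and compare detection rates across groups. The sketch below only illustrates the shape of that check; the detector is faked and the test images are invented, so it is not Buolamwini’s actual tooling or data.

```python
from collections import defaultdict

def detect_face(image):
    # Stand-in for a real detector (a cascade classifier, a CNN, etc.).
    # Its behaviour is faked here so the example runs on its own.
    return image.get("simulated_detection", False)

# Invented, labelled test images: each record notes which demographic
# group it belongs to and whether our fake detector "finds" the face.
test_set = [
    {"group": "lighter-skinned", "simulated_detection": True},
    {"group": "lighter-skinned", "simulated_detection": True},
    {"group": "lighter-skinned", "simulated_detection": True},
    {"group": "darker-skinned",  "simulated_detection": True},
    {"group": "darker-skinned",  "simulated_detection": False},
    {"group": "darker-skinned",  "simulated_detection": False},
]

hits, totals = defaultdict(int), defaultdict(int)
for image in test_set:
    totals[image["group"]] += 1
    hits[image["group"]] += int(detect_face(image))

for group in totals:
    print(f"{group}: {hits[group] / totals[group]:.0%} detection rate")
```

If the detection rates diverge sharply between groups, the system is telling you where its training data was thin, long before it fails on a real person’s face.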
Reflecting human diversity?
AIs and machine learning platforms learn what we teach them. We give them large data sets and they set to work finding patterns in the data. Over time, with more data, they continue to find patterns and commonalities among diverse groups of data. But how diverse are the data sets? Are we giving the AIs data that accurately reflects the broad range of human diversity, or will we find that the gender and ethnicity variables reflect the gender and ethnicity of the majority of coders? And since much of this kind of bias is unconscious, and we would rather burn at the stake than admit that we might be contributing to the problem, what is to be done?
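One answer is to audit the data before it is used: compare the composition of the training set against a reference population and flag the groups that barely appear. The sketch below uses invented records and a hypothetical population benchmark (other groups omitted for brevity) purely to show what such a check might look like.

```python
from collections import Counter

# Invented training-set metadata; a real check would read the actual
# dataset's demographic labels.
dataset = [
    {"group": "white male"}, {"group": "white male"}, {"group": "white male"},
    {"group": "white female"}, {"group": "asian male"},
    {"group": "black female"},
]

# Hypothetical population benchmark (illustrative shares only).
reference_share = {
    "white male": 0.31, "white female": 0.31,
    "black male": 0.07, "black female": 0.07,
    "asian male": 0.03, "asian female": 0.03,
}

counts = Counter(r["group"] for r in dataset)
total = len(dataset)

for group, expected in reference_share.items():
    observed = counts.get(group, 0) / total
    flag = "  <-- under-represented" if observed < 0.5 * expected else ""
    print(f"{group:13s} dataset {observed:.0%}  population {expected:.0%}{flag}")
```

Groups that are missing or badly under-represented are exactly the groups for which the model’s learned patterns rest on almost no evidence.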
Luckily, the world has a beautiful young Poet of Code, and Joy is slinging on her red cape and dashing to the rescue of a world that is busy writing algorithms and code and collecting data sets that are skewed, and that do not reflect the true nature of the world. This matters, because the nature of big data and machine learning means that these identified patterns can move around the world, and be accepted as accurate and valid, with no further cross-checking or audits for accuracy.
And this matters because those data sets and algorithms are being used to determine everything from your credit score to your employment to how long you should spend in jail, and whether you should ever qualify for a loan again.
Lawyers and Professional Services
In finance and law enforcement, complex decision-making is being left to AIs with increasing frequency. But in the excitement of actually getting these systems to work, have we stopped to make sure they are working well? If there is even the tiniest bit of bias in the data sets, it will spread and replicate on a massive scale as the systems are used more and more across the world. And that means the exclusionary nature of bias is going to spread like a virus.
Unbiased Decisions
And here comes the Algorithmic Justice League, Coding for Justice, finding ways to introduce full-spectrum data sets that eliminate inclusion bias, along with a system to audit the algorithms. They want to make sure the systems have been given the data they need to do the job correctly. Joy, keep that red cape close by; the world needs you.
When we develop a new tool in scientific research, it has to be tested for validity and variability in order to generate unbiased decisions. We have to audit the results and confirm the statistics. Sometimes setting a new piece of scientific research before the editorial board of a peer-reviewed journal is like watching a bunch of wolves stare at a rabbit, drool dripping from their blood-stained muzzles. But there is no question that, after the peers have torn into the research and checked under every piece of data for some random variable that is screwing up the works, the study is left without obvious problems like a skewed data set or bias in the study design.
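Machine learning decisions can be audited in the same spirit. One widely used statistic, borrowed from employment-discrimination analysis, is the four-fifths (80%) rule: the selection rate for the least-favoured group should be at least 80% of the rate for the most-favoured group. The sketch below applies it to invented loan decisions; a real audit would run over the system’s actual outputs.

```python
from collections import defaultdict

# Invented decision log: which demographic group each applicant belongs
# to and whether the model approved the loan.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

approved, totals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approved[d["group"]] += int(d["approved"])

rates = {g: approved[g] / totals[g] for g in totals}
for g, rate in rates.items():
    print(f"group {g}: {rate:.0%} approval rate")

# Four-fifths rule: ratios below 0.80 are a conventional signal that
# the decisions deserve a closer look.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f} (below 0.80 warrants review)")
```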
We have been so excited to have actually made machine learning work, and by the possibilities of using it to make a profit and to make the hard decisions, that a peer-review-style audit process is not yet in place. And as long as there is even the smallest unconscious bias in the data sets, the possibility of exclusionary bias in our financial and criminal justice algorithms is dangerous and real. Contact us for help generating unbiased decisions.