AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas. For example, as the investigative news site ProPublica has found, a criminal justice algorithm used in Broward County, Florida, mislabeled African-American defendants as ‘high risk’ at nearly twice the rate it mislabeled white defendants. Other research has found that training natural language processing models on news articles can lead them to exhibit gender stereotypes.
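One well-documented way this shows up is in word embeddings trained on news text. Here is a minimal sketch, in Python, assuming the gensim library and its hosted copy of the Google News word2vec vectors (the embeddings analyzed by Bolukbasi et al. in 2016), of how a stereotype can be read straight out of such a model:

```python
# Minimal sketch: probing news-trained word embeddings for gender
# stereotypes. Assumes gensim is installed; the vectors are a large
# (~1.6 GB) one-time download.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# Complete the analogy "man is to computer_programmer as woman is to ...".
# In published analyses of these vectors, stereotyped terms such as
# "homemaker" rank near the top of the completions.
print(vectors.most_similar(positive=["woman", "computer_programmer"],
                           negative=["man"], topn=3))
```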
At a critical moment in the civil rights movement, when there is a very real and nuanced national conversation about systems change, can we effectively fight a battle around unconscious bias on two fronts? I would hate to see progress in this space undermined, at a far quicker rate, by code, modelling and machine learning that large sections of the community cannot see or understand.
Let’s not be naïve here: our unconscious biases are being mirrored, patterned and woven into the code that underpins many of the most significant platforms driving automation and prediction today. And for all of the training and internal checks & balances companies employ, these problems largely fly under the radar until they blow up into a big deal.
Consider Amazon’s failed AI project: the company had been building programs to review job applicants’ resumes, with the aim of identifying top talent and revolutionizing the hiring process. A year later, it was found that the new system was reviewing candidates in a gender-biased way. Essentially, Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a ten-year period. In an industry that wasn’t exactly diverse, being male became the dominant trait of the ‘ideal’ candidate the models identified. The system also actively penalized resumes through this learned methodology, concluding that the word ‘women’s’ was not a good thing, so graduates of all-women’s colleges and captains of women’s STEM or robotics clubs were unfairly downgraded.
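To see how easily this failure mode arises, here is a deliberately tiny, hypothetical sketch (not Amazon’s actual system or data) of a resume classifier that learns to penalize the token ‘women’s’ purely because the historical hiring outcomes it is trained on are skewed:

```python
# Hypothetical toy reconstruction of the failure mode described above:
# a classifier trained on skewed historical hiring outcomes learns a
# negative weight for the token "women" (the default tokenizer strips
# the "'s").
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, python developer",
    "software engineer, hackathon winner",
    "captain of women's robotics club, python developer",
    "women's coding society lead, software engineer",
]
hired = [1, 1, 0, 0]  # skewed historical outcomes the model learns from

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print(round(weights["women"], 3))  # negative: the learned penalty
```

The toy numbers don’t matter; the point is that nothing in this pipeline flags the proxy. The model simply reproduces whatever pattern sits in its training data.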
I’ll use that Amazon example as a segue to where I believe we can independently probe algorithms for bias, potentially revealing human biases that have gone unnoticed or unproven: colleges and universities.
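By way of illustration, here is a minimal sketch of the kind of independent audit a university team could run, on entirely hypothetical data: compute the false positive rate per group, the very disparity ProPublica measured in the Broward County tool:

```python
# Hypothetical audit records: the tool's risk label and the observed
# outcome for each defendant.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   1,   0,   0,   1,   0,   0,   0],
    "reoffended": [0,   0,   0,   1,   0,   0,   0,   1],
})

# False positive rate per group: the share of people who did NOT
# reoffend but were still labeled high risk.
no_reoffense = df[df["reoffended"] == 0]
fpr = no_reoffense.groupby("group")["high_risk"].mean()
print(fpr)  # a large gap between groups is the red flag auditors look for
```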
This is mainly a case of trust, credibility and research ambition: the research is a matter of public importance, and it would be approached and disseminated with that as its north star. So how can philanthropy help tackle what could be another of the ‘defining issues of our generation’?
It’s a simple equation that doesn’t need a computer science or maths background to figure out: fund research, and fund it now.
Academia is best positioned to identify and address these issues: to use AI to improve decision-making in ways that benefit currently and historically disadvantaged groups, and to responsibly take advantage of the several ways AI can improve on traditional human decision-making. One of those ways is that, unlike people, a machine learning system can be made to disregard variables that do not accurately predict outcomes in the data available to it.
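To make that last point concrete, here is a minimal sketch on toy data of one standard way to do this: L1 (‘lasso’) regularization drives the weights of features that don’t predict the outcome to exactly zero:

```python
# Toy illustration: an L1-regularized model assigns (near-)zero weight
# to a feature that has no relationship with the outcome.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 500

informative = rng.normal(size=n)    # actually predicts the outcome
uninformative = rng.normal(size=n)  # pure noise, no relationship
X = np.column_stack([informative, uninformative])
y = 3.0 * informative + rng.normal(scale=0.5, size=n)

model = Lasso(alpha=0.1).fit(X, y)
print(dict(zip(["informative", "uninformative"], model.coef_.round(3))))
# Expected: a large weight on the informative feature, ~0.0 on the noise.
```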
Funds could help recruit high-caliber graduate students from backgrounds underrepresented in computing, and cultivate interdisciplinary teams of scholars from across campus, spanning the computing and social sciences, to conduct research in a wide variety of fields relating to ethics & AI.
Academia is perfectly suited to this role because the challenges I have outlined require much more than technical solutions: determining when a system is fair enough to be released, for example, or in which situations fully automated decision-making should be permissible at all.
Ultimately, bias affects not only interpersonal relationships but also the diversity of an organization’s leadership and the actual outcomes of its programs & products. Each decision informed by implicit bias can have a far-reaching impact on the communities an organization serves, and with the rapid adoption of AI across a variety of applications, there is the possibility of exacerbating these issues in unforeseen ways.
Philanthropy has the ability to ensure that the AI systems our societies use in the future improve on human decision-making rather than unknowingly undermine it. Partnering with academia can drive critical research on bias in AI and elevate it into the larger national conversation about systems change and a fairer, more equitable society.