Need a job? A loan? The software making the decision may have a racial or gender bias
Usually, the American Civil Liberties Union tackles civil rights issues caused directly by other humans.
Now, the organization has partnered with a group of researchers to form the AI Now Initiative, out of concern about machines that are showing signs of hidden bias.
For example, a ProPublica investigation found disparities in a computer algorithm being used to predict whether a person would commit a crime in the future, a practice known as “risk assessment.”
The findings? The mathematical formula was more likely to mislabel black defendants as future criminals, while labeling white defendants as lower risk.
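The kind of disparity ProPublica measured can be illustrated with a small sketch. The data below is invented for illustration (it is not ProPublica's dataset, and the field names are assumptions), but it shows the basic audit question: among people who did *not* reoffend, what share of each group was wrongly flagged as high risk?

```python
# Hypothetical illustration, not ProPublica's actual data or methodology:
# comparing false positive rates of a binary risk label across two groups.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Toy records: each has the algorithm's label and the actual outcome.
group_a = [
    {"high_risk": True,  "reoffended": False},
    {"high_risk": True,  "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": True,  "reoffended": True},
]
group_b = [
    {"high_risk": True,  "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": True},
]

print(false_positive_rate(group_a))  # 2 of 3 non-reoffenders flagged
print(false_positive_rate(group_b))  # 1 of 3 non-reoffenders flagged
```

If the two rates differ substantially between groups, the model mislabels one group's non-reoffenders as future criminals more often, even if its overall accuracy looks similar, which is the pattern the investigation reported.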
Another example is algorithmic bias in job hiring tools used by human resources departments, which may end up excluding potential employees based on gender, race, age, disability, or military service, all of which are protected classes under employment law, the Harvard Business Review reported.
These types of algorithmic biases could be dangerous for low-income communities and minorities, especially since algorithms can be used to decide who receives a loan or gets a job interview, according to the MIT Technology Review.
And neither tech companies nor the government has addressed the problem, the MIT Technology Review reported.
Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, told the MIT Technology Review in an email that these types of biases could be more common than people think.
“It’s still early days for understanding algorithmic bias,” Crawford and Whittaker told the MIT Technology Review. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”
Crawford and Whittaker said President Donald Trump’s administration hasn’t shown any interest in regulating these issues.
The MIT Technology Review also spoke to Cathy O’Neil, a mathematician and author of Weapons of Math Destruction. O’Neil said people are often too reliant on mathematical models to remove human bias.
“(Algorithms) replace human processes, but they’re not held to the same standards,” O’Neil told the MIT Technology Review. “People trust them too much.”
This story was originally published July 13, 2017 at 5:44 PM with the headline "Need a job? A loan? The software making the decision may have a racial or gender bias."