AI Is Biased: Why It Matters and How to Fix Algorithmic Injustice

In a rather surprising turn of events, multiple tech companies (Amazon, IBM, Microsoft) have pledged to stop providing facial recognition technology to police departments in light of the #BlackLivesMatter movement. In this episode, Dr. Annette Zimmermann of Princeton University gives context to the rapidly unfolding public debate on algorithmic justice and the biases of artificial intelligence technology.

Dr. Zimmermann, Arjun, and Tiger touch on some of the longest-standing questions in AI research and moral philosophy:

  • What is algorithmic bias? Is all bias bad? How do we understand algorithmic bias from a moral and philosophical perspective, as well as from a technical perspective?

  • Where do we see algorithms exacerbating structural injustices in society, and in what precise ways are algorithms doing so? 

  • Which questions are not worth asking, or amount to mere “AI alarmism” that is unhelpful to the discourse? As Dr. Zimmermann will explain, the narrative of AI posing existential threats to humanity distracts from ethical questions that “weak” AI is already raising now. Instead of focusing on the “doomsday” narrative, we should look into how algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains – from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments.

  • Dr. Zimmermann has argued that an AI system’s fairness should not be judged solely by the harm it actually causes: a system can be unfair if it distributes the risk of harm unequally, even when no harm ends up being distributed unfairly. We ask her to explain this idea in greater depth.

  • AI fairness is an active research area. Is reducing bias in the algorithm a fundamental solution, or more of a patch over a deeper problem? If we remove certain features and modify the data and/or the algorithm, are we playing God to some extent? In other words, what gives AI researchers the right (let alone the responsibility) to change aspects of the dataset and algorithm? (A minimal sketch after this list illustrates why removing features alone can fall short.)

  • Is it possible to assign responsibility for an AI system’s decisions to its creators, given that they have a sufficient degree of control (e.g. over the features and the predicted variable)? Is the creator of an AI system culpable for the decisions made by that AI?

  • What are the dangers of private companies providing AI services to public institutions? How can we combine top-down and bottom-up approaches to reduce the prevalence of biased algorithms making societal decisions (e.g. San Francisco’s ban on facial recognition)? Do we need a redesign of our social and democratic institutions to deal with algorithmic decision making? What do you see as the fundamental societal changes we need to make?

  • Is it fundamentally possible to ask ex ante questions about a technology? Given the blinding pace of technological change, it seems almost impossible to have the prescience to predict the particular purposes a technology might be put to (and the lack of concrete examples makes such arguments harder still). Is it cynical, or simply accurate, to say that you only learn a technology’s impact after it is deployed (Facebook and the 2016 election being one example)?

  • And many more questions to come…
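One of the themes above, whether de-biasing an algorithm is a fix or a patch, can be made concrete with a small illustration. Below is a minimal, hypothetical Python sketch (entirely synthetic data; variable names like zip_code are made up for illustration) of “fairness through unawareness”: the protected attribute is dropped from the model, yet a correlated proxy feature reintroduces the disparity.

```python
# Hypothetical sketch: dropping a protected attribute does not remove bias
# when a correlated proxy feature remains in the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
zip_code = group + rng.normal(0, 0.3, n)   # proxy strongly correlated with group
income = rng.normal(50, 10, n)
# Historical outcomes skewed in favor of group 0:
y = (income + 5 * (1 - group) + rng.normal(0, 5, n) > 52).astype(int)

# "Debiased" model: the protected attribute is excluded, the proxy is kept.
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, y)

# Approval rates still differ by group, because zip_code encodes group.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```

The point of the sketch: because real-world features often encode protected attributes indirectly, simply deleting a column rarely resolves the deeper structural problem the episode discusses.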

Some initial thoughts from Tiger on the recent developments: Companies like Amazon and Ring have long offered elaborate arguments about how their tech helps the police (in very positive ways, according to their statistics and moral justifications). Meanwhile, concerns about the morality and accuracy of their technology were raised long before now, and it’s really not as if the BLM movement alone made the tech giants aware of people’s opposition, is it? So why did they suddenly pull out of their cooperation with the police? Is it because the tech giants are simply too scared to give their usual counterarguments under the current social discourse, even though those arguments are still somewhat valid? Or is it because their excuses were nonsense all along? It is important to distinguish the two: if their tech actually helps the police, then they shouldn’t abandon it just because #DefundThePolice is trending on Twitter. But if their tech has long been flawed, then they should issue stronger apologies and make more fundamental changes to their business models than just saying “hey, we’re no longer providing this facial recognition tech to the police.”

Dr. Annette Zimmermann is a postdoctoral researcher at Princeton University, affiliated with the University Center for Human Values (UCHV) and the Center for Information Technology Policy (CITP). Her current work focuses on how the disproportionate distribution of risk and uncertainty associated with emerging technologies like algorithmic decision-making and machine learning, including algorithmic bias and opacity, affects democratic values like equality and justice. She has a particular interest in algorithmic decision-making in criminal justice and policing.

Arjun Mani is a rising senior majoring in Computer Science. He does AI research in computer vision, specifically on getting computers to answer questions about images. He also leads Princeton Data Science, which promotes data science on campus by bringing in speakers and hosting workshops. He is deeply interested in the intersection of technology and society and in how technology can be used to build a better, more ethical future.

