Examining bias in AI

Assistant Professor of Computer Science Sarah Brown

Photo credit: Nora Lewis

Not long ago, human beings wrote the code that controlled every aspect of how a computer program worked: algorithms with explicit, hand-written instructions and controlled outcomes. But that limited programs to tasks for which people could write very specific rules. More complex decision making, like whether you receive a bank loan or which digital ads are most likely to get you to click in your Instagram feed, does not fit into that framework.

Now, machine-learning algorithms are “trained” on big datasets, large collections of examples. Letting a computer find the patterns in those examples and use them to make decisions allows us to solve more complex problems. However, these algorithms pick up on every single pattern in the data, not only the ones we want them to find. Gender and racial bias have been found in facial recognition software and in the labeling of digital image collections.
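As a rough illustration of that pitfall, here is a minimal, hypothetical sketch (not Brown’s code; the feature names and numbers are invented): a classifier trained on historically biased loan decisions never sees applicants’ group membership directly, yet it reproduces the disparity through a correlated proxy feature.

```python
# Hypothetical sketch of how a model trained on historically biased data
# reproduces that bias. All names and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (two demographic groups) and a true "qualification".
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historically biased outcomes: equally skilled applicants in group 1
# were approved less often, so the recorded labels carry that bias.
past_approved = (skill - 0.8 * group + rng.normal(0, 0.5, size=n)) > 0

# A "neutral" feature that happens to proxy for group (e.g., neighborhood).
proxy = group + rng.normal(0, 0.3, size=n)

X = np.column_stack([skill, proxy])          # group itself is never used
model = LogisticRegression().fit(X, past_approved)
pred = model.predict(X)

# The model never saw `group`, yet its approval rates differ by group,
# because the proxy feature lets it recover the historical pattern.
print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())
```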

Sarah Brown, an assistant professor of computer science, is examining how to prevent artificial intelligence from reinforcing biases. “My research is about how we can adapt machine-learning algorithms and the systems they are embedded in, in order to prevent AI from reinforcing patterns of discrimination,” she says.

Brown’s research is three-pronged. She is building tools that would enable experts in the social sciences to examine machine-generated algorithms. She’s developing better learning algorithms that “learn” from data and from social scientists’ expertise simultaneously.

And she has partnered with a colleague at Brown University, a social psychologist who studies how people engage with discrimination. “He does experiments to get people to show the ways they’re racist or biased, and together we’re looking at what people think of the different ways that machine learning defines fairness.”

How should machine learning define fairness? It’s a hard question. After all, people define fairness differently, Brown notes. “So, what do people prefer? And how does this vary across different social groups?

“Activists from lots of communities have been talking about how we need to hold algorithms accountable,” Brown says. It’s complex and multivalent theoretical work. A solution might remedy one problem in the pipeline, but not all. And there are myriad issues when it comes to machine learning. “And so how can we think about them all together?” Brown says. “That’s the big-picture question.”
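Those competing definitions can be made concrete. The short sketch below is a hypothetical illustration (not taken from Brown’s study) that computes two standard fairness criteria, demographic parity and equal opportunity, on the same invented predictions and shows that they can disagree.

```python
# Hypothetical illustration: two common fairness definitions applied to the
# same predictions can give conflicting verdicts. The data below is invented.
import numpy as np

# group: protected attribute; y: true outcome; pred: model's decision
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
y     = np.array([1, 1, 0, 0, 0, 1, 1, 1, 1, 0])
pred  = np.array([1, 1, 0, 0, 0, 1, 1, 0, 0, 0])

def selection_rate(pred, mask):
    """Fraction of a group that receives the positive decision."""
    return pred[mask].mean()

def true_positive_rate(y, pred, mask):
    """Fraction of a group's truly positive cases that the model approves."""
    positives = mask & (y == 1)
    return pred[positives].mean()

for g in (0, 1):
    mask = group == g
    print(f"group {g}: selection rate = {selection_rate(pred, mask):.2f}, "
          f"TPR = {true_positive_rate(y, pred, mask):.2f}")

# Demographic parity asks for equal selection rates across groups;
# equal opportunity asks for equal true positive rates. Here both groups
# are selected at the same rate (0.40), so demographic parity holds, but
# the true positive rates differ (1.00 vs 0.50), so equal opportunity fails:
# satisfying one definition of fairness does not satisfy the other.
```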

“In order for the deep ideas from the humanities to help us in computing there has to be some sort of compatibility. It’s important to bring these ideas together.”
Sarah Brown

Brown has been working with undergraduates on modeling ways data can be biased. She and her students examine how data can capture the fact that discrimination happens in the world, Brown says. “We want our algorithm to not replicate that discrimination. We start with simple problems and scale them up to see how they apply in a broader context. I try to use the practical as the entry point.”
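One hypothetical way to set up such an exercise (these are not the students’ actual models, just a sketch under simple assumptions) is to write small data-generating functions for different bias mechanisms, for example biased labels inherited from past decisions versus noisier measurements for one group.

```python
# Hypothetical sketch: two simple data-generating models for how
# discrimination can end up in a dataset. Parameters are invented.
import numpy as np

rng = np.random.default_rng(1)

def label_bias(n=5000, penalty=0.7):
    """Outcomes were decided unfairly: group 1 faced a higher bar."""
    group = rng.integers(0, 2, size=n)
    skill = rng.normal(0, 1, size=n)
    label = (skill > penalty * group).astype(int)   # same skill, different bar
    return group, skill, label

def measurement_bias(n=5000, noise=1.5):
    """Outcomes were fair, but group 1's feature was measured more noisily."""
    group = rng.integers(0, 2, size=n)
    skill = rng.normal(0, 1, size=n)
    label = (skill > 0).astype(int)                  # fair outcome
    observed = skill + rng.normal(0, noise, size=n) * group
    return group, observed, label

for name, gen in [("label bias", label_bias), ("measurement bias", measurement_bias)]:
    group, feature, label = gen()
    rate0 = label[group == 0].mean()
    rate1 = label[group == 1].mean()
    print(f"{name}: positive rate group 0 = {rate0:.2f}, group 1 = {rate1:.2f}")

# Under label bias the recorded positive rates themselves differ by group.
# Under measurement bias the rates match, but any model trained on the noisy
# feature will predict less accurately for group 1, a subtler kind of harm.
```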

Complex problems require creative thinkers

Brown’s interest in fairness and equity is longstanding. In college, she was involved in diversity and outreach work. As a postdoc at UC Berkeley, she joined an organization called The Algorithmic Fairness and Opacity Group. That’s when Brown realized she could craft a professional life that included all her varied interests. Brown counsels her computer science students to think expansively about their educations, too. Computation alone won’t solve complex societal problems, she says. “Our data science bachelor of arts program is set up to encourage students to double major. We want to see students double-majoring in data science and something in the arts and humanities.

“I read a lot of things that are not computer science to inform my work,” Brown says. “In order for the really deep ideas from the humanities and social sciences to help us in computing there has to be some sort of compatibility. It’s important to bring these ideas together. We have to have a shared language. Working at disciplinary boundaries is really exciting and really necessary.

“That’s where the hardest problems are.”

—By Marybeth Reilly-McGreen