Professional

Track: Analytics Process

Addressing Algorithmic Bias and Ensuring Fairness in Analytical Projects

Monday, April 12, 2:40-3:20pm EDT

Machine learning models make mistakes. As news stories have shown, even large technology companies have struggled to ensure algorithmic fairness. Amazon, for instance, had to abandon a machine learning tool used for recruiting because it ranked the job applications of men higher than those of women. Google's photo application labeled dark-skinned people as gorillas. COMPAS, a courtroom risk assessment software program, was almost twice as likely to falsely label Black defendants as future criminals compared to white defendants. These examples highlight the problems that can arise when we apply statistical and machine learning models across application areas. In this talk, we will focus on how to address potential bias and ensure fairness, from data acquisition to model deployment. We will summarize the current research from a practitioner's perspective, explain the tools and approaches available, and discuss how we need to change the way we approach data analytics projects.
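The COMPAS disparity mentioned above is commonly quantified as a gap in false positive rates between groups. Below is a minimal, hypothetical sketch of that metric (the data and function are illustrative, not from the talk):

```python
# Hypothetical sketch: measuring the false-positive-rate gap between two
# groups, the disparity metric behind the COMPAS findings cited above.

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives (label 0) that the model flagged as positive."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

# Toy labels and predictions for two demographic groups (made-up numbers).
group_a_true = [0, 0, 0, 1, 0, 1]
group_a_pred = [1, 1, 0, 1, 0, 1]   # 2 of 4 true negatives flagged -> FPR 0.50
group_b_true = [0, 0, 0, 1, 0, 1]
group_b_pred = [0, 1, 0, 1, 0, 1]   # 1 of 4 true negatives flagged -> FPR 0.25

fpr_a = false_positive_rate(group_a_true, group_a_pred)
fpr_b = false_positive_rate(group_b_true, group_b_pred)
print(f"FPR gap: {fpr_a - fpr_b:.2f}")  # group A is falsely flagged twice as often
```

An audit along these lines — computing error rates separately per group rather than in aggregate — is one of the practical checks the talk's "data acquisition to model deployment" pipeline perspective calls for.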

Margret Bjarnadottir

Associate Professor of Management Science and Statistics at University of Maryland

Dr. Margrét Vilborg Bjarnadóttir is an Associate Professor of Management Science and Statistics at Robert H. Smith School of Business, University of Maryland. Dr. Margrét Bjarnadóttir graduated from MIT’s Operations Research Center in 2008, defending her thesis titled “Data Driven Approach to Health Care, Application Using Claims Data” and has since focused on the application of operations research methods using large scale data. Her work spans applications in health care, finance, people analytics and sports! Her work is published in for example Operations Research, Production and Operation Management and JAMIA. Dr. Bjarnadóttir has consulted with both health care start-ups on risk modeling using health care data as well as governmental agencies such as a central bank on data-driven fraud detection algorithms.