Abstract: Machine learning has taken over our world, in more ways than we realize. You might get book recommendations, an efficient route to your destination, or even a winning strategy for a game of Go. But you might also be admitted to college, granted a loan, or hired for a job based on algorithmically enhanced decision-making. We believe machines are neutral arbiters: cold, calculating entities that always make the right decision, that can see patterns our human minds can't or won't. But are they? Or is decision-making-by-algorithm a way to amplify, extend and make inscrutable the biases and discrimination that are prevalent in society? To answer these questions, we need to go back - all the way to the original ideas of justice and fairness in society. We also need to go forward - towards a mathematical framework for talking about justice and fairness in machine learning. I will talk about the growing landscape of research in algorithmic fairness: how we can reason systematically about biases in algorithms, and how we can make our algorithms fair(er).

Bio: Suresh Venkatasubramanian is an associate professor in the School of Computing at the University of Utah. He holds a BTech in CSE from IIT Kanpur, earned his Ph.D. at Stanford University, and did a stint at AT&T Research before joining the U. His research interests include computational geometry, data mining and machine learning, with special interests in high-dimensional geometry, large-data algorithms, clustering and kernel methods. He received an NSF CAREER award in 2010. He now spends much of his time thinking about the problem of "algorithmic fairness": how we can ensure that algorithmic decision-making is fair, accountable and transparent. His work has been covered on Science Friday, NBC News, and Gizmodo, as well as in various print outlets.