When: Thursday, Nov 1, 2018, 7:30–9 p.m.
Campus room: The Seattle Public Library, Microsoft Auditorium (1000 4th Ave., Seattle, WA 98104)
Event Types: Academics, Lectures/Seminars
Event sponsors: Microsoft, University of Maryland Baltimore County, UW Department of Philosophy, Science, Technology & Society Studies, and the Simpson Center for the Humanities
Description:

Artificial intelligence (AI) and data-intensive science are influencing all aspects of our lives: economic, political, social, intellectual, personal, and more. Our smartphones and search engines anticipate our needs and preferences, driverless cars and autonomous military weapons are no longer the stuff of science fiction, and life-changing judgments about everything from medical diagnoses and credit ratings to college admissions and parole decisions are informed by algorithm-driven data analysis. And yet little attention is paid to questions about the values that guide the algorithms used to build AI systems and to mine the massive data sets we’re generating in this wired, data-rich world. It is assumed that human-centric questions (Is the algorithm biased? What values should algorithms encode?) need not be asked in the course of development; they can be deferred until the technology is produced. This is highly problematic, for reasons articulated by philosophers of science: even the most highly automated and seemingly objective inquiry is deeply shaped by unspoken values and entrenched interests. Contributors to this panel will address a range of issues raised by a shared concern to ensure that algorithm-driven technologies like AI and data mining serve the public good.

- What constitutes the “public good” that data science and AI should serve, given the value pluralism of contemporary society?
- When is “algorithmic bias” a problem that should be mitigated? And under what conditions can algorithmic bias be a positive or useful feature?
- How can or should AI development and data science be reconfigured at the level of practice to ensure that questions about encoded values get appropriate attention?
- How do we deal with the opacity of the algorithms that make data mining and AI possible?
- What information about underlying values should AI developers and data scientists be required to disclose, and what’s needed to ensure accurate disclosure?
Panelists:

- Heather Douglas (Waterloo Chair in Science and Society, University of Waterloo), “Bases for Trust in AI”
- Eric Horvitz (Technical Fellow and Director of Microsoft Research Labs), “AI, People, and Society: Rising Questions and Directions”
- Sabina Leonelli (Professor of Sociology, Philosophy and Anthropology, University of Exeter), “Big Data Analysis and the Human Face of Automated Systems”

Moderator:

- David Danks (Thurstone Professor of Philosophy and Psychology, Carnegie Mellon University)
Link: psa2018.philsci.org…