Location: Gowen Hall, Room 301

Atoosa Kasirzadeh
Postdoctoral Research Fellow, Australian National University

Title: Mathematical and causal faces of explainable AI

Recent conceptual discussion on the nature of the explainability of Artificial Intelligence (AI) has largely been limited to causal investigations. This paper identifies some shortcomings of this approach in order to strengthen the debate on this subject. Building on recent philosophical work on the nature of explanations, I demonstrate the significance of two non-causal explanatory elements: (1) mathematical structures that are the grounds for capturing the decision-making situation and (2) statistical and optimality facts in terms of which the algorithm is designed and implemented. I argue that these elements feature directly in important aspects of AI explainability and interpretability. I then propose a hierarchical framework that acknowledges the existence of various types of explanation, each of which reveals an aspect of decision making and answers a different kind of why-question. The usefulness of this framework will be illustrated by bringing it to bear on some salient questions about AI and society.

Atoosa Kasirzadeh is a Ph.D. candidate in philosophy of science and technology at the University of Toronto and a postdoctoral research fellow with the Humanizing Machine Intelligence project at the Australian National University. In 2015, she obtained a Ph.D. in mathematics (operations research) from the École Polytechnique de Montréal, where she worked on optimization of large-scale decision-making problems. Before coming to Toronto, she was a graduate research fellow at the Group for Research in Decision Analysis (GERAD, Montreal) and the Transport and Mobility Laboratory (EPFL, Lausanne, Switzerland). Prior to Canada, she studied at the Royal Institute of Technology (Stockholm, Sweden) and the Amirkabir University of Technology (Tehran Polytechnic, Iran).
Her main research interests lie at the intersection of philosophy of science and mathematics, artificial intelligence (in particular machine learning), and decision-making. Currently, she is drawing on the rich philosophical discussion of explanation to learn lessons about the scope of the explanatory reasoning of AI. She is also very interested in the ethical and societal implications of emerging technologies.