
Opening the Black Box: Explanation and Transparency in Machine Learning

Creel, Kathleen (2021) Opening the Black Box: Explanation and Transparency in Machine Learning. Doctoral Dissertation, University of Pittsburgh. (Unpublished)

PDF (3MB): Restricted to University of Pittsburgh users only until 20 January 2025.

Abstract

Machine learning algorithms often predict and classify without offering human-cognizable reasons for their evaluations. When confronted with the opacity of machine learning in science, what is our epistemic situation, and what ought we to do to resolve it? To answer this question, I first outline a framework for increasing transparency in complex computational systems such as climate simulations and machine learning on big scientific data. I identify three different ways to attain knowledge about these opaque systems and argue that each fulfills a different explanatory purpose. Second, I argue that an analogy with the renormalization group helps us choose the better of two philosophically suggestive explanatory strategies, each of which relies on a different diagnosis of the success of deep learning. The coarse-graining strategy suggests that highlighting the parts of the input that most contributed to the output will be misleading without two things: an explanation for why the irrelevant parts are themselves irrelevant, and an explanation for the stability of the output under minor perturbations of the input. Armed with a framework for understanding transparency and an analysis of explanatory strategies appropriate for deep learning, I turn to an application of these frameworks to automated science: the use of machine learning to automate hypothesis generation, experimental design, the performance of experiments, and the evaluation of results. If automated science is to find patterns on its own, it must be able to solve the Molyneux problem for science, namely recognizing identity across modalities or data streams without the aid of causation or correlation.
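The two ingredients the coarse-graining strategy demands can be made concrete in code. Below is a minimal sketch, not drawn from the dissertation, of (a) an occlusion-style saliency score that highlights the input features contributing most to a model's output, and (b) a perturbation check that estimates the stability of that output under small random noise. The linear model and the names model, occlusion_saliency, and output_stability are hypothetical illustrations chosen for brevity, not the author's method.

    import numpy as np

    # Hypothetical stand-in for an opaque learned model: a fixed linear scorer.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=16)

    def model(x):
        """Return a scalar score for input vector x (the 'opaque' system)."""
        return float(weights @ x)

    def occlusion_saliency(x):
        """Score each input feature by how much zeroing it changes the output."""
        base = model(x)
        saliency = np.empty_like(x)
        for i in range(len(x)):
            occluded = x.copy()
            occluded[i] = 0.0          # remove one feature at a time
            saliency[i] = abs(base - model(occluded))
        return saliency

    def output_stability(x, eps=0.01, trials=100):
        """Estimate the largest output shift under small random input noise."""
        base = model(x)
        return max(abs(model(x + rng.normal(scale=eps, size=x.shape)) - base)
                   for _ in range(trials))

    x = rng.normal(size=16)
    print("three most salient features:", np.argsort(occlusion_saliency(x))[-3:])
    print("max output shift under small noise:", output_stability(x))

On the diagnosis sketched in the abstract, the saliency scores alone are not yet an explanation: one also needs something like the stability estimate, together with an account of why the low-saliency features are themselves irrelevant.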


Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors:
Creel, Kathleen (kac284@pitt.edu; Pitt username: kac284; ORCID: 0000-0001-7371-2680)
ETD Committee:
Committee Chair: Batterman, Robert (rbatterm@pitt.edu; Pitt username: rbatterm)
Committee Member: Mitchell, Sandra (smitchel@pitt.edu; Pitt username: smitchel)
Committee Member: Woodward, James (jfw@pitt.edu; Pitt username: jfw)
Committee Member: Chirimuuta, Mazviita (mchirimu@exseed.ed.ac.uk)
Committee Member: Danks, David (ddanks@cmu.edu)
Date: 20 January 2021
Date Type: Publication
Defense Date: 18 September 2020
Approval Date: 20 January 2021
Submission Date: 31 August 2020
Access Restriction: Access restricted to University of Pittsburgh users for a period of 2 years.
Number of Pages: 139
Institution: University of Pittsburgh
Schools and Programs: Dietrich School of Arts and Sciences > History and Philosophy of Science
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: machine learning, transparency, renormalization, explanation, Molyneux
Date Deposited: 20 Jan 2021 18:31
Last Modified: 29 Nov 2022 14:08
URI: http://d-scholarship.pitt.edu/id/eprint/39690
