
Design and Evaluation of User-Centered Explanations for Machine Learning Model Predictions in Healthcare

Barda, Amie J (2020) Design and Evaluation of User-Centered Explanations for Machine Learning Model Predictions in Healthcare. Doctoral Dissertation, University of Pittsburgh. (Unpublished)



Challenges in interpreting some high-performing models present complications in applying machine learning (ML) techniques to healthcare problems. Recently, there has been rapid growth in research on model interpretability; however, approaches to explaining complex ML models are rarely informed by end-user needs and user evaluations of model interpretability are lacking, especially in healthcare. This makes it challenging to determine what explanation approaches might enable providers to understand model predictions in a comprehensible and useful way. Therefore, I aimed to utilize clinician perspectives to inform the design of explanations for ML-based prediction tools and improve the adoption of these systems in practice.
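The dissertation's own explanation designs are not reproduced in this abstract, but the kind of prediction explanation at issue can be sketched minimally. The example below is purely illustrative: the feature names, weights, and bias are hypothetical and do not come from the pediatric mortality model studied here. It shows one common, simple explanation form — per-feature contributions to a linear (logistic) risk score:

```python
import math

# Hypothetical logistic risk model: these weights and features are
# illustrative only, NOT taken from the dissertation's actual model.
WEIGHTS = {"heart_rate_z": 0.8, "lactate_z": 1.1, "gcs_z": -0.9}
BIAS = -2.0

def predict_risk(features):
    """Return predicted risk (a probability) from the logistic model."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def explain(features):
    """Per-feature contributions to the linear score, largest magnitude first.

    Each contribution is weight * value; positive contributions push
    the predicted risk upward, negative ones push it downward.
    """
    contribs = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

patient = {"heart_rate_z": 1.5, "lactate_z": 2.0, "gcs_z": -1.0}
risk = predict_risk(patient)
for name, contrib in explain(patient):
    direction = "increases" if contrib > 0 else "decreases"
    print(f"{name}: {direction} risk (contribution {contrib:+.2f})")
```

Whether a display like this is comprehensible or useful to clinicians is exactly the kind of question the user-centered design process described below addresses.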
In this dissertation, I proposed a new theoretical framework for designing user-centered explanations for ML-based systems. I then utilized the framework to propose explanation designs for predictions from a pediatric in-hospital mortality risk model. I conducted focus groups with healthcare providers to obtain feedback on the proposed designs, which was used to inform the design of a user-centered explanation. The user-centered explanation was evaluated in a laboratory study to assess its effect on healthcare provider perceptions of the model and decision-making processes.
The results demonstrated that the user-centered explanation design improved provider perceptions of utilizing the predictive model in practice, but exhibited no significant effect on provider accuracy, confidence, or efficiency in making decisions. Limitations of the evaluation study design, including a small sample size, may have affected the ability to detect an impact on decision-making. Nonetheless, the predictive model with the user-centered explanation was positively received by healthcare providers, and demonstrated a viable approach to explaining ML model predictions in healthcare. Future work is required to address the limitations of this study and further explore the potential benefits of user-centered explanation designs for predictive models in healthcare.
This work contributes a new theoretical framework for user-centered explanation design for ML-based systems that is generalizable outside the domain of healthcare. Moreover, the work provides meaningful insights into the role of model interpretability and explanation in healthcare while advancing the discussion on how to effectively communicate ML model information to healthcare providers.




Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators: Barda, Amie J (Email: ajd109@pitt.edu; Pitt Username: ajd109; ORCID: 0000-0002-7361-077X)
ETD Committee:
Committee Member: Becich
Committee Member: Horvat
Committee Member: Landsittel
Committee Member: Visweswaran
Thesis Advisor: Hochheiser
Date: 1 January 2020
Date Type: Publication
Defense Date: 12 December 2019
Approval Date: 1 January 2020
Submission Date: 20 December 2019
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 184
Institution: University of Pittsburgh
Schools and Programs: School of Medicine > Biomedical Informatics
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: model interpretability; user-centered design; pediatric mortality; risk assessment; machine learning; human computer interaction
Date Deposited: 01 Jan 2020 06:34
Last Modified: 01 Jan 2020 06:34


