Deep Learning for Medical Imaging: From Diagnosis Prediction to its Explanation

Singla, Sumedha (2022) Deep Learning for Medical Imaging: From Diagnosis Prediction to its Explanation. Doctoral Dissertation, University of Pittsburgh. (Unpublished)

PDF (Final Version): 31MB

Abstract

Deep neural networks (DNNs) have achieved unprecedented performance on computer-vision tasks across business, technology, and science. While substantial effort goes into engineering highly accurate architectures and providing usable model explanations, most state-of-the-art approaches are designed first for natural images and only then translated to the medical domain. This dissertation addresses that gap by proposing novel architectures that integrate the domain-specific constraints of medical imaging into the design of both DNN models and their explanations.

Prior work on DNN design commonly performs lossy data manipulation to make volumetric data compatible with 2D or low-resolution 3D architectures. To address this, we proposed a novel DNN architecture that transforms volumetric medical imaging data of any resolution into a robust representation that is highly predictive of disease. For DNN model explanation, current methods primarily focus on highlighting the image regions (the where) essential to a classification decision. Location information alone is insufficient for applications in medical imaging. We therefore designed counterfactual explanations that visually demonstrate how adding or removing image features changes the DNN's decision to be positive or negative for a diagnosis.
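To make the resolution-agnostic idea concrete, here is a minimal PyTorch sketch, not the dissertation's actual architecture: adaptive pooling collapses volumes of any resolution into a fixed-size representation that feeds a classifier. All names and layer sizes (VolumetricEncoder, the 4x4x4 pooled grid, n_classes) are illustrative assumptions.

    # Minimal sketch (illustrative, not the dissertation's architecture):
    # a 3D CNN that accepts volumes of any resolution by collapsing the
    # remaining spatial dimensions with adaptive average pooling.
    import torch
    import torch.nn as nn

    class VolumetricEncoder(nn.Module):
        def __init__(self, in_channels: int = 1, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            )
            # Map whatever spatial size remains to a fixed 4x4x4 grid, so
            # volumes of any resolution yield the same representation size.
            self.pool = nn.AdaptiveAvgPool3d((4, 4, 4))
            self.classifier = nn.Linear(32 * 4 * 4 * 4, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = self.pool(self.features(x)).flatten(1)
            return self.classifier(z)

    # Volumes of two different resolutions pass through the same network.
    model = VolumetricEncoder()
    print(model(torch.randn(1, 1, 64, 64, 64)).shape)    # torch.Size([1, 2])
    print(model(torch.randn(1, 1, 96, 128, 128)).shape)  # torch.Size([1, 2])

The counterfactual idea can be sketched in the same spirit. The dissertation constrains counterfactuals to stay realistic; the gradient-only version below is a deliberate simplification, and model, target_prob, and the penalty weight are assumed placeholders.

    # Minimal sketch of a counterfactual explanation: perturb an image until
    # the classifier's posterior for a target diagnosis crosses a desired
    # value, while staying close to the original image; the change itself
    # (x_cf - x) is what gets visualized. Illustrative only.
    import torch

    def counterfactual(model, x, target_class: int, target_prob: float = 0.9,
                       steps: int = 200, lr: float = 0.05):
        x_cf = x.clone().detach().requires_grad_(True)
        opt = torch.optim.Adam([x_cf], lr=lr)
        for _ in range(steps):
            p = model(x_cf).softmax(dim=1)[0, target_class]
            if p.item() >= target_prob:
                break
            # Push the prediction toward the target class while penalizing
            # distance from the original image.
            loss = -torch.log(p + 1e-8) + 0.1 * (x_cf - x).pow(2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return x_cf.detach(), (x_cf - x).detach()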

Further, we reinforced the explanations by quantifying the causal relationship between neurons in the DNN and relevant clinical concepts. These clinical concepts are derived from radiology reports and were corroborated by clinicians as useful for identifying the underlying diagnosis. In the medical domain, multiple conditions may have a similar visual appearance, and it is common to encounter images with conditions that are novel to a pre-trained DNN. A DNN should refrain from making over-confident predictions on such data and instead mark it for a second reading. Our final work proposed a novel strategy to make any off-the-shelf DNN classifier adhere to this clinical requirement.
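A correlation-level stand-in for the neuron-to-concept analysis is a simple probe fit from hidden activations to a report-derived concept label. The dissertation estimates a causal rather than merely predictive relationship, so this probe, and every name in it, is only an illustrative approximation of the idea.

    # Minimal sketch: fit a linear probe from hidden-layer activations to a
    # binary clinical-concept label mined from radiology reports. Large
    # probe weights suggest which neurons track the concept (correlation
    # only, not the dissertation's causal analysis).
    import torch
    import torch.nn as nn

    def concept_probe(activations: torch.Tensor, concept_labels: torch.Tensor,
                      epochs: int = 100, lr: float = 0.1) -> nn.Linear:
        # activations: (n_samples, n_neurons); concept_labels: (n_samples,) in {0, 1}
        probe = nn.Linear(activations.shape[1], 1)
        opt = torch.optim.SGD(probe.parameters(), lr=lr)
        loss_fn = nn.BCEWithLogitsLoss()
        for _ in range(epochs):
            loss = loss_fn(probe(activations).squeeze(1), concept_labels.float())
            opt.zero_grad()
            loss.backward()
            opt.step()
        return probe

For the abstention requirement, a minimal sketch wraps any off-the-shelf classifier and defers low-confidence cases to a human reader. The fixed softmax threshold is an assumed stand-in for the dissertation's actual strategy, which is not specified in this abstract.

    # Minimal sketch: predict when confident, otherwise flag the case for a
    # second reading. The 0.8 threshold is an illustrative assumption.
    import torch

    @torch.no_grad()
    def predict_or_defer(model, x, threshold: float = 0.8):
        probs = model(x).softmax(dim=1)
        conf, pred = probs.max(dim=1)
        defer = conf < threshold  # True where a human should take over
        return pred, conf, defer

    # Usage: pred, conf, defer = predict_or_defer(model, batch_of_volumes)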


Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors: Singla, Sumedha (sumedha.singla@pitt.edu; Pitt username: sus98; ORCID: 0000-0003-3477-0524)
ETD Committee:
  Committee Chair: Batmanghelich, Kayhan (kayhan@pitt.edu; Pitt username: kayhan; ORCID: 0000-0001-9893-9136)
  Committee Co-Chair: Hauskrecht, Milos (milos@pitt.edu; Pitt username: milos; ORCID: 0000-0002-7818-0633)
  Committee Member: Kovashka, Adriana (aik85@pitt.edu; Pitt username: aik85; ORCID: 0000-0003-1901-9660)
  Committee Member: Jia, Xiaowei (xiaowei@pitt.edu; Pitt username: xiaowei; ORCID: 0000-0001-8544-5233)
  Committee Member: Triantafillou, Sofia (sof.triantafillou@gmail.com; ORCID: 0000-0002-2535-0432)
Date: 6 September 2022
Date Type: Publication
Defense Date: 12 July 2022
Approval Date: 6 September 2022
Submission Date: 3 August 2022
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 201
Institution: University of Pittsburgh
Schools and Programs: School of Computing and Information > Computer Science
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: Deep Learning, Medical Imaging, Explainable AI, XAI, Counterfactual Explanations, Lung, Causal Concept
Date Deposited: 06 Sep 2022 20:34
Last Modified: 06 Sep 2022 20:34
URI: http://d-scholarship.pitt.edu/id/eprint/43339
