
Interpretable deep learning for advancing precision oncology

Ren, Shuangxia (2024) Interpretable deep learning for advancing precision oncology. Doctoral Dissertation, University of Pittsburgh. (Unpublished)

PDF (Final Submission), Primary Text: Download (14MB)

Abstract

The goal of precision oncology is to provide each patient with the most appropriate cancer treatment. This approach involves understanding the impact of genomics on individual cancer cases and using that understanding to tailor therapies targeting the genetic mutations that drive the malignancy. This level of personalization can be achieved by applying artificial intelligence (AI) to predict, from a patient's genomic data, how their cancer cells will respond to anticancer medications.

Three deep learning methods were developed for drug sensitivity prediction: 1) We used graph-regularized matrix factorization to decompose the drug response matrix into latent vectors for cell lines and drugs, and developed mapping functions to convert observed molecular characteristics into these vectors. 2) A model was created to learn a mapping function from Somatic Genetic Alterations (SGAs) to gene expression; the hidden representations between these observed datasets served as cell-line features for predicting drug response. 3) An extended version of the Variational Autoencoder (VAE) was used to condense gene expression and SGA data into a low-dimensional, informative representation of cell lines. These representations were then used for predicting drug sensitivity in place of the raw omics data.
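The first approach can be illustrated with a minimal sketch of graph-regularized matrix factorization. All data, shapes, and hyperparameters below are hypothetical toy values, not the dissertation's actual implementation: a drug-response matrix R (cell lines x drugs) is factored as U @ V.T, and a graph Laplacian penalty encourages cell lines that are similar in the graph to receive similar latent vectors.

```python
import numpy as np

# Toy graph-regularized matrix factorization (hypothetical sizes/values).
rng = np.random.default_rng(0)
n_cells, n_drugs, k = 6, 4, 2
R = rng.normal(size=(n_cells, n_drugs))            # observed drug responses
A = (rng.random((n_cells, n_cells)) > 0.7).astype(float)
A = np.triu(A, 1) + np.triu(A, 1).T                # cell-line similarity graph
L = np.diag(A.sum(axis=1)) - A                     # graph Laplacian

U = 0.1 * rng.normal(size=(n_cells, k))            # cell-line latent factors
V = 0.1 * rng.normal(size=(n_drugs, k))            # drug latent factors
lam, lr = 0.1, 0.01                                # penalty weight, step size

def loss(U, V):
    # Reconstruction error plus the graph-smoothness penalty tr(U.T L U)
    err = R - U @ V.T
    return float((err ** 2).sum() + lam * np.trace(U.T @ L @ U))

start = loss(U, V)
for _ in range(2000):
    err = R - U @ V.T
    U = U + lr * (err @ V - lam * (L @ U))         # gradient step on U
    V = V + lr * (err.T @ U)                       # gradient step on V
```

In a full system, the learned cell-line factors U would be produced by a mapping function from molecular features, so responses can be predicted for unseen cell lines.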

While all three models leverage representation learning for cell lines and then apply the representation to drug sensitivity prediction, notable differences distinguish them. The first model excels in prediction performance, accurately forecasting drug responses for unseen drugs and cell lines. However, it falls short in interpretability, that is, the transparency of the model's internal workings and how easily humans can understand the way it transforms input data into predictions. Conversely, the second model, though interpretable, falls short in performance because it relies solely on SGAs to predict drug responses. To balance predictive efficacy and interpretability, we developed the third model, which takes gene expression and SGAs as input and uses a Deep Generative Model (DGM) with a self-attention mechanism to enhance interpretability. All three approaches contribute to the broader evolution of cancer medicine by aligning predictive accuracy with interpretive insight.
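The encoder side of the third approach can be sketched as follows. Layer sizes and weights are hypothetical placeholders, not the dissertation's model: gene expression and SGA features are concatenated, encoded to the mean and log-variance of a low-dimensional Gaussian, and a latent cell-line representation is sampled via the reparameterization trick; that latent code would then feed a drug-response predictor in place of the raw omics.

```python
import numpy as np

# Toy VAE-style encoder forward pass (hypothetical layer sizes/weights).
rng = np.random.default_rng(1)
n_expr, n_sga, n_hidden, n_latent = 100, 50, 32, 8

expr = rng.normal(size=n_expr)                       # gene expression profile
sga = rng.integers(0, 2, size=n_sga).astype(float)   # binary somatic alterations
x = np.concatenate([expr, sga])                      # joint omics input

W1 = 0.1 * rng.normal(size=(n_hidden, x.size))       # shared encoder layer
W_mu = 0.1 * rng.normal(size=(n_latent, n_hidden))   # mean head
W_lv = 0.1 * rng.normal(size=(n_latent, n_hidden))   # log-variance head

h = np.tanh(W1 @ x)
mu, log_var = W_mu @ h, W_lv @ h                     # Gaussian parameters
eps = rng.normal(size=n_latent)
z = mu + np.exp(0.5 * log_var) * eps                 # reparameterization trick

# KL divergence of q(z|x) from the standard-normal prior (VAE regularizer)
kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
```

Training would add a decoder and optimize reconstruction loss plus this KL term; the self-attention mechanism described above would sit inside the encoder to highlight which input genes drive the latent representation.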



Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors: Ren, Shuangxia (shr81@pitt.edu)
ETD Committee:
Committee Member: Cooper, Greg (gfc@pitt.edu)
Committee Member: Hochheiser, Harry (harryh@pitt.edu)
Committee Member: Chen, Lujia (luc17@pitt.edu)
Committee Chair: Lu, Xinghua (xinghua@pitt.edu)
Date: 29 August 2024
Date Type: Publication
Defense Date: 24 July 2024
Approval Date: 29 August 2024
Submission Date: 31 July 2024
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 132
Institution: University of Pittsburgh
Schools and Programs: School of Computing and Information > Intelligent Systems Program
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: Drug Sensitivity; Deep Learning; Interpretability; Genomic
Date Deposited: 29 Aug 2024 20:24
Last Modified: 29 Aug 2024 20:24
URI: http://d-scholarship.pitt.edu/id/eprint/46791


