Reinforcement learning and stochastic control for sepsis treatment: the promise, obstacles and potential solutions
Nanayakkara, Thesath (2022). Doctoral Dissertation, University of Pittsburgh. (Unpublished)
Abstract
We develop clinically motivated, computational methods for sepsis decision-making. Sepsis is a life-threatening syndrome with enormous mortality, morbidity, and economic burden. Despite decades of research spanning various academic disciplines, a thorough understanding of sepsis treatment has proved elusive. Recent advances in data-driven machine learning and control methods have led to numerous attempts to gain insight and learn intelligent treatment strategies directly from observed data. Stochastic optimal control and reinforcement learning are particularly popular, as they are a natural fit for formalizing clinical decision-making. However, although such methods carry significant promise, there are multiple obstacles at all levels. Thus, the goal of our work is to identify and address these challenges and to propose novel solutions. In particular, we focus on formalizing the problem in a stochastic control framework, encoding physiologic domain knowledge, improving the patient state representation, and investigating associated uncertainties. Through a combination of control theory, deep representation learning, and the integration of mechanistic modeling, we introduce several improvements and novel directions to advance the current status quo of data-driven interventions for clinical sepsis. We show how our methods can supplement clinicians, provide new directions for future computational research, and potentially uncover valuable hints toward better treatment strategies.
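To make the framing concrete, the following is a minimal sketch of how clinical decision-making is typically formalized as a Markov decision process (MDP), the abstraction underlying both reinforcement learning and stochastic optimal control. The toy state space, the fluid/vasopressor action bins, the transition probabilities, and the reward are all illustrative assumptions for exposition, not the dissertation's actual model (which learns representations from real patient data).

```python
# Hedged sketch: treatment decisions as an MDP. Everything here
# (states, actions, dynamics, reward) is a hypothetical toy model,
# not the dissertation's method.
import random

# Coarse severity levels stand in for a learned patient state.
STATES = ["stable", "deteriorating", "critical"]
# Actions: (IV-fluid bin, vasopressor bin) pairs -- a common
# discretization in the sepsis-RL literature.
ACTIONS = [(f, v) for f in range(2) for v in range(2)]

def reward(state: str) -> float:
    # Illustrative reward penalizing severity; terminal survival or
    # mortality bonuses are omitted for brevity.
    return {"stable": 1.0, "deteriorating": -0.5, "critical": -2.0}[state]

def step(state: str, action: tuple, rng: random.Random) -> str:
    # Hypothetical stochastic transition: treatment raises the
    # probability of moving one severity level toward "stable".
    idx = STATES.index(state)
    improve_p = 0.3 + 0.2 * sum(action)
    if rng.random() < improve_p:
        idx = max(0, idx - 1)
    else:
        idx = min(len(STATES) - 1, idx + 1)
    return STATES[idx]

def rollout(policy, start="deteriorating", horizon=10, seed=0) -> float:
    # Discounted return of a fixed policy -- the objective an RL or
    # stochastic-control method would optimize over trajectories.
    rng = random.Random(seed)
    state, total, gamma = start, 0.0, 0.99
    for t in range(horizon):
        state = step(state, policy(state), rng)
        total += (gamma ** t) * reward(state)
    return total
```

The point of the sketch is the structure, not the numbers: observed patient trajectories supply samples of the transition dynamics, and a policy mapping states to treatments is evaluated by its expected discounted return.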