
Iterated regularization methods for solving inverse problems

Mays, Nathaniel (2011) Iterated regularization methods for solving inverse problems. Doctoral Dissertation, University of Pittsburgh. (Unpublished)

Primary Text: PDF (1MB)

Abstract

Typical inverse problems are ill-posed, which frequently leads to difficulties in calculating numerical solutions. A common approximation method for solving ill-posed inverse problems is iterated Tikhonov-Lavrentiev regularization.

We examine iterated Tikhonov-Lavrentiev regularization and show that, in the case that regularity properties are not globally satisfied, certain projections of the error converge faster than the theoretical predictions of the global error. We also explore the sensitivity of iterated Tikhonov regularization to the choice of the regularization parameter and show that calculating higher order sensitivities improves the accuracy. We present a simple-to-implement algorithm that calculates the iterated Tikhonov updates and the sensitivities to the regularization parameter; its cost is one vector addition and one scalar multiplication per step more than the standard iterated Tikhonov calculation.

In considering the inverse problem of inverting the Helmholtz differential filter (with filter radius δ), we propose iterating a modification of Tikhonov-Lavrentiev regularization (with regularization parameter α and J iteration steps). We show that this modification decreases the theoretical error bound from O(α(δ^2 + 1)) for Tikhonov regularization to O((αδ^2)^(J+1)). We apply this modified iterated Tikhonov regularization method to the Leray deconvolution model of fluid flow, discretize the problem with finite elements in space and Crank-Nicolson in time, and show existence, uniqueness, and convergence of the discrete solution.

Finally, we examine the combination of iterated Tikhonov regularization, the L-curve method, a new stopping criterion, and a bootstrapping algorithm as a general solution method in brain mapping. This combination robustly handles the difficulties associated with brain mapping: uncertainty quantification, co-linearity of the data, and data noise. We use this method to estimate correlation coefficients between brain regions and a quantified performance measure, and to identify regions of interest for future analysis.
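
As a rough illustration of the basic update underlying the methods described in the abstract, the Python sketch below applies iterated Tikhonov regularization to a small least-squares problem A x = b. It is a minimal, generic sketch, not the algorithm developed in the dissertation; the matrix A, data b, regularization parameter alpha, and iteration count J here are illustrative placeholders.

    import numpy as np

    def iterated_tikhonov(A, b, alpha, J):
        """Iterated Tikhonov regularization for A x = b (least-squares form).

        x_0 = 0,  x_{j+1} = x_j + (A^T A + alpha I)^{-1} A^T (b - A x_j).
        """
        n = A.shape[1]
        M = A.T @ A + alpha * np.eye(n)   # regularized normal-equations matrix
        x = np.zeros(n)
        for _ in range(J):
            # one correction step using the current residual b - A x_j
            x = x + np.linalg.solve(M, A.T @ (b - A @ x))
        return x

    # Illustrative use on a small, nearly rank-deficient system with noisy data
    rng = np.random.default_rng(0)
    A = np.vander(np.linspace(0, 1, 20), 8, increasing=True)
    x_true = rng.standard_normal(8)
    b = A @ x_true + 1e-3 * rng.standard_normal(20)
    x_hat = iterated_tikhonov(A, b, alpha=1e-3, J=5)
    print(np.linalg.norm(x_hat - x_true))

In this generic form, each iteration reuses the same regularized matrix M, so additional steps cost only a solve with M plus a residual evaluation; the dissertation's algorithm additionally tracks sensitivities to alpha at a cost of one vector addition and one scalar multiplication per step.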



Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors:
Mays, Nathaniel (Email: nhm3@pitt.edu, Pitt Username: NHM3)
ETD Committee:
Committee Chair: Layton, William (Email: wjl@math.pitt.edu, Pitt Username: WJL)
Committee Member: Trenchea, Catalin (Email: trenchea@pitt.edu, Pitt Username: TRENCHEA)
Committee Member: Yotov, Ivan (Email: yotov@math.pitt.edu, Pitt Username: YOTOV)
Committee Member: Smolinski, Patrick (Email: patsmol@pitt.edu, Pitt Username: PATSMOL)
Date: 29 September 2011
Date Type: Completion
Defense Date: 24 May 2011
Approval Date: 29 September 2011
Submission Date: 1 August 2011
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Institution: University of Pittsburgh
Schools and Programs: Dietrich School of Arts and Sciences > Mathematics
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: ill-posed problems; numerical analysis
Other ID: http://etd.library.pitt.edu/ETD/available/etd-08012011-133845/, etd-08012011-133845
Date Deposited: 10 Nov 2011 19:56
Last Modified: 15 Nov 2016 13:47
URI: http://d-scholarship.pitt.edu/id/eprint/8812
