Bilevel programming is a class of optimization problems that model a hierarchical relationship between two independent decision-makers, the leader and the follower, in a collaborative or conflicting setting. Decisions in this hierarchy are made sequentially: the leader decides first, and the follower responds by solving an optimization problem parameterized by the leader's decisions. The follower's reaction, in turn, affects the leader, usually by shaping the leader's objective function. Thus, the leader should account for the follower's response in the decision-making process.

A key assumption in bilevel optimization is that both participants, the leader and the follower, solve their problems optimally. However, this assumption fails in many important application areas because: (i) no efficient method is known for solving the lower-level formulation to optimality; (ii) the follower is either not sufficiently sophisticated or lacks the computational resources to find an optimal solution to the lower-level problem in a timely manner; or (iii) the follower may be willing to give up a portion of his/her optimal objective function value in order to inflict more damage on the leader.

This dissertation focuses mainly on developing approaches to model situations in which the follower does not necessarily return an optimal solution of the lower-level problem in response to the leader's action. That is, we assume that the follower's reaction set may include both exact and inexact solutions of the lower-level problem, and we therefore study a generalized class of follower reaction sets. This is arguably the case in many practical application areas, so our approach helps close the gap between theory and practice in bilevel optimization.
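The idea of a generalized reaction set can be sketched on a toy problem (entirely our construction, not one of the dissertation's models): an eps-optimal reaction set replaces the follower's exact argmin, and a pessimistic leader hedges against the worst response in that set.

```python
# Toy illustration (hypothetical objectives): a leader-follower problem in
# which the follower may return any eps-optimal solution, and the leader
# hedges against the worst such response (the pessimistic formulation).

def follower_objective(y, x):
    """Follower minimizes squared deviation from the leader's decision x."""
    return (y - x) ** 2

def leader_cost(x, y):
    """Leader's cost depends on both decisions (hypothetical objective)."""
    return x + y

def pessimistic_value(eps, x_choices=range(6), y_grid=None):
    """Leader's best worst-case cost when the follower is only eps-optimal."""
    if y_grid is None:
        y_grid = [i / 100 for i in range(1001)]  # follower's feasible set [0, 10]
    best = None
    for x in x_choices:
        f_star = min(follower_objective(y, x) for y in y_grid)  # exact optimum
        # eps-optimal reaction set: every y within eps of the optimal value
        reaction = [y for y in y_grid
                    if follower_objective(y, x) <= f_star + eps + 1e-9]
        worst = max(leader_cost(x, y) for y in reaction)  # adversarial pick
        if best is None or worst < best[1]:
            best = (x, worst)
    return best

x_opt, val = pessimistic_value(eps=0.04)
# With eps = 0 the reaction set collapses to the exact optimum y = x,
# recovering the classical (fully rational) bilevel setting.
```

Setting eps = 0 recovers the standard model, so the inexact-follower formulation strictly generalizes it.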

In addition, we develop a method for solving bilevel problems through single-level reformulations under the assumption that the lower-level problem is a linear program. The most common technique for such transformations is to replace the lower-level linear program with its Karush-Kuhn-Tucker (KKT) optimality conditions. We propose an alternative technique for a broad class of bilevel linear integer problems, based on the strong duality property of linear programs, and compare its performance with that of current methods. Finally, we explore bilevel models in the application setting of the pediatric vaccine pricing problem.
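As a sketch in generic notation (ours, not necessarily the dissertation's): if the lower level is the linear program min { d^T y : Ay >= b - Bx, y >= 0 }, strong duality lets us replace the argmin with primal feasibility, dual feasibility, and the requirement that the two objective values coincide:

```latex
% Lower level: y solves  min_y { d^T y : A y >= b - Bx,  y >= 0 }
% Single-level reformulation via strong duality:
\begin{aligned}
& A y \ge b - Bx, \quad y \ge 0                 && \text{(primal feasibility)} \\
& A^{\top} \lambda \le d, \quad \lambda \ge 0   && \text{(dual feasibility)} \\
& d^{\top} y = (b - Bx)^{\top} \lambda          && \text{(strong duality)}
\end{aligned}
```

Compared with the KKT route, the complementary-slackness conditions are replaced by a single equality; the term (b - Bx)^T lambda is bilinear in (x, lambda) but can be linearized by standard techniques when x is integer, consistent with the bilevel linear integer setting above.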

The number of implantable medical devices, ranging from dental implants to cardiovascular implants, has increased exponentially in recent decades, and various materials, including metals and polymers, are used in medical device manufacturing. Among metallic materials, nitinol is widely used due to its superelasticity and well-established biocompatibility. Finite element modeling (FEM), along with in vitro and in vivo testing, is being adopted to evaluate device performance in patients and to optimize device designs.

In this dissertation, four innovative implantable medical devices were developed and their performance was evaluated using finite element modeling and in vitro testing:

1) A ventriculoamniotic shunt for aqueductal stenosis treatment. It has a conduit to drain excess cerebrospinal fluid (CSF), lowering intracranial pressure in the fetal brain, and anchors to prevent device dislocation during treatment. The shunt tube design was optimized using computational fluid dynamics calculations, the anchor design was determined with ANSYS Static Structural, and the prototype was manufactured based on the modeling results. In vitro pressure and flow-rate measurements within the shunt demonstrate that the pressure in the fetal brain can be reduced by 95.2% while the pressure elevation in the amniotic sac is negligible.

2) A TFN flow diverter integrated with a flow-sensing system for cerebral aneurysm post-treatment monitoring. The flexibility of the TFN membrane was investigated using both computational modeling and stretching experiments. Because the TFN is wrapped on a flow-diverter backbone with junction points, we evaluated the attachment patterns between the TFN and the stent backbone. Finally, a micro-scale flow sensor was designed and fabricated based on the computational modeling.

3) A compartmentalized stent to isolate the perfusion of the abdominal organs. The fluid dynamics inside the arterial and venous organ perfusion stents (OPS) were analyzed in terms of velocity distribution and wall shear stress (WSS). In vitro pressure-difference measurements in both the arterial and venous OPS showed that stent deployment with perfusion flow to the abdominal organs will not place a significant load on the donor's heart.

4) A retrievable stent graft for noncompressible hemorrhage control. The mechanical and biological properties of the ePTFE membrane were investigated, and the nitinol-framed backbone was evaluated in terms of radial force.
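As a back-of-the-envelope illustration of the flow physics behind shunt-tube design (all numbers below are hypothetical, not the dissertation's), the Hagen-Poiseuille law shows why lumen radius dominates the pressure-flow trade-off in a small drainage conduit:

```python
import math

def poiseuille_dp(mu, length, flow_rate, radius):
    """Pressure drop (Pa) for laminar flow in a circular tube
    (Hagen-Poiseuille law): dP = 8 * mu * L * Q / (pi * r**4)."""
    return 8.0 * mu * length * flow_rate / (math.pi * radius ** 4)

# Hypothetical values, not the dissertation's: CSF-like viscosity ~1e-3 Pa*s,
# a 30 mm tube, and a 0.35 mL/min drainage flow rate.
mu = 1.0e-3            # dynamic viscosity, Pa*s
L = 0.030              # tube length, m
Q = 0.35e-6 / 60.0     # flow rate, m^3/s

dp_ratio = poiseuille_dp(mu, L, Q, 0.25e-3) / poiseuille_dp(mu, L, Q, 0.5e-3)
# Halving the radius raises the pressure drop 16-fold (r**-4 scaling),
# which is why lumen diameter dominates the CFD-based design trade-off.
```

The r**-4 sensitivity is the reason tube geometry, rather than length or viscosity, is the natural target of the CFD optimization described above.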

Microgrids are small-scale distribution networks that provide a template for large-scale deployment of renewable energy sources, such as wind and solar power, in close proximity to demand. However, the inherent variability and intermittency of these sources can have a significant impact on power generation and scheduling decisions. Distributed energy resources, such as energy storage systems, can be used to decouple the times of energy consumption and generation, thereby enabling microgrid operators to improve scheduling decisions and exploit arbitrage opportunities in energy markets. The integration of renewable energy sources into the nation's power grid, by way of microgrids, holds great promise for sustainable energy production and delivery; however, operators and consumers both lack effective strategies for optimally using stored energy that is generated by renewable energy sources.

This dissertation presents a comprehensive stochastic optimization framework to prescribe optimal strategies for effectively managing stored energy in microgrids, subject to the inherent uncertainty of renewable resources, local demand and electricity prices. First, a Markov decision process model is created to characterize and illustrate structural properties of an optimal storage strategy and to assess the economic value of sharing stored energy between heterogeneous demand-side entities. Second, a multistage stochastic programming (MSP) model is formulated and solved to determine the optimal storage, procurement, selling and energy flow decisions in a microgrid, subject to storage inefficiencies, distribution line losses and line capacity constraints. Additionally, the well-known stochastic dual dynamic programming (SDDP) algorithm is customized and improved to drastically reduce computation time and significantly improve solution quality when approximately solving this MSP model. Finally, and more generally, a novel nonconvex regularization scheme is developed to improve the computational performance of the SDDP algorithm for solving high-dimensional MSP models. Specifically, it is shown that these nonconvex regularization problems can be reformulated as mixed-integer programming problems with provable convergence guarantees. The benefits of this regularization scheme are illustrated by way of a computational study that reveals significant improvements in the convergence rate and solution quality over the standard SDDP algorithm and other regularization schemes.
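As a minimal sketch of the dynamic-programming machinery behind such storage models (a toy two-price, finite-horizon arbitrage problem of our own construction, far simpler than the dissertation's MDP), backward induction exhibits the kind of structural property such models are used to characterize, e.g. a value function that is nondecreasing in the storage level:

```python
# Toy finite-horizon MDP for energy storage arbitrage (illustrative only).
# State: (storage level, current price). Prices are i.i.d. uniform over
# PRICES; one unit may be bought or sold per period; objective: maximize
# expected profit over T periods.

PRICES = [1.0, 3.0]   # hypothetical low/high price regimes
CAPACITY = 2          # maximum storage level, in units
T = 3                 # number of decision periods

def backward_induction():
    # V[t][(s, p)] = optimal expected profit-to-go from period t onward
    V = [{} for _ in range(T + 1)]
    for s in range(CAPACITY + 1):
        for p in PRICES:
            V[T][(s, p)] = 0.0  # terminal condition: leftover energy worthless
    for t in reversed(range(T)):
        for s in range(CAPACITY + 1):
            for p in PRICES:
                def cont(s2):  # expected continuation value over i.i.d. prices
                    return sum(V[t + 1][(s2, p2)] for p2 in PRICES) / len(PRICES)
                actions = [cont(s)]                      # hold
                if s < CAPACITY:
                    actions.append(-p + cont(s + 1))     # charge (buy one unit)
                if s > 0:
                    actions.append(p + cont(s - 1))      # discharge (sell one unit)
                V[t][(s, p)] = max(actions)
    return V

V = backward_induction()
# More stored energy can never hurt: V is nondecreasing in the storage level.
```

Even this toy model shows a positive option value of storage: starting empty at the low price, buying is strictly profitable in expectation because the unit can later be sold in the high-price regime.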

This study explored the use of machine learning to generate metamodel approximations of a physics-based fire hazard model called Consolidated Fire and Smoke Transport (CFAST). The motivation for generating accurate and efficient metamodels is to improve modeling realism in probabilistic risk assessments, where computational burden has prevented broader application of high-fidelity codes. The process involved scenario definition; generating training data by iteratively running the hazard model over a range of the input space; exploratory data analysis and feature selection; initial testing of a broad set of metamodel types; and, finally, metamodel selection and tuning.

The study identified several factors that should be considered when metamodeling a physics-based computer code. First, the input space should be limited to a manageable scale and number of parameters; otherwise, generating sufficient training data becomes infeasible. Second, there is a relationship between the physics being characterized and the metamodel types that can successfully mimic those physics. Finally, metamodel accuracy and efficiency must be balanced against initial development costs. Once developed, trained metamodels are portable and can be applied by many users over a range of modeling conditions.

RAVEN, a software framework from Idaho National Laboratory, was used to facilitate the analysis. Twenty-five (25) metamodel types were investigated for their potential to mimic the CFAST-calculated maximum upper layer temperature and its timing. Linear metamodels struggled to predict accurately because the physics of fire is nonlinear.

Tuning of the k-nearest neighbor (kNN) model produced a k = 4 model that fit the vast majority of CFAST calculations within 10% for both maximum upper layer temperature and its timing. This model showed good generalization under 10-fold cross-validation.
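A minimal, dependency-free sketch of kNN regression with k = 4 (our illustration; the actual metamodel was trained in RAVEN on CFAST outputs, and the smooth nonlinear response function below is a hypothetical stand-in):

```python
def knn_predict(x, train, k=4):
    """kNN regression: average the targets of the k nearest training points."""
    neighbors = sorted(train, key=lambda pt: abs(pt[0] - x))[:k]
    return sum(y for _, y in neighbors) / k

# Hypothetical stand-in for CFAST: peak upper layer temperature (C) as a
# nonlinear function of fire heat release rate (kW). The real metamodel was
# trained on CFAST simulations, not on this formula.
def toy_response(q):
    return 100.0 + 0.5 * q ** 0.8

# Training data: one run per 10 kW step across the input range of interest.
train = [(q, toy_response(q)) for q in range(50, 1050, 10)]

pred = knn_predict(505.0, train, k=4)            # query between training points
rel_err = abs(pred - toy_response(505.0)) / toy_response(505.0)
# For a smooth response and a dense training grid, the k = 4 average lands
# well within the 10% band reported for the tuned metamodel.
```

The same structure applies to the two CFAST outputs of interest (peak temperature and its timing); each would simply be a separate target column in the training data.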

The resulting kNN model was compared to the algebraic models typically used in fire probabilistic risk assessments. The algebraic models were generally conservative relative to CFAST, whereas the kNN model closely mimicked CFAST. This illustrates the potential of metamodels to improve modeling realism over the simpler models often selected for computational feasibility. While the kNN metamodel is a simplification of the higher-fidelity CFAST code, the error it introduces is quantifiable and can be explicitly considered in applications of the metamodel.