Nonlinear reduced-order modeling: Using machine learning to enable extreme-scale simulations in fluid dynamics

Kevin Carlberg, Ph.D.
Principal Member of Technical Staff at Sandia National Laboratories

Tuesday, October 16, 2018 at 3:30 PM
Mechanical Engineering Building (MEB) 238

ABSTRACT: Physics-based modeling and simulation has become indispensable across many applications in engineering and science, ranging from aircraft design to nuclear-stockpile stewardship. However, as simulation plays an increasingly important role in scientific discovery, decision making, and design, greater demands are being placed on model fidelity. High fidelity necessitates resolving fine spatiotemporal scales, which can lead to extreme-scale, nonlinear dynamical-system models whose simulations consume months on thousands of computing cores. This is particularly relevant to applications driven by separated turbulent flow, where low-fidelity Reynolds-averaged Navier–Stokes (RANS) models yield insufficient predictive accuracy; instead, high-resolution large-eddy simulation (LES) or even direct numerical simulation (DNS) approaches are required. Furthermore, most scientific applications (e.g., uncertainty quantification, design optimization) are ‘many-query’ in nature, as they require the (parameterized) model to be simulated thousands of times. This leads to a computational barrier: the cost of high-fidelity simulations renders them impractical for many-query problems. In this talk, I will present several advances in the field of nonlinear model reduction that exploit simulation data to overcome this barrier. These methods combine concepts from machine learning, computational mechanics, optimization, and goal-oriented adaptivity to produce low-dimensional reduced-order models (ROMs) that are 1) accurate, 2) low cost, 3) structure preserving, 4) reliable, and 5) certified.
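To give a flavor of the general idea behind projection-based ROMs (not the speaker's specific least-squares Petrov–Galerkin formulation), the following is a minimal sketch: a low-dimensional basis is extracted from training-simulation snapshots via a truncated SVD (proper orthogonal decomposition), and a full-order operator is projected onto that subspace. All data and operators here are random or placeholder stand-ins for illustration only.

```python
import numpy as np

# Stand-in snapshot data: each column is a full-order state recorded
# during a training simulation (random data used for illustration).
rng = np.random.default_rng(0)
n_full, n_snapshots = 200, 30
snapshots = rng.standard_normal((n_full, n_snapshots))

# Dimensionality reduction: a truncated SVD of the snapshot matrix
# yields a low-dimensional orthonormal trial basis (POD).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                      # reduced dimension, r << n_full
basis = U[:, :r]           # n_full x r orthonormal basis V

# Galerkin projection of a placeholder linear full-order operator A
# (dynamics dx/dt = A x) onto the reduced subspace: A_r = V^T A V.
A = -np.eye(n_full)        # stand-in stable dynamics
A_r = basis.T @ A @ basis  # r x r reduced operator

# Reconstruct an approximate full-order state from reduced coordinates.
x_r = np.ones(r)
x_approx = basis @ x_r     # length-n_full approximation
```

A nonlinear, parameterized full-order model requires additional machinery (e.g., the optimal projection and hyper-reduction techniques the talk describes) to remain accurate and fast; this sketch shows only the basic subspace-projection step.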
First, I will describe least-squares Petrov–Galerkin projection, which applies dimensionality reduction and optimal projection to ensure accuracy. Second, I will describe the sample-mesh concept, which employs empirical regression and greedy-optimal sensor-placement techniques to realize orders-of-magnitude speedups in high-performance computing (HPC) environments. I will also describe novel methods that exploit time-domain data to further reduce computational costs. Third, I will present a technique that ensures the ROM globally conserves mass, momentum, and energy in the case of finite-volume discretizations, thus ensuring structure preservation. Fourth, I will describe ROM h-adaptivity, which employs concepts from adaptive mesh refinement to ensure that the ROM is reliable, i.e., that it can satisfy any prescribed error tolerance. Finally, I will present machine-learning error models, which apply regression methods (e.g., artificial neural networks) from machine learning to construct an accurate statistical model of the ROM error; this quantifies the ROM-induced uncertainty and provides a mechanism for rigorous statistical certification.

SPEAKER BIO: Kevin Carlberg is a Principal Member of Technical Staff at Sandia National Laboratories in Livermore, CA. He was the President Harry S. Truman Postdoctoral Fellow in National Security Science and Engineering from 2011 to 2014 and received his Ph.D. in Aeronautics and Astronautics from Stanford University in 2011, with a Ph.D. minor in Computational and Mathematical Engineering. His research interests include reduced-order modeling, machine learning, uncertainty quantification, computational fluid dynamics, high-performance computing, and numerical optimization. The objective of his work is to exploit structure in data to drastically reduce the cost of simulating nonlinear computational models at extreme scale.
Current national-security applications for his work include hypersonic vehicles, turbulent flows over store-in-cavity configurations, and high-speed gas-transfer systems.
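As a highly simplified illustration of the machine-learning error models mentioned in the abstract, the sketch below fits a regression surrogate that maps model parameters to an observed ROM error. All data here are synthetic stand-ins, and a plain least-squares quadratic fit is used in place of the richer regressors (e.g., artificial neural networks) discussed in the talk.

```python
import numpy as np

# Synthetic training set: for each parameter sample mu, assume we have
# measured the scalar error of the ROM against the full-order model.
rng = np.random.default_rng(1)
mu = rng.uniform(0.0, 1.0, size=(50, 2))           # 50 parameter samples
rom_error = 0.3 * mu[:, 0] + 0.1 * mu[:, 1] ** 2   # stand-in error trend
rom_error += 0.01 * rng.standard_normal(50)        # measurement noise

# Regression surrogate for the error: least-squares fit of a simple
# quadratic feature map (a neural network could be substituted here).
features = np.column_stack([np.ones(50), mu, mu ** 2])  # 50 x 5
coeffs, *_ = np.linalg.lstsq(features, rom_error, rcond=None)

# Predict the ROM-induced error at an unseen parameter value; such a
# statistical error model is what enables certification of ROM outputs.
mu_new = np.array([0.5, 0.5])
f_new = np.concatenate([[1.0], mu_new, mu_new ** 2])
predicted_error = f_new @ coeffs
```

In practice the error model is statistical, so it yields not just a point prediction but a distribution over the ROM error, which is what supports rigorous certification.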