Trends in Next Generation HPC Architectures and Their Impact on Computational Methods for Nuclear Reactor Analysis
Andrew Siegel, Argonne National Laboratory
It is well known that the design of next-generation HPC systems requires architectural choices that leave application developers in a largely unfamiliar parameter regime relative to the trends of the past twenty years: overall levels of concurrency, bandwidth-to-FLOP/s ratios, memory per floating-point unit, use of instruction-level and shared-memory parallelism, and power and resilience characteristics are a few common examples. While constrained to some degree by the underlying technology, designers of future HPC systems still have considerable latitude both in specific design tradeoffs and in the programming models used to express them optimally. At the same time, regardless of specific design choices, most applications will need to evolve considerably to make efficient use of these systems, including developing new algorithmic implementations, new formulations, and potentially even new mathematical descriptions of the target physical problem. While this co-design of future architectures is a two-way street, in this talk I focus on the application "pull" side of the equation, discussing in particular three examples from the field of nuclear reactor modeling: comparing particle-based and PDE-based methods for neutron transport, replacing table lookups with functional evaluations for representing neutron cross-section data, and avoiding bulk synchronicity in explicit time-stepping algorithms.
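To make the second of these tradeoffs concrete, the sketch below (in C, with hypothetical function and variable names; it does not reproduce the specific representations discussed in the talk) contrasts a conventional cross-section table lookup, a binary search plus linear interpolation over a large energy grid, with a functional evaluation that replaces those random memory accesses with a small amount of floating-point work.

```c
#include <stdlib.h>
#include <math.h>

/* Hypothetical illustration: tabulated cross section sigma(E) on a sorted
 * energy grid, accessed by binary search + linear interpolation. Random
 * accesses into large per-nuclide tables are latency- and bandwidth-bound
 * on high-FLOP, low-bandwidth nodes. */
double xs_table_lookup(const double *E_grid, const double *sigma,
                       size_t n, double E)
{
    size_t lo = 0, hi = n - 1;
    while (hi - lo > 1) {                 /* binary search for the bracketing bin */
        size_t mid = (lo + hi) / 2;
        if (E_grid[mid] <= E) lo = mid; else hi = mid;
    }
    double f = (E - E_grid[lo]) / (E_grid[hi] - E_grid[lo]);
    return (1.0 - f) * sigma[lo] + f * sigma[hi];   /* linear interpolation */
}

/* Hypothetical functional alternative: evaluate a low-order polynomial fit
 * in log-energy on the fly. More FLOPs per evaluation, but the handful of
 * coefficients stays in registers or cache, the tradeoff favored as
 * bandwidth-to-FLOP/s ratios fall. */
double xs_functional_eval(const double *coeff, int order, double E)
{
    double u = log(E);
    double s = coeff[order];
    for (int k = order - 1; k >= 0; --k)  /* Horner evaluation of the polynomial */
        s = s * u + coeff[k];
    return s;
}
```

In the regime the abstract describes, the lookup's cost is dominated by cache misses on the grid arrays, while the functional form costs a fixed, predictable number of multiply-adds; which approach wins depends on the node's bandwidth-to-FLOP/s ratio.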
Abstract Author(s): Andrew Siegel