A Plasma Simulation Code Design Respecting Communication Hierarchies on Clustered Many-core Systems
Noah Reddell, University of Washington
We present WARPM, a multi-fluid plasma modeling code designed for the emergence of many-core processors, such as GPUs, in high-performance computing. The scientific calculations are written in the OpenCL language, allowing natural data-parallel execution on many-core devices. WARPM uses multiple host threads and a task dependency graph to execute independent tasks concurrently, and it uses the standard Message Passing Interface (MPI) to distribute computation across the nodes of a cluster. The resulting code is a hybrid combination of MPI for communication between nodes, threads for task parallelism on each host, and OpenCL for data-parallel scientific computation on the tens or hundreds of cores available on each node. The code framework is relatively general, but we apply it to computational modeling of plasmas. Fusion plasma simulation has much in common with computational fluid dynamics but adds the complexity of modeling the electromagnetic interactions of the charged plasma species with one another and with the confinement and heating fields. Such modeling therefore demands elaborate codes whose computational cost can limit how faithfully real-world behavior is approximated. To pair compiled performance with code flexibility across varying fluid models and numerical methods, the OpenCL source code is assembled dynamically at WARPM run time from user specifications. We discuss the factors considered in the strategic decision to use OpenCL over other language options, give an overview of how to structure such a problem for many-core computation, and show the performance gains achieved with this approach.
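As a minimal sketch of the data-parallel style described above (the kernel name advance_fluid and the variable names are hypothetical, not WARPM's actual code), a finite-volume fluid update in OpenCL maps one work-item to one cell, so the update spreads naturally across a device's cores:

```c
// Hypothetical OpenCL kernel: one work-item advances one cell of one fluid.
// flux holds n_cells + 1 interface values; dt_dx is the time step over cell width.
__kernel void advance_fluid(__global const float *q_old,
                            __global const float *flux,
                            __global float *q_new,
                            const float dt_dx,
                            const int n_cells)
{
    int i = get_global_id(0);
    if (i >= n_cells) return;
    // Conservative update: q_new = q_old - (dt/dx) * (F_{i+1/2} - F_{i-1/2})
    q_new[i] = q_old[i] - dt_dx * (flux[i + 1] - flux[i]);
}
```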
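One common way to realize the hybrid MPI-plus-OpenCL pattern the abstract describes is to overlap inter-node halo exchange with on-device computation; the sketch below assumes a one-dimensional decomposition, and the function and buffer names (step, interior_kernel, send_lo, and so on) are illustrative rather than taken from WARPM:

```c
#include <mpi.h>
#include <CL/cl.h>

/* Hypothetical hybrid step: MPI moves halo data between neighboring ranks
 * while OpenCL advances the interior cells on each node's device. Host-device
 * transfers of the halo buffers are omitted for brevity. */
void step(MPI_Comm comm, cl_command_queue queue, cl_kernel interior_kernel,
          double *send_lo, double *recv_lo, double *send_hi, double *recv_hi,
          int halo, int left, int right, size_t n_interior)
{
    MPI_Request reqs[4];

    /* Post nonblocking halo exchange with the left and right neighbors. */
    MPI_Irecv(recv_lo, halo, MPI_DOUBLE, left,  0, comm, &reqs[0]);
    MPI_Irecv(recv_hi, halo, MPI_DOUBLE, right, 1, comm, &reqs[1]);
    MPI_Isend(send_lo, halo, MPI_DOUBLE, left,  1, comm, &reqs[2]);
    MPI_Isend(send_hi, halo, MPI_DOUBLE, right, 0, comm, &reqs[3]);

    /* Meanwhile, launch the interior update on the many-core device. */
    clEnqueueNDRangeKernel(queue, interior_kernel, 1, NULL,
                           &n_interior, NULL, 0, NULL, NULL);

    /* Wait for both; boundary cells would be updated after this point. */
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
    clFinish(queue);
}
```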
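Run-time assembly of OpenCL source, as mentioned above, is possible because OpenCL compiles kernels from strings at run time through its standard API; the following sketch shows the general mechanism under assumed names (build_model_program and the source fragments are hypothetical, and the fragment contents would come from the user's model and method specifications):

```c
#include <stdio.h>
#include <CL/cl.h>

/* Hypothetical run-time kernel assembly: source fragments selected from user
 * specifications are concatenated and compiled with the standard OpenCL calls. */
cl_program build_model_program(cl_context ctx, cl_device_id dev,
                               const char *model_src,   /* e.g., fluid-model fluxes */
                               const char *limiter_src) /* e.g., chosen flux limiter */
{
    const char *sources[] = { model_src, limiter_src };
    cl_int err;
    cl_program prog = clCreateProgramWithSource(ctx, 2, sources, NULL, &err);
    if (err != CL_SUCCESS) return NULL;

    /* Model parameters can also be injected as compile-time constants. */
    err = clBuildProgram(prog, 1, &dev, "-DNUM_SPECIES=2", NULL, NULL);
    if (err != CL_SUCCESS) {
        char log[4096];
        clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG,
                              sizeof(log), log, NULL);
        fprintf(stderr, "kernel build failed:\n%s\n", log);
        return NULL;
    }
    return prog;
}
```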
Abstract Author(s): Noah Reddell and Uri Shumlak