Reinventing HPC

Daniel Reed, University of Utah

Our current model for configuring, procuring, and constructing leading-edge HPC systems is predicated on a vibrant commercial computing market whose interests and products align with scientific computing needs. Alas, this model is increasingly problematic, as the underlying technical and market conditions have changed. First, the PC ecosystem that birthed the “attack of the killer micros” and today’s large-scale HPC clusters is increasingly stagnant, in stark contrast to the rapid growth and hardware innovation taking place in the hyperscale cloud and AI markets. Meanwhile, reflecting the technical and financial challenges of a post-Moore environment, the semiconductor industry is shifting rapidly to multi-chip packaging, integrating multiple heterogeneous chiplets via high-bandwidth interconnects within a single package. Finally, AI advances are reshaping both how we think about the nature of scientific computation and how we pursue scientific breakthroughs via hybrid computations and data analytics. The message is clear: we must adapt again, just as we did during the transitions away from vector systems and shared-memory parallel processors. Despite the challenges, there are promising ways forward, and this talk will discuss the history of HPC, emerging research directions, and exciting opportunities at the intersection of AI, edge computing, and environmental modeling and analysis.