Controlling AI Surrogates for Costly Science Codes With Accuracy Guarantees
Michael Tynes, University of Chicago
Replacing computationally expensive calculations in simulations with machine learning (ML) surrogates can offer substantial speedups, but these surrogates are notoriously unreliable for states that are distant from the training set. Researchers have explored imposing thresholds on model uncertainty above which the original expensive calculation is performed and the ML model is updated. Existing approaches use static thresholds, but the optimal value of this threshold varies as the simulation evolves. We have developed a control-theoretic approach that adaptively varies this threshold over the runtime of the simulation in order to meet a scientist-defined error bound for the full simulation. When uncertainty is low, our controller raises the threshold to use ML more often and achieve greater speedup. Conversely, when uncertainty is high, the controller lowers the threshold to ensure the given error bound is respected. Initial results in atomistic simulations demonstrate the promise of this method.
Abstract Author(s): Michael Tynes, Yuliana Zamora, Logan Ward, Kyle Chard, Ian Foster, Hank Hoffmann
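The feedback loop described above can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the surrogate, its uncertainty estimate, the error model, the target and gain values, and the function names are all hypothetical stand-ins. It shows only the control idea: raise the uncertainty threshold when accumulated error is under budget (use ML more), lower it when over budget (call the expensive code more).

```python
import random

def adaptive_threshold_controller(
    steps=1000,        # number of simulation timesteps (hypothetical)
    error_target=0.05, # scientist-defined mean-error budget (hypothetical)
    gain=0.5,          # controller gain, a made-up tuning constant
    threshold=0.1,     # initial uncertainty threshold (hypothetical)
):
    """Toy sketch of an adaptive uncertainty threshold.

    At each step, a stand-in surrogate reports an uncertainty. If it
    exceeds the current threshold, the 'expensive' calculation runs
    (assumed exact here); otherwise the surrogate's answer is used and
    incurs an error proportional to its uncertainty. An integral-style
    update then nudges the threshold toward the error budget.
    """
    random.seed(0)
    surrogate_calls = oracle_calls = 0
    cumulative_error = 0.0
    for step in range(1, steps + 1):
        uncertainty = random.random() * 0.2  # stand-in for a UQ estimate
        if uncertainty > threshold:
            oracle_calls += 1                # run expensive code; no error
            error = 0.0
        else:
            surrogate_calls += 1
            error = uncertainty * random.random()  # toy error model
        cumulative_error += error
        mean_error = cumulative_error / step
        # Under budget -> factor > 1 -> raise threshold (use ML more).
        # Over budget  -> factor < 1 -> lower threshold (call oracle more).
        threshold *= 1.0 + gain * (error_target - mean_error) / error_target
        threshold = min(max(threshold, 1e-3), 1.0)  # keep in a sane range
    return surrogate_calls, oracle_calls, cumulative_error / steps
```

In this sketch the threshold update is multiplicative and clamped, one of many reasonable choices; the abstract does not specify the control law, only that the threshold rises when uncertainty is low and falls when it is high.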