Encoding for Multi-Fidelity Neural Network Surrogates

Cristian Villatoro, University of Notre Dame


The construction of accurate surrogates is a key step in many outer-loop tasks, such as uncertainty quantification and optimization. Due in part to their flexibility and expressiveness, neural networks have shown great potential to significantly improve the quality of surrogates, particularly in high dimensions. Unfortunately, it is well known that neural networks typically require large datasets for reliable training and accurate predictions. Recently, multi-fidelity approaches have been proposed to address this challenge by combining data sources of varying accuracy and computational cost to create larger training datasets. In such approaches, the larger training datasets are leveraged to meet desired accuracy thresholds for high-fidelity predictions while keeping the overall cost within a predetermined computational budget. Previous work has shown that exploiting the correlation between data sources can enhance training accuracy in scenarios with scarce high-fidelity data. However, in realistic applications where the correlation between model outputs may be low, the performance of such approaches can degrade. To address this, we show that transforming model inputs can significantly increase the correlation between high- and low-fidelity models while resolving differences in model parameterization. In this work, we demonstrate the benefits of linear and non-linear encodings on multiple test cases, including numerical examples using multiple low-fidelity sources and problems inspired by realistic applications.
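As a rough illustration of the idea (a minimal sketch, not the method from this work), the example below fits a linear input encoding t = a*x + b so that a low-fidelity model, evaluated on encoded inputs, correlates more strongly with a high-fidelity model. The two toy models, the optimizer, and all parameter choices are assumptions made purely for this example.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins for a high-fidelity model and a low-fidelity
# model with a different input parameterization (illustrative only).
def f_hi(x):
    return np.sin(2.0 * np.pi * x)

def f_lo(t):
    return np.sin(4.0 * t + 1.0)  # same physics, different parameterization

x = np.linspace(0.0, 1.0, 200)
y_hi = f_hi(x)

# Correlation before encoding: evaluate the low-fidelity model
# directly on the high-fidelity inputs.
rho_raw = np.corrcoef(y_hi, f_lo(x))[0, 1]

# Linear encoding t = a*x + b, fit by minimizing the mismatch between
# the encoded low-fidelity output and the high-fidelity output.
def loss(theta):
    a, b = theta
    return np.mean((f_lo(a * x + b) - y_hi) ** 2)

# The loss is non-convex in (a, b), so use a few restarts.
starts = [np.array([a0, b0])
          for a0 in (0.5, 1.0, 2.0) for b0 in (-0.5, 0.0, 0.5)]
best = min((minimize(loss, x0=s) for s in starts), key=lambda r: r.fun)
a, b = best.x

rho_enc = np.corrcoef(y_hi, f_lo(a * x + b))[0, 1]
print(f"correlation without encoding:     {rho_raw:+.3f}")
print(f"correlation with linear encoding: {rho_enc:+.3f}")
```

Here the encoding can align the two parameterizations exactly (4t + 1 = 2πx), so the post-encoding correlation approaches 1; a non-linear encoding (e.g., a small neural network in place of a*x + b) would play the same role when no linear map suffices.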