Computer Science, Machine Learning, and Applied Mathematics
Time: Monday, June 26, 16:30 - 18:30 CEST
Description: Over the last ten years there has been a profusion of scalable packages, such as TensorFlow, PyTorch, and ONNX, that run on heterogeneous computing platforms. Their ability to ingest massive amounts of training data and make predictions makes them an obvious choice for scientific machine learning (SciML). Our presentations demonstrate how researchers in diverse areas of scientific inquiry creatively employ these tools to model small-scale phenomena in coarse-grained simulations, which are then used as predictive tools. Moreover, with the availability of exascale computing platforms, it is becoming clear that storing several petabytes of training data for machine learning (ML) models is not viable. We present ongoing research on in-situ ML, where the simulation code and the ML or deep learning (DL) framework run together: the streaming simulation data is generated and consumed on the fly to train the model, which then makes predictions in coarser simulations. Our presentations also explore the performance of in-situ ML frameworks and the portability of the trained models to simulation codes different from the one used to train them.
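To make the in-situ idea concrete, here is a minimal, hypothetical sketch: a stand-in "fine-grain simulation" streams small batches of data, and a surrogate model is updated online from each batch, which is then discarded rather than written to disk. All names (`fine_grain_step`, the linear surrogate, the coefficients) are illustrative assumptions, not the method of any specific presentation; real deployments couple production simulation codes with frameworks such as TensorFlow or PyTorch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fine_grain_step(t):
    # Stand-in for one step of an expensive fine-grain simulation.
    # Returns coarse-grained features x and the small-scale quantity y
    # that the ML surrogate should learn (a hypothetical closure term).
    x = rng.normal(size=(32, 3))
    y = x @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=32)
    return x, y

# Linear surrogate trained online with SGD. "In situ" here means each
# batch is consumed as the simulation produces it and never stored.
w = np.zeros(3)
lr = 0.05
for t in range(500):
    x, y = fine_grain_step(t)              # streaming batch from the simulation
    grad = 2.0 * x.T @ (x @ w - y) / len(y)  # mean-squared-error gradient
    w -= lr * grad                           # incremental model update

print(np.round(w, 2))  # w approaches the true coefficients [0.5, -1.0, 2.0]
```

The same loop structure applies when the surrogate is a deep network: the key design point is that training data has the lifetime of one batch, so petabyte-scale storage is never needed.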