Keynote Presentations
Anti-Patterns of Scientific Machine Learning to Fool the Masses: A Call for Open Science
PASC23 KEYNOTE PRESENTATION
Lorena A. Barba (George Washington University, Washington, DC, US)
An anti-pattern is a frequently occurring pattern that is ineffective and risks being counterproductive. The term comes from software engineering, inspired by the classic book “Design Patterns” (highlighting desirable and effective patterns for code). Over the years, the term has spread beyond software to other fields, like project management. An anti-pattern is recurring, has bad consequences, and admits a better solution. Documenting anti-patterns is an effective way to reveal where improvements can be made. This talk will call attention to anti-patterns in scientific machine learning—faintly tongue-in-cheek—with a call to do better. Scientific machine learning promises to help solve problems of high consequence in science, facing challenges like expensive or sparse data, complex scenarios, and stringent accuracy requirements. It is expected to be domain-aware, interpretable, and robust. But realizing this potential is obstructed by anti-patterns: performance claims out of context, renaming old things, incomplete reporting, poor transparency, glossing over limitations, closet failures, overgeneralization, data negligence, gatekeeping, and puffery. Open science—the culture and practices that lead to a transparent scientific process and elevate collaboration—is the lens through which we can see a path to improvement. In the Year of Open Science, this talk is a call for a better way of doing and communicating science.
Lorena A. Barba
Lorena A. Barba is a professor of mechanical and aerospace engineering at the George Washington University in Washington, DC. Her research interests include computational fluid dynamics, high-performance computing, and computational biophysics. An international leader in computational science and engineering, she is also a long-standing advocate of open-source software for science and education, and is well known for her courses and open educational resources. Barba served (2014–2021) on the Board of Directors of NumFOCUS, a US public charity that supports and promotes world-class open-source scientific software. She is an expert in research reproducibility and was a member of the National Academies study committee on Reproducibility and Replicability in Science. She served as Reproducibility Chair for the SC19 (Supercomputing) Conference, is Editor-in-Chief of IEEE Computing in Science & Engineering, was founding editor and Associate Editor-in-Chief (2016–2021) of the Journal of Open Source Software, and is Editor-in-Chief of The Journal of Open Source Education. She was General Chair of the global JupyterCon 2020 and was named a Jupyter Distinguished Contributor in 2020.
A View of Post-Exascale Computational Science and the Emerging Mix of HPC, AI, and Quantum
PASC23 PUBLIC LECTURE
Rick Stevens (Argonne National Laboratory / University of Chicago, US)
This event is free of charge and open to the general public. The lecture is given in English.
In this talk, I will outline my vision of the evolution of computational science over the next twenty years. The emergence of new platforms will complement and challenge traditional high-performance computing (HPC), impacting the types of problems we work on, the platforms that centers design and deploy, and the research that gets funded. As we launch into the post-exascale epoch, we face a computing landscape quite different from the one that motivated the international push for exascale HPC systems. We see the emergence of powerful AI methods, from generative language models that will transform research and teaching (and exams!) to AI-HPC hybrid (or surrogate) models that promise orders-of-magnitude performance gains for some problems. Quantum computers and algorithms also show potential to greatly impact computational science. I will discuss how these capabilities could change the landscape of problems researchers pursue, and when and how the scientific computing community may evolve as it absorbs new approaches, sorting out what is real and works from what is not yet ready for scientific application. Future platforms must be designed for the problems the community wants to solve in the near term, while also leading us to new approaches that offer sustained impact across many disciplines.
Rick Stevens
Rick Stevens is a Professor of Computer Science at the University of Chicago, as well as Associate Laboratory Director of the Computing, Environment and Life Sciences (CELS) Directorate and Argonne Distinguished Fellow at Argonne National Laboratory. In these and numerous other roles, he is responsible for ongoing research in the computational and computer sciences, from HPC architecture to the development of tools and methods for bioinformatics, cancer, infectious disease, and other problems in science and engineering. Stevens is a member of the American Association for the Advancement of Science and has received many national honors for his research, including being named a Fellow of the Association for Computing Machinery (ACM) for his continuing contributions to high-performance computing.
AI, Computing and Thinking: Algorithmic Alloys for Advancing Scientific Discovery
PRACE HPC EXCELLENCE AWARD
Petros Koumoutsakos (Harvard University, US)
Our generation has experienced more than a billion-fold increase in computer hardware capabilities and a dizzying pace of acquiring and transmitting massive amounts of data. Harnessing this resource has introduced a new form of inquiry: Computing. Computing is transforming our Thinking for solving complex problems, and it is fueling the Artificial Intelligence (AI) revolution that is changing our world. I believe we are at the dawn of a new era in science, one that would benefit from forming “alloys” of AI, Computing, and Thinking.
I will present two examples of this process: Learning the Effective Dynamics of multi-scale systems, and a fusion of Multi-Agent Reinforcement Learning and HPC for modeling and control of biological and turbulent flows. I will juxtapose successes and failures and argue that the proper integration of domain knowledge, AI expertise, and Computing is essential to advance scientific discovery and technology frontiers.
Petros Koumoutsakos
Petros Koumoutsakos is the Herbert S. Winokur, Jr. Professor of Computing in Science and Engineering and Area Chair of Applied Mathematics at Harvard University. He previously held the Chair for Computational Science at ETH Zurich (2000–2020). He is an elected Fellow of the Society for Industrial and Applied Mathematics (SIAM), the American Society of Mechanical Engineers (ASME), and the American Physical Society (APS). He is a recipient of the ACM Gordon Bell Prize in Supercomputing (2013) and an elected International Member of the US National Academy of Engineering (NAE) (2013). His research interests lie in the fundamentals and applications of computing and AI to understand, predict, and optimize complex systems in engineering, nanotechnology, and medicine.
Leveraging HPC Performance Engineering to Support Exascale Scientific Discovery
PRACE ADA LOVELACE AWARD FOR HPC
Sarah Neuwirth (Goethe University Frankfurt, Germany)
HPC applications are evolving to include not only traditional bulk-synchronous, scale-up modeling and simulation workloads, but also scale-out workloads such as artificial intelligence, big data analytics, deep learning, and complex workflows. Given the ever-growing complexity of supercomputers and the advent of exascale computing, these trends can create a gap between expected and observed performance. Performance engineering is therefore critical to bridging this gap through reproducible benchmarking, prediction, optimization, and analysis of large-scale HPC workloads. In this talk, I will highlight the challenges and opportunities in leveraging modular HPC performance engineering to support exascale scientific discovery, introducing its key pillars: user-friendly tool infrastructures, performance modeling, automatic optimization through integration into the application and system lifecycle, and feedback and user engagement.
Sarah Neuwirth
Dr. Sarah Neuwirth is the deputy group leader of the Modular Supercomputing and Quantum Computing Group at Goethe University Frankfurt, Germany. She received her Ph.D. in Computer Science from Heidelberg University, Germany, in 2018, and was honored with the “ZONTA Science Award 2019” for her outstanding dissertation. Her research focuses on parallel file systems, modular supercomputing, performance engineering, high-performance networks, benchmarking, parallel I/O, and communication protocols. Sarah has worked in numerous research collaborations, with partners including the Jülich Supercomputing Centre, BITS Pilani, ParTec, CEA, Intel, Oak Ridge National Laboratory, and Virginia Tech.
FourCastNet: Accelerating Global High-Resolution Weather Forecasting Using Adaptive Fourier Neural Operators
ACM PAPERS – PASC23 PLENARY PAPER
Thorsten Kurth (NVIDIA Inc., Switzerland)
Extreme weather amplified by climate change is causing increasingly devastating impacts across the globe. Current physics-based numerical weather prediction (NWP) limits forecast accuracy due to high computational cost and strict time-to-solution limits. We report that a data-driven deep learning Earth system emulator, FourCastNet, can predict global weather and generate medium-range forecasts five orders of magnitude faster than NWP while approaching state-of-the-art accuracy. FourCastNet is optimized for, and scales efficiently on, three supercomputing systems (Selene, Perlmutter, and JUWELS Booster), running on up to 3,808 NVIDIA A100 GPUs and attaining 140.8 petaflops in mixed precision (11.9% of peak at that scale). The time-to-solution for training FourCastNet, measured on JUWELS Booster with 3,072 GPUs, is 67.4 minutes; in inference, FourCastNet delivers a time-to-solution roughly 80,000 times faster than state-of-the-art NWP. FourCastNet produces accurate instantaneous weather predictions up to a week in advance and enables enormous ensembles that could be used to improve predictions of rare weather extremes.
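As a back-of-the-envelope check, the percent-of-peak figure in the abstract can be reproduced in a few lines. This is a minimal sketch, assuming the NVIDIA A100's dense mixed-precision Tensor Core peak of 312 TFLOP/s per GPU (a figure not stated in the abstract itself):

# Sanity-check the reported 11.9%-of-peak figure.
n_gpus = 3808                        # A100 GPUs at the largest scale reported
peak_per_gpu_tflops = 312.0          # assumed A100 mixed-precision Tensor Core peak
achieved_pflops = 140.8              # sustained throughput reported in the abstract

aggregate_peak_pflops = n_gpus * peak_per_gpu_tflops / 1000.0  # ~1188 PFLOP/s
print(f"{achieved_pflops / aggregate_peak_pflops:.1%}")        # -> 11.9%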
Thorsten Kurth
Thorsten works at NVIDIA on optimizing scientific codes for GPU-based supercomputers. His main focus is on providing optimized deep learning applications for HPC systems, including the MLPerf HPC benchmark applications. This work includes end-to-end optimizations, from the input pipeline (including I/O tuning) to distributed training. In 2018 he was awarded the Gordon Bell Prize for the first deep learning application to achieve more than 1 exaop of peak performance, on the OLCF Summit HPC system. In 2020 he was awarded the Gordon Bell Special Prize for HPC-based COVID-19 research, for efficiently generating large ensembles of scientifically relevant spike trimer conformations using an AI-driven molecular dynamics simulation workflow.