Minisymposia

Application Optimization in an Energy Constrained Environment

Time-to-solution has historically been the primary optimization objective for HPC applications. A side effect of the resulting accelerated applications was typically reduced energy consumption. Especially in light of the current energy shortage, this side effect is quickly becoming an optimization target in its own right. But with dynamic power management and an ever-changing landscape of system architectures, it is increasingly challenging to find energy-optimal implementations for a specific application. Approaching HPC applications from the energy perspective leads to a range of interesting questions: Is asking for performance portability justified if energy footprint is the primary driver? Can we co-design a minimum-energy system geared towards a certain application domain? As HPC moves from single applications to complex coupled systems, will the same types of optimization recipes hold? And what kind of tools are needed to support the developer’s energy optimization journey? In this minisymposium we bring together experts addressing energy optimization from the application, system and HPC center points of view. With a focus on weather/climate and related applications, the presenters will discuss the impact of low-level application optimization on energy footprint, the impact of different hardware architectures and the support that is needed at the HPC center level.

Organizer(s): Peter Messmer (NVIDIA Inc.), and Fernanda Foertter (Voltron Data)

Domain: Climate, Weather and Earth Sciences


Application Perspective on SYCL, a Modern Programming Model for Performance and Portability

Heterogeneous architectures are now common in high-performance computing. As accelerator hardware gets more diverse, HPC applications must target architectures from multiple vendors and hardware generations with different capabilities and limitations. The dominance of vendor-specific and proprietary programming models makes writing portable and future-proof code challenging. Like OpenCL, SYCL is a standard not controlled by a single vendor. SYCL is still at an early stage, but there are already several fairly mature implementations of the standard from different groups, targeting a wide range of hardware. It has been adopted by several large HPC projects and is the recommended programming framework for upcoming Intel GPU-based supercomputers. SYCL allows accessing low-level hardware capabilities while providing a high-level abstraction and the ability to rely on the runtime for data movement and execution configuration choices, making it possible to write portable and performant code without compromising the ability to tune for a specific device architecture when needed. This minisymposium aims to discuss SYCL from the perspective of scientific application developers, sharing the experiences of early adopters and promoting interdisciplinary communication in software engineering for modern, performance-portable, and maintainable code.
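To give a concrete flavour of the programming model discussed in this session, the sketch below shows a minimal SYCL 2020 vector addition (an illustrative example assumed for this description, not code from any of the presented talks): buffers and accessors let the runtime handle data movement, and no explicit execution configuration is required.

#include <sycl/sycl.hpp>
#include <vector>

int main() {
    // The runtime selects a device (CPU, GPU, ...) via the default selector.
    sycl::queue q{sycl::default_selector_v};
    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
    {
        // Buffers delegate host/device data movement to the runtime.
        sycl::buffer<float> bufA{a}, bufB{b}, bufC{c};
        q.submit([&](sycl::handler& h) {
            sycl::accessor A{bufA, h, sycl::read_only};
            sycl::accessor B{bufB, h, sycl::read_only};
            sycl::accessor C{bufC, h, sycl::write_only, sycl::no_init};
            // No work-group size is given: the runtime picks the execution configuration.
            h.parallel_for(sycl::range<1>{1024}, [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    } // Leaving the scope synchronizes and copies the result back into c.
    return c[0] == 3.0f ? 0 : 1;
}

The same source can, in principle, be compiled for NVIDIA, AMD or Intel devices with the corresponding SYCL implementations, which is the kind of portability the session examines from an application perspective.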

Organizer(s): Andrey Alekseenko (KTH Royal Institute of Technology, SciLifeLab), and Szilárd Páll (KTH Royal Institute of Technology)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Are HPC Codes Ready for Exascale? An EU HPC Centre of Excellence Point of View

In June 2023, more than 15 HPC Centres of Excellence (CoEs) will be running, covering almost all the scientific domains that use HPC. Some have been active for more than six years and will be in their third edition, while others have just started. Collectively they have a unique view of the current status of HPC codes and the main issues these codes face when running at pre-exascale scale. This minisymposium aims to open a dialogue between the different domains to reveal the similarities and differences in optimizing their applications. With presentations from different CoEs, we will identify the common pitfalls and bottlenecks facing current codes and compare their solutions and approaches. Speakers from four CoEs (ChEESE, PerMedCoE, ESiWACE & CoEC) are invited to summarize the parallel performance and scalability of the application codes of their CoE, and their progress towards exascale on large-scale heterogeneous CPU and GPU systems. They are also asked to explain how they identify the main bottlenecks and problems to address, as well as the main optimizations already applied or planned.

Organizer(s): Marta Garcia-Gasulla (Barcelona Supercomputing Center), and Brian Wylie (Forschungszentrum Jülich)

Domain: Physics


Biophysics-Informed Machine Learning

Designing personalized treatment strategies requires measurable quantities (biomarkers) that relate a patient’s clinical presentation to the existence, progression, and outcome of a disease. These can often be formulated as quantities derived from biophysical models involving, for example, material deformations or fluid transport. However, the computational cost of numerically solving for these quantities can be prohibitive. These challenges limit the potential clinical impact of classical computational approaches, thus posing the need for new frameworks that reduce the time to prediction without sacrificing the physical consistency and fidelity of the inferred biomarkers. The success of machine learning methods provides a viable path to amortize the cost of these expensive simulations by training models to replicate the input-output behavior of the classical simulations. In purely data-driven approaches, large amounts of labeled data are needed to train the model without leveraging any prior knowledge about the underlying biophysics. Unfortunately, in many biological scenarios the data acquisition process can be expensive and time-consuming, limiting the amount of available training data. To address this difficulty, biophysics-informed machine learning offers a computationally efficient approach that has the potential to bridge the gap between modeling and clinical decision making.

Organizer(s): Georgios Kissas (University of Pennsylvania), and Jacob Seidman (University of Pennsylvania)

Domain: Life Sciences


Breaking the Silos to Enhance HPC Impact

This mini-symposium will offer a unique perspective at the crossroads between excellence in HPC/computational sciences and the benefits of greater diversity, equity and inclusion. The session will be built around live testimonials from speakers at different stages of their careers, each describing how diversity played and continues to play an important role in their path. While focusing on the quality of the scientific results, each presentation will highlight the importance of evolving and developing in an environment that is becoming more diverse and more inclusive. Different experiences will be shared: from Master’s student to HPC engineer, with Elsa Germann from PSI; how a Cairo University graduate who came to Switzerland to do his PhD in HPC is now working as a postdoc, with Ahmed Eleliemy from the University of Basel; and how the coordination of Exclaim, one of the most demanding exascale projects, requires diverse competences and both technical and soft skills, with Tamara Bandikova from ETH Zürich. Finally, we will hear about the career track and professional experience of Adriana Iamnitchi, a professor leading a research team in computational social data science at the University of Maastricht. Join us to see how diversity and inclusion are contributing to breaking barriers in HPC!

Organizer(s): Marie-Christine Sawley (IDEAS4HPC, ICES Foundation), Maria Girone (CERN), Florina Ciorba (University of Basel), Sadaf Alam (University of Bristol), and Cerlane Leong (ETH Zurich / CSCS)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Code as Community

Computer software, or code, is often seen as a means to an end: we create and use code to accomplish an objective, whether that objective is computational science, music, performance art, or any number of other applications. That said, code is an expression of human creativity and ingenuity, and by definition a means by which we communicate and share experiences. In this mini-symposium, we are interested in how scientific code enables scientific communities to form, evolve, and express themselves in predictable and unpredictable ways. We will address the following questions: What are the challenges of integrating code and teams across distinct communities? What are the challenges associated with growth through community adoption of code? What are the enduring characteristics of code that survive the test of time in large communities? When do software developers formally identify as providing a community service, as in the case of research software engineers? How does code communicate? The talks in this session are intended to be thought-provoking, and the goal is to encourage developers to keep a holistic perspective of their communities in mind when developing their codes.

Organizer(s): Rinku Gupta (Argonne National Laboratory), and Elaine Raybourn (Sandia National Laboratories)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Code Complete and More: Emerging Efforts to Improve Software Quality

In the book Code Complete by Steve McConnell, the author argues that working software is the only essential deliverable of a software project. At the same time, many other activities requiring diverse skills produce a variety of non-software artifacts that make high-quality software possible and assure the effective use of software in research. As software becomes more important for research, more sophisticated in its design and capabilities, more integrated into research workflows, and more useful to broader communities, additional expertise and tools become important to producing and sustaining software. In this minisymposium, we explore both possible and demonstrated successes resulting from introducing expertise in social and cognitive sciences, organizational psychology, community development and policies, and tools and processes that focus on meta-data in support of high-quality software. We have found that taking into account these aspects of scientific software development, maintenance, and use can significantly improve the value of software, enabling us to improve the quality and frequency of research results and reduce the cost of obtaining them.

Organizer(s): Michael Heroux (Sandia National Laboratories, St. John’s University)

Domain: Applied Social Sciences and Humanities


Computer-Aided Design of High-Performance Thermoelectric Materials

Heat-to-electricity conversion (one aspect of thermoelectricity) has been known for more than two centuries. However, thermoelectricity has hardly found its way to large-scale deployment due to the lack of materials with a figure of merit ZT greater than 2. One reason for this is that the figure of merit combines properties that counteract one another. Another reason is that the experimental search for new TE materials has reached a plateau in terms of chemical diversity, design, and manufacturing of materials, leading to best material ZT values of at most 1.6-1.9. A new paradigm has to be found to relaunch the discovery of TE materials, and the computer-aided design of materials could be such a paradigm. In this mini-symposium we will discuss recently developed algorithms (e.g., basin hopping, evolutionary, and genetic algorithms) that foster the discovery of new chemical structures, artificial-intelligence-based methods to predict both new structures and their properties, and their potential synergistic association to unveil a new generation of thermoelectrics and improve their efficiency.

Organizer(s): Pascal Boulet (Aix-Marseille University), and Marie-Christine Record (Aix-Marseille University)

Domain: Chemistry and Materials


Cross-Cutting Aspects of Exploiting Exascale Platforms for High-Fidelity CFD in Turbulence Research

With the arrival of exascale computing, theoretical computational performance has increased, opening up unprecedented simulation capabilities for Computational Fluid Dynamics (CFD) applications. While these systems offer high theoretical peak performance and high memory bandwidth, exploiting them efficiently requires complex programming models and significant programming investment. Furthermore, most pre-exascale and exascale systems currently planned or installed, e.g. Frontier, Leonardo and LUMI, contain a large fraction of accelerators. Thus, the challenge of porting and tuning scientific codes for these new platforms can no longer be ignored. However, established CFD codes build on years of verification and validation of their underlying numerical methods, potentially preventing a complete rewrite and rendering disruptive code changes a delicate task. Therefore, porting established codes to accelerators poses several interdisciplinary challenges, from formulating suitable numerical methods to applying sound software engineering practices to cope with disruptive code changes. The wide range of topics makes the exascale CFD transition relevant to a broader audience, extending outside the traditional fluid dynamics community. This minisymposium aims at bringing together the CFD community as a whole, from domain scientists to HPC experts, to discuss current and future challenges towards enabling exascale fluid dynamics simulations on anticipated accelerated systems.

Organizer(s): Philipp Schlatter (Friedrich-Alexander-Universität Erlangen-Nürnberg), and Niclas Jansson (KTH Royal Institute of Technology)

Domain: Engineering


Data Management across the Computing Continuum

During the last few years, applications in scientific fields such as weather forecasting and radio astronomy have evolved into complex data-centric workflows distributed over what is known as the computing continuum, i.e. all the computing resources from the Edge (sensors, …) to Cloud and HPC infrastructures. These workflows exhibit new scenarios across the domains of “modelling and simulation”, “AI”, “analytics” and the “Internet of Things” (IoT), for which advanced data management techniques have to be devised. Achieving this goal requires addressing new data-related challenges, both within the infrastructures themselves, with ever-increasing performance and flexibility requirements, and between geo-distributed systems. During our minisymposium, we will address these challenges from two perspectives. First, we will try to get an overview of the needs of scientific workflows through concrete examples such as the Square Kilometre Array (SKA) use case, whose goal is to build the world’s largest radio telescope, although these requirements are present in many other areas. Second, we will review the latest research on data management across the computing continuum, from the point of view of both architectures and software components.

Organizer(s): François Tessier (INRIA)

Domain: Computer Science, Machine Learning, and Applied Mathematics


European Advanced Computing Hubs to Accelerate Progress in Fusion R&D

Nuclear fusion, which harnesses the energy of the stars, promises nearly limitless carbon-free energy. In order for fusion to become economically viable, special reactors need to be designed. The scientific and engineering multidisciplinary challenges involved in building such reactors are enormous: the plasma core needs to be kept at millions of degrees so that fusion can occur, while preventing the reactor walls from melting from that heat and making sure that this heat bleeds out smoothly from the plasma. Currently under construction in Southern France, the ITER experiment is expected to show the feasibility of fusion energy. In order to optimize its operation, as well as that of next-generation reactors, highly optimized plasma simulation codes running on exascale-grade supercomputers are necessary. To tackle this task, the European Consortium for the Development of Fusion Energy (EUROfusion) has created three HPC Advanced Computing Hubs (ACHs). These hubs contribute to improving existing European simulation codes to enable researchers to take full advantage of the capabilities offered by new generations of supercomputers. This mini-symposium will introduce the fusion HPC programme and present current HPC software development carried out at the ACHs.

Organizer(s): Mervi Mantsinen (Barcelona Supercomputing Center), Roman Hatzky (Max Planck Institute for Plasma Physics), and Gilles Fourestey (EPFL)

Domain: Physics


Exascale Plasma Simulations, Methods and Technologies for Fusion, Plasma Accelerator and Space Physics Research

Plasmas constitute paradigmatic examples of complex physical systems, involving nonlinear, multiscale processes far from thermodynamic equilibrium. HPC and virtual prototyping play an increasingly crucial role in explaining and predicting observations as well as in designing more effective plasma systems and manufacturing processes. This provides novel pathways to reducing the risks, costs, and time often associated with the design and construction of new devices and large experimental facilities. The present mini-symposium brings together scientists, researchers, and practitioners from plasma physics, applied mathematics, and computer science (incl. HPC, computational science, software engineering, and data analytics) to discuss computational tools and methodologies needed to tackle critical grand challenges in plasma physics: optimising magnetic confinement fusion devices, developing new accelerator technologies, and predicting space weather. Meanwhile, the tools and techniques devised in these contexts are applicable to a much wider range of problems involving natural and laboratory plasmas and fluids.

Organizer(s): Stefano Markidis (KTH Royal Institute of Technology), Jeremy Johnathan Williams (KTH Royal Institute of Technology), Michael Bussmann (Helmholtz-Zentrum Dresden-Rossendorf), Sriramkrishnan Muralikrishnan (Forschungszentrum Jülich), and Andreas Adelmann (Paul Scherrer Institute)

Domain: Engineering


Galaxy: An Open Web-Based Platform for FAIR Data Analysis and Computing across Scales, Domains and Communities

The Galaxy platform is a workflow management system that has the potential to simplify the use of diverse computing infrastructures, effectively unlocking and democratising access to powerful data processing and computing capabilities. Originally developed for the bioinformatics community, but architected to be neutral to any particular science domain, Galaxy is now being deployed for researchers and professionals in the materials science, astrophysics, climate science and nuclear physics fields to optimally utilize the diverse computing infrastructures that are potentially available to these communities. In this mini-symposium, participants will explore the different aspects of enabling effective data analysis across a variety of scales, domains, and communities using the Galaxy platform. This will include presentations on deploying and scaling Galaxy infrastructure in an operational facility, the challenges in transforming science use cases into functioning workflows, and how European scientific organizations are using Galaxy to study muon science, climate science, and physics. We will also include a presentation on the Galaxy platform itself, and in particular the training and educational network that is used by a large number of researchers to promote open data analysis practices worldwide.

Organizer(s): Gregory Watson (Oak Ridge National Laboratory), and Leandro Liborio (Science and Technology Facilities Council)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Green Computing Architectures and Tools for Scientific Computing

High-Performance Computing (HPC) data centers are facing a significant challenge in terms of energy efficiency as they strive to operate within stringent power budgets and reduce environmental pollution. The computing facilities that will be built for the Square Kilometre Array Telescope (SKA) are a prime example of this challenge, as they must process massive quantities of data from thousands of antennas with a limited power budget. In addition, new hardware technologies with increasing power footprints result in power-hungry supercomputers. In light of this, new strategies, tools, and architectures are being explored to address these issues. This mini-symposium aims to gather experts from diverse research environments to present solutions and methods being investigated in state-of-the-art research. The mini-symposium will dive into the energy efficiency and carbon footprint of SPH-EXA, a smoothed particle hydrodynamics code. Then, it will focus on an improved Kernel Tuner, a generic autotuning tool for GPU applications that takes energy efficiency into account to find the best operating frequency for GPUs running SKA workloads. Finally, the mini-symposium will address lower-level optimization by presenting “exotic” solutions such as approximate-computing designs for artificial intelligence accelerators and domain-specific accelerators for biomedical workloads.

Organizer(s): Emma Tolley (EPFL), Stefano Corda (EPFL), and Jean-Guillaume Piccinali (ETH Zurich / CSCS)

Domain: Computer Science, Machine Learning, and Applied Mathematics


High Performance and High Throughput Approaches in Material Science Simulations: A European Perspective

Everything in life is made of materials; understanding materials is key to understanding the world. Simulations, the “third pillar of science”, are an essential tool for every materials and chemistry scientist. The availability of data, combined with machine learning algorithms and High Performance Computing, has made it possible to create a new, fourth pillar. However, the evolution of computer architectures has brought more powerful and diverse systems (expanding what is computationally achievable) as well as the need to create new algorithms and scalable methods. Today, grand challenges in materials discovery are approached using High Performance Computing as well as High Throughput Computing and ensemble methodologies. This minisymposium brings together the expertise built over the years by materials scientists and computational scientists in the European Centres of Excellence for HPC applications and the equivalent UK initiative. All these projects are acting at the forefront of the transition into the exascale era, with particular attention to conceiving the new generation of scientific codes. We will ask invited speakers to share strategies, lessons learnt and successes. To conclude, we will host a panel of four people discussing the role of domain-specific libraries in achieving portability, productivity and performance in complex materials science codes.

Organizer(s): Fabio Affinito (CINECA), Joost VandeVondele (ETH Zurich / CSCS), and Filippo Spiga (NVIDIA Inc.)

Domain: Chemistry and Materials


High Performance Computational Imaging across Scales, Communities and Modalities

A recent trend in the design of imaging systems consists in replacing fixed-function instrumentation with sensor networks producing multiplexed data streams. Such distributed sensing architectures generally yield rich measurements allowing for greater adaptivity and performance. Unlike with traditional imaging systems, however, the data they produce is seldom directly interpretable as an image and must be processed by computational imaging algorithms. The restoration process generally relies on powerful and universal image priors promoting specific perceptual or structural features of natural images. Despite substantial technical progress, computational imaging suffers from an adoption crisis in the applied sciences. Indeed, most methods proposed in the literature remain at the proof-of-concept stage, requiring expert knowledge to tune or use. This represents a major roadblock for the field, which experiences a significant slowdown in the adoption of state-of-the-art techniques. To accelerate the path from research prototyping to production deployment in imaging science, there is hence a strong need to rethink traditional imaging pipelines, with an emphasis on scalability (for both CPUs and GPUs), high performance computing, and modularity (for customizability). This minisymposium will foster high performance computational imaging by bringing together various research and open-source software communities and showcasing modern production imaging pipelines, both in research and industrial environments.

Organizer(s): Matthieu Simeoni (EPFL), and Jean-Paul Kneib (EPFL)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Hybrid Energy Systems: Adaptive Computing to Harmonize Energy Generation and Demand for Communities across Spatio-Temporal Scales

Data-driven computational approaches that bring computing and storage nearer to where requests originate, especially those operating on real-time sensor data, have contributed immensely to our knowledge of the impacts of climate change on our built environment and human systems. A holistic understanding of these issues demands a cross-disciplinary approach that is rigorous, based on data, and embedded in the domain. This mini-symposium focuses on bringing together experts and researchers in edge computing with knowledge of the driving issues for a discussion on the existing challenges and the advances needed to drive edge-computing deployments towards well-understood goals of sustainability, resilience, and decarbonization. The international panel of invited speakers will cover a wide range of topics, from a deeply data-driven and computational perspective on IoT, streaming data, and real-time analytics, to the traceability of data sources for deep decarbonization and their impact on policy, energy and economics. Issues of data scarcity and of inequity in the Global South will be discussed. We expect to round off the discussion with a forward-looking view of integrated communities of the future as highly controllable entities in which the built environment (buildings, vehicles, energy generation, and the grid) collectively maximizes the user’s wellbeing.

Organizer(s): Jibonananda Sanyal (National Renewable Energy Laboratory), Jennifer King (National Renewable Energy Laboratory), and Deepthi Vaidhynathan (National Renewable Energy Laboratory)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Implementation and Validation of Hybrid Earth System and Machine Learning Models

The transformative power of ML unleashed in Earth system modelling is taking shape. Recent advances in building hybrid models, combining mechanistic Earth system models grounded in physical understanding with machine learning models trained on huge amounts of data, show promising results. Full-replacement models such as FourCastNet have also made continuous progress and shown impressive performance. Hybrid models that include, for instance, ML-based parameterizations can provide a substantial speed-up or qualitative improvement if trained on high-resolution data, compared to parameterizations based on few data points. However, the ongoing implementation of such models poses technical challenges, while the acceptance of hybrid approaches and full-replacement models critically depends on the means scientists have for validating them. This minisymposium will discuss the technical challenges in building hybrid models, such as the coupling of Earth system and machine learning model components via embedded Python or MPI, and report on accomplished milestones and benchmark results. For further adoption of ML methods, it is also crucial to establish the credibility of ML-enhanced Earth system models through statistical reproducibility. The symposium will discuss questions arising around the validation of hybrid approaches, including metrics for training and evaluation that emphasize spatial features.
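As an illustration of the embedded-Python coupling mentioned above, the minimal sketch below uses the standard CPython embedding API to call an ML parameterization from compiled model code; the module name ml_param and its predict function are hypothetical placeholders, and the sketch does not represent any of the model codes presented in this session.

#include <Python.h>
#include <cstdio>

int main() {
    Py_Initialize();

    // Import the (hypothetical) Python module providing the trained ML parameterization.
    PyObject* module = PyImport_ImportModule("ml_param");
    if (!module) { PyErr_Print(); Py_Finalize(); return 1; }
    PyObject* predict = PyObject_GetAttrString(module, "predict");

    // Pass two model fields (scalars here for brevity; real couplings exchange arrays)
    // and retrieve the predicted tendency.
    PyObject* result = predict ? PyObject_CallFunction(predict, "dd", 287.0, 0.01) : nullptr;
    if (result) {
        std::printf("ML tendency: %f\n", PyFloat_AsDouble(result));
        Py_DECREF(result);
    } else {
        PyErr_Print();
    }

    Py_XDECREF(predict);
    Py_DECREF(module);
    Py_Finalize();
    return 0;
}

In practice such a coupling exchanges whole model fields rather than scalars, or sends the data to a separate ML process via MPI, which is exactly the kind of design choice the session will examine.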

Organizer(s): Tobias Weigel (German Climate Computing Centre, DKRZ), Caroline Arnold (German Climate Computing Centre, DKRZ), and Sarat Sreepathi (Oak Ridge National Laboratory)

Domain: Climate, Weather and Earth Sciences


Interdisciplinary Challenges in Multiscale Materials Modeling

Large-scale first-principles simulations play an ever-increasing role in the development of modern materials and occupy a noteworthy share of the world’s supercomputing resources. The underlying models can be remarkably complex and involve, e.g., non-linear PDEs or multi-linear algebra. Materials simulation workflows therefore commonly feature a coupling of different physical models balancing various tradeoffs between accuracy and computational cost. Data-driven approaches are well established for replacing the expensive parts of the modeling procedure with cheaper statistical surrogates, but they also induce necessary communication between experts at the various physical and modeling scales. Moreover, targeting larger and more involved materials requires improvements with respect to the efficiency, robustness and reproducibility of simulations, challenges that can only be tackled in close collaboration between mathematics, computer science and the application sciences. With this minisymposium we want to contribute to overcoming interdisciplinary barriers in materials modeling. We invite researchers from the application domain to introduce the current state and open issues of the field. Moreover, we will discuss measures related to education, outreach and software infrastructures, which foster multi-disciplinary synergies. For example, we will describe community codes which support research thrusts all the way from model problems to full-scale applications, and provide examples of recent successes.

Organizer(s): Michael Herbst (RWTH Aachen University), and Dallas Foster (Massachusetts Institute of Technology)

Domain: Chemistry and Materials


Julia for HPC: Tooling and Applications

Natural sciences and engineering applications increasingly leverage advanced computational methods to further improve our understanding of complex natural systems, using predictive modelling or data analysis. However, the flow of large amounts of data and the constant increase in spatiotemporal model resolution pose new challenges in scientific software development. In addition, high-performance computing (HPC) resources rely massively on hardware accelerators such as graphics processing units (GPUs) that need to be efficiently utilised, representing an additional challenge. Performance portability and scalability, as well as fast development on large-scale heterogeneous hardware, are crucial aspects of scientific software development that can be addressed by the capabilities of the Julia language. The goal of this minisymposium is to bring together scientists who work on, or show interest in, large-scale Julia HPC development, including but not restricted to software ecosystems and portable programming models, GPU computing, multiphysics solvers, and more. The selection of speakers, with expertise spanning from computer science to domain science, offers a unique opportunity to learn about the latest developments in Julia for HPC that drive discoveries in the Earth sciences, featuring the next generation of 3D geophysical fluid dynamics solvers leveraging unprecedented resolution.

Organizer(s): Samuel Omlin (ETH Zurich / CSCS), Michael Schlottke-Lakemper (RWTH Aachen University), and Ludovic Räss (ETH Zurich)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Large Language Models for Autonomous Discovery

This minisymposium brings together a community of scientific researchers to discuss how advances in high-performance computing (HPC) and large language models (LLMs) can accelerate scientific discovery, particularly in biological domains. LLMs have recently demonstrated powerful expressivity in domains outside of natural language, such as protein-sequence modeling, gene-sequence modeling, and drug design. However, applying LLMs to scientific domains presents numerous challenges, including the need for large-scale HPC resources, new methods for probing our understanding of models, and techniques to ground the output of LLMs in scientific literature and theory. Confirmed speakers include Professor Connor Coley (MIT), Professor Ellen Zhong (Princeton), and Professor Burkhard Rost (Technical University of Munich), who will present their research on the application of LLMs to a broad range of scientific disciplines. By discussing both domain and computational advances alongside current obstacles, the symposium aims to develop a clear research agenda for incorporating LLMs into scientific discovery.

Organizer(s): Arvind Ramanathan (Argonne National Laboratory, University of Chicago), and Austin Clyde (Argonne National Laboratory, University of Chicago)

Domain: Life Sciences


Machine Learning Techniques for Modeling Under-Resolved Phenomena in Massively Parallel Simulations: Algorithms/Frameworks/Applications

Over the last ten years there has been a profusion of scalable packages, such as TensorFlow, PyTorch, and ONNX, that also run on heterogeneous computing platforms. The ability of these tools to ingest massive amounts of training data and make predictions makes them an obvious choice for scientific machine learning (SciML). In our presentations we demonstrate how researchers in diverse areas of scientific inquiry employ these tools creatively to model small-scale phenomena in coarse-grained simulations, which are then used as predictive tools. Also, with the availability of exascale computing platforms, it is becoming clear that storing several petabytes of training data for machine learning (ML) models is not a viable option. We present ongoing research in the area of in-situ ML, where the simulation code and the ML/deep learning (DL) framework run together to generate and use the streaming simulation data, training the model and making predictions with coarser simulations. Our presentations also explore the performance of in-situ machine learning frameworks, and the portability of the generated ML models to simulation codes different from the one used to train the model.
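As a conceptual illustration of the in-situ setup described above, the minimal sketch below (written with the LibTorch C++ API; it is not any presenter's actual framework) trains a small surrogate on batches produced at every solver step, so no training data is ever written to disk; the random tensors stand in for the streaming simulation state.

#include <torch/torch.h>
#include <cstdio>

int main() {
    // Small surrogate mapping 8 coarse-grid features to 1 subgrid quantity.
    torch::nn::Sequential surrogate(
        torch::nn::Linear(8, 64), torch::nn::ReLU(), torch::nn::Linear(64, 1));
    torch::optim::Adam optimizer(surrogate->parameters(),
                                 torch::optim::AdamOptions(1e-3));

    for (int step = 0; step < 1000; ++step) {
        // Placeholder for the coupled solver: in a real in-situ run these tensors
        // would be filled from the simulation's streaming data each time step.
        torch::Tensor inputs  = torch::randn({32, 8});
        torch::Tensor targets = torch::randn({32, 1});

        optimizer.zero_grad();
        torch::Tensor loss = torch::mse_loss(surrogate->forward(inputs), targets);
        loss.backward();
        optimizer.step();

        if (step % 100 == 0)
            std::printf("step %d, loss %.4f\n", step, loss.item<float>());
    }
    return 0;
}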

Organizer(s): Ramesh Balakrishnan (Argonne National Laboratory), Kyle Felker (Argonne National Laboratory), Saumil Patel (Argonne National Laboratory), and Timothy Germann (Los Alamos National Laboratory)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Machine Learning: A Scale-Bridging Tool for Molecular Modeling

Machine learning (ML) methods have dramatically changed molecular simulations for materials and biophysical applications. They can provide highly accurate models without increasing the computational effort, i.e., a bridge between quantum, classical atomistic, and coarse-grained spatiotemporal scales. However, several pressing issues still need to be addressed, namely the transferability of ML models (to unseen configurations, molecules, and thermodynamic states) and the stability of ML-driven simulations (avoiding unphysical states, e.g., overlapping particles). To this end, several novel approaches have recently been proposed, ranging from the sophisticated construction of training databases to the incorporation of physics knowledge. This minisymposium aims to present state-of-the-art ML methods for molecular modeling and simulations. Additionally, it will provide a platform to discuss the current challenges and share knowledge and ideas across different applications and modeling scales. While the primary focus of the minisymposium is on materials science and biophysics applications, the novel methodologies tackling data scarcity and prediction uncertainty are transferable to continuum modeling and applications.

Organizer(s): Julija Zavadlav (Technical University of Munich)

Domain: Chemistry and Materials


Modern Approaches to Modeling Atmospheric Aerosols and Clouds

Atmospheric aerosols and clouds present several scientific, mathematical, and computational challenges in the development of large-scale Earth system models. Aerosols and clouds are inherently multiscale, as they characterize the effects of submicron-scale particles on observed local, regional, and global phenomena. While the understanding of aerosols and clouds and their atmospherically relevant mixed-phase physicochemical processes has recently grown dramatically, it has become clear that related software packages must be extended or replaced. Software design for these sophisticated models has an ever-greater impact on stability, extensibility to new science, and user experience. Meanwhile, exascale computing platforms offer more simulation capability but do not alleviate the curse of dimensionality. Given the high stakes of understanding our changing climate, we must rigorously answer the question of whether our models are “correct” and quantify our confidence in that answer. This symposium examines the challenges and implications inherent to two critical areas of concern in aerosol science and cloud physics: (1) the computational stability, accuracy, and fidelity of complex aerosol and cloud representations in weather and climate models; (2) software development packages that support robust, scalable, portable, and testable models.

Organizer(s): Michael Schmidt (Sandia National Laboratories), Kyle Shores (NCAR), and Nicole Riemer (University of Illinois Urbana-Champaign)

Domain: Climate, Weather and Earth Sciences


Nexus of AI and HPC for Weather, Climate, and Earth System Modelling

Accurately and reliably predicting weather and climate change and associated extreme weather events is critical to planning for disastrous impacts well in advance and to adapting to sea level rise, ecosystem shifts, and food and water security needs. The ever-growing demands of high-resolution weather and climate modeling require exascale systems. Simultaneously, petabytes of weather and climate data are produced from models and observations each year. Artificial Intelligence (AI) offers novel ways to learn predictive models from complex datasets, at scale, that can benefit every step of the workflow in weather and climate modeling: from data assimilation to process emulation to numerical simulation acceleration to ensemble prediction. Further, how do we make the best use of AI to build or improve Earth digital twins for a wide range of applications, from extreme weather to renewable energy, including at highly localized scales such as cities? The next generation of breakthroughs will require a true nexus of HPC and large-scale AI, bringing many challenges and opportunities. This mini-symposium will delve into the challenges and opportunities at this nexus. Presenters will describe scientific and computing challenges and the development of efficient and scalable AI solutions for weather and climate modeling.

Organizer(s): Karthik Kashinath (NVIDIA Inc., Lawrence Berkeley National Laboratory), and Peter Dueben (ECMWF)

Domain: Climate, Weather and Earth Sciences


Novel Cloud-Based High Performance Computing and Artificial Intelligence Approaches to Emerging Computational Problems in Drug Design and Material Science

The cloud is providing increasing value in the pharma and material science domains. It enables access to a broad range of HPC and AI services and eliminates the need to lock in a particular hardware choice for a long period of time, especially when the physical location of the hardware resources becomes less important provided certain security, compliance and cost requirements are met. In order to take full advantage of the wide range of heterogeneous architectures available, it is critical to continuously optimize and adapt algorithms and approaches to the changing computational landscape. In our minisymposium we will showcase four examples of innovative approaches to HPC and AI from the fields of drug design and material science. We will demonstrate a computing-as-a-service approach to the unattended, system-agnostic prediction of the thermodynamic stability of amorphous drug systems, and a web-based platform that gives experimentalists access to computational workflows directly from their laboratory workstation. Finally, we will present an automated workflow for reaction network exploration using a hierarchical AI-driven approach, illustrated by a catalyst for the asymmetric reduction of ketones, as well as a crystal structure prediction study performed entirely in the cloud.

Organizer(s): Lukasz Miroslaw (Microsoft Research), and Wolfgang De Salvador (Microsoft Research)

Domain: Chemistry and Materials


Performance in I/O and Fault Tolerance for Scientific Applications

In the exascale computing era, the interaction of modern scientific applications with I/O resources is increasingly complex and increasingly important to application performance. Challenges in I/O performance are ubiquitous, from fault tolerance and resilience to coupled application workflows to heterogeneous and task-based systems. The purpose of this minisymposium is to facilitate a discussion of these challenges and of recent novel research in high performance computing (HPC) addressing them. Topics discussed will include research in the areas of fault tolerance, heterogeneous data, and workflow or coupled applications, with an emphasis on computing resource, application, and data heterogeneity. We hope to enable a conversation about how these techniques can be used across various application use cases and thereby advance the state of the art in I/O performance. SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.

Organizer(s): Nicolas Morales (Sandia National Laboratories), Shyamali Mukherjee (Sandia National Laboratories), and Matthew Curry (Sandia National Laboratories)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Performance Portability Solutions beyond C++ to Support Future Workflows

Computing at large scales has become extremely challenging due to increasing heterogeneity in both hardware and software. A positive feedback loop exists in which more scientific insight leads to more complex solvers, which in turn need more computational resources. More and more scientific workflows need to tackle a range of scales and use machine learning (ML) and artificial intelligence (AI) intertwined with more traditional numerical modeling methods, placing more demands on computational platforms. These constraints indicate a need to fundamentally rethink the way computational science is done and the tools that are needed to enable these complex workflows. It is not obvious that the current set of C++-based solutions will suffice, or that relying exclusively upon C++ is the best option, especially because several newer languages and boutique solutions offer more robust design features for tackling the challenges of heterogeneity. This two-part minisymposium will include presentations about languages and heterogeneity solutions that are not tied to C++, and that offer options beyond template metaprogramming and parallel-for for performance and portability. One slot will be reserved for open discussion and exchange of ideas.

Organizer(s): Anshu Dubey (Argonne National Laboratory, University of Chicago)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Porting of Applications to AMD GPUs: Lessons Learned

Accelerators, namely GPUs, have become the workhorse for floating-point- and bandwidth-intensive applications. Software development efforts in this domain were originally focused on the CUDA programming model tailored for NVIDIA GPUs. In 2016, AMD introduced HIP, a C++ runtime API and kernel language that allows developers to create portable applications. Several projects to port scientific libraries and applications to AMD GPUs have started as large supercomputers equipped with AMD GPUs (e.g. Frontier in the USA and LUMI in Finland) enter production at several supercomputing centers. Moreover, AMD has led a substantial effort to develop a comprehensive ecosystem for AMD GPUs, including compilers and profiling and debugging tools. For supercomputers provided by HPE, the HPE Cray Programming Environment also offers tools for compilation and profiling, as well as optimized libraries for fast GPU-aware MPI communication and, most notably, an OpenACC implementation for AMD GPUs. The goal of the minisymposium is to gather researchers and developers to discuss their experiences with application development and porting to AMD GPU devices.
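For readers unfamiliar with HIP, the following minimal sketch (an illustrative example assumed here, not code from any of the ported applications discussed in the session) shows how closely a HIP kernel and its launch mirror the CUDA style, which is what makes much of the porting mechanical; error checking is omitted for brevity.

#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// The kernel syntax (__global__, blockIdx, threadIdx) is the same as in CUDA,
// so existing CUDA kernels typically need few or no changes.
__global__ void axpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    float *dx = nullptr, *dy = nullptr;
    hipMalloc((void**)&dx, n * sizeof(float));
    hipMalloc((void**)&dy, n * sizeof(float));
    hipMemcpy(dx, x.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, y.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // hipcc accepts the familiar triple-chevron launch syntax.
    axpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

    hipMemcpy(y.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(dx);
    hipFree(dy);

    std::printf("y[0] = %f\n", y[0]); // expected: 5.0
    return 0;
}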

Organizer(s): Alfio Lazzaro (HPE), Aniello Esposito (HPE), and Samuel Antao (AMD)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Research Software Engineers (RSEs) in HPC

Computational science and scientific computing are dependent on software, and this software is dependent on the people who develop and maintain it. In the past, software was primarily developed by “hero” researchers who knew the research area and enough programming techniques to build a code that they and sometimes others could use. Software today, however, is larger, less monolithic, and more complex. Additionally, new tools (e.g., version control systems, CI/CD, etc.) add a layer of abstraction to combat the growing complexity and to facilitate development. This leads to software that is developed by teams, including people who understand the research topic, people who understand the research software ecosystem (scientists and engineers), and people who understand research software engineering best practices (research software engineers or RSEs). Over the past 10 years, this RSE role has been identified and defined, with almost 10,000 self-identified RSEs and RSE organizations within many universities, national laboratories, and HPC centers. This session will share knowledge about RSE challenges with researchers who do or could work with RSEs, developers who know (or don’t know) that they are RSEs, and managers who do or may hire RSEs, including faculty, group leaders, and HPC center managers.

Organizer(s): Daniel Katz (University of Illinois Urbana-Champaign), Miranda Mundt (Sandia National Laboratories), and Juan Herrera (University of Edinburgh)

Domain: Applied Social Sciences and Humanities


Robust, Accurate, and Scalable Algorithms for Coupling in Earth System Models

Coupled Earth System Model (ESM) simulations involve computationally complex models of the atmosphere, ocean, land surface, river runoff, sea ice, and other components which must act as a single coupled system, increasing the complexity exponentially and requiring extreme-scale computational power. Rigorous spatial coupling between components in ESMs involves field transformations and communication of data across multiresolution grids while preserving key attributes of interest such as global integrals and local features. Additionally, the accurate treatment of coupled nonlinearities between components, with appropriate selection of the time step size for exchanging field data, can determine the overall performance of the solvers. Understanding and controlling the dominant sources of errors in coupled climate systems, in addition to finding the right balance between numerics and computational performance, remains key to creating reliable and high-performing coupled physics infrastructures for climate models. This balance is critically important as very high-resolution “storm-resolving” Earth System Models are beginning to appear and are running on thousands of heterogeneous nodes. In this interactive mini-symposium, we bring together computational and climate scientists to present ongoing developments on accurate and scalable Earth system couplers for climate simulations, with focused discussions on performance hurdles in pre-exascale and exascale systems.

Organizer(s): Vijay Mahadevan (Argonne National Laboratory, TechTrans International Inc), and Robert Jacob (Argonne National Laboratory)

Domain: Climate, Weather and Earth Sciences


Scientific Visualization of Big Data

Data acquisition and computing systems have evolved to produce large amounts of data, but most of these datasets contain small-scale features that cannot simply be aggregated to be understood. While these small features are usually very hard to detect automatically, the human eye can do it almost instantly, provided that the data is presented in a visually efficient way. Visualizing data in a human-readable way not only helps with data comprehensibility, but also serves both data analysis and science communication. However, visualization tools and techniques need to keep pace with both increasing data sizes and the diverse needs of the scientific community. This is why a special effort has gone into developing automatic data visualization for as long as computing has existed, and it is becoming more and more important today. This minisymposium aims at gathering practitioners and researchers in the field of scientific big data visualization to discuss current needs and available technologies, and to initiate new collaborations and ideas. It will likely emerge that similar problems have to be solved across different fields, especially regarding performance and interface design. The minisymposium will motivate the pooling of the efforts needed to tackle those problems.

Organizer(s): Florian Cabot (EPFL), and Emma Tolley (EPFL)

Domain: Computer Science, Machine Learning, and Applied Mathematics


Simulating Fundamental Theories across Scales: Challenges and Interdisciplinary Solutions

This mini-symposium will explore new directions in computing motivated by developments in the simulation of fundamental theories, including Lattice Quantum Chromodynamics (QCD), with a focus on the interdisciplinary aspects. With a mix of speakers from within and outside the lattice community, this mini-symposium will highlight how lattice QCD simulations will provide a unique test ground for a new generation of computational science methods, pushing hardware and software to extremes at the forefront of hybrid computing, including heterogeneous architectures at exascale, the convergence of AI and HPC, and the incorporation of quantum computing in a modular simulation environment. The mini-symposium will address some of the challenges that are currently under scrutiny in the Lattice Field Theory community, with an emphasis on the interdisciplinary and algorithmic aspects rather than the physics results. The urgent imperative to achieve efficient, sustainable computing at scale will feature in talks and discussion.

Organizer(s): Luigi Del Debbio (University of Edinburgh), and Sinead Ryan (Trinity College Dublin)

Domain: Physics


Technical Challenges and Opportunities for Digital Twins of the Earth System

Digital Twins of Earth encapsulate the latest science and technology advances to provide near-real-time information on extremes and climate change adaptation in a wider digital environment, where users can interact, modify and ultimately create their own tailored information. Recent work has demonstrated that global, coupled storm-resolving (or km-scale) simulations are feasible and can contribute to building such information systems. Advances in Earth system modelling, supercomputing, and the ongoing adaptation of weather and climate codes for accelerators are all crucial building blocks in enabling such endeavours. Going from high-resolution simulations to fully featured Digital Twins requires new workflows and technical solutions to manage computational and data scales and to offer the envisaged user experience. This multi-part minisymposium aims to discuss the technical challenges of such endeavours, spanning from HPC adaptation and the use of new programming paradigms for effective utilization of modern accelerator architectures, to data handling, visualization and workflows.

Organizer(s): Balthasar Reuter (ECMWF)

Domain: Climate, Weather and Earth Sciences


Towards Weather and Climate Digital Twins at the Example of the ICON and IFS-FESOM Models

With modern HPC machines becoming increasingly diverse in architecture and vendors, the numerical and climate modeling community faces numerous challenges. However, these new systems also offer exciting possibilities, allowing for the first time climate simulations at km-scales, which enables the direct simulation of important processes like storms and ocean eddies. The accompanying increase in code complexity needs to be tackled to scale the code-development process. Several national and international projects are working on the modernization of climate models such as the ICON and IFS-FESOM models. As a first step, directive-based approaches are applied to enable the models to run on current pre-exascale systems, which allows us to present performance results on the pre-exascale systems LUMI and JUWELS Booster. To pave the way for the future and towards weather and climate digital twins, it is important to mitigate vendor lock-in and favor (performance) portability and scalable development. Therefore, we also present multiple aspects of the modernization of complex, critical code bases using the example of climate simulations. As such, we believe that this mini-symposium will be useful not only to the climate community, but also to many other simulation and application communities that face similar challenges.

Organizer(s): Xavier Lapillonne (MeteoSwiss), Hendryk Bockelmann (German Climate Computing Centre), and Hartwig Anzt (University of Tennessee)

Domain: Climate, Weather and Earth Sciences


Tuning Low-Overhead Node-Level Load Balancing for Computational Science and Engineering Simulations on Current and Emerging Supercomputers

Inter-node dynamic load balancing speeds up scientific simulations on a supercomputer. Given that nodes of emerging supercomputers have a much larger number and variety of processors than previous supercomputers, as well as the continued need for scalability of applications, intra-node load balancing is increasingly needed. Repurposing inter-node load balancing techniques for intra-node load balancing is not practical, due to heavyweight processor virtualization and because the costs incurred by load balancing itself, e.g., data movement from task migration, are more intricate and harder to predict. Node-level load balancing requires novel low-cost load balancing strategies and empirical auto-tuning of such strategies. This mini-symposium brings forward state-of-the-art research on such node-level load balancing at the level of (1) numerical algorithms via the SLATE library, (2) adaptive and intelligent runtime systems via Charm++’s node-level load balancing, (3) compilers via affinity and loop transformations in LLVM’s OpenMP, and (4) low-level system software for supporting task parallelism on CPUs and GPUs. By attending this mini-symposium, one can expect lively discussions, research exchanges, and ideas for future work between the audience and presenters on support for node-level load balancing and its synergy with other techniques within HPC software.

Organizer(s): Vivek Kale (Sandia National Laboratories), and Nicolas Morales (Sandia National Laboratories)

Domain: Computer Science, Machine Learning, and Applied Mathematics