BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230831T095754Z
LOCATION:Dischma
DTSTART;TZID=Europe/Stockholm:20230626T163000
DTEND;TZID=Europe/Stockholm:20230626T183000
UID:submissions.pasc-conference.org_PASC23_sess163@linklings.com
SUMMARY:MS2A - Machine Learning Techniques for Modeling Under-Resolved Phe
 nomena in Massively Parallel Simulations: Algorithms/Frameworks/Applicatio
 ns (Part 2/2)
DESCRIPTION:Minisymposium\n\nOver the last ten years there has been a prof
 usion of scalable packages, such as TensorFlow, PyTorch, and ONNX, that al
 so run on heterogeneous computing platforms. The ability of these tools to
  ingest massive amounts of training data and make predictions makes them
  an obvious choice as tools for scientific machine learning (SciML). In our
  presentations we demonstrate how researchers in diverse areas of scientif
 ic inquiry employ these tools creatively to model small-scale phenomena in
  coarse-grained simulations, which are then used as predictive tools. Also,
  with the availability of exascale computing platforms, it is becoming clear
  that storing several petabytes of training data for machine learning (ML)
  models is not a viable option. We present ongoing research in the area
  of in
 -situ ML, where the simulation code and the ML and deep learning (DL)
  framework are run together to generate and use the streaming simulation
  data to train the model and make predictions using coarser simulations.
  Our pre
 sentations also explore the performance of in-situ machine learning framew
 orks, and the portability of the generated ML models to simulation codes t
 hat are different from the one that was used to train the model.\n\nGenera
 tive Modeling and Smarter Sampling for Lattice Gauge Theories\n\nIn this w
 ork we describe how recent advancements in generative modeling have contri
 buted to simulations in lattice gauge theory, and discuss some ongoing wor
 k in this direction. In particular, we are interested in generating indepe
 ndent (lattice) gauge configurations, distributed according to the de...\n
 \n\nSam Foreman, James Osborn, and Xiao-Yong Jin (Argonne National Laborat
 ory)\n---------------------\nMiniGAP: A Proxy App for ML Prediction of Mol
 ecular Properties\n\nOne could argue that AI/ML has disrupted chemistry an
 d materials science, as well as other disciplines, in part, due to the syn
 ergy of accessible databases, accelerated computing resources, and communi
 ty-supported and open codes. Three common elements found in AI/ML in chemi
 stry research are data co...\n\n\nAlvaro Vazquez-Mayagoitia and Murat Kece
 li (Argonne National Laboratory)\n---------------------\nPredictive Scale-
 Bridging Simulations through Active Learning\n\nThroughout computational s
 cience, there is a growing need to utilize the continual improvements in r
 aw computational horsepower to achieve greater physical fidelity through s
 cale-bridging rather than brute-force increases in the number of mesh
  elements. F
 or instance, quantitative predictions of transport i...\n\n\nTimothy Germa
 nn (Los Alamos National Laboratory)\n---------------------\nPhysics-Inform
 ed Machine Learning for Reduced Lagrangian Modeling of Turbulence: Lagrang
 ian LES\n\nObtaining accurate numerical solutions of turbulent flows with 
 Direct Numerical Simulation (DNS) is intractable for most practical applic
 ations. Thus, building efficient, accurate, and generalizable reduced-orde
 r models for turbulent flows remains of great interest; however, this dema
 nds new and cre...\n\n\nMichael Woodward (Los Alamos National Laboratory, 
 The University of Arizona); Yifeng Tian and Chris Fryer (Los Alamos Nation
 al Laboratory); Misha Stepanov (The University of Arizona); Daniel Livescu
  (Los Alamos National Laboratory); and Misha Chertkov (The University of A
 rizona)\n\nDomain: Computer Science, Machine Learning, and Applied Mathema
 tics\n\nSession Chairs: Riccardo Balin (Argonne National Laboratory) and
  Ramesh Balakrishnan (Argonne National Laboratory)
END:VEVENT
END:VCALENDAR
