BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230831T095746Z
LOCATION:Hall
DTSTART;TZID=Europe/Stockholm:20230627T193000
DTEND;TZID=Europe/Stockholm:20230627T213000
UID:submissions.pasc-conference.org_PASC23_sess116_pos123@linklings.com
SUMMARY:P10 - Application of Deep Learning and Reinforcement Learning to B
 oundary Control Problems
DESCRIPTION:Poster\n\nZenin Easa Panthakkalakath and Juraj Kardoš (Univers
 ità della Svizzera italiana) and Olaf Schenk (Università della Svizzera i
 taliana, ETH Zurich)\n\nMany scientific problems, such as fluid dynamics 
 problems involving drag reduction and temperature control with a desired 
 flow pattern, rely on optimal boundary control algorithms. These forward 
 solves are performed over many simulation timesteps, so a method that sol
 ves the boundary control problem with fewer computations would expedite t
 hese simulations. The goal of the boundary control problem is, in essence
 , to find the optimal boundary values such that the values in the enclose
 d domain are as close as possible to the desired values. Traditionally, t
 he solution is obtained with nonlinear optimization methods, such as inte
 rior point, whose computational bottleneck is the large linear systems in
 volved. Our objective is to use deep learning methods to solve boundary c
 ontrol problems faster than traditional solvers. We approach the problem 
 with both supervised and unsupervised learning techniques. In supervised 
 learning, we use traditional solvers to generate training, testing and va
 lidation data, and use Convolutional Neural Networks and/or Spatial Graph
  Convolutional Networks. In unsupervised learning, we use reinforcement l
 earning, wherein the reward function depends on the network prediction, t
 he desired profile, the governing differential equation and the constrain
 ts. The computational experiments are performed on GPU-enabled clusters, 
 demonstrating the viability of this approach.
END:VEVENT
END:VCALENDAR