Deep-Reinforcement-Learning-Based Drag Reduction in Turbulent Channel Flows
Description: We introduce a reinforcement-learning (RL) environment to design and benchmark control strategies aimed at reducing drag in turbulent fluid flows enclosed in a channel. The environment provides a framework for computationally efficient, parallelized, high-fidelity fluid simulations, ready to interface with established RL agent programming interfaces. This allows testing existing deep reinforcement learning (DRL) algorithms against a complex, turbulent physical system. The control is applied in the form of blowing and suction at the wall, while the observable state is defined as the velocity fluctuations at a given distance from the wall. Given the complex nonlinear nature of turbulent flows, the control strategies proposed so far in the literature are physically grounded but too simple. DRL, by contrast, enables leveraging high-dimensional data to design advanced control strategies. In an effort to establish a benchmark for testing data-driven control strategies, we compare opposition control, a state-of-the-art turbulence-control strategy from the literature, against a commonly used DRL algorithm, deep deterministic policy gradient (DDPG). Our results show that DRL leads to 43% and 30% drag reduction in a minimal and a larger channel (at a friction Reynolds number of 180), respectively, outperforming classical opposition control by around 20 and 10 percentage points, respectively.
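The opposition-control baseline mentioned above actuates the wall with blowing and suction that opposes the wall-normal velocity sensed at a detection plane some distance from the wall. A minimal sketch of that rule, written as a hypothetical NumPy function (the function name, the gain parameter, and the zero-net-mass-flux correction are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def opposition_control(v_plane, gain=1.0):
    """Opposition control: set the wall blowing/suction to oppose the
    wall-normal velocity v_plane sensed at a detection plane y_d.

    v_plane : 2D array of wall-normal velocity fluctuations (x, z)
    gain    : illustrative actuation amplitude (assumption)
    """
    actuation = -gain * v_plane
    # Remove the spatial mean so the net mass flux through the wall is zero,
    # a common constraint in wall-actuated channel-flow control.
    return actuation - actuation.mean()
```

A DRL agent such as DDPG replaces this fixed linear rule with a learned policy mapping the same sensed velocity field to a wall-actuation field.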
Time: Monday, June 26, 14:30 - 15:00 CEST
Event Type: Computer Science, Machine Learning, and Applied Mathematics