CylinderJetEnv2D
- class fluidgym.envs.cylinder.CylinderJetEnv2D(reynolds_number: float, resolution: int, adaptive_cfl: float, dt: float, step_length: float, episode_length: int, lift_penalty: float, use_marl: bool, dtype: dtype = torch.float32, cuda_device: device | None = None, load_initial_domain: bool = True, load_domain_statistics: bool = True, randomize_initial_state: bool = True, enable_actions: bool = True, differentiable: bool = False)
Bases: CylinderEnvBase
Environment for flow around a cylinder with jet actuation.
- Parameters:
reynolds_number (float) – The Reynolds number of the flow.
resolution (int) – The resolution of the simulation grid. Corresponds to the angular resolution around the cylinder.
adaptive_cfl (float) – The adaptive CFL number to use in the simulation.
dt (float) – The time step size to use in the simulation.
step_length (float) – The non-dimensional time length of each environment step.
episode_length (int) – The number of steps per episode.
lift_penalty (float) – The penalty factor for lift in the reward calculation.
use_marl (bool) – Whether to enable multi-agent reinforcement learning mode.
dtype (torch.dtype) – The data type to use for the simulation. Defaults to torch.float32.
cuda_device (torch.device | None) – The CUDA device to use for the simulation. If None, the default CUDA device is used. Defaults to None.
load_initial_domain (bool) – Whether to load initial domain states from disk. Defaults to True.
load_domain_statistics (bool) – Whether to load domain statistics from disk. Defaults to True.
randomize_initial_state (bool) – Whether to randomize the initial state on reset. Defaults to True.
enable_actions (bool) – Whether to enable actions. If False, the environment will be run in uncontrolled mode. Defaults to True.
differentiable (bool) – Whether to enable differentiable simulation mode. Defaults to False.
References
[1] J. Rabault, M. Kuchta, A. Jensen, U. Réglade, and N. Cerardi, “Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control,” Journal of Fluid Mechanics, vol. 865, pp. 281-302, Apr. 2019, doi: 10.1017/jfm.2019.62.
[2] F. Ren, J. Rabault, and H. Tang, “Applying deep reinforcement learning to active flow control in weakly turbulent conditions,” Physics of Fluids, vol. 33, no. 3, p. 037121, Mar. 2021, doi: 10.1063/5.0037371.
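Example
The sketch below is a minimal, illustrative construction of the environment using the parameters documented above. All numeric values are placeholders, and the reset()/step() calls and the action_space attribute assume a Gymnasium-style interface; check them against the installed fluidgym version.

>>> import torch
>>> from fluidgym.envs.cylinder import CylinderJetEnv2D
>>> env = CylinderJetEnv2D(
...     reynolds_number=100.0,   # placeholder Reynolds number
...     resolution=64,           # placeholder angular resolution around the cylinder
...     adaptive_cfl=0.7,        # placeholder adaptive CFL number
...     dt=5e-4,                 # placeholder time step size
...     step_length=0.5,         # placeholder non-dimensional step length
...     episode_length=80,       # placeholder number of steps per episode
...     lift_penalty=0.2,        # placeholder lift penalty factor
...     use_marl=False,
...     dtype=torch.float32,
...     cuda_device=None,        # fall back to the default CUDA device
... )
>>> obs, info = env.reset()                          # assumed Gymnasium-style reset
>>> for _ in range(10):                              # run a few control steps
...     action = env.action_space.sample()           # assumed action_space attribute for the jets
...     obs, reward, terminated, truncated, info = env.step(action)   # assumed 5-tuple step
...     if terminated or truncated:
...         break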
- property id: str
Unique identifier for the environment.
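For illustration only: the identifier can be used to tag logs or checkpoints. Its exact string format is not documented here.

>>> run_name = f"{env.id}-seed0"   # env.id is a str; contents depend on the environment configuration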