CylinderJetEnv3D
- class fluidgym.envs.cylinder.CylinderJetEnv3D(n_jets: int, reynolds_number: float, resolution: int, dt: float, adaptive_cfl: float, step_length: float, episode_length: int, lift_penalty: float, local_obs_window: int, use_marl: bool, local_reward_weight: float | None, local_2d_obs: bool = False, dtype: dtype = torch.float32, cuda_device: device | None = None, load_initial_domain: bool = True, load_domain_statistics: bool = True, randomize_initial_state: bool = True, enable_actions: bool = True, differentiable: bool = False)[source]
Bases: CylinderEnvBase3D

Environment for flow around a cylinder with jet actuation.
This environment extends the 2D jet cylinder environment to 3D, supporting multiple jet actuators distributed along the cylinder. Each jet can be controlled independently, enabling multi-agent reinforcement learning scenarios; a usage sketch follows the parameter list.
- Parameters:
n_jets (int) – The number of jet actuators (or agents, in the multi-agent case) distributed along the cylinder.
reynolds_number (float) – The Reynolds number for the simulation.
resolution (int) – The angular resolution of the cylinder boundary.
dt (float) – The time step size for the simulation.
adaptive_cfl (float) – The adaptive CFL number for time step adjustment.
step_length (float) – The physical time duration of each environment step.
episode_length (int) – The number of steps per episode.
lift_penalty (float) – The penalty factor for lift in the reward calculation.
local_reward_weight (float | None, optional) – Weighting factor for local rewards in multi-agent settings. Must be set when use_marl is True. Defaults to None.
local_2d_obs (bool, optional) – Whether to use 2D local observations (velocity in x and y directions only). Defaults to False.
use_marl (bool) – Whether to enable multi-agent reinforcement learning mode.
dtype (torch.dtype, optional) – The data type for the simulation tensors. Defaults to torch.float32.
load_initial_domain (bool, optional) – Whether to load the initial domain from file. Defaults to True.
load_domain_statistics (bool, optional) – Whether to load precomputed domain statistics. Defaults to True.
randomize_initial_state (bool, optional) – Whether to randomize the initial state of the simulation. Defaults to True.
enable_actions (bool, optional) – Whether to enable action application in the environment. Defaults to True.
differentiable (bool, optional) – Whether to enable differentiable simulation. Defaults to False.
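Example

A minimal usage sketch. The constructor keywords follow the signature above; all numeric values are illustrative only, and the reset()/step() calls assume a Gymnasium-style interface, which this page does not document, so the exact return structure (especially in multi-agent mode) may differ.

import torch
from fluidgym.envs.cylinder import CylinderJetEnv3D

# Illustrative configuration; suitable values depend on the study
# (e.g. the referenced works consider Re_D = 3900).
env = CylinderJetEnv3D(
    n_jets=10,
    reynolds_number=3900.0,
    resolution=64,
    dt=5e-4,
    adaptive_cfl=0.9,
    step_length=0.25,
    episode_length=200,
    lift_penalty=0.2,
    local_obs_window=8,
    use_marl=True,
    local_reward_weight=0.5,
)

obs, info = env.reset()                    # assumed Gymnasium-style return
for _ in range(200):
    action = torch.zeros(env.n_agents)     # one scalar per agent (assumed action layout)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break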
References
[1] P. Suárez et al., “Active Flow Control for Drag Reduction Through Multi-agent Reinforcement Learning on a Turbulent Cylinder at Re_D = 3900,” Flow Turbulence Combust, vol. 115, no. 1, pp. 3-27, June 2025, doi: 10.1007/s10494-025-00642-x.
[2] P. Suárez et al., “Flow control of three-dimensional cylinders transitioning to turbulence via multi-agent reinforcement learning,” Commun Eng, vol. 4, no. 1, p. 113, June 2025, doi: 10.1038/s44172-025-00446-x.
- property id: str
Unique identifier for the environment.
- property n_agents: int
The number of agents in the environment.
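In multi-agent mode these properties are convenient for sizing per-agent buffers. A small sketch, continuing the example above and assuming one agent per jet with one scalar command each (an assumption, not stated on this page):

import torch

print(env.id)                          # string identifier of this environment variant
actions = torch.zeros(env.n_agents)    # assumed: one scalar jet command per agent
local_rewards = torch.zeros(env.n_agents)  # buffer for per-agent (local) rewards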