FluidEnv

class fluidgym.envs.fluid_env.FluidEnv(adaptive_cfl: float, dt: float, step_length: float, episode_length: int, ndims: int, use_marl: bool, dtype: dtype = torch.float32, cuda_device: device | None = None, cpu_device: device | None = None, auto_render: bool = False, load_initial_domain: bool = True, load_domain_statistics: bool = True, randomize_initial_state: bool = True, enable_actions: bool = True, differentiable: bool = False)[source]

Bases: ABC, FluidEnvLike

Abstract base class for FluidGym environments.

It provides common functionality for all FluidGym environments, such as managing the simulation, rendering, and saving/loading initial domains.

Raises:
  • RuntimeError – If CUDA is not available.

  • ValueError – If ndims is not 2 or 3.

Notes

The environment must be reset before calling step(). The initial domain must be generated by calling init() before using the environment for training or evaluation. The initial domain is saved to disk when init() is called, and loaded when reset() is called.
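
A minimal lifecycle sketch of the sequence described above. FluidEnv is abstract, so env below stands for an instance of any concrete subclass; its construction is environment-specific and omitted here:

    # `env` is an instance of a concrete FluidEnv subclass (construction omitted).
    env.init()                     # generate and save the initial domain (once)
    obs, info = env.reset(seed=0)  # load the initial domain and start an episode

    for _ in range(env.episode_length):
        action = env.sample_action()
        obs, reward, terminated, truncated, info = env.step(action)
        if terminated or truncated:
            obs, info = env.reset()  # required before the next episode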

property action_space: Box

The action space of the environment.

property cuda_device: device

The CUDA device used by the environment.

detach() None[source]

Detach all tensors in the simulation from the computation graph.

property differentiable: bool

Whether the environment is differentiable.
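
When differentiable is True, gradients can flow through step(), and detach() can be used to truncate backpropagation through time between optimization windows. A sketch, assuming env is a concrete environment constructed with differentiable=True and controller is a torch.nn.Module mapping observations to actions (both hypothetical, as is the "obs" observation key):

    import torch

    optimizer = torch.optim.Adam(controller.parameters(), lr=1e-3)
    obs, info = env.reset(seed=0)

    for _ in range(10):                  # ten truncated-BPTT windows
        loss = torch.zeros((), device=env.cuda_device)
        for _ in range(5):               # five steps per window
            action = controller(obs["obs"])  # "obs" key is an assumption
            obs, reward, terminated, truncated, info = env.step(action)
            loss = loss - reward.mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        env.detach()                     # cut the graph between windows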

property dt: float

The simulation time step.

property episode_length: int

The number of steps per episode.

get_pressure() Tensor[source]

Get the pressure field of the fluid.

Returns:

The pressure field as a tensor of shape (1, NDIM, H, W) for 2D or (1, NDIM, H, W, D) for 3D.

Return type:

torch.Tensor

get_state() EnvState[source]

Get the current state of the environment.

Returns:

The current state of the environment.

Return type:

EnvState

get_uncontrolled_episode_metrics() DataFrame | None[source]

Get the uncontrolled episode metrics for the current domain.

Note: This method returns the metrics for the currently loaded (non-randomized) initial domain. If the environment has been reset with randomization, the metrics may not correspond to the current state.

Returns:

The uncontrolled episode metrics, or None if not available.

Return type:

pd.DataFrame | None
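
A short sketch using this baseline for comparison; DataFrame.describe() is only illustrative, and the actual column names are given by the metrics property:

    baseline = env.get_uncontrolled_episode_metrics()
    if baseline is not None:
        print(env.metrics)          # names of the tracked metrics
        print(baseline.describe())  # summary statistics of the baseline run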

get_velocity() Tensor[source]

Get the velocity field of the fluid.

Returns:

The velocity field as a tensor.

Return type:

torch.Tensor

get_vorticity() Tensor[source]

Get the vorticity field of the fluid.

Returns:

The vorticity field as a tensor.

Return type:

torch.Tensor
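
The three field getters can be combined for diagnostics; a quick sketch (exact shapes depend on the grid resolution and ndims):

    obs, info = env.reset()
    p = env.get_pressure()   # e.g. (1, NDIM, H, W) in 2D, per the docs above
    u = env.get_velocity()
    w = env.get_vorticity()
    print(p.shape, u.shape, w.shape)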

abstract property id: str

Unique identifier for the environment.

init() None[source]

Generate and save the initial domain if it does not already exist.

abstract property initial_domain_id: str

Unique identifier for the initial domain.

load_initial_domain(idx: int, mode: EnvMode | None = None) None[source]

Load the initial domain from disk, using the current mode unless one is given.

Parameters:
  • idx (int) – Index of the initial domain to load.

  • mode (EnvMode | None) – Environment mode (‘train’, ‘val’, ‘test’). If None, uses the current mode. Defaults to None.
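
For example, to evaluate on one specific held-out domain (the index is arbitrary, and it is an assumption that a subsequent reset() keeps the loaded domain when randomization is disabled):

    env.test()                      # switch to the test split
    env.load_initial_domain(idx=3)  # load the test domain with index 3
    obs, info = env.reset(randomize=False)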

metadata = {'render_fps': 24, 'render_modes': ['rgb_array']}

property metrics: list[str]

The list of metrics tracked by the environment.

property mode: EnvMode

The current mode of the environment (‘train’, ‘val’, or ‘test’).

abstract property n_agents: int

The number of agents in the environment.

property n_sim_steps: int

The number of simulation steps per environment step.

property ndims: int

The number of spatial dimensions (2 or 3).

property observation_space: Dict

The observation space of the environment.

abstract plot(output_path: Path | None = None) None[source]

Plot the environment’s configuration.

Parameters:

output_path (Path | None) – Path to save the plot. If None, the current directory is used. Defaults to None.

plot_grid() None[source]

Plot the simulation grid.

render(save: bool = False, render_3d: bool = False, filename: str | None = None, output_path: Path | None = None) ndarray[source]

Render the current state of the environment.

Parameters:
  • save (bool) – Whether to save the rendered frame as a PNG file. Defaults to False.

  • render_3d (bool) – Whether to enable 3D rendering. Defaults to False.

  • filename (str | None) – The filename for the saved frame. If None, a default name is used. Defaults to None.

  • output_path (Path | None) – The output path to save the rendered files. If None, saves to the current directory. Defaults to None.

Returns:

The rendered frame as a numpy array.

Return type:

np.ndarray

abstract property render_shape: tuple[int, ...]

The shape of the rendered domain.

reset(seed: int | None = None, randomize: bool | None = None) tuple[dict[str, Tensor], dict[str, Tensor]][source]

Resets the environment to an initial internal state, returning an initial observation and info.

Parameters:
  • seed (int | None) – The seed to use for random number generation. If None, the current seed is used.

  • randomize (bool | None) – Whether to randomize the initial state. If None, the default behavior is used.

Returns:

A tuple containing the initial observation and an info dictionary.

Return type:

tuple[dict[str, torch.Tensor], dict[str, torch.Tensor]]
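
For example, reproducible starts can be obtained by fixing the seed and disabling randomization (determinism of the simulation backend permitting):

    obs_a, _ = env.reset(seed=42, randomize=False)
    obs_b, _ = env.reset(seed=42, randomize=False)
    # obs_a and obs_b should now contain matching tensors.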

sample_action() Tensor[source]

Sample a random action uniformly from the action space.

Returns:

A random action.

Return type:

torch.Tensor

save_gif(filename: str, output_path: Path | None = None) None[source]

Save the rendered frames as a GIF file.

Parameters:
  • filename (str) – The filename for the GIF file.

  • output_path (Path | None) – The output path to save the GIF file. If None, saves to the current directory. Defaults to None.
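
A sketch combining render() and save_gif(), assuming that frames produced by render() are buffered internally for the GIF (which the wording above suggests but does not state explicitly):

    from pathlib import Path

    obs, info = env.reset()
    for _ in range(24):
        env.step(env.sample_action())
        frame = env.render()  # returns the frame and (assumed) buffers it
    env.save_gif("episode.gif", output_path=Path("renders"))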

seed(seed: int) None[source]

Update the random seeds and seed the random number generators.

Parameters:

seed (int) – The seed to set.

set_state(state: EnvState) None[source]

Set the current state of the environment.

Parameters:

state (EnvState) – The state to set the environment to.
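
Together with get_state(), this enables snapshot-and-rollback patterns, e.g. evaluating several candidate actions from the same state (a sketch):

    snapshot = env.get_state()
    candidates = []
    for _ in range(4):
        action = env.sample_action()
        _, reward, *_ = env.step(action)
        candidates.append((action, reward))
        env.set_state(snapshot)  # roll back before trying the next action
    best_action, _ = max(candidates, key=lambda c: float(c[1].mean()))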

step(action: Tensor) tuple[dict[str, Tensor], Tensor, bool, bool, dict[str, Tensor]][source]

Run one timestep of the environment’s dynamics using the agent actions.

When the end of an episode is reached (terminated or truncated), it is necessary to call reset() to reset this environment’s state for the next episode.

Parameters:

action (torch.Tensor) – The action to take.

Returns:

A tuple containing the observation, reward, terminated flag, truncated flag, and info dictionary.

Return type:

tuple[dict[str, torch.Tensor], torch.Tensor, bool, bool, dict[str, torch.Tensor]]
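
The return tuple unpacks as follows (the per-agent reward shape under use_marl is an assumption):

    action = env.sample_action()
    obs, reward, terminated, truncated, info = env.step(action)
    # obs:        dict[str, torch.Tensor] observation
    # reward:     torch.Tensor (possibly per-agent when use_marl=True)
    # terminated: bool, episode ended by the environment's own criteria
    # truncated:  bool, episode cut short (e.g. by the step limit)
    done = terminated or truncated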

property step_length: float

The length of each environment step in non-dimensional time units.

test() None[source]

Set the environment to test mode.

property time_passed: float

The total time passed in the current episode.

train() None[source]

Set the environment to training mode.

property use_marl: bool

Whether the environment is in multi-agent reinforcement learning mode.

val() None[source]

Set the environment to validation mode.