This notebook steps you through setting up fluid simulations in Φ_{Flow} and using automatic differentiation (via PyTorch, TensorFlow or Jax) to optimize them.

Uncomment and execute the cell below to install the Φ_{Flow} Python package with pip.

In [1]:

```
# !pip install --quiet phiflow
from phi.flow import *
```

Φ_{Flow} is vectorized but object-oriented, i.e. data are represented by Python objects that internally use tensors.

First, we create grids for the quantities we want to simulate. For this example, we require a velocity field and a smoke density field. We sample the smoke field at the cell centers and the velocity in staggered form.

In [2]:

```
smoke = CenteredGrid(0, extrapolation.BOUNDARY, x=32, y=40, bounds=Box[0:32, 0:40]) # sampled at cell centers
velocity = StaggeredGrid(0, extrapolation.ZERO, x=32, y=40, bounds=Box[0:32, 0:40]) # sampled in staggered form at face centers
```

Additionally, we want to add more smoke every time step. We create the `INFLOW` field from a circle (2D `Sphere`) which defines where hot smoke is emitted. Furthermore, we are interested in running the simulation for different inflow locations.

Φ_{Flow} supports data-parallel execution via *batch dimensions*. When a quantity has a batch dimension of size *n*, operations involving that quantity are performed *n* times simultaneously and the result also has that batch dimension. Here we add the batch dimension `inflow_loc`.

For an overview of the dimension types, see the documentation or watch the introductory tutorial video.
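Conceptually, a batch dimension behaves much like a leading array axis that every operation broadcasts over. The sketch below is a plain-NumPy analogy of this idea (it is not the Φ_{Flow} API, and the inflow positions are only illustrative):

```python
import numpy as np

# NumPy analogy: a batch dimension of size 4 acts like a leading axis.
inflow_x = np.array([4., 8., 12., 16.])   # four illustrative inflow x-positions
fields = np.zeros((4, 32, 40))            # four independent 32x40 grids
fields += inflow_x[:, None, None]         # one vectorized op updates all four at once
print(fields.shape)  # (4, 32, 40)
```

Φ_{Flow} extends this idea with named, typed dimensions, so the broadcasting rules do not depend on axis order.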

In [3]:

```
INFLOW_LOCATION = tensor([(4, 5), (8, 5), (12, 5), (16, 5)], batch('inflow_loc'), channel('vector'))
INFLOW = 0.6 * CenteredGrid(Sphere(center=INFLOW_LOCATION, radius=3), extrapolation.BOUNDARY, x=32, y=40, bounds=Box[0:32, 0:40])
```

The created grids are instances of the class `Grid`. Like tensors, grids have a `shape` attribute which lists all batch, spatial and channel dimensions.
Shapes in Φ_{Flow} store not only the sizes of the dimensions but also their names and types.

In [4]:

```
print(f"Smoke: {smoke.shape}")
print(f"Velocity: {velocity.shape}")
print(f"Inflow: {INFLOW.shape}")
print(f"Inflow, spatial only: {INFLOW.shape.spatial}")
```

The grid values can be accessed using the `values` property.

In [5]:

```
print(smoke.values)
print(velocity.values)
print(INFLOW.values)
```

Grids have many more properties, which are documented in the Φ_{Flow} API reference. Also note that the staggered grid has a non-uniform shape because the number of faces is not equal to the number of cells.
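To see why the face counts differ, here is a plain-NumPy sketch of a 2D staggered (MAC-style) layout. This is only a conceptual illustration of the sizes involved, not Φ_{Flow}'s internal representation:

```python
import numpy as np

x, y = 32, 40
u = np.zeros((x + 1, y))   # x-velocity lives on vertical faces: one extra column of faces
v = np.zeros((x, y + 1))   # y-velocity lives on horizontal faces: one extra row of faces
print(u.shape, v.shape)    # (33, 40) (32, 41)
```

Since the two components have different sizes, the combined staggered shape cannot be a single uniform array, which is what the non-uniform shape above reflects.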

Next, let's do some physics!
Since the initial velocity is zero, we just add the inflow and the corresponding buoyancy force.
For the buoyancy force we use the factor `(0, 0.5)` to specify strength and direction.
Finally, we project the velocity field to make it incompressible.

Note that the `@` operator is a shorthand for resampling a field at different points. Since `smoke` is sampled at cell centers and `velocity` at face centers, this conversion is necessary.
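As a rough 1D illustration of such a resampling (a hand-rolled sketch, not Φ_{Flow}'s implementation), cell-center values can be linearly interpolated to face positions:

```python
import numpy as np

centers = np.array([1.0, 3.0, 5.0, 7.0])            # values at cell centers
faces = np.empty(len(centers) + 1)
faces[1:-1] = 0.5 * (centers[:-1] + centers[1:])    # interior faces: average the two neighbors
faces[0], faces[-1] = centers[0], centers[-1]       # crude boundary extrapolation
print(faces)  # [1. 2. 4. 6. 7.]
```

In Φ_{Flow}, `smoke * (0, 0.5) @ velocity` performs the analogous interpolation in 2D, with the boundary handling determined by the grids' extrapolations.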

In [6]:

```
smoke += INFLOW
buoyancy_force = smoke * (0, 0.5) @ velocity
velocity += buoyancy_force
velocity, _ = fluid.make_incompressible(velocity)
view(smoke);
```

Let's run a longer simulation!
Now we add the transport or *advection* operations to the simulation.
Φ_{Flow} provides multiple algorithms for advection.
Here we use semi-Lagrangian advection for the velocity and MacCormack advection for the smoke distribution.
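To build intuition for semi-Lagrangian advection, here is a toy 1D periodic version (a sketch with a constant velocity, not Φ_{Flow}'s implementation): each sample point traces backwards along the flow and looks up the value at its departure point.

```python
import numpy as np

def semi_lagrangian_1d(q, u, dt, dx):
    """Advect q by a constant velocity u: trace each sample point backwards."""
    x = np.arange(len(q)) * dx
    src = x - u * dt                                  # departure points
    return np.interp(src, x, q, period=len(q) * dx)   # periodic linear interpolation

q = np.zeros(8)
q[2] = 1.0
print(semi_lagrangian_1d(q, u=1.0, dt=1.0, dx=1.0))   # peak moves from index 2 to 3
```

MacCormack advection builds on this scheme with a forward-backward correction step, which makes it less diffusive; that is why it is preferred here for the smoke field.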

In [7]:

```
for _ in view(smoke).range(20):
    smoke = advect.mac_cormack(smoke, velocity, dt=1) + INFLOW
    buoyancy_force = smoke * (0, 0.5) @ velocity
    velocity = advect.semi_lagrangian(velocity, velocity, dt=1) + buoyancy_force
    velocity, _ = fluid.make_incompressible(velocity)
```

The simulation we just computed used pure NumPy, so all operations were non-differentiable.
To enable differentiability, we need to use PyTorch, TensorFlow or Jax instead.
This is achieved by changing the import statement to `phi.torch.flow`, `phi.tf.flow` or `phi.jax.flow`, respectively.
Tensors created after this import are allocated using PyTorch / TensorFlow / Jax, and operations on them are executed with the corresponding backend.
These operations can make use of a GPU through CUDA if your configuration supports it.

In [8]:

```
# from phi.jax.flow import *
from phi.torch.flow import *
# from phi.tf.flow import *
```

We set up the simulation as before.

In [9]:

```
INFLOW_LOCATION = tensor([(4, 5), (8, 5), (12, 5), (16, 5)], batch('inflow_loc'), channel('vector'))
INFLOW = 0.6 * CenteredGrid(Sphere(center=INFLOW_LOCATION, radius=3), extrapolation.BOUNDARY, x=32, y=40, bounds=Box[0:32, 0:40])
```

We can verify that tensors are now backed by TensorFlow / PyTorch / Jax.

In [10]:

```
type(INFLOW.values.native(INFLOW.shape))
```

Out[10]:

torch.Tensor

Note that tensors created with NumPy keep using NumPy/SciPy operations unless a backend tensor (here, a PyTorch tensor) is also passed to the same operation.

Let's look at how to get gradients from our simulation.
Say we want to optimize the initial velocities so that all simulations arrive at a final state that is similar to the rightmost simulation, where the inflow is located at `(16, 5)`.

To achieve this, we define the loss function as $L = | D(s - s_r) |^2$, where $s$ denotes the smoke density, $s_r$ the reference density, and the function $D$ diffuses the difference to smooth the gradients.
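As a toy NumPy analogue of this loss (with a hypothetical `blur` helper standing in for $D$; the actual cell below uses `diffuse.explicit` and `field.l2_loss`), the difference is diffused before summing its squares:

```python
import numpy as np

def blur(a, steps=10, dt=0.1):
    # Explicit diffusion: repeatedly mix each cell with its 4 neighbors (periodic).
    for _ in range(steps):
        a = a + dt * (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                      + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)
    return a

rng = np.random.default_rng(0)
s, s_r = rng.random((8, 8)), rng.random((8, 8))
loss = np.sum(blur(s - s_r) ** 2)   # L = |D(s - s_r)|^2
```

The diffusion spreads out a sharp, localized mismatch, so the loss landscape penalizes being near the target less harshly than a pointwise difference would, which tends to produce smoother gradients.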

In [11]:

```
def simulate(smoke: CenteredGrid, velocity: StaggeredGrid):
    for _ in range(20):
        smoke = advect.mac_cormack(smoke, velocity, dt=1) + INFLOW
        buoyancy_force = smoke * (0, 0.5) @ velocity
        velocity = advect.semi_lagrangian(velocity, velocity, dt=1) + buoyancy_force
        velocity, _ = fluid.make_incompressible(velocity)
    loss = math.sum(field.l2_loss(diffuse.explicit(smoke - field.stop_gradient(smoke.inflow_loc[-1]), 1, 1, 10)))
    return loss, smoke, velocity
```

Now it is important that the initial velocity has the `inflow_loc` dimension before we record the gradients.

In [12]:

```
initial_smoke = CenteredGrid(0, extrapolation.BOUNDARY, x=32, y=40, bounds=Box[0:32, 0:40])
initial_velocity = StaggeredGrid(math.zeros(batch(inflow_loc=4)), extrapolation.ZERO, x=32, y=40, bounds=Box[0:32, 0:40])
```

Finally, we use `field.functional_gradient()` to obtain the gradient with respect to the initial velocity. Since the velocity is the second argument of the `simulate()` function, we pass `wrt=[1]`.

In [13]:

```
sim_grad = field.functional_gradient(simulate, wrt=[1], get_output=False)
```

The argument `get_output=False` specifies that we are not interested in the actual output of the function. Setting it to `True` would additionally return the loss value and the final simulation state.

To evaluate the gradient, we simply call the gradient function with the same arguments as we would call the simulation.

In [14]:

```
velocity_grad, = sim_grad(initial_smoke, initial_velocity)
view(velocity_grad);
```

With the gradient, we can easily perform basic gradient descent optimization. For more advanced optimization techniques and neural network training, see the optimization documentation.

In [15]:

```
print(f"Initial loss: {simulate(initial_smoke, initial_velocity)[0]}")
initial_velocity -= 0.001 * velocity_grad
print(f"Next loss: {simulate(initial_smoke, initial_velocity)[0]}")
```

In [16]:

```
sim_grad = field.functional_gradient(simulate, wrt=[1], get_output=True)
for opt_step in view('final_smoke', initial_velocity, select='frames').range(frames=4):
    (loss, final_smoke, _v), (velocity_grad,) = sim_grad(initial_smoke, initial_velocity)
    print(f"Step {opt_step}, loss: {loss}")
    initial_velocity -= 0.001 * velocity_grad
```

Step 0, loss: (576.40936, 576.40936, 576.40936, 576.40936) along inflow_locᵇ
Step 1, loss: (581.02203, 581.02203, 581.02203, 581.02203) along inflow_locᵇ
Step 2, loss: (525.08154, 525.08154, 525.08154, 525.08154) along inflow_locᵇ
Step 3, loss: (537.40344, 537.40344, 537.40344, 537.40344) along inflow_locᵇ

This notebook provided an introduction to running fluid simulations with NumPy and with a differentiable backend (here, PyTorch). It demonstrated how to obtain simulation gradients, which can be used to optimize physical variables or train neural networks.

The full Φ_{Flow} documentation is available at https://tum-pbs.github.io/PhiFlow/.

Visit the playground to run Φ_{Flow} code in an empty notebook.