Realistic physics-based simulations are a great way to engage a user in a gestural interactive system. Nowadays we see such systems in shopping malls that let us try on clothes virtually or play games. Among such simulations, it has been found that interacting with fluids such as water has a calming effect. We wanted to replicate this feeling digitally as an interactive wall. Capturing how water moves using a grid-based simulation of a water surface turned out to be an interesting challenge in our exploration.

In this post, we share some interesting results from this experiment, in which we implement a fluid simulation system for such a water surface that a user can play with, as well as a few alternative, colourful visualizations of disturbed viscous fluids. The interactions were captured using a KinectV2 and the visualizations were projected on a wall for an immersive effect, as shown in the following videos.

Idea

The main idea behind the graphics application described here is to project a water simulation on a wall and to set up an area of vicinity in front of the wall using a Microsoft Kinect. When a person enters that area of vicinity, he/she becomes the source of disturbance for the simulation.

flowdiagram.png


Figure-1 : The flowchart of the entire process

An overview of the process:

The entire pipeline consists of two sections:

Kinect Interfacing

  1. KinectV2 setup
  2. Depth image generation
  3. Motion Detection

Fluid Simulation

  1. Add force
  2. Advection
  3. Diffusion
  4. Projection or Pressure gradient subtraction

1. Kinect Interfacing

We use the Microsoft Kinect as the only interface for interacting with the water wall. The Kinect detects the depth and motion of the player in the area of vicinity. The detected motion is used as the input for the fluid simulation (discussed in section 2).

1.1 KinectV2 setup

Microsoft Kinect is a motion-sensing input device that enables the user to control and interact with the system through a natural user interface using gestures and voice commands. The hardware consists of an RGB camera, a 3D depth sensor, an IR sensor and a multi-array microphone. A Kinect device generates the following signals from its sensors:

Body tracking : Can track up to 6 bodies and 25 joints per body. The tracked positions of the bodies/joints are more anatomically correct and stable than those of the first-generation Kinect.

Depth image : Helps in 3D visualization and gives the ability to see smaller objects, and all objects, more clearly.

Infrared image : Allows the device to see in the dark (light independent view).

Color image : Captures the view in the form of 1080p video.

Sound : Captures sound and records audio, and can also locate the sound source and the direction of the audio wave.

The Kinect is placed at a fixed point in front of the display and configured to set up an area of vicinity, bounded by a minimum and a maximum distance from the sensor. Any person who enters that area will be able to interact with the water wall.

1.2 Generating depth image

The depth sensor of the Kinect generates an image that gives the depth of each point in the 2D plane captured by the camera. This depth image is the main input for the further processing. It is generated in monochrome, where darker shades represent nearer points and brighter shades represent points further away from the device; a minimal sketch of such a normalization is shown after Figure-2.

depth image
Figure-2 : Depth image
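To make this concrete, here is a minimal fragment-shader sketch of how a raw depth frame could be mapped to such a monochrome image. It assumes the raw depth (in millimetres) is bound as a single-channel texture and that MinDepth/MaxDepth bound the area of vicinity; these names and values are illustrative, not the exact ones used in our implementation.

//Depth to monochrome (sketch)
uniform sampler2DRect Depth;   // raw depth frame, millimetres in the red channel (assumed)
uniform float MinDepth;        // e.g. 500.0  : near bound of the area of vicinity
uniform float MaxDepth;        // e.g. 4500.0 : far bound of the area of vicinity

void main()
{
    vec2 st = gl_TexCoord[0].st;
    float d = texture2DRect(Depth, st).r;

    // Nearer points become darker, farther points brighter, as described above
    float shade = clamp((d - MinDepth) / (MaxDepth - MinDepth), 0.0, 1.0);
    gl_FragColor = vec4(vec3(shade), 1.0);
}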

1.3 Motion detection

The person in the vicinity performs actions like moving their hands or body to interact with the water wall. Those movements are captured from the depth stream of the Kinect. Technically, motion is detected by computing the absolute frame difference between the current frame and the previous frame. The frame differences are marked in colors signifying the direction of motion, where green represents movement of the player towards the left, red represents movement towards the right, and black represents no movement. A sketch of this step is shown after Figure-3.

motion detection image
Figure-3 : Frame difference
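The following is a minimal sketch of such a frame-difference shader. It assumes the current and previous depth frames are bound as textures; the direction heuristic (matching against a small horizontal shift of the previous frame) and the names CurrentDepth, PreviousDepth and Threshold are illustrative assumptions, not necessarily the exact scheme used here.

//Frame difference (sketch)
uniform sampler2DRect CurrentDepth;   // latest depth frame (assumed)
uniform sampler2DRect PreviousDepth;  // previous depth frame (assumed)
uniform float Threshold;              // minimum change treated as motion

void main()
{
    vec2 st = gl_TexCoord[0].st;
    float cur  = texture2DRect(CurrentDepth, st).r;
    float prev = texture2DRect(PreviousDepth, st).r;
    float diff = abs(cur - prev);

    // Crude direction estimate: test which small horizontal shift of the
    // previous frame best matches the current frame at this pixel
    float fromRight = abs(cur - texture2DRect(PreviousDepth, st + vec2( 1.0, 0.0)).r);
    float fromLeft  = abs(cur - texture2DRect(PreviousDepth, st + vec2(-1.0, 0.0)).r);

    vec3 colour = vec3(0.0);                   // black : no movement
    if (diff > Threshold)
    {
        if (fromRight < fromLeft)
            colour = vec3(0.0, 1.0, 0.0);      // green : content moved towards the left
        else
            colour = vec3(1.0, 0.0, 0.0);      // red   : content moved towards the right
    }
    gl_FragColor = vec4(colour, 1.0);
}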

After computing the frame difference, a time blur effect is applied to it. The result is a smooth, motion-blurred image of the computed frame difference; one common way to implement this is sketched after Figure-4.

time blur
Figure-4 : Time blur effect on computed frame difference
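A time blur can be implemented as an exponential decay of the motion image, ping-ponging between two textures so that the previous blurred result feeds back into the next frame. The sketch below assumes this setup; the names MotionFrame, Accumulated and Decay are illustrative.

//Time blur (sketch)
uniform sampler2DRect MotionFrame;   // current frame-difference image (assumed)
uniform sampler2DRect Accumulated;   // blurred result from the previous frame (assumed)
uniform float Decay;                 // e.g. 0.9 : higher values leave longer trails

void main()
{
    vec2 st = gl_TexCoord[0].st;
    vec4 current = texture2DRect(MotionFrame, st);
    vec4 history = texture2DRect(Accumulated, st) * Decay;

    // Keep whichever is stronger: fresh motion or the fading trail
    gl_FragColor = max(current, history);
}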

2. Fluid Simulation

To achieve a realistic simulation of fluids, it is necessary to understand the
physical properties of fluids:

  1. How the fluid system responds when a force is applied to it
  2. How particles in the fluid system are transported when a force is applied
    (this property of the fluid is called advection)
  3. How particles in the fluid system spread based on their motion (this
    property is called diffusion)

We simulate the fluid on a Cartesian grid with spatial coordinate x = (x, y) and time variable t. Assuming that we know the velocity u(x, t), the pressure p(x, t) and the external force f at the initial time t = 0, the evolution of these quantities over time is given by a compact version of the Navier-Stokes equations [2]:

Equation 1 :
$$\nabla \cdot \mathbf{u} = 0$$
Equation 2 :
$$\frac{\partial \mathbf{u}}{\partial t} = -(\mathbf{u} \cdot \nabla)\mathbf{u} \;-\; \frac{1}{\rho}\nabla p \;+\; \nu \nabla^{2}\mathbf{u} \;+\; \mathbf{f}$$
  1. The first term accounts for the effect of advection of the fluid. The
    disturbance or force applied to the fluid system propagates according to
    this expression
  2. The second term represents the pressure gradient of the fluid divided by its
    density
  3. The third term accounts for the effect of viscosity or diffusion
  4. The fourth term represents the external force applied to the fluid system

The above equations are obtained by imposing that the fluid conserves both mass (Equation 1) and momentum (Equation 2). To develop a stable fluid solver, we apply the projection operator P to both sides of Equation 2, which gives,

Equation 3 :
$$\frac{\partial \mathbf{u}}{\partial t} = \mathbf{P}\left(-(\mathbf{u}\cdot\nabla)\mathbf{u} + \nu\nabla^{2}\mathbf{u} + \mathbf{f}\right)$$

where Pu = u and P∇p = 0. The projection involves calculating the pressure gradient and subtracting it from the intermediate velocity.

Equation 3 is our fundamental equation; the steps we take to achieve a realistic fluid simulation follow directly from its terms.

Figure-5 : Steps involved in fluid simulation

As we discretize the fluid system as a grid of points, each cell in the grid represents a particle. We simulate the fluid properties (adding velocity/force, advection and diffusion) by writing a fragment program for each. These programs run in parallel on the GPU, which is significantly faster than running the same computation on the CPU.

2.1 Add force

As discussed, we consider the fluid system as a grid of points. Here the grid is an image of size 1024×768, where each pixel represents a cell in the grid.

As the result of the previous stage, when motion is detected, this layer identifies the pixels affected by that motion and adds colour to them signifying its direction (as described in section 1.3). This is as simple as reading the current frame and the previous frame as images in a fragment program and adding them to produce the result; one way of injecting this into the velocity field is sketched after Figure-6.

add force
Figure-6 : Fluid system after applying force
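As a hedged sketch (not the exact shader used here), the colour-coded motion image can be turned into a force and added to the velocity field. The names Velocity, Motion and ForceScale, and the mapping of red/green to horizontal direction, are illustrative assumptions.

//Add force (sketch)
uniform sampler2DRect Velocity;    // current velocity field (assumed)
uniform sampler2DRect Motion;      // time-blurred motion image from section 1.3 (assumed)
uniform float ForceScale;          // how strongly motion disturbs the fluid

void main()
{
    vec2 st = gl_TexCoord[0].st;

    vec2 v = texture2DRect(Velocity, st).rg;
    vec4 m = texture2DRect(Motion, st);

    // Illustrative mapping: red pushes right, green pushes left
    vec2 force = vec2(m.r - m.g, 0.0) * ForceScale;

    gl_FragColor.rg = v + force;
}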

2.2 Advection

Each molecule in the fluid has its own velocity. This velocity causes the fluid to transport itself and to transport objects along the flow. This transport of fluid particles with a velocity is called advection.

We implement advection in a fragment program, which cannot change the locations of the fragments it writes. Instead of moving each particle forward over the current time step (which is not possible in a fragment program), we trace the path of the particle from each grid cell back in time to its former position, and we copy the quantity at that position to the current grid cell.

The following picture shows the computation of advection for the point marked with a red circle. Tracing the velocity field back in time by one step leads to a point X on the grid. The values of the four nearest neighbours of X (connected as a green square) are bilinearly interpolated, and the result is written back to the current cell.

advection
Figure-7 : Computing advection

Advection implemented as a fragment program:

//Advection

uniform sampler2DRect x;  // quantity to advect
uniform sampler2DRect u;  // velocity field
uniform float TimeStep;

void main()
{
  vec2 st = gl_TexCoord[0].st;
  vec2 vel = texture2DRect(u, st).rg;      // velocity at this cell (renamed to avoid shadowing the sampler)
  vec2 coord = st - TimeStep * vel;        // trace the velocity back in time

  gl_FragColor = texture2DRect(x, coord);  // bilinear lookup at the old position
}
advection
Figure-8 : Fluid system after applying advection

2.3 Diffusion

Diffusion is the movement of particles from an area of high concentration to an area of low concentration until the particles are evenly distributed. This is a passive spreading or averaging process. Viscous fluids have a certain resistance to flow, which results in diffusion of velocity.

diffusion
Figure-9 : Diffusion

Considering a grid of points, we generally compute the diffusion using the finite differencing method as shown below.

computing diffusion
Figure-10 : Computing diffusion using finite differencing method

The simulation of fluid diffusion is split into two stages. In stage one, we compute the divergence of the intermediate velocity. In stage two, we solve two Poisson equations: the Poisson-pressure equation (Equation 4) and the viscous diffusion equation (Equation 5).

Only after both stages are complete do we obtain a realistic diffusion of the fluid system.

The following table is helpful in understanding the computation of divergence and gradient using the finite difference method, which is discussed further below.

Stage 1 - Compute divergence of intermediate velocity

From the above table we know how to calculate the divergence of the velocity, where u represents the pixel values to the right and left of the current pixel, and v represents the pixel values above and below the current pixel. Delta x and delta y represent the grid scale.
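For reference, the central-difference formulas the table conveys can be reconstructed from the shader constants below (with delta x = delta y = 1), roughly as:

$$\nabla \cdot \mathbf{u} \approx \frac{u_{i+1,j} - u_{i-1,j}}{2\Delta x} + \frac{v_{i,j+1} - v_{i,j-1}}{2\Delta y}, \qquad \nabla p \approx \left(\frac{p_{i+1,j} - p_{i-1,j}}{2\Delta x},\; \frac{p_{i,j+1} - p_{i,j-1}}{2\Delta y}\right)$$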

The divergence is drawn into a separate frame buffer and is used as a texture when computing the diffusion in the next stage. This fragment program takes the current velocity texture (the resulting image after applying the advection shader) as input. The following shows the fragment shader implementation of the velocity divergence.

//Divergence
uniform sampler2DRect Velocity; // velocity field
uniform float HalfInverseCellSize; // 0.5,  assuming deltaX and deltaY = 1.0

void main()
{
    vec2 st = gl_TexCoord[0].st;
    
    //get pixel value from neighbors
    vec2 vE = texture2DRect(Velocity, st + vec2(1.0,0.0)).rg;
    vec2 vW = texture2DRect(Velocity, st + vec2(-1.0,0.0)).rg;
    vec2 vN = texture2DRect(Velocity, st + vec2(0.0,1.0)).rg; 
    vec2 vS = texture2DRect(Velocity, st + vec2(0.0,-1.0)).rg;
   
    gl_FragColor.r = HalfInverseCellSize * (vE.x - vW.x + vN.y - vS.y); // computing divergence as shown in table
}

Stage 2 - Solving the Poisson-pressure and viscous diffusion equations

The Poisson-pressure equation is given by

Equation 4 :
$$\nabla^{2} p = \nabla \cdot \mathbf{w}$$

where w is the intermediate velocity.

The discretized form of viscous diffusion is given by,

Equation 5 :
$$\left(\mathbf{I} - \nu\,\Delta t\,\nabla^{2}\right)\mathbf{u}(\mathbf{x},\, t+\Delta t) = \mathbf{u}(\mathbf{x},\, t)$$

Both equations are discretized and rewritten in a common form, so that one simple iterative algorithm (Jacobi iteration) solves both:

Equation 6 :
$$x_{i,j}^{(k+1)} = \frac{x_{i-1,j}^{(k)} + x_{i+1,j}^{(k)} + x_{i,j-1}^{(k)} + x_{i,j+1}^{(k)} + \alpha\, b_{i,j}}{\beta}$$

The velocity divergence computed in the previous stage is used as the input b. Here x represents the pressure, initialised as a black texture (all pixel values 0), i.e., we initially assume the pressure is zero; alpha = -deltaX*deltaX (the square of the cell size) and beta = 4.0.

We write the fragment program for diffusion based on this final discretized equation:

//diffusion
uniform sampler2DRect Pressure;
uniform sampler2DRect Divergence;
uniform float Alpha;
uniform float InverseBeta; // 1/4.0

void main() 
{
    vec2 st = gl_TexCoord[0].st;
    
    vec4 pN = texture2DRect(Pressure, st + vec2(0.0, 1.0));
    vec4 pS = texture2DRect(Pressure, st + vec2(0.0, -1.0));
    vec4 pE = texture2DRect(Pressure, st + vec2(1.0, 0.0));
    vec4 pW = texture2DRect(Pressure, st + vec2(-1.0, 0.0));
    
    vec4 bC = texture2DRect(Divergence, st );
    gl_FragColor = (pW + pE + pS + pN + Alpha * bC) * InverseBeta;
}

To achieve good results, we run this shader for a number of iterations (Jacobi iteration); the iteration loop is sketched after Figure-11. Generally 40 to 80 iterations produce good results. The iteration count controls the accuracy of the simulation: increasing it increases the computation time but produces a better simulation, while reducing it below about 20 iterations produces unrealistic effects.

computing diffusion
Figure-11 : Fluid system after applying diffusion (40 iterations)
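On the CPU side, the iteration amounts to running the shader repeatedly while ping-ponging between two pressure textures. A hedged sketch, with helper names such as jacobi and swap purely illustrative, in the same spirit as the advection snippet later in this post:

// Jacobi iterations for the pressure solve (illustrative)
for (int i = 0; i < 40; i++) {
    pTemp = jacobi(p, divergence);   // one pass of the diffusion shader above
    swap(p, pTemp);                  // ping-pong the pressure textures
}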

2.4 Projection

The resulting image shows the fluid simulation with a velocity field that is not yet divergence-free, which looks unrealistic. To resolve this, we need to compute the gradient of the pressure (as mentioned in the table under the diffusion section 2.3) and subtract that pressure gradient from the intermediate velocity. The fragment program implementing the gradient subtraction is shown below.


//Gradient subtract
uniform sampler2DRect Velocity;
uniform sampler2DRect Pressure;

uniform float GradientScale; // 0.5, refer the table under diffusion section

void main()
{
    vec2 st = gl_TexCoord[0].st;

    float pN = texture2DRect(Pressure, st + vec2(0.0, 1.0)).r;
    float pS = texture2DRect(Pressure, st + vec2(0.0, -1.0)).r;
    float pE = texture2DRect(Pressure, st + vec2(1.0, 0.0)).r;
    float pW = texture2DRect(Pressure, st + vec2(-1.0, 0.0)).r;
    float pC = texture2DRect(Pressure, st).r;

    vec2 oldV = texture2DRect(Velocity, st).rg;
    vec2 grad = vec2(pE - pW, pN - pS) * GradientScale;
    vec2 newV = oldV - grad;

    gl_FragColor.rg = newV;
}
projection
Figure-12 : Fluid system after applying projection

One important point to note here is that we need temporary storage at each stage, because the operations mentioned here cannot be performed in place. For example, to compute advection, the code is:


uTemp = advect(u);
swap(u, uTemp);
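Putting all the stages together, one frame of the simulation might be ordered roughly as follows. This is an illustrative sketch only; helper names such as addForce, divergence, jacobiIterate and subtractGradient stand for the shaders described above, not the exact functions in our code.

// One simulation frame (illustrative ordering)
addForce(u, motion);                              // 2.1 inject velocity from detected motion
uTemp = advect(u);    swap(u, uTemp);             // 2.2 advect the velocity field by itself
div = divergence(u);                              // 2.3 stage 1: divergence of intermediate velocity
p = jacobiIterate(p, div, 40);                    // 2.3 stage 2: pressure solve (see loop above)
uTemp = subtractGradient(u, p); swap(u, uTemp);   // 2.4 projection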

Result

This resulting image serves as the input to a final fragment program, which adds lighting, specular highlights and a background to produce a realistic visualization of the water simulation. The entire simulation is achieved by solving the Navier-Stokes equations for fluids. These equations serve as the basic building block for developing more complex simulations such as clouds, smoke, etc. The user interacts with these realistic physics-based simulations through gestural inputs on entering the area of vicinity, which provides a more engaging and fun experience.
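For completeness, here is a hedged sketch of what such a final rendering pass could look like. It treats the simulation output as a height field, estimates a normal from neighbouring values, and combines a refracted background with diffuse and specular lighting; the names Field, Background and LightDir and all constants are illustrative assumptions, not our exact shader.

//Final rendering (sketch)
uniform sampler2DRect Field;       // simulation output, height read from the red channel (assumed)
uniform sampler2DRect Background;  // backdrop seen through the water (assumed)
uniform vec3 LightDir;             // normalised light direction, e.g. normalize(vec3(0.3, 0.5, 0.8))

void main()
{
    vec2 st = gl_TexCoord[0].st;

    // Estimate a surface normal from height differences of the neighbours
    float hE = texture2DRect(Field, st + vec2( 1.0, 0.0)).r;
    float hW = texture2DRect(Field, st + vec2(-1.0, 0.0)).r;
    float hN = texture2DRect(Field, st + vec2(0.0,  1.0)).r;
    float hS = texture2DRect(Field, st + vec2(0.0, -1.0)).r;
    vec3 normal = normalize(vec3(hW - hE, hS - hN, 1.0));

    // Refraction-like offset of the background, plus diffuse and specular terms
    vec3 bg = texture2DRect(Background, st + normal.xy * 20.0).rgb;
    float diffuse = max(dot(normal, LightDir), 0.0);
    vec3 halfVec = normalize(LightDir + vec3(0.0, 0.0, 1.0));
    float spec = pow(max(dot(normal, halfVec), 0.0), 64.0);

    gl_FragColor = vec4(bg * (0.6 + 0.4 * diffuse) + vec3(spec), 1.0);
}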

References