Wednesday, October 3, 2012

Fast Texture Projection onto Geometry on the GPU

While implementing various graphical algorithms, a problem that must be solved from time to time is projecting a texture onto some geometry. This happens, for instance, when we want to render a water reflection or cast g-buffer textures onto a light's geometry.

The procedure is pretty straightforward. If we multiply a vertex's position by the whole world-view-projection matrix and perform the perspective division, we get the vertex's position in normalized device coordinates, all of which lie in the $[-1, 1]$ interval. To project a texture onto geometry, we do exactly the same thing (but use the world-view-projection matrix of some sort of "projector" instead of the game's "camera") and remap the vertex's final $x$ and $y$ coordinates from the $[-1, 1]$ interval to the $[0, 1]$ interval, thus generating texture coordinates that can be used to sample the texture.
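Written out, for a clip-space position $[x, y, z, w]$ produced by the projector's world-view-projection matrix, the generated texture coordinates are:
\begin{eqnarray}
u &=& 0.5 \frac{x}{w} + 0.5 \cr
v &=& 0.5 \frac{y}{w} + 0.5
\end{eqnarray}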

Let's say that we have a vertex shader like this:
uniform mat4 worldViewProjTransform;
uniform mat4 projectorWorldViewProjTransform;

attribute vec4 attrib_position;

varying vec4 varying_projectorTexCoord;

void main()
{
  gl_Position = attrib_position * worldViewProjTransform;

  varying_projectorTexCoord = attrib_position * projectorWorldViewProjTransform;
}
In this vertex shader we simply multiply the vertex's position by the projector's matrix. The perspective division and the coordinate remapping do not happen here yet; they will in the fragment shader:
uniform sampler2D projectorSampler;

varying vec4 varying_projectorTexCoord;

void main()
{
  vec4 projectorTexCoord = varying_projectorTexCoord;
  
  projectorTexCoord /= projectorTexCoord.w;
  projectorTexCoord.xy = 0.5 * projectorTexCoord.xy + 0.5;
  
  gl_FragColor = texture2D(projectorSampler, projectorTexCoord.xy);
}
In this pixel shader we first do the perspective division and then, knowing that the coordinates are in the $[-1, 1]$ interval, we do the remapping.

The title of this post contains the word "fast". Is our approach fast? Well, with today's hardware it actually is, but we can do better. The very first thing that should capture your attention is the division in the pixel shader. Can we do something about it? Yup. Basically every real-time shading language has a function like texture2DProj which, before taking a sample from the texture, first divides the components of the second argument by its last component, $w$. So, we can try doing this:
uniform sampler2D projectorSampler;

varying vec4 varying_projectorTexCoord;

void main()
{
  vec4 projectorTexCoord = varying_projectorTexCoord;

  projectorTexCoord.xy = 0.5 * projectorTexCoord.xy + 0.5;
  
  gl_FragColor = texture2DProj(projectorSampler, projectorTexCoord);
}
Will this work? Of course not. That is because we switched the order of operations. In this version of the code the perspective division happens after the remapping, whereas it should occur before. So, we need to fix the remapping code.

Remember that the projected coordinates $[x, y, z, w]$ that come into the pixel shader are all in the $[-w, w]$ interval. This means that we want to remap the coordinates from $[-w, w]$ to $[0, w]$. Then, the division can be performed either by dividing manually and calling texture2D or by calling texture2DProj directly. So, our new pixel shader can look like this:
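To see why the new remapping is correct, note that scaling and biasing by $0.5 w$ before the division gives exactly the same result as biasing by $0.5$ after it:
\begin{eqnarray}
\frac{0.5 x + 0.5 w}{w} &=& 0.5 \frac{x}{w} + 0.5
\end{eqnarray}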
uniform sampler2D projectorSampler;

varying vec4 varying_projectorTexCoord;

void main()
{
  vec4 projectorTexCoord = varying_projectorTexCoord;

  projectorTexCoord.xy = 0.5 * projectorTexCoord.xy + 0.5 * projectorTexCoord.w;
  
  gl_FragColor = texture2DProj(projectorSampler, projectorTexCoord);
}
Are we done yet? Well, there is actually one more thing we can do. We can shift the "$0.5$" multiplication and addition to the vertex shader. So the final vertex shader is:
uniform mat4 worldViewProjTransform;
uniform mat4 projectorWorldViewProjTransform;

attribute vec4 attrib_position;

varying vec4 varying_projectorTexCoord;

void main()
{
  gl_Position = attrib_position * worldViewProjTransform;
 
  varying_projectorTexCoord = attrib_position * projectorWorldViewProjTransform;
  varying_projectorTexCoord.xy = 0.5*varying_projectorTexCoord.xy + 0.5*varying_projectorTexCoord.w;
}
and the final pixel shader is:
uniform sampler2D projectorSampler;

varying vec4 varying_projectorTexCoord;

void main()
{
  vec4 projectorTexCoord = varying_projectorTexCoord;

  gl_FragColor = texture2DProj(projectorSampler, projectorTexCoord);
}
Yes. Now we are done.

One might wonder if we could do the perspective division in the vertex shader and call texture2D instead of texture2DProj in the pixel shader. The answer is: we can't. The hardware, during rasterization, interpolates all vertex-to-pixel shader varyings. Assuming that varying_projectorTexCoord is the vector $[x, y, z, w]$, the value that is read in the pixel shader is $[\mbox{int}(x), \mbox{int}(y), \mbox{int}(z), \mbox{int}(w)]$, where $\mbox{int}$ is an interpolating function. If we performed the division in the vertex shader, the vector coming into the pixel shader would be of the form $[\mbox{int}(\frac{x}{w}), \mbox{int}(\frac{y}{w}), \mbox{int}(\frac{z}{w}), 1]$. And obviously $\mbox{int}(\frac{x}{w}) \neq \frac{\mbox{int}(x)}{\mbox{int}(w)}$ (the same holds for the other components). Note that the right-hand side of this inequality is what happens when we do the perspective division in the pixel shader.
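As a quick illustrative example, take $\mbox{int}$ to be a simple average of the two vertices' values $[x_1, w_1] = [1, 1]$ and $[x_2, w_2] = [3, 2]$:
\begin{eqnarray}
\mbox{int}\left(\frac{x}{w}\right) &=& \frac{1}{2}\left(\frac{1}{1} + \frac{3}{2}\right) = 1.25 \cr
\frac{\mbox{int}(x)}{\mbox{int}(w)} &=& \frac{(1 + 3)/2}{(1 + 2)/2} = \frac{2}{1.5} \approx 1.33
\end{eqnarray}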

Instead of doing all this trickery we could include a special remapping matrix in the projector's world-view-projection matrix chain. What this matrix looks like can be found here (this is also a great tutorial about texture projection in general, by the way). The problem with this approach is that you might not always have this matrix injected into the whole chain, and for some reason you either don't want to or simply can't do it. You could, naturally, build this remapping matrix in the vertex shader, but that would raise the instruction count. In that case it is better to use the approach discussed throughout this post, as it will make things run much quicker.
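For reference, one possible form of such a remapping matrix (here written for column vectors and leaving $z$ and $w$ untouched; with the row-vector convention used in the shaders above it would be transposed and appended on the right of the chain) is:
\begin{eqnarray}
\mathbf{M}_{remap} &=& \left[ \begin{array}{cccc} 0.5 & 0 & 0 & 0.5 \cr 0 & 0.5 & 0 & 0.5 \cr 0 & 0 & 1 & 0 \cr 0 & 0 & 0 & 1 \end{array} \right]
\end{eqnarray}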

Friday, June 22, 2012

Interactive Physics-based Grass Animation

Recently at Madman Theory Games I had quite an interesting task to do. We're working on a Diablo-like hack & slash and the lead artist wanted grass (or actually some overgrown form of grass...) to deflect when, for instance, the player is wading through it or when a fireball is sweeping over it. The general idea is rather trivial: compute deflected vertex positions based on some deflectors. My initial idea was to treat an entity (like the player) as a spherical deflector, and displace vertex positions based on the distance from the deflector. So when a grass blade is right next to the player, it is displaced by some fixed maximum amount. At the border of the sphere it is not displaced at all. This worked just fine, but made the grass blades very stiff, with no spring-like motion at all when the deflector stops affecting the blade. So, put simply, it was unnatural.

To make the blades swing a little like springs when a deflector stops affecting them, I thought about adding some fake sinusoidal movement. That would however require tracking how long a blade has not been affected by any deflector, and scaling the sinusoidal deflection based on that time. I didn't try this, but I'm pretty sure that this approach would yield some problems, both with stability and realism.

I eventually came to an inevitable conclusion - I needed a simple, but physics-based model to control the blades. I thought that if I implemented simple blade movement based on some external forces, it would be much easier and more predictable to control. So the first phase was to create a test square in a 2D world and try moving it by attaching external forces to it. This turned out to be a piece of cake.

The test square has a position $\mathbf{p}$ and velocity $\mathbf{v}$. We know that to move the square we should do this each frame:
\begin{eqnarray}
\mathbf{p} = \mathbf{p} + \mathbf{v} \Delta t
\end{eqnarray}
To model external forces affecting the square we need to find a way to modify $\mathbf{v}$. We know that the net force affecting the square has a value $\mathbf{F}$. We also know some of the most important formulas in the world:
\begin{eqnarray}
\mathbf{F} &=& m \mathbf{a} \cr
\mathbf{a} &=& \frac{\mathbf{F}}{m} \cr
\mathbf{a} &=& \frac{\Delta \mathbf{v}}{\Delta t} \cr
\Delta \mathbf{v} &=& \mathbf{a} \Delta t = \frac{\mathbf{F}}{m} \Delta t
\end{eqnarray}
So, we have a formula for $\Delta \mathbf{v}$. This is a value by which we should increase $\mathbf{v}$ each frame.

After all this math, let's see some code:
// do once

  vec3 position = vec3(0.0f, 0.0f, 0.0f);
  vec3 velocity = vec3(0.0f, 0.0f, 0.0f);

// do each frame (deltaTime is the time elapsed since the previous frame)

  vec3 force = vec3(0.0f, 0.0f, 0.0f);
  float mass = 1.0f;

  // modify force here

  vec3 acceleration = force / mass;
  vec3 velocity_delta = acceleration * deltaTime;
  velocity += velocity_delta;
  position += velocity * deltaTime;
This is all we need to have a simple force-based physical simulation.

Given this simple system we enter the second phase: we need to add proper forces to model believable grass motion. First of all we need a force that will deflect the blades. To do this I simply check if a blade is inside a spherical deflector and apply a force vector $bladePosition - deflectorPosition$, normalized and scaled by some strength parameter. I call this the stimulate force, as it stimulates the movement of the blades.
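A minimal sketch of this check, in the same style as the snippet above (position is the blade's position; deflectorPosition, deflectorRadius and stimulateStrength are hypothetical parameters, and length/normalize are assumed to come from the same math library that provides vec3):
  // push the blade away from the deflector's center
  vec3 toBlade = position - deflectorPosition;

  if (length(toBlade) < deflectorRadius)
    force += normalize(toBlade) * stimulateStrength;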

The stimulate force works very nicely but it's not sufficient. You probably recall the first law of motion, which says that if no force acts on a particle, it stands still or moves with a uniform speed. In our case the stimulate force causes movement, and once we turn this force off, the blades would deflect into infinity because they are already in motion and no force acts on them! So we definitely need something that can stop them - some sort of friction. My long-term friend and colleague at work, chomzee, who is much more skilled in physics than I am, suggested a simple solution - simply add a force that acts in the direction opposite to the velocity of a blade. It looks like this:
 force -= friction * velocity;
Note here that we use the velocity of a blade/particle from the previous frame to modify the force. This force is then applied to compute the new velocity in the current frame.

Okay, so far we have two forces: stimulate and friction. Stimulate will make grass blades deflect, thus inducing some movement, and friction will eventually bring the blades to a halt. But there is one more piece to this puzzle. The grass blades can't stay deflected once a deflector stops acting on them. The blades need to get back to their initial positions. Here's where the spring model comes in handy. We can treat our grass blades as springs and use the very simple spring force formula:
\begin{eqnarray}
\mathbf{F} = -k \mathbf{x}
\end{eqnarray}
In this formula $k$ is the resiliency of the spring, and $\mathbf{x}$ is a displacement vector from the initial position. The code that simulates this is:
 force -= resiliency * (position - position_initial);
Honestly, that is pretty much it! Using a couple of lines of code and some basic understanding of Newton's laws of motion I was able to simulate very, very realistic interactive grass animation. The game we are working on should be finished somewhere around the end of this year, so you will have the opportunity to see this system in a commercial product pretty soon :).
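Putting all three forces together, the per-blade, per-frame update looks roughly like this (a sketch under the same assumptions as the snippets above; friction, resiliency and the deflector parameters are tuning values):
  // accumulate the forces acting on the blade this frame
  vec3 force = vec3(0.0f, 0.0f, 0.0f);

  // stimulate force: push the blade away from the deflector
  vec3 toBlade = position - deflectorPosition;
  if (length(toBlade) < deflectorRadius)
    force += normalize(toBlade) * stimulateStrength;

  // friction force: oppose the velocity from the previous frame
  force -= friction * velocity;

  // spring force: pull the blade back towards its initial position
  force -= resiliency * (position - position_initial);

  // integrate (Euler), exactly as in the test-square code above
  vec3 acceleration = force / mass;
  velocity += acceleration * deltaTime;
  position += velocity * deltaTime;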

There is one last thing I would like to mention. At the beginning of this post I showed the formula that we use to accumulate (integrate - so-called Euler integration) a particle's position based on its velocity:
\begin{eqnarray}
\mathbf{p} = \mathbf{p} + \mathbf{v} \Delta t
\end{eqnarray}
This equation is actually just an approximation to the real position of the particle after a simulation step. To be more precise, it is the first-order approximation. We can modify this formula to get a slightly more accurate solution (but still an approximation!):
\begin{eqnarray}
\mathbf{p} = \mathbf{p} + \mathbf{v} \Delta t + \mathbf{a} \frac{{\Delta t}^2}{2}
\end{eqnarray}
Does it look familiar? This is the second-order approximation. I would love to say something more about it, but I'm still a newbie when it comes to physics, so you will have to wait a couple of years to hear a more solid and credible explanation from me :).
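In code, under the same assumptions as before, the second-order update would look like this (note that the position must now be updated using the velocity from before the velocity update):
  vec3 acceleration = force / mass;
  position += velocity * deltaTime + acceleration * (0.5f * deltaTime * deltaTime);
  velocity += acceleration * deltaTime;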

Saturday, April 28, 2012

Managing Shaders Combinations

Recently, when working on the engine (BlossomEngine) for a game that I am co-creating (http://gct-game.net), I encountered a problem that ultimately every engine programmer must face - managing shader combinations.

It is often the case that in the engine we have some super-duper shader(s) with many switches. Such shaders are called uber-shaders. The idea of an uber-shader is that we do not write a separate shader (file) for every single surface, but rather have one big shader where we can turn features on and off, like the use of specular lighting on the surface, soft shadows, or anything else that can describe the material.

One way to accomplish this is by using static branching within the shader, using constant values supplied to the shader. This solution has some flaws, however. The first is that we generate more if-then instructions in the shader, which simply makes the shader longer. The other one is that we waste shader constants on feeding the shader with appropriate steering data.

The second way to manage an uber-shader is to generate completely separate shaders by using #ifdefs in the shader, thus avoiding the need to supply the shader with any constant steering data. The downside is that we now generate a bunch of shaders that we have to manage somehow in the application. Note that the number of generated shaders grows exponentially with the number of #ifdefs. For example, consider a situation where our shader contains two #ifdefs, A and B. The possible combinations are:

No #ifdef
A
B
AB

Now, if we decide to add a third #ifdef C, we have to double all the shaders we have so far, with the new #ifdef C added to each combination:

No #ifdef
A
B
AB
C
AC
BC
ABC

So the general formula for the number of generated shaders is $2^n$, where $n$ is the number of #ifdefs.

In BlossomEngine the most complex shader I have is a forward-shaded mesh sunlight shader. I must say that at the beginning of development, when I had very few #ifdefs (normal mapping, specular lighting), I didn't bother with any sophisticated manager. I simply declared each combination as a separate shader, like this:
CShader meshSunlightPixelShader;
CShader meshSunlightPixelShader_NormalMapping;
CShader meshSunlightPixelShader_Specular;
CShader meshSunlightPixelShader_NormalMapping_Specular;
Obviously, as the game engine evolves, you develop more and more complex shaders. This way I ended up having 16 different combinations, which was really troublesome to manage in the application code. I decided to write an automatic system that would generate an array of shaders from given parameters. So I did that. My new class CShadersPack was able to generate an array of CShader objects from a single shader file that contains #ifdefs. A call to that class's init function can look like this:
meshSunLightPixelShadersPack.init("./BlossomEngine/shaders/mesh_sun_light.ps",
 stPixelShader,
 "-DSPECULAR -DCUBE_MAP_REFLECTIONS -DSOFT_SHADOWS "
 "-DDIFFUSE_SHADING_NORMAL -DDIFFUSE_SHADING_NORMAL_MAPPED "
 "-DAMBIENT_SHADING_NORMAL_MAPPED -DRIM_LIGHTING");
The list of all #ifdefs is passed as one string, with the #ifdefs separated by spaces. The function splits that string and generates all combinations.
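A sketch of how that generation can work (this is not the actual BlossomEngine code; splitBySpaces is a hypothetical helper): split the string into tokens and then, for every bitmask from $0$ to $2^n - 1$, build a define string from the set bits and compile a shader with it.
// hypothetical sketch of eagerly generating all 2^n combinations
std::vector<std::string> tokens = splitBySpaces(args); // e.g. {"-DSPECULAR", ...}
int n = (int)tokens.size();

for (int mask = 0; mask < (1 << n); mask++)
{
 std::string combinationArgs;

 // append the defines whose bits are set in this mask
 for (int bit = 0; bit < n; bit++)
  if (mask & (1 << bit))
   combinationArgs += tokens[bit] + " ";

 // compile/assemble this combination and store it
 CShader *shader = new CShader();
 shader->init(fileName, shaderType, combinationArgs);
 shaders.push_back(shader);
}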

Now the question is, once you have those shaders generated, how do you get the one you need from the CShadersPack object? Say we have the A, B and C #ifdefs, and we want to get the AC shader. One way to do this would be to search through the shaders array in CShadersPack and find the one whose name matches the AC combination. I think I don't have to point out how wasteful in CPU time that would be. There is a much better way. Let's again see how combinations are generated. First, we have #ifdef A. The combinations are:

No #ifdef
A

Now we add B:

No #ifdef
A
B
BA

Now C:

No #ifdef
A
B
BA
C
CA
CB
CBA

So as you can see, there is some regularity. For instance, if we start with shaderIndex = 0 and we want to pick a shader with feature A, we have to add $1$ to shaderIndex. If we also want feature B, we must add $2$; if we want feature C, we must add $4$, and so on... So to get the AC combination we get shaderIndex = 1 + 4 = 5. Looking at the list above we see that indeed, CA is at index $5$ in the list (0-based indexing). Adding a couple of integers and indexing the shaders array will be much faster than looping through the array and comparing strings. In real code this looks like this:
int shaderIndex = 0;

if (useSpecular)
 shaderIndex += 1;
if (useReflectionsCubeMap)
 shaderIndex += 2;
if (useSoftShadows)
 shaderIndex += 4;
if (meshesInstances[i]->material.diffuseShadingType != CMeshMaterial::dstNone)
 shaderIndex += 8;
if (meshesInstances[i]->material.diffuseShadingType == CMeshMaterial::dstNormalMapped)
 shaderIndex += 16;
if (meshesInstances[i]->material.ambientShadingNormalMapped)
 shaderIndex += 32;
if (useRimLighting)
 shaderIndex += 64;

Okay, so at this point I had a nice system to manage shader combinations. Unfortunately, I quickly discovered an annoying problem with nested #ifdefs. In my mesh sunlight shader I have an #ifdef called DIFFUSE_SHADING_NORMAL. If it is turned on, the shader performs classical NdotL lighting based on the interpolated vertex normal. However, I also want the ability to perform the shading using the normal from a normal map. So inside DIFFUSE_SHADING_NORMAL I have DIFFUSE_SHADING_NORMAL_MAPPED, which takes the normal from the normal map into the computations. Does this cause a big problem? Yup, it does.

Let's now assume we have a shader with #ifdefs A, B and C, but C is located inside #ifdef B. Something like this:
#ifdef A
 A code
#endif

#ifdef B
 B code
 #ifdef C
  C code
 #endif
 B code
#endif
Combinations generated with my system would still be:

No #ifdef
A
B
BA
C
CA
CB
CBA

But hey, since C is dependent on B, the combination AC will generate exactly the same code as combination A. As we add more and more #ifdefs we get more and more duplicated shaders. This is wasteful in terms of compilation/assembling time, as well as GPU memory spent storing those duplicated shaders.

A way around this problem would be to, put simply, generate combinations in a more intelligent way. This raises some new problems, however, like computing a proper index into the shaders array in CShadersPack, or passing the #ifdefs list to CShadersPack::init. The dependencies would have to be marked in some way. This solution can certainly be done, but fortunately I figured out a way to handle this problem that is much easier to implement and manage.

First of all, the general idea of generating argument combinations remains the same, so we always have $2^n$ combinations, where $n$ is the number of arguments. However, I do not generate the shaders immediately. Instead, a particular shader combination is generated when a mesh with a particular material is about to be used. So let's say I want to use a shader that only has rim lighting (see the "real" code above). In that case I look up the arguments combination under index $64$. I see that it has not been used yet, so I generate that shader, store the pointer in the shaders array, and hold the index into that shaders array in the arguments combinations list. Here's the code for the CShadersPack class:
class CShadersPack
{
 struct ShaderCombination
 {
  std::string args;
  int indexToShadersList;
 };

private:
 std::string fileName;
 ShaderType shaderType;

 int combinationsNum; // 2 raised to number of args
 int shadersNum; // number of actually generated shaders

 ShaderCombination *combinationsList;
 std::vector< CShader* > shaders;

public:
 void init(const std::string &fileName, ShaderType shaderType, const std::string &args = "", const std::string &argsToAffix = "");
 void free();

 CShader* getShader(int index);
};
As you can see, there is a ShaderCombination structure. The list of that type (combinationsList) is generated a priori and has a size of $2^n$, where $n$ is the number of arguments. indexToShadersList is initialized with the value $-1$, indicating that this particular combination does not have a shader generated yet. Once the shader is generated, this value will index into the shaders vector.

Let's now see what CShadersPack::getShader looks like:
CShader* CShadersPack::getShader(int index)
{
 if (combinationsList[index].indexToShadersList == -1)
 {
  combinationsList[index].indexToShadersList = shaders.size();

  CShader* shader = new CShader();
  shader->init(fileName, shaderType, combinationsList[index].args);
  shaders.push_back(shader);
 }

 return shaders[combinationsList[index].indexToShadersList];
}
So, if indexToShadersList is still $-1$, it means we have to compile/assemble the shader and store it in the shaders list. Finally, a proper pointer to the shader is returned.
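Usage then boils down to computing the bit-based shaderIndex shown earlier and asking the pack for the shader. For example (bindPixelShader is a hypothetical stand-in for whatever the engine actually uses to bind the shader):
int shaderIndex = 0;

if (useSpecular)
 shaderIndex += 1;
if (useSoftShadows)
 shaderIndex += 4;
// ... the remaining features, as in the listing above

// compiled lazily on first use, fetched from the cache afterwards
CShader *shader = meshSunLightPixelShadersPack.getShader(shaderIndex);
bindPixelShader(shader); // hypothetical bind call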

This solution is simple, robust and easy to implement and manage. One flaw is that we cannot generate all the shaders (actual shaders) when the game starts, because we don't know what materials are used, so we have to generate them during actual gameplay. This will, however, introduce some slight hitching when an object with a new material combination shows up. But hey, can't we just cache the already-used argument combinations on disk, and generate shaders with those arguments when the game starts? Yes, we can! And that is what my engine does.
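A minimal sketch of what such a cache could look like (these two member functions are hypothetical additions, not part of the actual CShadersPack shown above): on shutdown, write out the index of every combination that got a shader during gameplay; on the next startup, read the file back and call getShader for each cached index before the first frame.
// hypothetical disk cache of used combination indices
#include <fstream>

void CShadersPack::saveUsedCombinations(const std::string &cacheFileName)
{
 std::ofstream file(cacheFileName.c_str());

 for (int i = 0; i < combinationsNum; i++)
  if (combinationsList[i].indexToShadersList != -1)
   file << i << "\n"; // this combination was actually used
}

void CShadersPack::warmUpFromCache(const std::string &cacheFileName)
{
 std::ifstream file(cacheFileName.c_str());
 int index;

 while (file >> index)
  if (index >= 0 && index < combinationsNum)
   getShader(index); // compile now instead of mid-gameplay
}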