
Fog Plane Shader Breakdown

URP
Shader Graph

Intro

This post includes two examples of fog shader effects. In both cases, the shader is applied to a flat plane or quad, where the alpha of the surface is altered based on depth values to produce a fog effect. As shown in the tweet above, it can be used to produce a vertical fog effect, or used in doorways or cave exits/entrances to fade into white/black (or any other colour).

The first example is simpler, but seems to have some inaccuracies at very shallow angles (between the camera’s view direction and the plane surface), where the fog can look squashed / more dense. The second example is more complicated but doesn’t have this issue, so it looks better if the camera can get close to the fog effect. Note however that in both examples the camera cannot pass through the quad/plane, or the fog effect will disappear, so it is best suited to 3rd person / top-down / fixed camera angles.

The graphs provided will also only work with a Perspective camera projection. For Orthographic, a slightly different method is required. See the Depth Difference and Reconstruct World Pos from Depth sections in the Depth post for some insight into how to handle that.


Breakdown

In order to create these fog effects, we need to sample the depth texture from the camera. Shadergraph has a Scene Depth node which will handle this as well as the conversions for us. By using the Eye sampling mode it will output the linear depth in terms of eye (aka view) space units. If you aren’t familiar with the node, you can read more in the Depth post.
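For reference, here’s a rough HLSL sketch of what the Scene Depth node (Eye mode) boils down to in URP. This assumes the URP DeclareDepthTexture include is available and that the camera’s Depth Texture is enabled; the function and parameter names are mine:

```hlsl
// Sketch : Scene Depth node, Eye sampling mode (URP, perspective camera)
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"

float SceneEyeDepth(float2 screenUV)
{
    // Raw, non-linear value from the camera depth texture (0-1 range)
    float rawDepth = SampleSceneDepth(screenUV);
    // Convert to linear depth in eye/view space units
    return LinearEyeDepth(rawDepth, _ZBufferParams);
}
```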

Next we need the depth to the surface of the plane. To do this we can take the alpha/w component of the Screen Position node set to Raw mode.

It’s not too important to know why this is the object’s depth, but it’s due to how 3D object positions are converted to screen coordinates by the model-view-projection matrix. It is important that we set the node to Raw however, as in the Default mode each component is divided by this alpha/w value, including the alpha/w component itself, meaning it’ll just be 1. This division is usually referred to as the “perspective divide” : it converts the clip space coordinates (obtained after applying the model-view-projection matrix) into normalised screen coordinates, projecting the 3D perspective onto the 2D screen.

(I believe we could also use the Position node set to View Space, and Negate the Z/B output from a Split node to obtain this object depth. I’m used to using the other approach instead though).
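In code terms, both routes give the same linear eye depth for the plane’s surface. A quick sketch (the names here are mine):

```hlsl
// Sketch : the fog plane's own eye depth, two equivalent ways
// screenPosRaw : Screen Position node, Raw mode (clip-space position before the perspective divide)
// positionVS   : Position node, View space
float SurfaceEyeDepthFromScreenPos(float4 screenPosRaw)
{
    return screenPosRaw.w; // w holds the view-space depth for a perspective projection
}

float SurfaceEyeDepthFromViewPos(float3 positionVS)
{
    return -positionVS.z; // view space looks down -Z, so negate to get a positive depth
}
```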

With these two depth values, we can now move on to producing the fog shaders.

Example 1 (Simple)

We can Subtract the two depth values to obtain the difference in depth between the object’s surface and the scene depth. The further apart these two depth values are, the larger the output value is, producing a fog effect.

Before we plug that into the alpha, it would be good if we could control the density of the fog somehow. We can do this by using a Multiply node (or Divide, if you prefer to use density values like 2 or 3 instead of 0.5 and 0.33). We should also put the output into a Saturate node, to clamp values to be between 0 and 1, as values larger than 1 may produce very bright artifacts – especially when using bloom post processing.

Finally, in order to change the colour of the fog, we create a Color property to use as the Color input on the Master node. I’m also taking the alpha component of our property and multiplying it with the saturated output so the alpha value of our colour is taken into account.
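Putting Example 1 together, a sketch of the equivalent fragment logic might look like this (the density and fogColor inputs stand in for the graph’s properties):

```hlsl
// Sketch : Example 1 (simple) fog colour & alpha
// sceneEyeDepth   : Scene Depth node (Eye)
// surfaceEyeDepth : Screen Position node (Raw) alpha/w
float4 SimpleFog(float sceneEyeDepth, float surfaceEyeDepth, float4 fogColor, float density)
{
    float depthDifference = sceneEyeDepth - surfaceEyeDepth;  // larger gap = denser fog
    float fog = saturate(depthDifference * density);          // clamp to 0-1 to avoid bloom artifacts
    return float4(fogColor.rgb, fog * fogColor.a);            // Color / Alpha for the Master node
}
```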

IMPORTANT : For the Scene Depth node to work, the Depth Texture option must be enabled on the URP Asset (or camera), and the Master node’s Surface setting must be Transparent so the alpha takes effect. Transparent objects also aren’t drawn into the depth texture, so the plane won’t occlude itself.

Example 2 (Accurate)

For a more accurate fog, we can calculate the world position of the scene depth coordinate and use that to produce the fog instead, using the normal direction of the plane/quad to have the fog in the correct direction. Note that this will likely be less performant than the simpler version.

Create a View Direction node set to World space. This obtains a vector from the pixel/fragment position to the camera. The magnitude of this vector is the distance between the camera and fragment, but this is not the same as depth. The depth is the distance from the fragment position to a plane that passes through the camera and is perpendicular to its view direction, not the distance to the camera position itself. This creates a triangle as shown in the image.
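To make the distinction concrete, here’s a small sketch comparing the two (cameraForward would be the camera’s normalised world-space forward vector; these names are mine):

```hlsl
// Sketch : distance vs depth - only equal along the camera's centre ray
float DistanceToCamera(float3 positionWS, float3 cameraPosWS)
{
    return length(cameraPosWS - positionWS);              // straight-line distance
}

float EyeDepth(float3 positionWS, float3 cameraPosWS, float3 cameraForward)
{
    return dot(positionWS - cameraPosWS, cameraForward);  // projection onto the camera's forward axis
}
```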

Note : View Direction is not normalised in URP, but is in HDRP. If you are using HDRP you will need to use the Position node in World space instead as we need it non-normalised. World positions in HDRP are also Camera-Relative which means we also don’t need to worry about subtracting this from the camera’s position later. At the end of the post I’ll provide another graph showing how you can achieve the fog effect in HDRP.

In order to reconstruct the world position we need to scale this vector so it reaches the scene position behind the quad/plane. The Scene Depth and the “Scene Pos” we want create another triangle, as shown in the image. We can use the scale factor between the two triangles to obtain the vector to the scene position, which can be achieved by dividing the View Direction by the Raw Screen Position W/A depth and then multiplying by the Scene Depth. We can then Subtract this scaled vector from the camera’s world Position (from the Camera node) to get the scene world position.
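As a rough HLSL sketch of that reconstruction (assuming URP’s unnormalised View Direction, which points from the fragment towards the camera, with _WorldSpaceCameraPos standing in for the Camera node’s Position output):

```hlsl
// Sketch : reconstruct the world position of the scene behind the fog plane (perspective only)
// viewDirWS       : View Direction node, World space (unnormalised in URP) = cameraPos - fragmentPos
// surfaceEyeDepth : Screen Position node (Raw) alpha/w
// sceneEyeDepth   : Scene Depth node (Eye)
float3 ReconstructScenePositionWS(float3 viewDirWS, float surfaceEyeDepth, float sceneEyeDepth)
{
    // Rescale the fragment->camera vector so it spans camera->scene instead
    float3 scaledViewDir = viewDirWS / surfaceEyeDepth * sceneEyeDepth;
    return _WorldSpaceCameraPos - scaledViewDir;
}
```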

With this scene world position, we can now handle the fog. To make sure the fog is in the correct direction, we can Transform the position to Object space and take the Dot Product between it and the normalized Normal Vector in Object space. Since the normal vector points outwards from the plane though, we also need to Negate the result of the Dot Product (or negate the normal vector going into the dot product).

A side effect of using the Transform to Object space is that we don’t need a property to control the density of the fog like in the first example. We can change it by scaling the quad/plane in the direction of its normal instead.
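A sketch of that step (normalOS would be the plane’s object-space normal, e.g. (0, 1, 0) for Unity’s default plane):

```hlsl
// Sketch : fog amount from the reconstructed scene position
float AccurateFogAmount(float3 scenePositionWS, float3 normalOS)
{
    float3 scenePositionOS = TransformWorldToObject(scenePositionWS);
    // Positions below the plane have a negative dot product with the outward normal, so negate.
    // Scaling the plane along its normal stretches object space, acting as the density control.
    return -dot(scenePositionOS, normalize(normalOS));
}
```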

We then Saturate our output to clamp values to the 0-1 range, as values larger than 1 may produce very bright artifacts, especially when using bloom post processing.

Finally, in order to change the colour of the fog, we create a Color property to use as the Color input on the Master node. I’m also taking the alpha component of our property and multiplying it with the saturated output so the alpha value of our colour is taken into account.
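Combining the sketches above, Example 2 boils down to something like this (again, fogColor stands in for the Color property):

```hlsl
// Sketch : Example 2 (accurate) fog colour & alpha, using the functions above
float4 AccurateFog(float3 viewDirWS, float surfaceEyeDepth, float sceneEyeDepth,
                   float3 normalOS, float4 fogColor)
{
    float3 scenePositionWS = ReconstructScenePositionWS(viewDirWS, surfaceEyeDepth, sceneEyeDepth);
    float fog = saturate(AccurateFogAmount(scenePositionWS, normalOS));
    return float4(fogColor.rgb, fog * fogColor.a);
}
```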

IMPORTANT : As with the first example, the Surface setting must be Transparent and the Depth Texture enabled for the Scene Depth node to work.

If you are working in HDRP, the following graph should work instead (Note it doesn’t include the Fog Colour property, but you can easily add that in and connect it to the Color input on the Master node) :



Thanks for reading! If you have any comments, questions or suggestions you can drop me a tweet or join my discord. If this post helped, consider sharing a link with others!

~ Cyan

