Fog Plane Shader Breakdown

Intro

This post includes two examples of producing fog shader effects. In both cases, the shader is applied to a flat plane or quad where the alpha of the surface is altered based on depth values to produce a fog effect. As shown in the tweet above, it can be used to produce a vertical fog effect, or used in doorways or cave exits/entrances to fade into white/black (or any other colour).

The first example is simpler, but seems to have some inaccuracies at very shallow angles (between the camera’s view direction and the plane surface), where the fog can look squashed / more dense. The second example is more complicated but doesn’t have this issue, so it looks better if the camera can get close to the fog effect. Note, however, that in both examples the camera cannot pass through the quad/plane or the fog effect will disappear, so it is best suited to 3rd person / top-down / fixed camera angles.


The graphs provided will also only work in a Perspective camera projection.

For Orthographic, a slightly different method will be required. See the Depth Difference and Reconstruct World Pos from Depth sections near the end of the Depth post for some insight on how to handle that.


Breakdown

In order to create these fog effects, we need to sample the depth texture from the camera. Shadergraph has a Scene Depth node which will handle this as well as the conversions for us. By using the Eye sampling mode it will output the linear depth in terms of eye (aka view) space units. If you aren’t familiar with the node, you can read more in the Depth post.
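For reference, here is a rough HLSL sketch of what that node works out to in URP. This is a minimal sketch only; SampleSceneDepth and LinearEyeDepth come from the URP shader library, while the GetSceneEyeDepth name and screenUV parameter are just illustrative.

```hlsl
// Rough URP equivalent of the Scene Depth node set to Eye mode.
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"

float GetSceneEyeDepth(float2 screenUV)
{
    float rawDepth = SampleSceneDepth(screenUV);      // non-linear value from the camera depth texture
    return LinearEyeDepth(rawDepth, _ZBufferParams);  // linear depth in eye (view) space units
}
```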

Next we need the depth to the surface of the plane. To do this we can take the alpha/w component of the Screen Position node set to Raw mode.

It’s not too important to know why this is the object’s depth, but it’s due to how 3D object positions are converted to screen coordinates by the model-view-projection matrix. It is important that we set the node to Raw however, as in the Default mode each component is divided by this alpha/w value, including the alpha/w component itself, meaning it would just be 1. That division is referred to as the “perspective divide”, which converts clip space coordinates into screen coordinates.

We could instead use the Position node set to View Space, and Negate the Z/B output from a Split node to obtain this object depth. In a perspective projection both of these would produce the same output.

(Image)
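As a rough sketch of why these two approaches match (assuming positionWS is the fragment’s world position; the transform functions are from the URP shader library):

```hlsl
// Two equivalent ways of getting the fragment's own eye depth (perspective projection).
float SurfaceEyeDepth(float3 positionWS)
{
    float4 positionCS = TransformWorldToHClip(positionWS); // clip space, before the perspective divide
    return positionCS.w;                                   // = the Raw Screen Position alpha/w in the graph
    // equivalently: return -TransformWorldToView(positionWS).z;  (negated View Space Z)
}
```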

With these two depth values, we can now move onto producing the fog shaders.


Example 1 (Simple)

We can Subtract the two depth values to obtain the difference in depth between the object’s surface and the scene depth. The further apart these two depth values are, the larger the output value is, producing a fog effect.

Before we plug that into the alpha, it would be good if we could control the density of the fog somehow. We can do this by using a Multiply node (or Divide, if you prefer to use density values like 2 or 3 instead of 0.5 and 0.33). We should also put the output into a Saturate node, to clamp values to be between 0 and 1, as values larger than 1 may produce very bright artifacts – especially when using bloom post processing.

Finally, in order to change the colour of the fog, we create a Color property to use as the Color input on the Master node. I’m also taking the alpha component of our property and multiplying it with the saturated output so the alpha value of our colour is taken into account.

(Image)
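Putting Example 1 together, a minimal fragment-style sketch (the two depth values are the ones from earlier; fogColour and density stand in for the graph’s Color property and the Multiply value):

```hlsl
// Example 1: depth-difference fog, as a sketch.
float4 SimpleFog(float sceneDepth, float surfaceDepth, float4 fogColour, float density)
{
    // Subtract -> Multiply (density) -> Saturate
    float fog = saturate((sceneDepth - surfaceDepth) * density);

    // Color input + alpha on the Master node (the last Multiply respects the colour's alpha)
    return float4(fogColour.rgb, fog * fogColour.a);
}
```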


Make sure to set the alpha component of the Fog Colour in the shadergraph blackboard (and in the material inspector!) to 1 (or 255 if using the 0-255 range), otherwise the effect will be invisible. You can use other values if you don’t want the fog to be fully opaque at a distance.

If you don’t want the alpha to be taken into account, you can remove the last Multiply.


Example 2 (Accurate)

URP / Built-in

For a more accurate fog, we can calculate the world position of the scene depth coordinate and use that to produce the fog instead, using the normal direction of the plane/quad to have the fog in the correct direction. Note that this will likely be less performant than the simpler version.


In Shader Graph v11 (Unity 2021.1+) the View Direction node is now normalised in all pipelines. We must use the newer View Vector node to achieve the correct result.

(Text in this post has been changed to account for this, but images have not!)

Create a View Vector node set to World space. This obtains a vector from the pixel/fragment position to the camera. The magnitude of this vector is the distance between the camera and the fragment, but this is not the same as depth – the depth is the distance from the fragment position to a plane perpendicular to the camera, not to the camera position itself. This creates a triangle as shown in the image.

(Image)
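To illustrate the difference (a sketch, assuming positionWS is the fragment’s world position and using URP shader library functions):

```hlsl
float3 viewVec  = _WorldSpaceCameraPos - positionWS;    // the View Vector described above (fragment -> camera)
float  dist     = length(viewVec);                      // straight-line distance to the camera position
float  eyeDepth = -TransformWorldToView(positionWS).z;  // depth: distance to the plane perpendicular to the camera
// eyeDepth is always <= dist; they only match along the camera's central view axis.
```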

In order to reconstruct the world position we need to scale this vector to reach the scene position behind the quad/plane. The Scene Depth and the “Scene Pos” we want create another triangle, as shown in the image. We can use the scale factor between the two triangles to obtain the vector to the scene position, which can be achieved by dividing the View Vector by the Raw Screen Position W/A depth and then multiplying by the Scene Depth. We can then Subtract this scaled vector from the camera’s world Position (from the Camera node) to get the scene world position.

(Image)
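The same reconstruction written out as a sketch, assuming a perspective camera, with surfaceDepth being the Raw Screen Position w and sceneDepth the Scene Depth node’s Eye output:

```hlsl
// Scale the view vector out to the scene position behind the plane.
float3 ReconstructScenePosWS(float3 positionWS, float surfaceDepth, float sceneDepth)
{
    float3 viewVector   = _WorldSpaceCameraPos - positionWS;      // View Vector node (World)
    float3 scaledVector = viewVector / surfaceDepth * sceneDepth; // Divide, then Multiply
    return _WorldSpaceCameraPos - scaledVector;                   // Subtract from the Camera node's Position
}
```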

With this scene world position, we can now handle the fog. To make sure the fog is in the correct direction, we can Transform the position to Object space and take the Dot Product between it and the normalized Normal Vector in Object space. Since the normal vector points outwards from the plane though, we also need to Negate the result of the Dot Product (or negate the normal vector going into the dot product).

A side effect of using the Transform to Object space is that we don’t need a property to control the density of the fog like in the first example. We can change it just by scaling the quad/plane in the normal’s direction instead.

We then Saturate our output to clamp values above 1, as values larger than 1 may produce very bright artifacts – especially when using bloom post processing.

Finally, in order to change the colour of the fog, we create a Color property to use as the Color input on the Master node. I’m also taking the alpha component of our property and multiplying it with the saturated output so the alpha value of our colour is taken into account.

(Image)
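And Example 2 put together as a sketch (URP / Built-in; normalOS stands in for the plane’s object-space normal, and fogColour for the graph’s Color property):

```hlsl
// Example 2: fog from the reconstructed scene position, as a sketch.
float4 AccurateFog(float3 scenePosWS, float3 normalOS, float4 fogColour)
{
    float3 scenePosOS = TransformWorldToObject(scenePosWS);       // Transform node, World -> Object
    float  fog = saturate(-dot(scenePosOS, normalize(normalOS))); // Dot Product -> Negate -> Saturate
    return float4(fogColour.rgb, fog * fogColour.a);              // Color + alpha on the Master node
}
```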


Make sure to set the alpha component of the Fog Colour in the shadergraph blackboard (and material inspector!) to 1 (or 255 if using the 0-255 range), otherwise the effect will be invisible. You can use other values if you don’t want the fog to be fully opaque at a distance.

If you don’t want the alpha to be taken into account, you can remove the last Multiply.

To control the density of the fog, adjust the Y scale of the plane. We can do this as we transform to object space, rather than needing to use an additional property.


HDRP

If you are working in HDRP, the following graph should work instead. It doesn’t include the Fog Colour property, but it should be easy to add that in and connect it to the Color input on the Master node (or Master Stack in newer versions).

(Image)

We don’t use the View Direction node here, as in HDRP it was always normalised, which isn’t what we want. It may be possible to use the same View Vector setup as above if in Unity 2021.1+, but otherwise this method using the Position node in World space should work too. Due to Camera-Relative rendering we also don’t need to worry about subtracting this from the camera’s position.
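As a very rough sketch of that difference (assuming positionRWS is the camera-relative world position from the Position node, and positionCS.w / sceneDepth are the same depth values as before):

```hlsl
// HDRP: positionRWS is already relative to the camera, so scaling it gives the
// (camera-relative) scene position directly; no camera subtraction is needed.
float3 scenePosRWS = positionRWS / positionCS.w * sceneDepth;
// scenePosRWS then feeds the same Transform -> Dot Product -> Negate -> Saturate chain as above.
```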


Thanks for reading! 😊

If you find this post helpful, please consider sharing it with others / on socials
Donations are also greatly appreciated! 🙏✨

(Keeps this site free from ads and allows me to focus more on tutorials)

