FAQ

Over time I’ve answered quite a few questions in the Shaders (and URP) channels on the Official Unity Discord (as well as my own Discord). One problem with using Discord over regular forums is that those answers aren’t searchable through Google - so I’ve made this page!

Click the buttons below to fold out answers! This also changes the URL, so you can link to them! :)

General

You’ll commonly see your materials turn magenta if you’ve tried switching to a different render pipeline. This occurs because each pipeline uses different shaders.

You can automatically convert any materials that use Unity’s shaders (e.g. Standard) :

Any custom shaders (that you’ve written or obtained from downloaded assets) cannot be converted automatically. You would need to rewrite them or find a replacement asset.

If you aren’t upgrading, materials can also turn magenta when there is an error in the shader syntax (you should be able to see these errors in the Console, or in the Inspector when the shader asset is selected).

In a build, magenta materials could also mean the shader uses a compile target which is higher than what is supported by the platform.

Correctly sorting transparent geometry can be difficult. It’s basically an unsolved problem in realtime computer graphics. There usually isn’t a single answer that can solve every case.

Opaque objects don’t have this problem as they write to the Depth Buffer (ZWrite), which allows Depth Testing (ZTest) to sort on a per-fragment/pixel basis. Unity also renders opaque objects closer to the camera first, so we don’t waste time rendering pixels hidden behind others.

But with the Transparent queue, shaders typically don’t write to the depth buffer and we have to render objects further away first, in order to achieve the correct Alpha Blending. Even if depth write was enabled, it still wouldn’t sort correctly when blending is involved.

Quick tips :

Solutions for some common transparent sorting issues are listed below.

Render Queue

Transparent objects are sorted per-object, based on the distance from the camera to their mesh origin. If you know that a particular transparent material should always appear behind / on top of others, you can use the Render Queue or Sorting Priority on the material to force the render order.

Pre-sorting Mesh Triangle Order

When rendering a mesh, the faces are rendered in order of their indices/triangles array. In cases where you don’t have intersecting geometry, it may be possible to pre-sort the triangles to force the draw order. This can be more performant than using the above method, but requires some setup in the modelling program.

(Image)

A mesh consisting of layered spheres.
Left : The inner spheres are incorrectly rendering on top.
Right : Sorting corrected by combining layers in specific order in Blender.

It may vary between 3D modelling programs, but the assumption is that as you combine multiple meshes, their triangles are appended in that order.

Example in Blender :

  1. Separate each “layer” of triangles into separate objects.
  2. Select two layers - outermost layer first. I find it easiest to do this in the Outliner while holding Ctrl.
  3. Combine/Join with Ctrl+J (while hovering over the 3D Viewport)
  4. Repeat steps 2 & 3 until each layer is collapsed down into a single mesh
(Image)

Blender Outliner, showing each layer being combined after repeating these steps.

Transparent Depth Prepass

When you don’t want to see overlapping faces of the model through itself, we can use a prepass to write to the depth buffer only. When we then render our object normally it can ZTest against this.

(Image)

Left : Normal Render
Right : Render with Prepass
(Character from Kenney Animated Characters 2)

For Built-in RP, can add a pass before your main one :

Pass {
    ZWrite On       // Write to the Depth Buffer
    ColorMask 0     // Don't write to the Color Buffer
}

(Can find another example here : https://forum.unity.com/threads/transparent-depth-shader-good-for-ghosts.149511/)

In other pipelines we can’t use multi-pass shaders, but you can instead use a separate shader/material with -1 on the Sorting Priority (or Render Queue). Can use the following code in a .shader file (Create → Unlit Shader)

Shader "Custom/DepthPrepass" {
    Properties { }
    SubShader {
        Tags { "Queue"="Transparent-1" }
        Pass {
            ZWrite On
            ColorMask 0
        }
    }
}

OIT

There are also methods to achieve Order Independent Transparency - but I’m not at all familiar with these. I think they’re quite expensive (in terms of performance and/or memory), and Unity does not include any support for them, so you’d need to implement this yourself - or find an asset/package that handles it for you (though that would still require custom shaders, so it’s not easy to integrate).

Shader Graph

The Main Preview showing as Magenta/Pink usually means that the graph Target is not supported in the current Render Pipeline. If you have installed the URP or HDRP packages, more is required to properly configure Unity to use those pipelines, such as assigning a pipeline asset under Project Settings.

See the documentation for steps (can select version in top left of those pages) :

If you have already done this, check the Target under the Graph Settings tab of the Graph Inspector window. If you have created a “Blank Shader Graph” you need to add a target to that list before you can use the graph. In 2021.2+ it is possible to add the Built-in RP as a target. Older versions do not have support for Built-in and must use URP or HDRP.

May be able to find more information in my Intro to Shader Graph post.

Note that you may also see other previews showing as Magenta/Pink when obtaining undefined values (NaN), such as when dividing by zero, or using negative values in the Power node (as the HLSL pow function only supports positive ranges).

In v10+, you can obtain a redirect node by double left-clicking a connection wire. Or right-click and select “Add Redirect”.

Intro to Shader Graph post - Redirect

This is intentional. Previews inside nodes do not show alpha/transparency, only the RGB data. It’s common for colours to “stretch out” in fully transparent areas to avoid artifacts. If it was instead black in these areas, the colour might darken along the transparent edge. (That may look even worse when dealing with mipmaps, though you typically wouldn’t have those with sprites in particular)

Only the Main Preview will show transparency.

(Image)

(Robot Character from Kenney Toon Characters 1)

Even with these stretched-out previews, the final result should look correct provided the alpha channel (A output from Sample Texture 2D) is connected to the Alpha port in the Master Stack.

If for some reason you do need the texture masked correctly to the alpha, can Multiply the RGBA and A outputs. Or if you need to control the background color, put the A output into the T input on a Lerp with A as the background and B as the RGBA output.

Some nodes can only be connected to the Fragment stage, as they rely on screenspace partial derivatives (DDX/DDY) which are only available in the fragment shader.

Shader Graph does not make this very clear, but it is most commonly encountered with the Sample Texture 2D node, which uses these derivatives behind the scenes to calculate the mipmap level when sampling. In the vertex stage you would instead need to use the Sample Texture 2D LOD version.

There are other nodes that rely on the derivatives, such as Normal From Height and the various procedural shapes (e.g. Ellipse, Rectangle, Polygon, Rounded Rectangle, Rounded Polygon).

There are also nodes that use SAMPLE_TEXTURE2D(tex, sampler, uv) in their code (e.g. Triplanar), but these could likely be rewritten manually via a Custom Function node to use the SAMPLE_TEXTURE2D_LOD(tex, sampler, uv, lod) macro instead, in order to support the vertex shader. (Can view generated code for a specific node by right-clicking it and selecting Show Generated Code or Open Documentation)
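As a rough sketch of that approach (assuming a recent Shader Graph version (10.2+), where Texture2D inputs on Custom Function nodes are passed as UnityTexture2D structs - the port names here are just placeholders), a String mode Custom Function could sample in the vertex stage like this :

// Inputs : UnityTexture2D "Tex", Vector2 "UV", Float "LOD" / Output : Vector4 "Out"
// SAMPLE_TEXTURE2D_LOD samples an explicit mip level, so no screenspace derivatives are needed
Out = SAMPLE_TEXTURE2D_LOD(Tex.tex, Tex.samplerstate, UV, LOD);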

For more information / nodes affected, see Intro to Shader Graph post - Fragment-Only Nodes

It is not possible to change the frequency/wavelength or adjust the phase of the Sine Time output from the Time node. You can only remap values, such as using a Multiply to adjust the amplitude of the wave.

If you need to control the frequency, use the Time output instead and Multiply by your frequency. To adjust the phase, use Add or Subtract. The result would then be put into a Sine node. Can also Multiply again after this to adjust the amplitude, same as before.
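For reference, the equivalent in HLSL would be something like the following (a sketch - frequency, phase and amplitude stand in for whatever values/properties you use) :

// _Time.y is time in seconds (the same value as the Time output of the Time node)
float wave = sin(_Time.y * frequency + phase) * amplitude;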

While it’s possible to rotate meshes in C# to produce billboarding (e.g. with transform.LookAt), it’s usually cheaper to handle effects like this in the shader - especially if many objects require billboarding.

Shader Graph makes it a little trickier to handle as it does space conversions behind the scenes. The Position port in the Master Stack is intended to be in Object space, rather than the clip space output a written vertex shader would usually have.

First up, we need a matrix to handle the effect. Can use one of the following.

If you don’t want shadows (e.g. for billboard fire / light flares / etc), can use either of these, and turn off Cast Shadows in the Graph Settings (or via the Mesh Renderer)

To make billboard only rotate around the Y axis :

One of the above matrices is then used in the following setup :
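For comparison, here is a rough HLSL sketch of the view-space version of spherical billboarding you’d see in a hand-written vertex shader - not the exact node setup from the images above (which also converts the result back to Object space for the Position port), and it assumes the object’s own rotation/scale can be ignored :

// Keep the object’s origin, but apply the vertex offsets in view space so the quad always faces the camera
float3 originWS   = UNITY_MATRIX_M._m03_m13_m23;                      // object origin (translation column of the model matrix)
float3 originVS   = mul(UNITY_MATRIX_V, float4(originWS, 1.0)).xyz;   // origin in view space
float3 positionVS = originVS + positionOS.xyz;                        // positionOS = vertex position from the mesh
float4 positionCS = mul(UNITY_MATRIX_P, float4(positionVS, 1.0));     // final clip space position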

Textures are typically applied to the mesh using UV coordinates stored in the vertex data. But we can also use other coordinates.

In a technique usually referred to as “Worldspace Planar Mapping” (sometimes also called “Worldspace Projected UV”), we use the Position node set to World space. This is a Vector3 output, but nodes with a UV port expect a Vector2, so we first need to Swizzle (or Split and recombine into a Vector2 node), which also lets us reorder the components. For example, we can use the RB (aka XZ) axes.

This can then be put into a Sample Texture 2D, or any other nodes that have a UV port - such as procedural noise (Simple Noise and Gradient Noise).

Because we are in World space this acts like projecting the texture from above/below (as the G/Y axis is the one that we didn’t use). The texture will be stretched for any faces that are vertical, but this method is useful for flat surfaces. The texture also does not move, rotate or scale with the object, and so seamlessly continues over neighbouring objects.

We could also project along other axes by using the RG/XY or GB/YZ components (and swizzle these further for 90 degree rotations).
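In code terms, the node setup boils down to something like this (just a sketch - _MainTex and _Tiling are placeholder names, assumed to be declared elsewhere) :

// Project the texture from above/below by using the world XZ position as UVs
float2 uv  = positionWS.xz * _Tiling;                           // swap in .xy or .zy to project along other axes
float4 col = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, uv);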

Sampling from all three axes, then blending based on the Normal Vector, is known as Triplanar Mapping. There is a Triplanar node which handles this for you - though with only one texture input for all three axes. For supporting different textures per axis, you’d need to handle it yourself by recreating it in nodes or using a Custom Function. I have a page on my old site explaining this further.

In some cases you might want to use the Fraction node (aka frac() in HLSL) on the UVs (e.g. for repeating sections of a larger texture). But when using this with the Sample Texture 2D you may notice some strange pixellation artifacts along the seam produced by the jump in the UV coordinates.

(Also, this isn’t limited to the Fraction node, I just find that this is the most common place where it is noticeable. It occurs for anything that causes a jump in the UVs. Another fairly common example is the seam in the Y component when using the Polar Coordinates node)

This pixellation occurs because the Sample Texture 2D calculates the mipmap level for sampling by using the partial screenspace derivatives (DDX, DDY), which compare values between neighbouring pixels. The fragment shader can do this as it runs in 2x2 pixel blocks.

Usually this mipmapping is a good thing, as it reduces artifacts when viewing the texture at shallow/glancing angles. But when comparing values for pixels along this seam, the difference is much larger than expected - which is interpreted as needing to sample a high LOD/mipmap level. At mipmap resolutions this small, it typically results in the colour being an average of the entire texture.

This pixellated seam can be fixed in a few ways :
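For example, one common fix (as a sketch - not necessarily the exact options listed here originally) is to sample with explicit gradients taken from the continuous UVs (before the Fraction), e.g. via a Custom Function, so the seam no longer produces a huge derivative. Disabling mipmaps on the texture, or using Sample Texture 2D LOD, would also avoid the issue :

// uv           : the UVs after Fraction (contains the seam)
// uvContinuous : the UVs before Fraction (no jump), used only for mip selection
float4 col = SAMPLE_TEXTURE2D_GRAD(_MainTex, sampler_MainTex, uv, ddx(uvContinuous), ddy(uvContinuous));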

This occurs because the Normal vectors stored in the mesh are intended for the front faces only. We can flip them for back faces by making use of the Is Front Face and Branch nodes.

If using a Normal Map, we would use the result of our Sample Texture 2D in the True input, and Multiply it by (1, 1, -1) for the False input. This would then go into the Normal (Tangent Space) port on the Fragment stage of the Master Stack.

If not using a Normal Map, we could use the Normal Vector node in Tangent space - or just a Vector3 set to (0, 0, 1) as those will be equal…

But it should actually be cheaper to change the Fragment Normal Space to “World” in the Graph Settings (tab of Graph Inspector, toggled with button in the top right of graph), as this avoids the need for the Tangent → World transformation. We can then use the Normal Vector node in World space for the True input and put through a Negate node for the False input.

In order to get shadows working in an Unlit Graph you need to add some important keywords which allow the ShaderLibrary to calculate shadow info.

I’ve shared a Custom Lighting for Shader Graph package on github, which can handle this for you. For supporting shadows, use the Main Light Shadows and Additional Lights subgraphs. Both of these will work in Unlit Graphs.

Main Light Shadows

If you’d instead prefer to handle it yourself, create these Boolean Keywords in the Shader Graph Blackboard :

Make sure you set the Reference field, and not just the name. Also set them to Multi Compile and Global, and untick Exposed.

Can then sample shadows by calculating the shadowCoord :

Then use one of the following :
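As a sketch of the kind of code involved (function names are from the URP ShaderLibrary; the usual keywords are _MAIN_LIGHT_SHADOWS, _MAIN_LIGHT_SHADOWS_CASCADE and _SHADOWS_SOFT, and WorldPos / ShadowAtten are placeholder Custom Function ports) :

float4 shadowCoord = TransformWorldToShadowCoord(WorldPos);   // from Shadows.hlsl
Light mainLight = GetMainLight(shadowCoord);                  // from Lighting.hlsl / RealtimeLights.hlsl
ShadowAtten = mainLight.shadowAttenuation;                    // or use MainLightRealtimeShadow(shadowCoord)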

Additional Light Shadows

For additional lights, you need the following keywords:

In your light loop, you’d then use :
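Again only as a sketch (URP v10+; exact signatures vary between versions, and the keywords are typically _ADDITIONAL_LIGHTS, _ADDITIONAL_LIGHT_SHADOWS and _SHADOWS_SOFT) :

// inside the additional lights loop, with light index i and the world space position positionWS
Light light = GetAdditionalLight(i, positionWS);   // from Lighting.hlsl / RealtimeLights.hlsl
half shadow = light.shadowAttenuation;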

For better examples see CustomLighting.hlsl in my Custom Lighting package. Also see the Lighting.hlsl, RealtimeLights.hlsl (v12+) and Shadows.hlsl files in the URP ShaderLibrary. Can find these under the Packages in the Project window, or via Unity/Graphics github.

You can disable fog entirely via Unity’s Lighting window (under the Environment tab). If you still want fog enabled, but want a certain object to not receive fog, you can disable fog in a Lit Graph by using a Custom Function node.

I’d recommend using String mode, with a Vector4 Input named “In” and a Vector4 Output named “Out”. The function name can be anything (e.g. DisableFog), while the body should use :

For URP Target :

Out = In;
#define MixFog(x,y) x

The function itself doesn’t do anything, except pass the input straight through. But by using the #define we override the later MixFog function call (in URP/PBRForwardPass.hlsl), so rather than applying fog it just outputs the unmodified colour.

For Built-in Target :

Out = In;
// Untested, but I think this should work
#undef UNITY_APPLY_FOG
#define UNITY_APPLY_FOG(x,y) y

For the Built-in version, they are already using the UNITY_APPLY_FOG macro (in SG/Built-in/PBRForwardPass.hlsl), so we undefine and then redefine it to output the colour (second parameter).

This is kinda hacky (abusing what macros are supposed to be used for), but hey it works!~

Unity supports fixed-size Float and Vector(4) arrays in shaders. We can define them in Shader Graph by using a Custom Function node. In this case, we must use the File mode as the array has to be defined outside the function scope - and this is not possible using String mode.

Here is some example code, defining an array named _ExampleArray, containing 10 floats. Note that the [10] is put after the name, not after float. We can then index the array inside the function, in this case using a loop to add the contents together.

float _ExampleArray[10];
 
void ExampleFunction_float(out float Sum){
   Sum = 0; // out parameters start uninitialised, so clear before accumulating
   for (int i = 0; i < 10; i++){
      Sum += _ExampleArray[i];
   }
}

The array would then be set from a C# script, using :

Shader.SetGlobalFloatArray("_ExampleArray", floatArray);

For a Vector4 array, you would use float4 instead of float, and :

Shader.SetGlobalVectorArray("_ExampleArray", vectorArray);

When using these C# functions, make sure that floatArray (or vectorArray) has the same length that is specified in the shader! Can pad with zeros if required.
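For example, a minimal C# sketch (the script/field names are just examples; note the array length matches the [10] declared in the shader) :

using UnityEngine;

public class SetShaderArray : MonoBehaviour {
    // Must match the size declared in the shader (_ExampleArray[10])
    private readonly float[] floatArray = new float[10];

    void Update() {
        for (int i = 0; i < floatArray.Length; i++) {
            floatArray[i] = Mathf.Sin(Time.time + i);   // example values - pad unused entries with 0
        }
        Shader.SetGlobalFloatArray("_ExampleArray", floatArray);
    }
}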

I also have a forcefield shader breakdown which uses an array, if you want a real example.

GPU Instancing is typically not required when using MeshRenderers and SkinnedMeshRenderers in URP, as these already use the SRP Batcher to optimise setup between draw calls. When using the SRP Batcher you should avoid Material Property Blocks though, and stick to multiple Materials (or instantiated materials, e.g. using renderer.material).

But if you wish to render lots (many thousands) of the same mesh, you could consider using GPU Instancing via Graphics.DrawMeshInstanced, to remove some of the overhead of GameObjects. Or even better, DrawMeshInstancedIndirect (see answer below instead).

To support DrawMeshInstanced, Materials should already have an “Enable GPU Instancing” tickbox which can be ticked.

However, using any properties from the Shader Graph Blackboard will still break instancing. Instead, I’ve found that you can use a Custom Function to define the instancing buffer & properties, using macros similar to how you would set up instancing in a code-written shader.

#ifndef CUSTOM_INSTANCING
#define CUSTOM_INSTANCING
// This allows multiple Custom Function nodes to use the same file
// without trying to include the code multiple times (causing redefinition errors)

// Instancing Buffer
UNITY_INSTANCING_BUFFER_START(Props)
  UNITY_DEFINE_INSTANCED_PROP(float4, _Color)
UNITY_INSTANCING_BUFFER_END(Props)

// Custom Function "GetInstancedColor", Outputs : Vector4
void GetInstancedColor_float(out float4 Out){
    Out = UNITY_ACCESS_INSTANCED_PROP(Props, _Color);
}

#endif
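A minimal C# sketch of feeding that per-instance _Color and drawing the instances (mesh/material/counts are placeholder examples) :

using UnityEngine;

public class InstancedDrawer : MonoBehaviour {
    public Mesh mesh;
    public Material material;   // material using the graph above, with "Enable GPU Instancing" ticked

    private Matrix4x4[] matrices;
    private MaterialPropertyBlock block;

    void Start() {
        const int count = 100;
        matrices = new Matrix4x4[count];
        Vector4[] colors = new Vector4[count];
        for (int i = 0; i < count; i++) {
            matrices[i] = Matrix4x4.Translate(Random.insideUnitSphere * 10f);
            colors[i] = new Vector4(Random.value, Random.value, Random.value, 1f);
        }
        block = new MaterialPropertyBlock();
        block.SetVectorArray("_Color", colors);   // per-instance values for the instanced property
    }

    void Update() {
        Graphics.DrawMeshInstanced(mesh, 0, material, matrices, matrices.Length, block);
    }
}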

For supporting DrawMeshInstancedIndirect, (possibly also DrawProcedural), add a Boolean Keyword to the Blackboard, with reference set to PROCEDURAL_INSTANCING_ON. Should also be Global and using Multi-Compile.

Use two Custom Function nodes. Both with a Vector3 input named “In” and a Vector3 output named “Out”.

Out = In;
#pragma instancing_options procedural:vertInstancingSetup

(vertInstancingSetup being a function in the include file)

Both functions leave the input unchanged, but passing it through is required so the nodes can be connected to the Master Stack. We should connect them somewhere in the Vertex stage - likely easiest using the Position port, with a Position node set to Object space (or swap this out for a displaced vertex position if you require that).

See instanced grass example here :

Shaders

Rather than hardcoding ShaderLab operations, it is possible to specify a Property so they can be changed on the material or at runtime (e.g. through material.SetFloat)

// (in Properties)
[Enum(Off, 0, On, 1)] _ZWrite("Z Write", Float) = 1
[Enum(UnityEngine.Rendering.CompareFunction)] _ZTest("ZTest", Float) = 4 // "LessEqual"
[Enum(UnityEngine.Rendering.CullMode)] _Cull ("Cull", Float) = 2 // "Back"
[Enum(UnityEngine.Rendering.ColorWriteMask)] _ColorMask ("ColorMask", Float) = 15 // "RGBA"

[Enum(UnityEngine.Rendering.BlendMode)] _BlendSrc ("Blend Src Factor", Float) = 1 // "One"
[Enum(UnityEngine.Rendering.BlendMode)] _BlendDst ("Blend Dst Factor", Float) = 0 // "Zero"
[Enum(UnityEngine.Rendering.BlendMode)] _BlendSrcA ("Blend Src Factor (Alpha)", Float) = 1 // "One"
[Enum(UnityEngine.Rendering.BlendMode)] _BlendDstA ("Blend Dst Factor (Alpha)", Float) = 0 // "Zero"
[Enum(UnityEngine.Rendering.BlendOp)] _BlendOp ("Blend Op", Float) = 0 // "Add"

[Enum(UnityEngine.Rendering.CompareFunction)] _StencilComp ("Stencil Comparison", Float) = 0 // "Disabled"
[IntRange] _Stencil ("Stencil ID", Range (0, 255)) = 0
[Enum(UnityEngine.Rendering.StencilOp)] _StencilOp ("Stencil Op (Pass)", Float) = 2 // "Replace"
[Enum(UnityEngine.Rendering.StencilOp)] _StencilOpFail ("Stencil Op (Fail)", Float) = 0 // "Keep"
[Enum(UnityEngine.Rendering.StencilOp)] _StencilOpZFail ("Stencil Op (ZFail)", Float) = 0 // "Keep"
_StencilWriteMask ("Stencil Write Mask", Float) = 255
_StencilReadMask ("Stencil Read Mask", Float) = 255

...

// (in SubShader/Pass)
ZWrite [_ZWrite]
ZTest [_ZTest]
Cull [_Cull]
ColorMask [_ColorMask]

//Blend [_BlendSrc] [_BlendDst] // Uses Blend mode for both RGB and Alpha channels
Blend [_BlendSrc] [_BlendDst], [_BlendSrcA] [_BlendDstA] // Use different Blend mode for Alpha
BlendOp [_BlendOp]

Stencil {
    Ref [_Stencil]
    Comp [_StencilComp]
    Pass [_StencilOp]
    Fail [_StencilOpFail]
    ZFail [_StencilOpZFail]
    ReadMask [_StencilReadMask]
    WriteMask [_StencilWriteMask]
}
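And on the C# side these can be set like any other float property, e.g. (where material is the Material instance) :

// e.g. make the material double-sided and disable depth write at runtime
material.SetFloat("_Cull", (float)UnityEngine.Rendering.CullMode.Off);
material.SetFloat("_ZWrite", 0f);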

Tangent space uses vectors from the mesh data to stay relative to the surface of the mesh. It can be difficult to visualise for an entire model as, unlike other spaces, it can be different per-pixel.

(Image)

A way to visualise the tangent space for a given point (centered at the gizmo)

The Normal vector you should be familiar with - it points out from each vertex and is the Z axis of tangent space (shown in blue, since XYZ=RGB). The X and Y axes use the Tangent vector and a Bitangent vector (also called Binormal) - which is typically calculated using a Cross Product of the other two vectors.

These tangent and bitangent vectors are also aligned to the direction of the UV coordinates stored in the mesh. (The tangent follows the X axis of the UV coordinates, and the bitangent follows the Y axis)

The space is needed so tangent space normal maps (that use UV coordinates for sampling) can be converted back to world space to produce correct lighting/shading. The tangent space View Direction is also used to produce Parallax effects.
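As a sketch of how this space is usually constructed and used in code (URP-style macros; the mesh tangent stores a sign in its w component which is used when building the bitangent) :

// Build the tangent-to-world matrix from the interpolated mesh vectors
float3 bitangentWS = cross(normalWS, tangentWS.xyz) * tangentWS.w;
float3x3 tangentToWorld = float3x3(tangentWS.xyz, bitangentWS, normalWS);

// Convert a tangent space normal map sample back to world space for lighting
float3 normalTS = UnpackNormal(SAMPLE_TEXTURE2D(_NormalMap, sampler_NormalMap, uv));
float3 mappedNormalWS = normalize(mul(normalTS, tangentToWorld));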

Of note, if you are using techniques like Triplanar Mapping, then the “Tangent space” you’d need would be different than the Tangent space calculated from mesh data. This article by Ben Golus explains this in detail. In Shader Graph, the Triplanar node already takes this into account when using its Normal mode (I believe using the “Whiteout Blend” approach).

This data is passed in via colours stored in each vertex inside the mesh.

The origin of a mesh is (0,0,0) in object space. To calculate this in world space we could use a matrix multiplication, like float3 originWS = mul(UNITY_MATRIX_M, float4(0,0,0,1)).xyz, however a cheaper method is to extract the translation data from the matrix :
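The usual one-liner for that looks like this (a sketch - the exact snippet shown here originally may have differed slightly) :

// Grab the translation column of the model matrix directly
float3 originWS = UNITY_MATRIX_M._m03_m13_m23;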

This is equivalent to the Position output of the Object node in Shader Graph.

To save on draw calls (and performance), Sprites on screen that use the same material are batched so they can be drawn together. This shows as “Draw Dynamic” in the Frame Debugger window. When this batching occurs, the meshes for each sprite are transformed into World space and combined into a single object, and the model matrix (UNITY_MATRIX_M) is reset to an identity matrix (scale of 1, no rotation/translation).

The model matrix is usually responsible for transforming vertex data stored in the mesh into World space, but an identity matrix is used so the values aren’t altered. “Object space” now doesn’t really exist on the shader side, as the vertex positions are already stored in World space.

Anything else that relies on the model matrix also won’t work correctly, such as calculating the origin and scale of the object (outputs on the Object node)

There isn’t really a good way around this afaik, but I don’t work in 2D that often. You could break batching by using different material instances, but that may not be good for performance. Typically you would try to rely on UV coordinates rather than vertex positions.

You could consider using MeshRenderers instead as they can support the SRP Batcher - which doesn’t combine meshes, but instead batches setup between the draw calls.

This error means the shader is using more than 16 samplers. While shaders can support more textures (DX11 supports 128), there is a much lower limit on the number of samplers. To get around this, we can re-use samplers from other textures or use inline sampler states.

In Shader Graph, can use the Sampler State node to achieve this, connected to the Sampler port on the Sample Texture 2D node.
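In HLSL (URP/HDRP macros), re-using a sampler or declaring an inline sampler state looks something like this (a sketch - texture names are placeholders, and the inline sampler is defined purely by its name following Unity’s naming convention) :

// Declarations (global scope)
TEXTURE2D(_TexA);   SAMPLER(sampler_TexA);
TEXTURE2D(_TexB);                          // no sampler declared for _TexB
SAMPLER(sampler_linear_repeat);            // inline sampler state

// Inside a function - both textures share _TexA's sampler, or use the inline one
float4 a = SAMPLE_TEXTURE2D(_TexA, sampler_TexA, uv);
float4 b = SAMPLE_TEXTURE2D(_TexB, sampler_TexA, uv);
float4 c = SAMPLE_TEXTURE2D(_TexB, sampler_linear_repeat, uv);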

Be aware that the GLSL mod function is not an exact equivalent of the fmod function in HLSL - the results differ when dealing with negative values.

If you are converting a shader, you may want to implement your own mod function (as shown below) instead.
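A GLSL-style mod in HLSL (the standard formula, shown here as a sketch) :

// Matches GLSL's mod() : the result takes the sign of y.
// HLSL's fmod() is x - y * trunc(x/y), so its result takes the sign of x instead.
float mod(float x, float y) {
    return x - y * floor(x / y);
}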

