Intro to Shaders
Hello! This is a small introduction to what shaders are and how they are used when rendering graphics in Unity.
Sections :
- What is a Mesh?
- What is a Shader?
- Shader Code
- Shader Passes
- Forward vs Deferred Rendering
- Other Types of Shaders
- Materials & Properties
- Setting Properties from C#
- Global Shader Properties
- Material Instances & Batching
- SRP Batcher
What is a Mesh?
Before going into shaders specifically, it’s important to first have a basic understanding of what a mesh is.
A mesh contains the data for a 3D model, consisting of vertices and how they connect into triangles. Each triangle is made up of 3 vertices, though each vertex can be shared between multiple triangles (so there won't always be three times as many vertices as triangles).

Example Cube Mesh, consisting of 12 triangles. In Blender (3D modelling software), this has 8 vertices; however, this value may change when importing into Unity, depending on whether the per-vertex data is the same between shared vertices.
The terms "mesh" and "model" are usually used interchangeably. But if we are being specific, a mesh always refers to the geometry (vertices/triangles), while a model may refer to the imported file, which can actually contain multiple mesh objects, sub-meshes, as well as materials, animations, etc.
A model is commonly made in external 3D modelling programs, such as Blender, and imported into Unity (usually using the .FBX format), but we can also generate (and/or edit) a Mesh at runtime using the methods in the C# Mesh class (a small sketch of this is shown at the end of this section).
When we mention a "vertex" we are usually referring to its position in 3D space, but every vertex in the mesh can contain many pieces of data. This includes data such as :
- Position in 3D space (Object space, so (0,0,0) is at the origin of the mesh).
- UVs, also known as Texture Coordinates as they are most commonly used for applying a texture to the model. These coordinates are typically Vector2 (two floating point values, each axis labelled as either xy or uv) but the actual UV channels can be Vector4, so can contain up to 4 floats in order to pass data in. See Mesh.SetUVs.
- Normals (a direction used for shading. Some modelling programs (e.g. Blender) may also use these to determine the winding order of the vertices, which shows which way the face is pointing. It is possible in a shader to cull the front or back side of a face when rendering)
- Tangents (a direction perpendicular (90 degrees) to the normal that follows the mesh's surface along the horizontal texture coord (uv.x). Used for constructing Tangent Space, which is important for shading techniques such as Normal/Bump Mapping)
- Vertex Colours (a colour given to each vertex)
While two vertices may share the same position in 3D space, if their other data doesn’t also match then they must be two separate entries in the vertex data lists.

Example Cube Mesh. Both have 12 triangles but the left has 24 vertices, while the right has 8 vertices.
This is quite common if a model has flat shading rather than smooth shading like in the image above. Flat shading will increase the number of vertices because the Vertex Normals need to point in different directions depending on which face they are a part of. (This might not happen in the modelling program itself, but would when it’s exported to Unity). Smooth shading instead takes an average of these directions, so those vertices can be shared, assuming all other data is also the same!
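Tying this together, here is a minimal sketch of generating a mesh at runtime with the C# Mesh class, as mentioned earlier. It builds a single triangle with per-vertex positions, UVs and normals; the class name and values are just for illustration :

```csharp
using UnityEngine;

// Minimal sketch : builds a single-triangle Mesh at runtime and assigns it to a MeshFilter.
// (A MeshRenderer with a material is also needed on the GameObject to actually see it)
[RequireComponent(typeof(MeshFilter))]
public class TriangleMeshExample : MonoBehaviour {
    void Start() {
        Mesh mesh = new Mesh();

        // Per-vertex data (positions in object space, UVs, normals)
        mesh.vertices = new Vector3[] {
            new Vector3(0, 0, 0),
            new Vector3(0, 1, 0),
            new Vector3(1, 1, 0)
        };
        mesh.uv = new Vector2[] {
            new Vector2(0, 0),
            new Vector2(0, 1),
            new Vector2(1, 1)
        };
        mesh.normals = new Vector3[] {
            -Vector3.forward, -Vector3.forward, -Vector3.forward
        };

        // Triangles are defined as indices into the vertex arrays (3 per triangle)
        mesh.triangles = new int[] { 0, 1, 2 };

        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```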
What is a Shader?
A Shader is code that runs on the GPU (Graphics processing unit), similar to how a C# script runs on the CPU. In Unity a single “.shader” file contains a shader program, which usually contains multiple “passes” that are used to render a mesh. Each pass contains a Vertex shader and Fragment shader (sometimes also referred to as a pixel shader). (Perhaps slightly confusingly, it’s common to use “shader” to refer to the specific stages, as well as the shader program/file as a whole).
The vertex shader runs for each vertex in a mesh and is responsible for transforming the 3D vertex positions (object space) from the mesh into Clip space positions (positions used to clip geometry so only what is visible to a camera is rendered. There are a few extra steps to turn this into a 2D screen position. I'll hopefully be going over spaces in more detail in a future post). It should also pass along any other data that will be needed for calculations in the fragment stage, such as data from the mesh (UVs, normals, etc).
For tools like Shader Graph, this is mostly handled for us, but it's still very important to recognise that there are two separate stages going on here. Newer versions have a Master Stack to make the separation between these stages clearer. It exposes a Vertex Position port for us to override the object space position before it is transformed to clip space, e.g. for Vertex Displacement. We can also override the Normal and Tangent that are passed to the fragment stage, but the graph will handle all these ports automatically if left blank.
For each triangle, and each vertex in that triangle, the clip space positions passed out from the vertex stage are used to create fragments – potential pixels on the 2D screen. All of the per-vertex data passed into the fragment shader also gets interpolated across the triangle. This is why each vertex can specify a single vertex colour but the triangle ends up with a gradient across it. (The same thing happens to the UVs, which is what allows a texture to be properly applied rather than just taking the colour of a single pixel for each vertex)
The fragment shader then runs for each of these fragments (potential pixels) and determines the colour that will be drawn to the screen (and in some cases, outputs a depth value too). This could be outputting a solid colour, using vertex colours, sampling textures, and/or handling lighting calculations to produce more complex shading – which is where the name "shader" comes from.
In some cases, we might also want to discard/clip a pixel from being rendered (e.g. alpha clipping/cutout).
Shader Code
In Unity, shaders are written in HLSL (High Level Shading Language), though you'll typically also see it referred to as CG, especially when dealing with the Built-in Render Pipeline.
You’ll always see this shader code between CGPROGRAM / HLSLPROGRAM and ENDCG / ENDHLSL tags. (You might also see CGINCLUDE / HLSLINCLUDE which includes the code in every shader pass).
Shaders for URP/HDRP should always use the HLSL versions of these tags, as the CG ones include some extra files which aren't needed by those pipelines and will cause the shader to error due to redefinitions in their shader libraries.
The rest of the .shader file is written in a Unity-specific syntax known as ShaderLab, which includes blocks such as “Shader”, “Properties”, “SubShader”, and “Pass”. You can find documentation for this Shaderlab syntax in the docs pages here.
Technically Shaderlab has some legacy “fixed-function” ways of creating shaders which means a CG/HLSL PROGRAM isn’t needed, but I wouldn’t worry about learning these as programming the shader is much more useful.
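To tie these pieces together, here is a minimal sketch of what a hand-written unlit shader could look like for URP, showing the Shaderlab blocks and HLSLPROGRAM tags mentioned above. The shader/property names are just examples, and the include path assumes a recent URP version :

```hlsl
// Minimal sketch of an unlit URP shader (names are placeholders)
Shader "Custom/UnlitExample" {
    Properties {
        _BaseColor ("Base Colour", Color) = (1,1,1,1)
    }
    SubShader {
        Tags { "RenderType"="Opaque" "RenderPipeline"="UniversalPipeline" }

        Pass {
            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            CBUFFER_START(UnityPerMaterial)
                float4 _BaseColor;
            CBUFFER_END

            struct Attributes {
                float4 positionOS : POSITION; // object space position from the mesh
            };
            struct Varyings {
                float4 positionCS : SV_POSITION; // clip space position
            };

            // Vertex shader : transforms object space -> clip space
            Varyings vert(Attributes IN) {
                Varyings OUT;
                OUT.positionCS = TransformObjectToHClip(IN.positionOS.xyz);
                return OUT;
            }

            // Fragment shader : outputs a solid colour
            half4 frag(Varyings IN) : SV_Target {
                return _BaseColor;
            }
            ENDHLSL
        }
    }
}
```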
Alternatively there are also node-based shader editors, such as Shader Graph (official, available for URP and HDRP), Amplify Shader Editor (works in all pipelines, but not free), and Shader Forge (no longer developed, only works in older Unity versions).
Shader Passes
Typically shaders will include a main pass that either doesn’t have a LightMode tag or uses one like “UniversalForward” (URP), “ForwardBase” (Built-in Pipeline) or “Forward” (HDRP), assuming the shader is intended for use in Forward Rendering rather than Deferred Rendering (the next section will explain this a little).
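In Shaderlab, these tags might look something like this (a sketch for a URP forward pass) :

```hlsl
SubShader {
    Tags { "RenderPipeline"="UniversalPipeline" "RenderType"="Opaque" "Queue"="Geometry" }

    Pass {
        Name "ForwardLit"
        Tags { "LightMode"="UniversalForward" }
        // ... HLSLPROGRAM / ENDHLSL ...
    }
}
```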
The LightMode tag is important to tell Unity what the pass is used for. The RenderPipeline tag is also useful to tell Unity which pipeline the SubShader is intended for if the shader needs to support multiple pipelines.
In URP and the Built-in pipeline there is typically also a ShadowCaster pass in the shader program, which, as the name suggests, allows the mesh to cast shadows. Each pass has its own vertex and fragment shader. In this case, the vertex shader uses the shadow bias settings to offset the vertices a little, which helps prevent shadowing artifacts. The fragment shader here usually just outputs 0 (black), as its colour isn't that important. It only needs to discard/clip the fragment if a shadow shouldn't be cast for that pixel.
There’s also a DepthOnly pass used in URP and HDRP which is very similar to the ShadowCaster but without the bias offsets. It allows a Depth Prepass to occur, which is sometimes used to create a Depth Texture. You can find more information about that in the Depth post. In the Built-in pipeline I believe the Depth Texture is instead created using the ShadowCaster pass again.
Depending on the pipeline, there may be more passes that a shader can include. You can find some listed on the PassTags docs page for the Built-in pipeline. Looking at the built-in shaders is also a good way to find out about each pass. You can download the built-in pipeline shader source via the Unity download page, and also find the source for URP and HDRP via the Graphics github.
When using Shader Graph and other node editors, most passes are handled for us. We only really focus on the “main” forward pass that handles the colour drawn to the screen.
Forward vs Deferred Rendering
Forward rendering is the "regular" way of rendering things, where the shader calculates the pixel colour that is directly drawn to the screen. In URP there is a limit to the number of lights that can affect each GameObject (8 lights) due to how its lighting system works (and likely for performance reasons too). Built-in has a similar limit, but it can also be increased (though with a performance penalty).
This limit is in place because every fragment produced by the mesh has to do light calculations, even if it gets covered by another fragment in the same mesh, or another object later (though objects closer to the camera tend to be drawn first to minimise this).
Deferred Rendering (or Deferred Shading) instead has the advantage of having no limit to the number of lights per GameObject, because the lighting is handled later in the pipeline on each screen pixel inside the volume of the light. Having more lights will still affect performance, but only if they are very large on the screen.
However in order for this to work, shaders also have to work differently. A deferred shader pass instead renders data to a bunch of geometry buffers (aka G-Buffers), handling things like Albedo & Occlusion, Specular & Smoothness, Normals, and Emission & GI. These are then used later in the pipeline to calculate lighting/shading and the final pixel colour.
You can read more about deferred rendering in tutorials like :
- https://catlikecoding.com/unity/tutorials/rendering/part-13/
- https://www.patreon.com/posts/shaders-for-who-34008552
- https://gamedevelopment.tutsplus.com/articles/forward-rendering-vs-deferred-rendering--gamedev-12342
I mostly stick to Forward as I use URP which doesn’t support Deferred yet, though it is in progress.
Other Types of Shaders
In the Built-in pipeline there are also Surface shaders (which you can usually identify by the "#pragma surface surf" directive and "surf" function). Behind the scenes these generate a vertex and fragment shader while also handling calculations, such as lighting and shadows, for you. However, these types of shaders do not work in URP and HDRP (though possibly in the future there will be an SRP version). Shader Graph's PBR (or HDRP's Lit) Master nodes are already quite similar to surface shaders, if you're okay with working with nodes.
As well as vertex and fragment shaders, there are other stages that can be involved for more complicated techniques, such as :
- Tessellation (domain and hull shaders), which convert triangles into smaller triangles, usually based on viewing distance, in order to add more detail to the mesh.
- Geometry shaders, which can add new geometry (e.g. triangles) for each vertex/triangle already in the mesh. For example, producing grass blades. It is worth noting that the performance of geometry shaders isn't great and there may be better methods to achieve a similar effect.

Shows how these extra stages fit into the pipeline. (Though this is simplified and doesn't include steps like clipping, face culling, early z-test, z-test, blending, etc)
These additional shader stages aren’t needed for every shader program, and aren’t always supported on each platform/GPU. Both of these also aren’t included/possible in Shader Graph (yet), so I won’t be going over them in this post. You can likely find some examples for shader code online though, such as :
- https://catlikecoding.com/unity/tutorials/advanced-rendering/tessellation/
- https://halisavakis.com/my-take-on-shaders-geometry-shaders/
- https://roystan.net/articles/grass-shader.html
- https://github.com/Cyanilux/URP_GrassGeometryShader
There are also Compute shaders, which run separately from the other shaders mentioned above. They are typically used for generating textures (e.g. an output using RWTexture2D) or running calculations across large amounts of data in parallel on the GPU. There's a small sketch after these links :
- https://www.ronja-tutorials.com/2020/07/26/compute-shader.html
- http://kylehalladay.com/blog/tutorial/2014/06/27/Compute-Shaders-Are-Nifty.html
- https://lexdev.net/tutorials/case_studies/frostpunk_heatmap.html
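The sketch mentioned above : a tiny compute shader kernel that fills a texture with a gradient. The names are placeholders, and it would be dispatched from C# via ComputeShader.Dispatch, with a RenderTexture that has enableRandomWrite set :

```hlsl
// Minimal compute shader sketch : writes a UV-based gradient into a texture
#pragma kernel CSMain

RWTexture2D<float4> _Result;

[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID) {
    // Assumes a 256x256 texture, so the thread id maps to a 0-1 range
    float2 uv = id.xy / 256.0;
    _Result[id.xy] = float4(uv, 0, 1);
}
```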
Materials & Properties
From a shader we can create Materials, which act as containers for certain data (such as floats, vectors, textures, etc). This data is exposed by the shader using the Properties section of the Shaderlab syntax (or the Blackboard in the case of Shader Graph). These properties can then be edited for each material in the inspector.
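For example, a Properties section in Shaderlab might look something like this (the property names here are just examples) :

```hlsl
Properties {
    _BaseColor ("Base Colour", Color) = (1,1,1,1)
    _BaseMap ("Base Texture", 2D) = "white" {}
    _Smoothness ("Smoothness", Range(0,1)) = 0.5
}
```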
This allows multiple materials to share the same shader, but we can change textures and other settings to change how the result looks.
For example, the Standard shader that Unity provides (or URP/Lit, HDRP/Lit), can replicate many basic materials fairly well (e.g. plastic, metal, wood, stone, etc.) with the correct texture maps and other values (Albedo, Normal, Occlusion, Metallic/Specular, etc)
Setting Properties from C#
In order to set one of these properties from C#, we need a reference to the Material object and need to know the property name (the Reference field in Shader Graph). There are a few different ways to get the material. One way is to use a public field :
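A minimal sketch (the class name is just an example) :

```csharp
using UnityEngine;

public class MaterialPropertyExample : MonoBehaviour {
    // Assign this in the inspector
    public Material material;
}
```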
Then assign the material in the inspector. Setting the properties on the material using this method will always affect all objects that share that material though.
Alternatively, we can keep the material private and set it based on the value in the Renderer. e.g.
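A minimal sketch of that :

```csharp
using UnityEngine;

public class MaterialPropertyExample : MonoBehaviour {
    private Material material;

    void Start() {
        // Note : .material creates an instance (see the Material Instances & Batching section below)
        material = GetComponent<Renderer>().material;
    }
}
```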
This assumes the script is on the same GameObject as the Renderer (e.g. MeshRenderer).
We can then set a property using one of the “Set” functions. e.g. SetFloat, SetVector, SetTexture, SetColor listed under the Materials docs scripting page. This could be in Start or Update (though it’s better to only call it when the value actually needs to be changed).
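For example (a fragment that would go in the script above; the property names are just examples and would need to match the shader) :

```csharp
material.SetColor("_ExampleColor", Color.red);
material.SetFloat("_ExampleFloat", 0.5f);
```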
Here the “_ExampleColor” would be the name/reference of the property as defined in the shader. Typically, if you select the .shader (or .shadergraph) file in the Project window it will list the properties it has in the Inspector window.
You can also find them listed in the Properties section of the Shaderlab code. For Shader Graph, this is the “Reference” field that each property has, not to be confused with its display name.
Typically the reference starts with "_" as a naming convention. I believe this also helps to avoid errors, as without it the reference might clash with something that is already used by the Shaderlab syntax, but don't quote me on that. (e.g. "Offset" might error because of its Shaderlab usage, but "_Offset" won't? Same with Cull, ZTest, Blend, etc.)
Global Shader Properties
There are also Global shader properties, which can be set through the Shader class and its static "SetGlobal" functions, e.g. SetGlobalFloat, SetGlobalVector, SetGlobalTexture, SetGlobalColor, etc.
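A sketch of how that might be used (callable from any script; the property name is just an example) :

```csharp
Shader.SetGlobalColor("_GlobalColor", Color.cyan);
```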
In this example, the “_GlobalColor” property can be defined by any shader that needs access to the colour in the same way a regular property would be used, except it doesn’t need to be in the exposed Shaderlab Properties section.
In Shader Graph, each property in the Blackboard has an Exposed tickbox instead. Unexposed ones won’t appear in the material inspector but can still be set by using these SetGlobal functions.
Material Instances & Batching
It's important to understand the difference between renderer.sharedMaterial and renderer.material when obtaining the material to change its properties.
Using .sharedMaterial, as the name might suggest, gives you the material shared by all objects. This means that changing a property on it will affect all objects (similar to setting properties on a public Material).
Using .material instead creates a new instance (clone) of the material the first time it is called, and automatically assigns it to the renderer. Setting a property using this will only affect that object. However, note that it is also your responsibility to destroy this material instance when it is no longer needed, e.g. in OnDestroy, as sketched below.
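A minimal sketch (assuming the instance was obtained and stored in a "material" field, as in the earlier example) :

```csharp
void OnDestroy() {
    // Destroy the instance that was created by accessing renderer.material
    Destroy(material);
}
```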
Typically for the Built-in pipeline, using many material instances is bad, as it breaks something called Batching, which combines meshes that use the same material so they can be drawn all at once rather than separately. This helps with performance - though it is worth noting that since it combines meshes, certain data (like the Model matrix, UNITY_MATRIX_M / unity_ObjectToWorld, and their inverses) is no longer on a per-object basis, which may lead to unintended results if the shader relies on Object space.
This same thing occurs with Sprites and Particles as well. For performance reasons they are batched and drawn together, so “object space” isn’t really a thing.
When referring to batching, there are a few different types :
- Static Batching, which requires the GameObjects to be marked as static. (This is the tickbox on the GameObject in the top right of the inspector. There’s a dropdown next to it if you want to toggle the specific types, in this case we’re talking about the “Batching Static” option)
- Dynamic Batching which is automatic but has some things to be aware of (see the DrawCallBatching docs page for more information on these types).
Similar to batching, another way to draw lots of objects together but still have different material values is to use GPU Instancing and Material Property Blocks together. MPBs can also be used without GPU Instancing, however it will still lead to separate draw calls, so there will be little performance benefit without also making the shader support instancing. (It might provide a bit less memory overhead than a material instance though). Look at their docs pages linked above for more information.
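As a rough sketch of the Material Property Block side (the property name is just an example, and the shader would also need to support instancing for GPU Instancing to actually batch the draws) :

```csharp
using UnityEngine;

public class PropertyBlockExample : MonoBehaviour {
    void Start() {
        // Gives this renderer a per-object colour without creating a material instance
        Renderer renderer = GetComponent<Renderer>();
        MaterialPropertyBlock block = new MaterialPropertyBlock();
        renderer.GetPropertyBlock(block);
        block.SetColor("_ExampleColor", Random.ColorHSV());
        renderer.SetPropertyBlock(block);
    }
}
```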
For URP and HDRP there is also another type of batching which is important to discuss :
SRP Batcher
The Scriptable Render Pipelines (URP or HDRP) also have another method of batching – the SRP Batcher.
Rather than combining meshes and draw calls, it batches the setup between those draw calls - which is actually the expensive part. This allows batching to work even with different material instances (as long as they share the same shader variant, meaning they won’t batch if materials use different keywords).
This means in URP and HDRP, it's actually safe to use renderer.material to create material instances and change properties as explained in the previous section. They'll produce some memory overhead but can still be batched together! You should of course still use renderer.sharedMaterial if you don't actually want different property values, in order to avoid unnecessary material instances.
Also, since the SRP Batcher doesn't combine meshes, the Model matrix and its inverse stay intact (assuming those GameObjects also aren't using static batching). Object space can be used in the shader without the unintended results you would get with the other batching methods.
Shaders created using Shader Graph will automatically support the SRP Batcher, but custom HLSL code shaders need to define a UnityPerMaterial CBUFFER. This should contain all the per-material properties except for textures. Every pass in the shader needs to use the same CBUFFER values, so I’d recommend putting it inside the SubShader in HLSLINCLUDE tags, as this will automatically include it in every pass for you. For example the following should work for URP :
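A sketch of that setup, placed inside the SubShader block (the property names are just examples and would need to match the Properties block) :

```hlsl
HLSLINCLUDE
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

    CBUFFER_START(UnityPerMaterial)
        // Every per-material property from the Properties block (except textures).
        // Texture tiling/offset (_ST) values should still be included here.
        float4 _BaseMap_ST;
        float4 _BaseColor;
        float _Smoothness;
    CBUFFER_END
ENDHLSL
```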
You can find some full examples in my Writing Shader Code for URP post (currently only on my old wordpress site, sorry for the ads).
Note that Material Property Blocks will also break batching via the SRP Batcher. You can typically see what is being batched by using the Frame Debugger window to check if it’s working properly or not.
More information on the SRP Batcher can be found on its docs page.
Thanks for reading!
If this post helped, consider sharing a link with others!