
Custom Renderer Features

One way to customise the Universal Render Pipeline (URP) is by writing custom Scriptable Render Passes. A typical example of this could involve rendering some objects to a separate off-screen texture, which we could then sample later - either in shaders used by any objects in the scene, or as a fullscreen pass to composite it with the camera/screen.

These passes can be enqueued by a Custom Renderer Feature. Features are assigned to the list at the bottom of the “Renderer” assets used by URP (i.e. the Universal Renderer asset, known as the Forward Renderer prior to 2021, or the 2D Renderer, which supports renderer features from 2022.1 onwards).

This post will show the general layout/structure of a custom feature & pass, going through each function separately as I think it’ll be more useful that way.

The post is mostly aimed at Unity 2022+ versions of URP which introduced RTHandles - this was quite a big change and the lack of examples is mainly why I’ve written this. But be aware, I might also mention/link to older examples!

!

Unity 2023.3 / Unity 6 now uses Render Graph (a newer scripting API - not a visual editor!)

It changes the structure of the render pass slightly, but this post does not account for that currently. Methods like OnCameraSetup and Execute are deprecated/obsolete (though they still work in compatibility mode), replaced by RecordRenderGraph.

This RenderGraph forum thread explains what it is and some posts below contain examples. (Note that as of 2023.3.0a18, UseTextureFragment has been renamed to SetRenderAttachment (and UseTextureFragmentDepth is SetRenderAttachmentDepth))

If you want to write features that support RenderGraph as well as earlier versions (or compatibility mode / render graph turned off), looking at how the RenderObjects, FullscreenPass or other URP passes handle it may also be useful.




Example Uses

First up, I thought I’d provide some examples so you can get an idea of why you might use a Renderer Feature. But if you prefer, skip ahead past the examples, or just see the full example code based on snippets in this post.

Fullscreen / Post Process Effect

A very common use for renderer features is applying shaders/materials as a fullscreen effect, since URP’s Post Processing Volumes do not support custom effects (as of 2022 at least).

Applying fullscreen effects is sometimes referred to as a “blit”. There are various ways to handle it, such as CommandBuffer.Blit, using CommandBuffer.DrawMesh/CommandBuffer.DrawProcedural (either with custom vertex shader, or overriding view/projection matrices), or using the newer Blitter API (Unity 2022+), e.g. Blitter.BlitCameraTexture.

I have a Blit Renderer Feature shared on github - it has various branches showing some of these methods.

Unity 2022.2 also introduced a new Fullscreen Graph type and Fullscreen Pass Renderer Feature built-in to URP, so if you just need to apply a shader to the camera you can use those instead of a custom feature! To blit to a different destination, we’d still need a custom feature.

The later Blit section will go into more detail.

RenderObjects

URP provides the RenderObjects feature which, as the name might suggest, can render objects (e.g. MeshRenderers, SkinnedMeshRenderers, SpriteRenderers, etc), filtered by Opaque/Transparent, Layer Mask and Shader Pass (LightMode) tags, while also providing overrides for the Material, Depth Test/Write, Stencil operations and Camera properties.

Some of those overrides are particularly useful as Shader Graph doesn’t support Stencil operations and didn’t expose Depth Test/Write params until v12 (2021.2).

The Override Material is also useful for applying a material to many objects in the scene at once, such as Outline shaders using the “Inverted Hull” method (like this example from UniversalRenderingExamples), or X-Ray / Highlighting through walls effects also using ZTest “Greater” (like this example in the Unity docs).

Note that when you use the feature, it is actually re-rendering those objects. If you only want to render once, you need to also remove the Layer(s) used in the Layer Mask from the default Opaque/Transparent Layer Mask at the top of the Universal Renderer. (The 2D Renderer seems to lack this, so not sure how to do it there…)

Screen Blur

We can use a custom feature to apply blur operations to the screen. This can involve either blitting back to the screen, or to a custom buffer - which could then be sampled in UI shaders for example to make them appear to blur what is behind them.

There are multiple ways to blur in a shader, such as Box Blur, Gaussian, Kawase. If you use those as search terms you should be able to find example implementations in Unity shaders. These can be applied to the screen by using a Blit (may require shader to be modified depending on the blit method used).

Some examples (May be for older URP versions. Also check licenses before use) :

Pre-passes & Edge Detection

As is, the RenderObjects feature only renders to the camera buffers, but with a similar custom feature it is possible to render objects to another buffer (known as a Render Target). This is usually some form of Render Texture, but in features we tend to use RTHandle for 2022+, (or RenderTargetHandle/RenderTargetIdentifier in previous versions). Once you have this buffer it can be passed into other shaders using a Global Texture Property. Generating this buffer is usually referred to as a “pre-pass” as you need this additional pass before your main effect is rendered.

A common example is for Edge Detection style outlines, such as this tutorial by Alexander Ameye. That uses a feature to first render a prepass to obtain the depth and normals of objects in the scene, sent into a global texture property. Another feature (or pass) then applies a fullscreen material, where this texture is sampled multiple times to detect where the edges of objects are.

In URP v10+ (Unity 2020.2+) we can also now get URP to generate depth and normals textures for us, e.g. by using renderPass.ConfigureInput(ScriptableRenderPassInput.Normal); before enqueuing the pass (in AddRenderPasses function). More info on ConfigureInput is provided later.

Screenspace Distortion

Distortion for shaders used in the scene typically involves displacing the screen positions used to sample the Scene Color node. This uses the Opaque Texture enabled on the URP Asset, which is a copy of the camera color buffer after rendering opaques. Since this texture only contains opaque objects, transparent objects behind the distorting surface won't be visible through it. A potential solution is copying the screen after rendering transparents and applying the distortion later instead - a later example goes over that.

In that case, the regular transparent queue can be distorted at least - but multiple layers of distortion also won’t stack. If that is the desired result, we could instead have a feature render a pre-pass to combine distortion directions/strengths additively into a custom buffer (e.g. via a DrawRenderers call). We can then do a fullscreen blit on the camera targets, while sampling the buffer as a global texture to distort UV coords.

Lens Flares

While URP now supports the Lens Flares (SRP) component, older versions of the URP docs also provided a custom Lens Flare Renderer Feature example which could be a good reference if you need to do something similar with lights.

Specifically, their example makes use of renderingData.lightData.visibleLights to loop through those lights, then drawing a quad mesh with a flare texture through cmd.DrawMesh.

Internal URP Passes

Internally, URP also uses a bunch of Scriptable Render Passes, listed under the URP package’s Runtime/Passes folder. These can use internal functions which we can’t use, but might still be useful to look at to get an idea of how they work.

For example, here’s a few :

URP also provides Renderer Features and Passes for :

!
The github links here are for the “master” branch so may not be accurate to the version you’re using. For example, you may want to switch to “2022.2/staging” to view the code for that release. Be aware that for older versions, you need to remove the Packages/ from the URL or you’ll see a 404.

Layout of a Custom Feature/Pass

To create a Custom Renderer Feature (& Pass) we can right-click in the Project window (somewhere in Assets) and use Create → Rendering → URP Renderer Feature. This creates a C# script for us with a renderer feature template (inherits ScriptableRendererFeature and includes a nested class inheriting ScriptableRenderPass, with the important methods overridden). Since the pass is nested inside the feature, it’s fairly common to consider the render pass as part of the feature itself, though this isn’t required. The pass could also be put into a separate C# file and referenced from multiple features.

The feature is basically a ScriptableObject (a class that holds data, but saved as an asset rather than in a scene like a MonoBehaviour). It serialises fields that need to be displayed in the inspector and creates & enqueues the pass.

Ignoring the pass for now, it’ll look something like this : (comments may be edited slightly)

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class CustomRenderPassFeature : ScriptableRendererFeature {

    CustomRenderPass m_ScriptablePass;

    public override void Create() {
        m_ScriptablePass = new CustomRenderPass();

        // Configures where the render pass should be injected
        m_ScriptablePass.renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
    }

    // Here you can inject one or multiple render passes in the renderer.
    // This method is called when setting up the renderer once per-camera (every frame!)
    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData) {
        renderer.EnqueuePass(m_ScriptablePass);
    }
}

The render pass is what will actually handle the custom rendering, mostly through the ScriptableRenderContext and CommandBuffer APIs.

In the template, it looks like this :

public class CustomRenderPass : ScriptableRenderPass {
    // Called before executing the render pass.
    // Used to configure render targets and their clear state. Also to create temporary render target textures.
    // When empty this render pass will render to the active camera render target.
    // You should never call CommandBuffer.SetRenderTarget. Instead call <c>ConfigureTarget</c> and <c>ConfigureClear</c>.
    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData) {

    }

    // Here you can implement the rendering logic.
    // Use <c>ScriptableRenderContext</c> to issue drawing commands or execute command buffers
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData) {

    }

    // Cleanup any allocated resources that were created during the execution of this render pass.
    public override void OnCameraCleanup(CommandBuffer cmd) {

    }
}

Depending on your URP version the template may vary slightly. Older versions (prior to v10 / 2020.2) had Configure, which is similar to OnCameraSetup but only provides a RenderTextureDescriptor parameter rather than passing the RenderingData struct. There was also FrameCleanup, which was renamed to OnCameraCleanup.

Typically the first thing you’ll want to do is rename the feature/pass. These can be anything you want, but you would typically keep the Feature and RenderPass suffixes. Be aware that the C# script name should match the name of the feature class for serialisation purposes (same thing applies to ScriptableObject / MonoBehaviours, so you should already be somewhat familiar with that).


Create

This method is used to initialise the ScriptableRenderPass and any required resources. Unity calls this method in OnEnable/OnValidate - so every time the project loads, you enter/exit play mode, scripts recompile, or serialisation changes.

Since the method may be called multiple times, if you create/clone any Materials you should make sure to check if the material is null first, then clean up in Dispose - see that section for an example. Also be careful with creating materials in the constructor of the pass, for the same reason.

It’s common for the feature to have serialised fields that get exposed in the inspector. We can drag assets from the Project window into here - but like other assets, it can’t contain references from scene objects. (If this is something you need, you may still be able to pass those in at runtime though. A section near the end of the post provides an example of this)

Some fields will need to be sent to the pass. We can do this through parameters in the constructor, by creating other methods, or by making the same fields in the pass public. To make things simpler, we can wrap the fields in an additional serialised class (e.g. named “Settings”). Of course, what goes in here will vary depending on what the pass does, but here’s an example setup :

// in feature
[System.Serializable]
public class Settings {
    [Header("Draw Renderers Settings")]
    public LayerMask layerMask = 1;
    public Material overrideMaterial;
    public int overrideMaterialPass;
    public string colorTargetDestinationID = "";

    [Header("Blit Settings")]
    public Material blitMaterial;
}

// exposed values
public Settings settings;
public RenderPassEvent _event = RenderPassEvent.AfterRenderingOpaques;

private CustomRenderPass m_ScriptablePass;

public override void Create() {
    m_ScriptablePass = new CustomRenderPass(settings, name);
    m_ScriptablePass.renderPassEvent = _event;
    ...
}

// in pass 
private Settings settings;
private ProfilingSampler _profilingSampler;

// (constructor, method name should match class name)
public CustomRenderPass(Settings settings, string name){
    // pass our settings class to the pass, so we can access them inside OnCameraSetup/Execute/etc
    this.settings = settings;

    // set up ProfilingSampler used in Execute method
    _profilingSampler = new ProfilingSampler(name);
}

Here I’ve also created a ProfilingSampler which will be used later by the Execute method. The name passed through is a field of the ScriptableRendererFeature class, which gets exposed in the inspector allowing the user to rename it. If the pass isn’t going to be attached to a feature, it could also use a hardcoded string, or nameof() the pass class.

RenderPassEvent

Also in this Create method (or the constructor of the render pass) we typically set the renderPassEvent field of the pass. This configures when the render pass will run.

It uses the RenderPassEvent enum, which contains these entries/values :

BeforeRendering = 0
BeforeRenderingShadows = 50
AfterRenderingShadows = 100
BeforeRenderingPrePasses = 150
AfterRenderingPrePasses = 200
BeforeRenderingGbuffer = 210
AfterRenderingGbuffer = 220
BeforeRenderingDeferredLights = 230
AfterRenderingDeferredLights = 240
BeforeRenderingOpaques = 250
AfterRenderingOpaques = 300
BeforeRenderingSkybox = 350
AfterRenderingSkybox = 400
BeforeRenderingTransparents = 450
AfterRenderingTransparents = 500
BeforeRenderingPostProcessing = 550
AfterRenderingPostProcessing = 600
AfterRendering = 1000

Be aware that camera matrices and stereo rendering are not set up until the BeforeRenderingPrePasses event (value of 150).

More enum entries may be added in newer versions. Can find its declaration in ScriptableRenderPass.cs.

Typically you’ll set the field using the enum itself :

m_ScriptablePass.renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
// (aka value of 300)

For passes with the same renderPassEvent value, the order should be the same as it appears on the Renderer Features list (or order of EnqueuePass if enqueuing multiple passes in a single feature).

If you need to specify a pass to run at a particular point inbetween other passes, you can also provide an offset to the values. For example the following would run after any passes with just RenderPassEvent.BeforeRenderingPostProcessing (value of 550) :

m_ScriptablePass.renderPassEvent = RenderPassEvent.BeforeRenderingPostProcessing + 1;
// (aka value of 551)

Just don’t make that offset too high - since BeforeRenderingPostProcessing has a value of 550 in the enum, doing BeforeRenderingPostProcessing + 50 would be equivalent to AfterRenderingPostProcessing.


Dispose

While not a part of the template, we can add a Dispose method to the feature. This method can be useful for releasing any resources that have been allocated - for example, material instances (see below) or RTHandles (see the RTHandle section).

In editor, the method is called when removing features, recompiling scripts, or entering/exiting play mode. (Not too sure when it gets called in builds - probably when changing scenes?)

public Shader shader; // expose a Shader field

private Material material;

public override void Create() {
    // Create may be called multiple times... so :
    if (material == null || material.shader != shader){
        // only create material if null or different shader has been assigned

        if (material != null) CoreUtils.Destroy(material);
        // destroy material using previous shader
        
        material = CoreUtils.CreateEngineMaterial(shader);
        // or alternative method that uses the shader name (string):
        //material = CoreUtils.CreateEngineMaterial("Hidden/Internal-DepthNormalsTexture");
        // assumes the required shader is in the build (and variant, if keywords are set)
        // e.g. could add the shader to the "Always Included Shaders" in Project Settings -> Graphics
    }
    m_ScriptablePass = new CustomRenderPass(material, name);
    ...
}

protected override void Dispose(bool disposing) {
    CoreUtils.Destroy(material);
    // (will use DestroyImmediate() or Destroy() depending if we're in editor or not)
}

It’s very possible that Unity will automatically clean up some unused resources (e.g. during a Scene change), but it’s still a good practice to handle it ourselves.


AddRenderPasses

This method is responsible for injecting/enqueuing ScriptableRenderPasses with URP’s Renderer. By default it’ll already have renderer.EnqueuePass(m_ScriptablePass) and that may be all we need here. But it is possible to enqueue multiple passes if required.

The comment in the template mentions this method is called once for each camera, but be aware this is also every frame/update, so avoid creating/instantiating anything in here (can use the Create method for that).

Also note that by default it would enqueue the pass for all cameras - including ones used by the Unity Editor. In order to avoid this, we can test the camera type and return before enqueuing (or check later during Execute to prevent that function running, if you prefer).

public bool showInSceneView;

public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData) {
    if (renderingData.cameraData.isPreviewCamera) return;
    // Ignore feature for editor/inspector previews & asset thumbnails
    //if (renderingData.cameraData.isSceneViewCamera) return;
    // Ignore feature for scene view
    // Or, if the feature uses camera targets, you may want to expose a bool/tickbox instead, e.g.
    if (!showInSceneView && renderingData.cameraData.isSceneViewCamera) return;

    // (could alternatively use "cameraData.cameraType == CameraType enum" for these)

    if (renderingData.cameraData.camera != Camera.main) return;
    // Ignore all cameras except the camera tagged as MainCamera
    // Though may be better to use Multiple Renderer Assets (see below)
    
    renderer.EnqueuePass(m_ScriptablePass);
}

As shown, we can also test against Camera.main if you only want the feature to run on the Main Camera. If you only want it to run on a different specific camera you could potentially set a Camera field at runtime (can’t during editor as assets can’t serialise scene references). Though a better way to handle these would be to create multiple “Renderer” assets (e.g. Universal Renderers), assign them to the list on the URP Asset(s) (may have multiple per quality setting), then use the Renderer dropdown on each Camera component to select which index it should use. That way, you can choose which renderer features are used on a per-camera basis (without it being hardcoded).

In 2021.2+ you should avoid accessing camera targets here (e.g. cameraColorTargetHandle / cameraDepthTargetHandle, or older cameraColorTarget / cameraDepthTarget on the ScriptableRenderer param) as these may not have been allocated yet! We can either obtain those targets directly in the pass’s Execute method, or add the SetupRenderPasses method (see below)

ConfigureInput

We can also call ConfigureInput in this method, which allows us to request URP to generate certain textures via the ScriptableRenderPassInput enum (e.g. Depth, Normal, Color - corresponding to the camera depth, normals and opaque textures).

This appears to only work for the Universal Renderer. The 2D Renderer seems to ignore it.

These textures are generated at their usual events, so won’t necessarily be ready for the event the pass is enqueued at. _CameraOpaqueTexture would only be ready in the BeforeRenderingTransparents event or later, for example. However the depth texture will be generated using a DepthPrepass if using the BeforeRenderingOpaques event, and CopyDepth when using AfterRenderingOpaques (assuming it isn’t already using a prepass for other reasons). Can always check the Frame Debugger window to see the order of everything!

If you need multiple of these inputs, don’t call ConfigureInput multiple times (that will just override the value). As the enum has the [Flags] attribute, you can use the | operator to combine them instead. Example below.

public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData) {

    // Tell URP to generate the Camera Depth Texture
    m_ScriptablePass.ConfigureInput(ScriptableRenderPassInput.Depth);

    // Tell URP to generate the Camera Normals and Depth Textures
    // m_ScriptablePass.ConfigureInput(ScriptableRenderPassInput.Normal | ScriptableRenderPassInput.Depth);

    renderer.EnqueuePass(m_ScriptablePass);
}

SetupRenderPasses

This method was added in Unity 2021.2 (URP v12). It doesn’t exist in the template, but it can be added inside the feature class. If your feature requires accessing the camera targets, you should do it inside this method rather than AddRenderPasses.

For example :

// in feature
public override void SetupRenderPasses(ScriptableRenderer renderer, in RenderingData renderingData) {
    RTHandle color = renderer.cameraColorTargetHandle;
    RTHandle depth = renderer.cameraDepthTargetHandle;
    m_ScriptablePass.Setup(color, depth);
    // For versions prior to 2022, use RenderTargetIdentifier type instead,
    // with renderer.cameraColorTarget / renderer.cameraDepthTarget
}

// in pass
private RTHandle rtDestinationColor;
private RTHandle rtDestinationDepth;

public void Setup(RTHandle destColor, RTHandle destDepth){
    rtDestinationColor = destColor;
    rtDestinationDepth = destDepth;
}

Personally I usually access targets directly in Execute which also works (though I guess that is more hardcoded). Accessing them here and passing them into the pass may make the pass more flexible.

To be clear, Setup here could be named anything, it would just be a method inside the pass that is used to pass the RTHandle through. The destination RTHandles would then be used by the pass. (For example, could be used to configure the target in OnCameraSetup or used in a blit call in Execute. Later sections will explain these). This way, the same ScriptableRenderPass could potentially be used in multiple features with different destinations.


OnCameraSetup

This method is responsible for configuring the render targets that will be used. By default if you do nothing, URP will already configure the pass to use the camera colour and depth targets for you.

But in the cases that we want to specify our own targets, we can use one of the ConfigureTarget function overloads from the ScriptableRenderPass class.

RTHandle

RTHandles are the way to handle render targets in Unity 2022+. Typically we allocate one inside OnCameraSetup using RenderingUtils.ReAllocateIfNeeded :

// To create a Color Target :
var colorDesc = renderingData.cameraData.cameraTargetDescriptor;
colorDesc.depthBufferBits = 0; // must set to 0 to specify a colour target
// to use a different format, set .colorFormat or .graphicsFormat
RenderingUtils.ReAllocateIfNeeded(ref colorTarget, colorDesc, 
    name: settings.colorDestinationID);

// To create a Depth Target :
var depthDesc = renderingData.cameraData.cameraTargetDescriptor;
depthDesc.depthBufferBits = 32; // should be default anyway
RenderingUtils.ReAllocateIfNeeded(ref depthTarget, depthDesc,
    name: settings.depthDestinationID);

There is also RTHandles.Alloc (various overloads, see docs). This should only run once, so can do a null check :

if (colorTarget == null) {
    colorTarget = RTHandles.Alloc(Vector2.one, colorDesc,
        name: settings.colorDestinationID);
}

There are a lot of parameters for some of these methods, but they have default values so we don’t need to specify all of them. If you want to override/set a specific parameter, can use <param name>:<value>, such as filterMode:FilterMode.Bilinear, wrapMode:TextureWrapMode.Clamp, etc.

There are also Alloc overloads to create an RTHandle from a RenderTargetIdentifier, RenderTexture, or Texture object, so you can technically still use those types with the new system.
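
For example, a quick sketch wrapping an existing RenderTexture in an RTHandle (the rt field here is just a hypothetical texture created/assigned elsewhere) :

public RenderTexture rt; // some existing RenderTexture (hypothetical)
private RTHandle rtHandle;
...
// Wrap the texture so it can be used with ConfigureTarget / Blitter / etc.
if (rtHandle == null) {
    rtHandle = RTHandles.Alloc(rt);
}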

RTHandles also need to be released when they are no longer needed (by calling .Release(); on it). You may want to do this in a function called by the Dispose function of the feature. e.g.

public class CustomRendererFeature : ScriptableRendererFeature {
    class CustomRenderPass : ScriptableRenderPass {
        ...
        public void ReleaseTargets() {
            colorTarget?.Release();
            depthTarget?.Release();
        }
    }
    ...
    protected override void Dispose(bool disposing) {
        m_ScriptablePass.ReleaseTargets();
    }
}

If you aren’t familiar, the ? here is the null-conditional (null-propagating) operator - which means if the object is null, the function won’t be called. This helps avoid potential NullReferenceExceptions if the feature is disposed without the targets being set (e.g. feature is added to the list but disabled).

In case it’s useful, the docs for RTHandle System Fundamentals and Using the RTHandle system may provide additional info.

ConfigureTarget

These take either one or two parameters, the first being the colour target (like the colours you see on the screen in Scene/Game view) and the second being an optional depth target (like the depth buffer, used for ZWrite & ZTest. Can also contain bits for Stencil values). These parameters need to be of the type RTHandle (or RenderTargetIdentifier but those functions are deprecated as of Unity 2022).

private RTHandle colorTarget, depthTarget;

public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData) {
    var colorDesc = renderingData.cameraData.cameraTargetDescriptor;
    colorDesc.depthBufferBits = 0; // must set to 0 to specify a colour target
    // to use a different format, set .colorFormat or .graphicsFormat
    
    if (settings.colorDestinationID != ""){
        RenderingUtils.ReAllocateIfNeeded(ref colorTarget, colorDesc, name: settings.colorDestinationID);
        // if you need to specify texture filter and wrap modes :
        // RenderingUtils.ReAllocateIfNeeded(ref colorTarget, colorDesc, FilterMode.Point, TextureWrapMode.Clamp, name: settings.colorDestinationID);
    }else{
        colorTarget = renderingData.cameraData.renderer.cameraColorTargetHandle;
    }

    var depthDesc = renderingData.cameraData.cameraTargetDescriptor;
    depthDesc.depthBufferBits = 32; // should be default anyway
    if (settings.depthDestinationID != ""){
        RenderingUtils.ReAllocateIfNeeded(ref depthTarget, depthDesc, name: settings.depthDestinationID);
    }else{
        depthTarget = renderingData.cameraData.renderer.cameraDepthTargetHandle;
    }
    
    //ConfigureTarget(colorTarget);
    // Later rendering commands will render into colorTarget
    // No depth target, so ZWrite, ZTest and Stencil operations will not do anything

    // OR 

    ConfigureTarget(colorTarget, depthTarget);
    // Later rendering commands will render into colorTarget
    // and ZWrite/ZTest/Stencil based on depthTarget

    ConfigureClear(ClearFlag.Color, Color.black);
    // Set all pixels in the target to black
}

The first param of ConfigureTarget can also be an array of colour targets, which sets up what is known as Multiple Render Targets (MRT) rendering, assuming the target platform supports it. That allows you to render objects into multiple buffers at the same time by having the fragment shader output to SV_Target0, SV_Target1, SV_Target2, etc (rather than just SV_Target). An example of this is Deferred Rendering, which sets up multiple “gbuffer” targets and configures them using this.
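
As a rough sketch (gbuffer0 / gbuffer1 here are hypothetical RTHandles that you’d allocate in OnCameraSetup, e.g. via RenderingUtils.ReAllocateIfNeeded) :

// In OnCameraSetup, after allocating the RTHandles :
RTHandle[] mrtColorTargets = new RTHandle[] { gbuffer0, gbuffer1 };
ConfigureTarget(mrtColorTargets, depthTarget);
// Shaders rendered into these targets would then write to the
// SV_Target0 and SV_Target1 semantics in their fragment output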

ConfigureClear

In the above example you might also notice ConfigureClear being used, which allows us to clear the render target.

By default, RTHandle targets are uninitialised and may contain data from the previous frame/camera. This behaviour isn’t typically wanted, so we can use this function to set the entire texture to a particular colour (e.g. black) before rendering that camera. It can also clear depth and stencil values - this is specified by the first parameter of type ClearFlag (enum with [Flags]). Stencil was added in 2021.2; in versions older than that, Depth cleared both.

The function can be used even without ConfigureTarget, which would apply the clear to the camera targets. Typically you wouldn’t clear the camera’s colour (URP kinda does this for us anyway with the Background Color / Skybox) but for some effects clearing the depth or stencil values at a specific event could probably be useful.
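
For example, a minimal sketch of a pass that only clears the depth buffer on the camera targets (no ConfigureTarget call) :

public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData) {
    // No ConfigureTarget, so the pass still targets the camera colour/depth,
    // but depth will be cleared before any commands in this pass render
    ConfigureClear(ClearFlag.Depth, Color.clear);
}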


OnCameraCleanup

This method is called once for each camera (every frame) after rendering. You’d use this method to clean up some resources created during the other passes (that aren’t needed across multiple frames)! Some examples below.

If you used CommandBuffer.GetTemporaryRT somewhere, you’d typically use CommandBuffer.ReleaseTemporaryRT in this method.

With the change to RTHandles, you might think to release those in here too but I’ve found this causes glitchy rendering - especially in scene view. It’s better to instead release those in a method called by the feature’s Dispose method (see RTHandle section for an example). I have seen some examples set private RTHandle fields to null in this method, but I don’t think that is required. (It may be to avoid accidentally rendering to targets if the feature is called without using its Setup function?)

If your feature relies on shader keywords, you might also enable those during Execute and disable them in OnCameraCleanup. This can either involve using CommandBuffer.EnableShaderKeyword and CommandBuffer.DisableShaderKeyword or the CoreUtils.SetKeyword function which provides a boolean which calls either of those for you. For example, the decal passes such as DecalScreenSpaceRenderPass do this.
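
For example, a small sketch using CoreUtils.SetKeyword (the _EXAMPLE_KEYWORD name is just a placeholder for whatever global keyword your shaders use) :

// In Execute :
CoreUtils.SetKeyword(cmd, "_EXAMPLE_KEYWORD", true);
...
// In OnCameraCleanup :
CoreUtils.SetKeyword(cmd, "_EXAMPLE_KEYWORD", false);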

DeferredLights (used by URP’s Deferred Rendering path) appears to dispose some NativeArrays in here.


Execute

The Execute function is where most of our custom rendering code goes. This is mostly handled through the ScriptableRenderContext and CommandBuffer APIs (or other functions that end up calling those APIs, like ScriptableRenderPass.Blit and the Blitter class which passes a CommandBuffer as a parameter).

We do not need to call ScriptableRenderContext.Submit as URP handles this for us.

To properly title things in the Profiler & Frame Debugger windows, the usual way to set up the Execute function is like this :

// (in Pass)
private ProfilingSampler m_ProfilingSampler;
...
// (constructor, method name should match class name)
public CustomRenderPass(string name) {
    m_ProfilingSampler = new ProfilingSampler(name);
}
...
public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData) {
    CommandBuffer cmd = CommandBufferPool.Get();
    using (new ProfilingScope(cmd, m_ProfilingSampler)) {
        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();
        /*
        Note : should always ExecuteCommandBuffer at least once before using
        ScriptableRenderContext functions (e.g. DrawRenderers) even if you 
        don't queue any commands! This makes sure the frame debugger displays 
        everything under the correct title.
        */

        // Do stuff!
        // Would recommend keeping all your rendering code in this using statement.
        ...
    }
    // Execute Command Buffer one last time and release it
    // (otherwise we get weird recursive list in Frame Debugger)
    context.ExecuteCommandBuffer(cmd);
    cmd.Clear();
    CommandBufferPool.Release(cmd);
}

What rendering commands you do in here depends on what the feature is meant to do, but I’ll be providing some common examples below, such as DrawRenderers and Blit calls.

Before moving on, it’s very important to understand a few things :


DrawRenderers

!

As of Unity 2023, DrawRenderers is now obsolete. Its replacement is to create a RendererList using ScriptableRenderContext.CreateRendererList and then draw it using CommandBuffer.DrawRendererList. I haven’t got a tested example of this currently - I’ve mostly stuck to 2022 versions - but it should be a similar setup to DrawRenderers (as shown below); a rough, untested sketch is included after this note.

Note there are two overloads for CreateRendererList: one using a RendererListDesc struct (looks slightly simpler / more concise), while the other uses RendererListParams which contains other structs like CullingResults, FilteringSettings, DrawingSettings, etc that were also used by DrawRenderers.
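
Here’s that rough, untested sketch using the RendererListDesc overload (inside Execute, reusing the same settings / shaderTagsList fields as the DrawRenderers example below - exact property names may differ slightly between versions) :

// requires : using UnityEngine.Rendering.RendererUtils;
RendererListDesc desc = new RendererListDesc(shaderTagsList.ToArray(), renderingData.cullResults, renderingData.cameraData.camera) {
    sortingCriteria = renderingData.cameraData.defaultOpaqueSortFlags,
    renderQueueRange = RenderQueueRange.opaque,
    layerMask = settings.layerMask,
    overrideMaterial = settings.overrideMaterial,
    overrideMaterialPassIndex = settings.overrideMaterialPass
};
RendererList rendererList = context.CreateRendererList(desc);
cmd.DrawRendererList(rendererList);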

As the method name suggests, this allows us to draw renderers to the current render target (specified in OnCameraSetup by ConfigureTarget as previously discussed). Can see the function’s params/overloads in the Unity docs for ScriptableRenderContext.DrawRenderers

The DrawingSettings struct parameter allows us to configure how the renderers will be drawn. It’s typically created using the CreateDrawingSettings method of the ScriptableRenderPass class (which calls the same method in RenderingUtils). After this, can set properties on it.

Can see which properties are available in the Unity docs for DrawingSettings - for example, overrideMaterial and overrideMaterialPassIndex (as used in the example below).

Unity will automatically get renderers from the cullResults, but we can specify filters in the FilteringSettings struct parameter. For creating this, see the FilteringSettings constructor. If you don’t want to filter anything, can use FilteringSettings.defaultValue.

Below is an example of using DrawRenderers to render any opaque objects, filtered by a LayerMask and specifying an Override Material (& pass index). Settings is a serialised class in the feature, as set up in the Create section.

If drawing transparent objects instead, you’d want to use RenderQueueRange.transparent and SortingCriteria.CommonTransparent. (And the RenderPassEvent would likely be set to at least AfterRenderingSkybox)

// in pass
private Settings settings;
private FilteringSettings filteringSettings;
private List<ShaderTagId> shaderTagsList = new List<ShaderTagId>();
private ProfilingSampler _profilingSampler;

// (constructor)
public CustomRenderPass(Settings settings, string name) {
    this.settings = settings;
    _profilingSampler = new ProfilingSampler(name);
    filteringSettings = new FilteringSettings(RenderQueueRange.opaque, settings.layerMask);
    // Use URP's default shader tags
    shaderTagsList.Add(new ShaderTagId("SRPDefaultUnlit"));
    shaderTagsList.Add(new ShaderTagId("UniversalForward"));
    shaderTagsList.Add(new ShaderTagId("UniversalForwardOnly"));
}
...
public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData) {
    CommandBuffer cmd = CommandBufferPool.Get();
    using (new ProfilingScope(cmd, _profilingSampler)) {
        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();
        
        // Draw Renderers to current Render Target (set in OnCameraSetup)
        SortingCriteria sortingCriteria = renderingData.cameraData.defaultOpaqueSortFlags;
        DrawingSettings drawingSettings = CreateDrawingSettings(shaderTagsList, ref renderingData, sortingCriteria);
        if (settings.overrideMaterial != null) {
            drawingSettings.overrideMaterialPassIndex = settings.overrideMaterialPass;
            drawingSettings.overrideMaterial = settings.overrideMaterial;
        }
        context.DrawRenderers(renderingData.cullResults, ref drawingSettings, ref filteringSettings);
    }
    context.ExecuteCommandBuffer(cmd);
    cmd.Clear();
    CommandBufferPool.Release(cmd);
}

Blit

A blit is used to copy pixels from a “source” texture to a “destination” target. To do this, it draws a fullscreen quad (or triangle). It optionally allows you to specify a material if you want to use a custom shader, otherwise it’ll use a built-in one (specifically Hidden/Universal/CoreBlit).

There are a number of ways to handle blits. For Unity 2022+ versions we should now use the Blitter API. Most commonly :

For older versions, these methods are available :

Example

When rendering we need to make sure we do not read and write to the same texture/target as this can cause “unintended behaviour” (to quote the CommandBuffer.Blit documentation). Because of this, if the source/destination needs to be the same, we actually need to instead use two blits with an additional target in-between.

In older versions this would typically be a “Temporary Render Texture”, but with the change to RTHandles that’s not really used anymore. In the example below I’ve used RenderingUtils.ReAllocateIfNeeded so the texture won’t be set up multiple times if the feature is used multiple times (or if other features use the same _TemporaryColorTexture reference) - so in a way, this still acts as a “temporary” texture (kinda).

// In pass :
private RTHandle rtTemp;
...
// OnCameraSetup
RenderingUtils.ReAllocateIfNeeded(ref rtTemp, desc, name: "_TemporaryColorTexture");
...
// Execute (inside using statement)
RTHandle rtCamera = renderingData.cameraData.renderer.cameraColorTargetHandle;
Blitter.BlitCameraTexture(cmd, rtCamera, rtTemp, settings.blitMaterial, settings.blitMaterialPassIndex);
Blitter.BlitCameraTexture(cmd, rtTemp, rtCamera, Vector2.one);
...
// Should also clean-up our allocated RTHandle, so :
public void ReleaseTargets() {
    rtTemp?.Release();
}
...
// In feature :
protected override void Dispose(bool disposing) {
    blitPass.ReleaseTargets();
}

Of course some effects may require multiple passes/blits anyway, such as two-pass blurs. This would be the same as the above, but we’d specify the material in both with different pass indices :

Blitter.BlitCameraTexture(cmd, rtCamera, rtTemp, settings.blurMaterial, 0);
Blitter.BlitCameraTexture(cmd, rtTemp, rtCamera, settings.blurMaterial, 1);

Example (Copy Color)

A blit could also be used to copy the camera colour target to a custom one (initialised similar to rtTemp above but renamed).

This would be similar to what the Opaque Texture does (used by Scene Color node), but that always occurs AfterRenderingSkybox so won’t contain transparent objects. With a custom blit feature, we could copy the screen during different events, such as AfterRenderingTransparents. That way, the texture contains anything rendered in the normal transparent queue.

In this case you likely wouldn’t specify a material, and since the targets are different only a single call is needed :

Blitter.BlitCameraTexture(cmd, rtCamera, rtCustom, Vector2.one);

// Pass as global shader texture
cmd.SetGlobalTexture("_SomeReference", rtCustom);
// In Shader Graphs could obtain this using Texture2D property,
// set same Reference, untick Exposed.

As shown we then pass our custom target (containing the camera copy) as a global texture. We can then sample that in shaders used by objects in the scene.

However to prevent graphical artifacts, it is important that any objects/shaders that sample the texture are rendered in a later event! Can force this by putting the object on a layer removed from the Default Opaque/Transparent Layer Mask at the top of the UniversalRenderer, and render that layer with a DrawRenderers call, or use the RenderObjects feature.


Full Renderer Feature Example (Unity 2022)

Here’s a full code example based on snippets mentioned in this post. It involves rendering objects to a custom texture, then a fullscreen pass on the camera which also uses that texture. (The usage foldout below also provides example shaders that the materials could use)

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class CustomRendererFeature : ScriptableRendererFeature {

    public class CustomRenderPass : ScriptableRenderPass {

        private Settings settings;
        private FilteringSettings filteringSettings;
        private ProfilingSampler _profilingSampler;
        private List<ShaderTagId> shaderTagsList = new List<ShaderTagId>();
        private RTHandle rtCustomColor, rtTempColor;

        public CustomRenderPass(Settings settings, string name) {
            this.settings = settings;
            filteringSettings = new FilteringSettings(RenderQueueRange.opaque, settings.layerMask);
            
            // Use default tags
            shaderTagsList.Add(new ShaderTagId("SRPDefaultUnlit"));
            shaderTagsList.Add(new ShaderTagId("UniversalForward"));
            shaderTagsList.Add(new ShaderTagId("UniversalForwardOnly"));
            
            _profilingSampler = new ProfilingSampler(name);
        }

        public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData) {
            var colorDesc = renderingData.cameraData.cameraTargetDescriptor;
            colorDesc.depthBufferBits = 0;

            // Set up temporary color buffer (for blit)
            RenderingUtils.ReAllocateIfNeeded(ref rtTempColor, colorDesc, name: "_TemporaryColorTexture");

            // Set up custom color target buffer (to render objects into)
            if (settings.colorTargetDestinationID != ""){
                RenderingUtils.ReAllocateIfNeeded(ref rtCustomColor, colorDesc, name: settings.colorTargetDestinationID);
            }else{
                // colorDestinationID is blank, use camera target instead
                rtCustomColor = renderingData.cameraData.renderer.cameraColorTargetHandle;
            }

            // Using camera's depth target (that way we can ZTest with scene objects still)
            RTHandle rtCameraDepth = renderingData.cameraData.renderer.cameraDepthTargetHandle;

            ConfigureTarget(rtCustomColor, rtCameraDepth);
            ConfigureClear(ClearFlag.Color, new Color(0,0,0,0));
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData) {
            CommandBuffer cmd = CommandBufferPool.Get();
            // Set up profiling scope for Profiler & Frame Debugger
            using (new ProfilingScope(cmd, _profilingSampler)) {
                // Command buffer shouldn't contain anything, but apparently need to
                // execute so DrawRenderers call is put under profiling scope title correctly
                context.ExecuteCommandBuffer(cmd);
                cmd.Clear();

                // Draw Renderers to Render Target (set up in OnCameraSetup)
                SortingCriteria sortingCriteria = renderingData.cameraData.defaultOpaqueSortFlags;
                DrawingSettings drawingSettings = CreateDrawingSettings(shaderTagsList, ref renderingData, sortingCriteria);
                if (settings.overrideMaterial != null) {
                    drawingSettings.overrideMaterialPassIndex = settings.overrideMaterialPass;
                    drawingSettings.overrideMaterial = settings.overrideMaterial;
                }
                context.DrawRenderers(renderingData.cullResults, ref drawingSettings, ref filteringSettings);

                // Pass our custom target to shaders as a Global Texture reference
                // In a Shader Graph, you'd obtain this as a Texture2D property with "Exposed" unticked
                if (settings.colorTargetDestinationID != "") 
                    cmd.SetGlobalTexture(settings.colorTargetDestinationID, rtCustomColor);
                
                // Apply material (e.g. Fullscreen Graph) to camera
                if (settings.blitMaterial != null) {
                    RTHandle camTarget = renderingData.cameraData.renderer.cameraColorTargetHandle;
                    if (camTarget != null && rtTempColor != null) {
                        Blitter.BlitCameraTexture(cmd, camTarget, rtTempColor, settings.blitMaterial, 0);
                        Blitter.BlitCameraTexture(cmd, rtTempColor, camTarget);
                    }
                }
            }
            // Execute Command Buffer one last time and release it
            // (otherwise we get weird recursive list in Frame Debugger)
            context.ExecuteCommandBuffer(cmd);
            cmd.Clear();
            CommandBufferPool.Release(cmd);
        }

        public override void OnCameraCleanup(CommandBuffer cmd) {}

        // Cleanup Called by feature below
        public void Dispose() {
            if (settings.colorTargetDestinationID != "")
                rtCustomColor?.Release();
            rtTempColor?.Release();
        }
    }

    // Exposed Settings

    [System.Serializable]
    public class Settings {
        public bool showInSceneView = true;
        public RenderPassEvent _event = RenderPassEvent.AfterRenderingOpaques;

        [Header("Draw Renderers Settings")]
        public LayerMask layerMask = 1;
        public Material overrideMaterial;
        public int overrideMaterialPass;
        public string colorTargetDestinationID = "";

        [Header("Blit Settings")]
        public Material blitMaterial;
    }

    public Settings settings = new Settings();

    // Feature Methods

    private CustomRenderPass m_ScriptablePass;

    public override void Create() {
        m_ScriptablePass = new CustomRenderPass(settings, name);
        m_ScriptablePass.renderPassEvent = settings._event;
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData) {
        CameraType cameraType = renderingData.cameraData.cameraType;
        if (cameraType == CameraType.Preview) return; // Ignore feature for editor/inspector previews & asset thumbnails
        if (!settings.showInSceneView && cameraType == CameraType.SceneView) return;
        renderer.EnqueuePass(m_ScriptablePass);
    }

    protected override void Dispose(bool disposing) {
        m_ScriptablePass.Dispose();
    }
}

The feature can be used to render objects with a given material, into a custom buffer (specified by Color Target Destination ID setting). This texture reference can be sampled as a global/unexposed Texture2D in a blit/fullscreen material.

For example, the feature could be used with materials using the following graphs (click images to view full screen) :

(Image)

Override Material (Unlit Graph)

(Image)

Blit Material (Fullscreen Graph)

Resulting in :

(Image)

Cyan objects are on separate Layer, not used by feature’s LayerMask setting. Note the cube doesn’t look great since we used a Fresnel Effect. Alternative outline methods may work better.

Another similar example is this Horizon Zero Dawn inspired highlight/glitch effect I made a while ago.


Setting values on features at Runtime

For some effects, you may want to set public or serialised fields/properties on a feature, from a C# Script at runtime.

If this is only for a specific instance of a feature on a single Renderer asset, you should be able to do this quite easily by exposing a public field in your MonoBehaviour. e.g.

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class SomeScript : MonoBehaviour {

    public CustomRendererFeature feature;
    // This feature uses the same "Settings class" example as shown in other sections
    // If you set public fields on the feature directly, that won't update the pass (unless you call Dispose and Create)

    // Call this method to set Override Material used by feature
    void SetMaterial(Material material, int passIndex){
        feature.settings.overrideMaterial = material;
        feature.settings.overrideMaterialPass = passIndex;
    }

}

If you need to do this for multiple instances of a feature, you can use an array of those features instead (so CustomRendererFeature[] in this example - see the sketch below). Another method could be to use a ScriptableObject to hold the Settings data, which the feature/pass and our SomeScript would have a reference to.
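
e.g. a small sketch extending the script above to set the same material on multiple feature instances :

public CustomRendererFeature[] features;

void SetMaterialOnAll(Material material, int passIndex){
    foreach (CustomRendererFeature f in features){
        f.settings.overrideMaterial = material;
        f.settings.overrideMaterialPass = passIndex;
    }
}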

You could also try to get the Renderer Asset and loop through the features. But those aren’t public, so might require Reflection, not ideal.


Connecting a feature to a URP Volume

It is also possible to create a custom VolumeComponent to expose fields/properties which can communicate with a custom pass/feature. I won’t be including an example here as Febucci already has a good tutorial on this (though note the code for the pass uses RenderTargetIdentifier and should be converted to RTHandles in 2022+)

!
2023.3 / Unity 6 may also have a new URP Post-processing Effect template to handle this automatically.

The next section may also provide a small improvement to this, as we can get the ScriptableRenderPass to run without even needing a Renderer Feature. The example below is for a MonoBehaviour but maybe the same thing could work for VolumeComponent.OnEnable/OnDisable.


RenderPipelineManager

While not necessarily the scope of this post, we can also inject code before or after rendering each frame/camera using the events in the RenderPipelineManager class.

I wanted to mention this as it is possible to enqueue Scriptable Render Passes to the renderer here if you don’t want to have to create/assign the Renderer Feature. This is also useful for older versions of URP where the 2D Renderer did not support features.

Here’s an example of this. Note that it still uses the Settings and CustomRenderPass classes nested inside CustomRendererFeature (the same classes as the Full Renderer Feature Example (Unity 2022) from earlier), but they could also be separate - the feature class itself isn’t actually added to a renderer here!

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class SomeScript : MonoBehaviour {

    public CustomRendererFeature.Settings settings;
    private CustomRendererFeature.CustomRenderPass m_ScriptablePass;
    // or just CustomRenderPass if not nested
    // if nested, make sure CustomRenderPass is marked as public

    private void OnEnable(){
        // Setup same way as in CustomRendererFeature.Create
        m_ScriptablePass = new CustomRendererFeature.CustomRenderPass(settings, "Example");
        m_ScriptablePass.renderPassEvent = settings._event;

        // Register method
        RenderPipelineManager.beginCameraRendering += BeginCameraRendering;
    }
    
    private void OnDisable(){
        // same as CustomRendererFeature.Dispose
        m_ScriptablePass.Dispose();
        m_ScriptablePass = null;

        // Unregister method
        RenderPipelineManager.beginCameraRendering -= BeginCameraRendering;
    }

    // if fields are changed in editor, update pass by disposing & recreate
    private void OnValidate(){
        if (m_ScriptablePass != null) {
            OnDisable();
            OnEnable();
        }
    }

    private void BeginCameraRendering(ScriptableRenderContext context, Camera camera) {
        // Similar to CustomRendererFeature.AddRenderPasses
        CameraType cameraType = camera.cameraType;
        if (cameraType == CameraType.Preview) return; // Ignore feature for editor/inspector previews & asset thumbnails
        if (!settings.showInSceneView && cameraType == CameraType.SceneView) return;

        ScriptableRenderer renderer = camera.GetUniversalAdditionalCameraData().scriptableRenderer;
        renderer.EnqueuePass(m_ScriptablePass);
    }
}

Note that while the pass is enqueued at the beginning of the camera render, the blit will still occur later as set by the RenderPassEvent.

Of course the script here would still need to be put onto a GameObject in the scene - but that could be considered easier than adding a Renderer Feature? 🤷


Thanks for reading! 😊

If you find this post helpful, please consider sharing it with others / on socials
Donations are also greatly appreciated! 🙏✨

(Keeps this site free from ads and allows me to focus more on tutorials)

