Tutorial

Rendering into custom texture in Unity

Sep 17, 2025

30 minutes

A month ago, a friend of mine asked me if I wanted to help him improve the visuals of the game he develops in his free time. After discussing a few ideas, we decided to implement volumetric fog that would be dispersed by running characters. We also wanted to add zen gardens, with an effect where characters mess up the sand as they run through it. In this article series, I explain my thought process behind creating such effects. Here, I will focus only on rendering game metadata, like character trails, into a texture.

What is the goal?

This is an advanced article. Rendering custom data into a texture requires writing a lot of code and managing many moving parts.

My overall goal is to implement fog and zen gardens. Throughout this article series, I'll take this:

And turn it into this:


EPIC!

___

How to approach this effect

Let's imagine that the effect above is only in my head. To render fog or the sand, I need access to data about where the characters have been. Therefore, it is intuitive for me to start implementing the effect by rendering character trails into a top-down texture.

Gameplay takes place on the XZ plane, with Y representing the upward direction.

Enemies are implemented by instancing prefabs that move on the map.

Imagine a terrain with a texture, where each enemy is represented by a quad that renders into this texture. Think of them as brushes moving across a big painting canvas.

So, as an initial step, I will just implement this effect - rendering into a texture and sampling it in another shader:

This is the result texture.

Each character will render trails using the red and green channels. In later articles, I will use the red channel to disperse fog, and the green channel to mess up the sand.


___

How to render into a texture

To render custom content into a texture I could use a built-in camera with a texture as a target. That seems easy, no? I would simply add renderers, create another camera that renders only these, and it's done.

No. To render trails, I need the content of the texture to be preserved between frames, but Unity ignored my camera settings and always cleared the texture content at the beginning of each frame.

:center-px:

Unity does not respect this setting when rendering into a custom texture. For Unity, it is more like a suggestion than a rule.
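For reference, this is roughly the naive setup described above - a sketch only, with names of my own choosing; the final implementation does not use it:

public class NaiveTrailCamera : MonoBehaviour  
{  
  public RenderTexture trailTexture;

  private void OnEnable()  
  {  
    // Create a camera that renders only the trail quads into the texture  
    var trailCamera = gameObject.AddComponent<Camera>();  
    trailCamera.orthographic = true;  
    trailCamera.targetTexture = trailTexture;  
    trailCamera.cullingMask = LayerMask.GetMask("TrailRenderers"); // Hypothetical layer for the trail quads  
    trailCamera.clearFlags = CameraClearFlags.Nothing; // The "don't clear" setting that Unity ignored in my case  
  }  
}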

I could solve that by:

  1. Modifying the render pipeline source code to respect that setting.

  2. Implementing custom rendering into the texture.

I chose the second option: custom rendering into texture. Here are my reasons:

  • I am implementing this feature for another team and do not want to leave them maintaining a modified render pipeline.

  • I want to render all objects as quads using a simple shader. This is ideal for efficient instanced rendering. The built-in camera has high overhead, while custom rendering does only what's necessary.


___

Rendering into texture - implementation

I created another article explaining what is required to render custom content into a texture and why I think this is the most important skill each technical artist can master. You can read it here:

https://www.proceduralpixels.com/blog/rendering-into-texture-the-most-important-ta-skill

TLDR: to render custom content into a texture, I need to:

  1. Track and filter objects to render

  2. Define a camera - create a view-projection matrix, create a texture

  3. Execute draw calls

  4. Prepare shaders that will render into a texture

  5. Use the rendered texture in other shaders


To achieve this, the idea is to render in 3 passes.

  1. The first pass will render into the red channel using additive blending. It will ensure that all the trails stay in the texture.

  2. The second pass will render into the red channel using multiply blending. It will allow me to slowly fade the trails away (see the worked example after this list).

  3. The third pass will render into the green channel using additive blending - again, this will allow the trails to stay in the texture.
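To get an intuition for the fade: assuming the multiply pass scales the covered red-channel pixels by a factor of 0.95 every frame, a trail drops to roughly 0.95^60 ≈ 0.05 of its original intensity after 60 frames - about one second at 60 FPS. The exact factor is a tuning knob, not something fixed by the technique.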

I will go through each step and implement it below. However, I warn you - this will be a lengthy journey through the code. Let's dive in!


___

1. Track and filter objects to render

:center-px:

The goal of this step is to have a single collection with all the objects that need to be rendered. Preferably a list or an array of structs that can be later used to set the graphics buffer.

I want to render objects using quads, so I need to have information about their position, rotation, and scale. Additionally, I would like to control which channel is rendered and its alpha value to adjust the trail intensity.

The position, rotation and scale can be stored in a single matrix, so I will just use that.

[StructLayout(LayoutKind.Sequential)]  
public struct HeatmapObjectRendererData  
{  
   public Matrix4x4 localToWorldMatrix;  
   public float alpha;  
   public int blendMode;  
}

This is the data of each rendered object.

Ok, let's create a component that will be responsible for tracking the objects. To follow Unity convention, I will name it HeatmapObjectRenderer. Notice that I store a static list of all renderers as well as a low-level list of renderer data. The list of renderers will be used to update the low-level data, and the low-level data will later be used to fill the graphics buffer used in the shader.

public unsafe class HeatmapObjectRenderer : MonoBehaviour  
{  
   // Collections that track all the renderers  
   private static List<HeatmapObjectRenderer> allInstances = new();  
   private static UnsafePtrList<HeatmapObjectRendererData>* AllInstanceData = null;
   public static int InstanceCount => allInstances.Count;
   // Internal data of this renderer  
   private HeatmapObjectRendererData* dataPtr;
   // Inspector properties  
   [Range(0.0f, 1.0f)]  
   public float alpha;  
   public HeatmapObjectBlendMode blendMode;  
}

public enum HeatmapObjectBlendMode  
{  
   FogCut = 0, // Will be used to render red channel  
   FogRestore = 1, // Will be used to restore red channel  
   DrawSteps = 2 // Will be used to render green channel  
}

I use low-level, unsafe collections to enable the use of this code with the Burst compiler for optimization purposes, if needed.

Ok, so the renderer data layout is now ready. I will use OnEnable and OnDisable to register/unregister the renderer.

 
public unsafe class HeatmapObjectRenderer : MonoBehaviour  
{  
  private static List<HeatmapObjectRenderer> allInstances = new();  
  private static UnsafePtrList<HeatmapObjectRendererData>* AllInstanceData = null;
  ...
  private HeatmapObjectRendererData* dataPtr;
  ...
  private void OnEnable()  
  {  
    // Allocate renderer data  
    dataPtr = AllocatorManager.Allocate<HeatmapObjectRendererData>(Allocator.Persistent);  
    UpdateData(); // not implemented yet - this will update the low-level data of the renderer  
    
    // Lazy-initialize collections  
    if (AllInstanceData == null)  
    AllInstanceData = UnsafePtrList<HeatmapObjectRendererData>.Create(1024, Allocator.Persistent);  
    
    // Start tracking the instance  
    allInstances.Add(this);  
    AllInstanceData->Add(dataPtr);  
  }  
  
  private void OnDisable()  
  {  
    // Stop tracking the instance  
    allInstances.RemoveSwapBack(this);  
    AllInstanceData->RemoveSwapBack(dataPtr);  
    
    // Release the renderer data  
    AllocatorManager.Free(Allocator.Persistent, dataPtr);  
    dataPtr = null;  
    
    // Dealloc a collection if this was the last renderer  
    if (AllInstanceData->Length <= 0)  
    {  
      UnsafePtrList<HeatmapObjectRendererData>.Destroy(AllInstanceData);  
      AllInstanceData = null;  
    }  
  }  
}

When enabled, the component allocates data and registers itself in a static collection. When disabled, it unregisters and deallocates the data.

Since enemies are moving, I need a function that updates the data of all instances before they are rendered. I will also implement a function that fetches the data for rendering.

public unsafe class HeatmapObjectRenderer : MonoBehaviour  
{  
  private static List<HeatmapObjectRenderer> allInstances = new();  
  private static UnsafePtrList<HeatmapObjectRendererData>* AllInstanceData = null;
  
  public static int InstanceCount => allInstances.Count;
  
  private HeatmapObjectRendererData* dataPtr;
  
  [Range(0.0f, 1.0f)] public float alpha;  
  public HeatmapObjectBlendMode blendMode;  
   
  ...  
  
  // Updates the low-level data of the renderer.  
  private void UpdateData()  
  {  
    // If the data exists  
    if (dataPtr == null)  
      return;  
    
    // Fill the low-level renderer data  
    dataPtr->alpha = alpha;  
    dataPtr->blendMode = (int)blendMode;  
    dataPtr->localToWorldMatrix = transform.localToWorldMatrix;  
  }  
  
  // Updates low-level data of each renderer.  
  public static void UpdateInstances()  
  {  
    // Iterate through each instance  
    for (int i = 0; i < allInstances.Count; i++)  
      allInstances[i].UpdateData();  
  }  
  
  // Returns the low-level collection that can be used for rendering.  
  // It converts a list of pointers into a list of structs, which I will use in the rendering.  
  public static void FetchInstanceData(UnsafeList<HeatmapObjectRendererData>* targetListPtr)  
  {  
    // Return the data that will be used in rendering  
    for (int i = 0; i < AllInstanceData->Length; i++)  
      targetListPtr->Add(*(AllInstanceData->ElementAt(i)));  
  }  
}

Perfect. Let's add this component to an enemy prefab.
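The component can also be attached from code when enemies are spawned at runtime. Here is a hypothetical spawner sketch - the prefab field and the values are assumptions, not something from the actual game:

public class EnemySpawner : MonoBehaviour  
{  
  // Hypothetical enemy prefab, without the renderer attached yet  
  public GameObject enemyPrefab;

  public GameObject Spawn(Vector3 position)  
  {  
    var enemy = Instantiate(enemyPrefab, position, Quaternion.identity);

    // Attach and configure the trail renderer  
    var trailRenderer = enemy.AddComponent<HeatmapObjectRenderer>();  
    trailRenderer.alpha = 0.5f;  
    trailRenderer.blendMode = HeatmapObjectBlendMode.FogCut;

    return enemy;  
  }  
}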


The component is added, but there is no visual feedback in the scene view to show that this object renders into a texture. It is good practice to implement gizmos for such components. I assumed the quad will be drawn on the XZ plane, spanning from (-0.5, 0, -0.5) to (0.5, 0, 0.5) in object space.

This is the gizmo I came up with; it draws a red quad with a transparent circle inside:

public unsafe class HeatmapObjectRenderer : MonoBehaviour  
{  
  // Used to display the drawn quad in the editor.  
  private void OnDrawGizmosSelected()  
  {  
    // Visualise heatmap object in some nice way  
    Gizmos.color = new Color(1.0f, 0.2f, 0.2f, 1.0f * math.lerp(alpha, 1.0f, 0.5f));  
    Gizmos.matrix = transform.localToWorldMatrix * Matrix4x4.Scale(new Vector3(1.0f, 0.000001f, 1.0f));
    
    // Borders  
    Gizmos.DrawWireCube(Vector3.zero, new Vector3(1.0f, 0.0000001f, 1.0f));
    
    // Circle inside  
    Gizmos.color = new(1.0f, 0.2f, 0.2f, 0.5f * alpha);  
    Gizmos.DrawSphere(Vector3.zero + Vector3.up * 0.00001f, 0.5f);  
  }  
}

Gizmo that draws the red quad.

The red quad is now visible. This is the quad shape I want to render into the texture.

Now, all the code for tracking the objects is ready. Time to implement a custom camera.

___

2. Define a camera

:center-px:

In this step, I will implement a custom component that will represent my camera.

In this component, I want to allocate the texture and define the view and projection matrices.

The view matrix and the projection matrix will define what area of the world this camera renders.

Let's start with this snippet.

public class HeatmapCamera : MonoBehaviour  
{  
   // Singleton(ish) pattern  
   private static List<HeatmapCamera> allInstances = new();  
   public static HeatmapCamera MainInstance => allInstances.LastOrDefault();
  
   // Struct that stores all camera parameters required for rendering.  
   public HeatmapCameraParameters Parameters { get; private set; } = default;
  
   // Inspector field that allows me to select the rendered resolution  
   [SerializeField]  
   public HeatmapTextureResolution resolution = HeatmapTextureResolution._4096;
  
   // Target texture for the camera  
   private RenderTexture heatmapTexture;  
}

// Camera parameters  
public struct HeatmapCameraParameters  
{  
   public Matrix4x4 worldToCameraMatrix; // View matrix  
   public Matrix4x4 projectionMatrix; // Projection matrix  
   public RenderTexture heatmapTexture; // Target texture  
}

// Possible texture resolutions to select in the inspector  
public enum HeatmapTextureResolution  
{  
   _512 = 512,  
   _1024 = 1024,  
   _2048 = 2048,  
   _4096 = 4096  
}

Now I will allocate the target texture. Similarly to the renderers, I will allocate data in OnEnable and deallocate it in OnDisable. The texture will store half-precision RGBA content.

 
public class HeatmapCamera : MonoBehaviour  
{  
  private static List<HeatmapCamera> allInstances = new();
  
  [SerializeField] public HeatmapTextureResolution resolution = HeatmapTextureResolution._4096;
  
  private RenderTexture heatmapTexture;  
  
  // Initialize a whole component  
  private void OnEnable()  
  {  
    // Register the instance  
    allInstances.Add(this);
    
    // Create a render texture  
    heatmapTexture = new RenderTexture(  
      (int)resolution, (int)resolution,  
      UnityEngine.Experimental.Rendering.GraphicsFormat.R16G16B16A16_SFloat,  
      UnityEngine.Experimental.Rendering.GraphicsFormat.None, 0  
    );  
    heatmapTexture.name = gameObject.name + "_RenderTexture"; // Always name resources  
    heatmapTexture.enableRandomWrite = true;  
    heatmapTexture.Create();
      
    // TODO: create matrices and parameters  
  }
  
  // Deinitialize the component  
  private void OnDisable()  
  {  
    // Unregister the instance  
    allInstances.Remove(this);  
    
    // Release the texture  
    heatmapTexture.Release();  
  }  
}

Ok, the easy part of a camera is done. Now I need to create view and projection matrices.

The view matrix converts a world-space position into camera space. The projection matrix then converts camera space into clip space, which defines the position of the objects on the screen.

I want to create a setup that:

  • Renders an orthographic frustum.

  • Renders everything between (-1.0, -1.0, -1.0) and (1.0, 1.0, 1.0) in camera space into the texture.

  • Renders along the Z axis. The object-space X axis is left-right, the Y axis is top-down, and the Z axis is forward.

Let's start with the projection matrix. Unity has a nice Matrix4x4.Ortho() function to create orthographic projection matrices. Let's use that.

private void OnEnable()  
{  
  ...  
  
  // Create the orthogonal projection matrix for the camera.  
  // It renders object-space between (-1.0, -1.0, 0.0) to (1.0, 1.0, 2.0)  
  Matrix4x4 projectionMatrix = Matrix4x4.Ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.00001f, 2.0f);  
  
  // Move the projection back.  
  // It makes it render object-space between (-1.0, -1.0, -1.0) to (1.0, 1.0, 1.0)  
  projectionMatrix = projectionMatrix * Matrix4x4.Translate(new Vector3(0.0f, 0.0f, -1.0f));  
}

I noticed that Matrix4x4.Ortho() can't use negative near/far plane values, so I shifted the rendered range back by one unit using a translation matrix.

Let's think about the view matrix. The view matrix converts a world-space position into camera space. Therefore, if I treat the transform this component is attached to as the camera, the object space of that transform is equivalent to my camera space.

So I can just use transform.worldToLocalMatrix as my view matrix.

Now it's time to set the parameters that will be publicly available for rendering.

private void OnEnable()  
{  
  ...  
    
  Matrix4x4 projectionMatrix = Matrix4x4.Ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.00001f, 2.0f);  
  projectionMatrix = projectionMatrix * Matrix4x4.Translate(new Vector3(0.0f, 0.0f, -1.0f));  
    
  // Update public parameters  
  HeatmapCameraParameters parameters = new HeatmapCameraParameters();  
  parameters.projectionMatrix = projectionMatrix;  
  parameters.worldToCameraMatrix = transform.worldToLocalMatrix; // View matrix is just transform.worldToLocalMatrix.  
  parameters.heatmapTexture = heatmapTexture;  
  Parameters = parameters;
  
  // Set global view and projection matrices for the heatmap.
  // Projection matrix is converted from unified projection to the correct graphics API format. 
  Shader.SetGlobalMatrix(HeatmapRenderPass.Uniforms._HeatmapMatrixP, GL.GetGPUProjectionMatrix(parameters.projectionMatrix, false));
  Shader.SetGlobalMatrix(HeatmapRenderPass.Uniforms._HeatmapMatrixV, parameters.worldToCameraMatrix);  
}
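To sanity-check these matrices, I can push a world-space point through them on the CPU - anything inside the camera's object-space box should land within the -1 to 1 clip-space range. This helper is only a verification sketch of mine, not part of the final component:

public class HeatmapCamera : MonoBehaviour  
{  
  ...  
  // Verification sketch - returns true if a world-space point falls inside the rendered volume.  
  private bool IsInsideHeatmapVolume(Vector3 positionWS)  
  {  
    Matrix4x4 viewProjection = Parameters.projectionMatrix * Parameters.worldToCameraMatrix;  
    Vector4 positionCS = viewProjection * new Vector4(positionWS.x, positionWS.y, positionWS.z, 1.0f);  
    positionCS /= positionCS.w; // The orthographic projection keeps w = 1; divided only for completeness  
    return Mathf.Abs(positionCS.x) <= 1.0f && Mathf.Abs(positionCS.y) <= 1.0f && Mathf.Abs(positionCS.z) <= 1.0f;  
  }  
}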

Time to use the created component. I made a prefab for the heatmap camera and attached the component. Since the camera's object space defines the rendered content, I can use its transform to control how much of the world is rendered into the texture.

I set up the camera to render 400 units in width/height, and 200 units in depth.
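Because the camera's object-space box spans -1 to 1 on each axis, the world-space coverage is simply twice the transform's scale. I configured this in the inspector, but expressed as code it would look roughly like this (a sketch; heatmapCameraTransform stands for the transform of the camera prefab, rotated so its local Z axis is aligned with the world Y axis):

// Rough equivalent of my inspector setup - not part of the component  
heatmapCameraTransform.rotation = Quaternion.Euler(90.0f, 0.0f, 0.0f); // Local Z now points along world Y  
heatmapCameraTransform.localScale = new Vector3(200.0f, 200.0f, 100.0f); // 400 x 400 units on XZ, 200 units of depth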

However, it is now impossible to see if the frustum is set up properly. I will implement a gizmo. Because the camera renders the content inside an object-space box, I can simply use a gizmo that draws a simple box.

public class HeatmapCamera : MonoBehaviour  
{  
  ...  
  // Drawing a gizmo to see where the camera is  
  private void OnDrawGizmosSelected()  
  {  
    // Draw a yellow box in object space  
    Gizmos.color = Color.yellow;  
    Gizmos.matrix = transform.localToWorldMatrix;  
    Gizmos.DrawWireCube(Vector3.zero, new Vector3(2.0f, 2.0f, 2.0f));  
    
    // Draw a line from the near plane to the center  
    Gizmos.color = Color.yellow * new Color(1, 1, 1, 0.5f);  
    Gizmos.DrawLine(new Vector3(0.0f, 0.0f, -1.0f), Vector3.zero);  
  }  
}

Much better!

Ok. I have a collection of objects to render, an allocated texture, and the view and projection matrices. Time to execute some draw calls!

___

3. Execute draw calls


:center-px:


I'm working with Unity 6000.0.39f1 with URP. Therefore, I will implement a ScriptableRendererFeature that injects a custom pass into a RenderGraph.

I will start by implementing a ScriptableRendererFeature. This is boilerplate code required by Unity. The only thing it does is inject a custom pass into the render pipeline. The meat is implemented later.

public class HeatmapRendererFeature : ScriptableRendererFeature  
{  
  // Material with a shader that I want to use to render all the objects.  
  public Material heatmapObjectRenderMaterial;  
  
  // Pass that implements the rendering  
  private HeatmapRenderPass heatmapRenderPass;
  public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)  
  {  
    // Only render for a game camera  
    bool shouldRenderHeatmap = renderingData.cameraData.cameraType == CameraType.Game;  
    if (!shouldRenderHeatmap)  
      return;
    
    // Batch-update all instances before rendering  
    HeatmapObjectRenderer.UpdateInstances();
    
    // Enqueue heatmap texture render pass  
    renderer.EnqueuePass(heatmapRenderPass);  
  }
  
  public override void Create()  
  {  
    // Create a render pass.  
    heatmapRenderPass = new HeatmapRenderPass(heatmapObjectRenderMaterial);  
  }
  
  protected override void Dispose(bool disposing)  
  {  
    // Release the render pass.  
    if (heatmapRenderPass != null)  
    {  
      heatmapRenderPass.Release();  
      heatmapRenderPass = null;  
    }  
  }  
}

// This class will implement the rendering logic  
public unsafe class HeatmapRenderPass : ScriptableRenderPass  
{  
  // Material that will be used to render the objects into a texture  
  internal Material heatmapRenderMaterial;  
  
  // Constructor used to allocate the resources  
  public HeatmapRenderPass(Material heatmapRenderMaterial)  
  {  
    this.heatmapRenderMaterial = heatmapRenderMaterial;  
  }  
  
  // Method used to release the resources.  
  public void Release()  
  {  
    heatmapRenderMaterial = null;  
  }  
  
  // I will implement the rendering here.  
}

First of all, I want to explain how I want to implement the rendering:

  1. I don't want to render anything if there is no HeatmapCamera component or if there is no object to be rendered.

  2. I want to have a graphics buffer that will store the data of all objects to render.

  3. I want to use instanced rendering to draw all objects at once.

    • In the first pass, I want to render objects into the red channel additively.

    • In the second pass, I want to render objects into the red channel with a multiply blend.

    • In the third pass, I want to render into the green channel additively.

Starting to implement the rendering

I will implement the whole rendering in the RecordRenderGraph method.

I mentioned that I don't want to render anything when there is no camera component present or no objects to render. Let's start by implementing that.

// This class will implement the rendering logic  
public unsafe class HeatmapRenderPass : ScriptableRenderPass  
{  
  ...  
  
  // This method will implement the whole rendering  
  public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)  
  {  
    // Don't render when the camera is not present.  
    var heatmapCamera = HeatmapCamera.MainInstance;  
    if (heatmapCamera == null)  
      return;  
    
    // Don't render anything when there are no instances to render  
    if (HeatmapObjectRenderer.InstanceCount <= 0)  
      return;  
    
    // Don't render when the material is not set  
    if (heatmapRenderMaterial == null)  
      return;  
  }  
}

Allocate data for instances

Now I need to create a graphics buffer that will store the data. Earlier, in the HeatmapObjectRenderer, I created a method that allows fetching this data:

HeatmapObjectRenderer.FetchInstanceData(UnsafeList<HeatmapObjectRendererData>* targetListPtr)

The method fills the unsafe list. So I need to create one in my render pass. I modified the render pass to store the list.

public unsafe class HeatmapRenderPass : ScriptableRenderPass  
{  
  // Added unsafe list  
  private UnsafeList<HeatmapObjectRendererData>* heatmapObjectRendererInstances = null;  
  ...  
  
  public HeatmapRenderPass(Material heatmapRenderMaterial)  
  {  
    // Allocating the list in the constructor  
    heatmapObjectRendererInstances = UnsafeList<HeatmapObjectRendererData>.Create(1024, Allocator.Persistent);  
  }  
  ...  
  
  public void Release()  
  {  
    // Deallocating the list.  
    UnsafeList<HeatmapObjectRendererData>.Destroy(heatmapObjectRendererInstances);  
    heatmapObjectRendererInstances = null;  
  }  
  ...  
}

Creating graphics buffer with instance data

Now I need to modify the RecordRenderGraph function to allocate the graphics buffer.

public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)  
{  
  ...
  
  // Prepare buffer descriptor  
  BufferDesc heatmapObjectInstanceBufferDesc = new BufferDesc(HeatmapObjectRenderer.InstanceCount, sizeof(HeatmapObjectRendererData), GraphicsBuffer.Target.Structured);  
  heatmapObjectInstanceBufferDesc.name = nameof(Uniforms._HeatmapObjectInstanceBuffer);  
    
  // Allocate the graphics buffer using the render graph API  
  BufferHandle heatmapObjectInstanceBuffer = renderGraph.CreateBuffer(heatmapObjectInstanceBufferDesc);

Then, it's time to fill the graphics buffer with the renderer data. I will use a compute pass to do that.

And yes, this compute pass doesn't really compute anything. It is here only to prepare the buffer.

// Using compute pass to set the buffer data  
using (var computeBuilder = renderGraph.AddComputePass<PassData>($"{nameof(HeatmapRenderPass)}_Compute", out PassData passData))  
{  
  // Ensure it is always executed  
  computeBuilder.AllowPassCulling(false);  
  
  // Notify that the pass will use the created instance buffer.  
  computeBuilder.UseBuffer(heatmapObjectInstanceBuffer, AccessFlags.ReadWrite);
  
  // Set render function  
  computeBuilder.SetRenderFunc((PassData passData, ComputeGraphContext context) =>  
  {  
    // Clear all instances  
    heatmapObjectRendererInstances->Clear();  
    
    // Fetch instance data into a low-level list  
    HeatmapObjectRenderer.FetchInstanceData(heatmapObjectRendererInstances);  
    
    // Set the buffer data.  
    context.cmd.SetBufferData(heatmapObjectInstanceBuffer, heatmapObjectRendererInstances);  
  });  
}

In the code above I used a custom command buffer extension that allowed me to use unsafe lists to set the buffer data. This is part of my private library, which I use to implement small utilities that make my life easier in Unity. Here is the snippet:

public unsafe static class ComputeCommandBufferExtensions  
{  
  // Uses unsafe API to set the graphics buffer data using the UnsafeList.  
  public static void SetBufferData<T>(this ComputeCommandBuffer cmd, GraphicsBuffer buffer, UnsafeList<T>* dataPtr) where T : unmanaged  
  {  
    // Reinterpret list as native array  
    var array = NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray<T>(dataPtr->Ptr, dataPtr->Length, Allocator.None);
    
    #if ENABLE_UNITY_COLLECTIONS_CHECKS  
      // Set safety handle - it fixes safety errors.  
      var safetyHandle = AtomicSafetyHandle.Create();  
      NativeArrayUnsafeUtility.SetAtomicSafetyHandle(ref array, safetyHandle);  
    #endif  
    
    // Set buffer data  
    cmd.SetBufferData(buffer, array);
    
    #if ENABLE_UNITY_COLLECTIONS_CHECKS  
      // Release safety handle.  
      AtomicSafetyHandle.Release(safetyHandle);  
    #endif  
  }  
}


Executing draw calls into a custom texture

Now that the graphics buffer is ready, I will render the objects into the custom texture. I will use a raster pass to do that. The goal is to:

  1. Set the previously allocated texture as a render attachment.

  2. Set all the shader properties, like matrices and instance buffer.

  3. Execute draw calls.

I will start by importing the previously allocated texture into the render graph, because all external resources used during rendering need to be imported into the render graph.

public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)  
{
  ...  
  
  // Access heatmap camera parameters  
  HeatmapCameraParameters heatmapParameters = heatmapCamera.Parameters;  
  
  // Convert its texture into RTHandle  
  var heatmapTextureRT = RTHandles.Alloc(heatmapParameters.heatmapTexture);  
  
  // Import RTHandle into render graph to be able to use it as a render attachment  
  var heatmapTextureHandle = renderGraph.ImportTexture(heatmapTextureRT);  
}

Then, I will create a render pass and declare render attachment (target texture) and used graphics buffer:

public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)  
{
  ...  
  var heatmapTextureHandle = renderGraph.ImportTexture(heatmapTextureRT);
  
  // Render the instances into a texture  
  using (var rasterBuilder = renderGraph.AddRasterRenderPass<PassData>($"{nameof(HeatmapRenderPass)}_Raster", out PassData passData))  
  {  
    // Set target texture.  
    rasterBuilder.SetRenderAttachment(heatmapTextureHandle, 0, AccessFlags.ReadWrite, 0, 0);  
    
    // Declare that I will use the instance buffer created before  
    rasterBuilder.UseBuffer(heatmapObjectInstanceBuffer, AccessFlags.Read);  
    
    // Ensure that the rendered texture is available for all shaders after rendering this pass  
    rasterBuilder.SetGlobalTextureAfterPass(heatmapTextureHandle, Uniforms._HeatmapTexture);  
  }  
}

The RenderGraph API requires creating a separate class that will hold all resources used by the render function. An object of this class is managed internally by the RenderGraph. So I will create a class named PassData, set its resources, and use them in the render function.

To execute draw calls, I need to have access to camera matrices, instance buffer, material with the proper shader, and instance count.

// Defines the resources used by the render function  
private class PassData  
{  
	public HeatmapCameraParameters cameraParameters;  
	public BufferHandle heatmapObjectInstanceBuffer;  
	public Material renderMaterial;  
	public int instanceCount;  
}
public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)  
{
	
	...  
	var heatmapTextureHandle = renderGraph.ImportTexture(heatmapTextureRT);
	
	using (var rasterBuilder = renderGraph.AddRasterRenderPass<PassData>($"{nameof(HeatmapRenderPass)}_Raster", out PassData passData))  
	{  
		rasterBuilder.SetRenderAttachment(heatmapTextureHandle, 0, AccessFlags.ReadWrite, 0, 0);  
		rasterBuilder.UseBuffer(heatmapObjectInstanceBuffer, AccessFlags.Read);  
		
		rasterBuilder.SetGlobalTextureAfterPass(heatmapTextureHandle, Uniforms._HeatmapTexture);  
		
		// Set the resources used by the render function  
		passData.cameraParameters = HeatmapCamera.MainInstance.Parameters;  
		passData.heatmapObjectInstanceBuffer = heatmapObjectInstanceBuffer;  
		passData.renderMaterial = heatmapRenderMaterial;  
		passData.instanceCount = HeatmapObjectRenderer.InstanceCount;  
		
		// Create the render function  
		rasterBuilder.SetRenderFunc((PassData passData, RasterGraphContext context) =>  
		{  
		// I will execute draw calls here using the resources in the passData.  
		});  
	}  
}

All the resources are ready, and the target texture is set. It's finally time to write the render function. I will get a material property block and set all the properties used by the shader - the view and projection matrices and the instance buffer.

rasterBuilder.SetRenderFunc((PassData passData, RasterGraphContext context) =>  
{  
	// Get the property block using the render graph API.  
	var propertyBlock = context.renderGraphPool.GetTempMaterialPropertyBlock();  
	  
	// Set instance buffer shader property  
	propertyBlock.SetBuffer(Uniforms._HeatmapObjectInstanceBuffer, passData.heatmapObjectInstanceBuffer);  
	  
	// Set view and projection matrices.  
	propertyBlock.SetMatrix(Uniforms._HeatmapMatrixV, passData.cameraParameters.worldToCameraMatrix);  
	propertyBlock.SetMatrix(Uniforms._HeatmapMatrixP, GL.GetGPUProjectionMatrix(passData.cameraParameters.projectionMatrix, true));  
});

I like to store the name of shader properties in a class like this:

public static class Uniforms  
{  
	public static readonly int _HeatmapObjectInstanceBuffer = Shader.PropertyToID(nameof(_HeatmapObjectInstanceBuffer));  
	public static readonly int _HeatmapMatrixV = Shader.PropertyToID(nameof(_HeatmapMatrixV));  
	public static readonly int _HeatmapMatrixP = Shader.PropertyToID(nameof(_HeatmapMatrixP));  
	public static readonly int _HeatmapTexture = Shader.PropertyToID(nameof(_HeatmapTexture));  
}

Now, I will execute 3 instanced draws.

  • First draw call - render into the red channel with additive blending.

  • Second draw call - render into the red channel with multiply blending.

  • Third draw call - render into the green channel with additive blending.

Each object instance in the buffer has a blend mode property that defines which pass should render it. So I will implement instance culling in the vertex shader - by moving culled vertices out of the frustum.

All the draws are instanced. Each instance is a single quad made of 6 vertices (2 triangles). I will use the DrawProcedural function, which requires no mesh - the quad vertices will be defined in the shader code.

rasterBuilder.SetRenderFunc((PassData passData, RasterGraphContext context) =>  
{  
  var propertyBlock = context.renderGraphPool.GetTempMaterialPropertyBlock();  
   
  propertyBlock.SetBuffer(Uniforms._HeatmapObjectInstanceBuffer, passData.heatmapObjectInstanceBuffer);  
   
  propertyBlock.SetMatrix(Uniforms._HeatmapMatrixV, passData.cameraParameters.worldToCameraMatrix);  
  propertyBlock.SetMatrix(Uniforms._HeatmapMatrixP, GL.GetGPUProjectionMatrix(passData.cameraParameters.projectionMatrix, true));  
   
  // Draw additive passes.  
  context.cmd.DrawProcedural(Matrix4x4.identity, passData.renderMaterial, 0, MeshTopology.Triangles, 6, passData.instanceCount, propertyBlock);
  // Draw multiply passes  
  context.cmd.DrawProcedural(Matrix4x4.identity, passData.renderMaterial, 1, MeshTopology.Triangles, 6, passData.instanceCount, propertyBlock);
  // Draw green color max passes  
  context.cmd.DrawProcedural(Matrix4x4.identity, passData.renderMaterial, 2, MeshTopology.Triangles, 6, passData.instanceCount, propertyBlock);  
});

Ok! The whole rendering code is ready... At least for the CPU.

Let's add the render feature to the renderer asset.

:center-px:

This is how HeatmapRendererFeature looks in the renderer asset. Notice that there is a field with a material - the shader in this material is used to render objects into the texture.

But I can't render anything without a shader...


___

4. Prepare shaders that will render into a texture

:center-px:


In this section, I will write shaders that render into a texture. I will create a shader, but I want to start with something super simple - I want to ensure that the feature is working correctly.


Render anything

In the first step, I will ensure that the render texture is set correctly and that objects can be drawn into it. I will define the XZ quad in a shader constant array and render it into the texture to see if it fills some of its pixels. The goal is basically to change some of the colors in the texture - nothing more.

Shader "Heatmap/Object"  
{  
   SubShader  
   {  
        Tags { "RenderType"="Transparent" "Queue"="Transparent" }  
       LOD 100
       // Pass 0  
       Pass  
       {  
            // No blending yet - just overwrite the target  
            Blend One Zero  
           ZTest Off  
           Cull Off  
           ZWrite Off
           HLSLPROGRAM
           #pragma vertex vert  
           #pragma fragment frag  
                     
           #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
           // Quad vertices in object space. 2 triangles.  
           static const float4 quadVerticesOS[] =  
           {  
               float4(-0.5, 0.0,-0.5, 1.0),  
               float4(-0.5, 0.0, 0.5, 1.0),  
               float4( 0.5, 0.0, 0.5, 1.0),  
               float4(-0.5, 0.0,-0.5, 1.0),  
               float4( 0.5, 0.0, 0.5, 1.0),  
               float4( 0.5, 0.0,-0.5, 1.0)  
           };
           // Vertex shader input - contains only vertex ID and instance ID  
           struct VertexData  
           {  
               uint instanceID : SV_InstanceID;  
               uint vertexID : SV_VertexID;  
           };  
             
           // No interpolators for fragment shader, just position on the screen  
           struct FragmentData  
           {  
               float4 positionCS_SV : SV_Position;  
           };
           // Vertex shader  
           FragmentData vert(VertexData input)  
           {  
               FragmentData output;
               // Read vertex position  
               float4 positionOS = quadVerticesOS[input.vertexID];
               // Display it as a quad on the screen  
               float4 positionCS = positionOS.xzyw;  
               output.positionCS_SV = positionCS;
               return output;  
           }
           // Fragment shader  
           float4 frag(FragmentData input) : SV_Target0  
           {  
               // Output yellow color  
               return float4(1.0, 1.0, 0.0, 1.0);  
           }
           ENDHLSL  
       }  
   }  
}

Ok, now I will create the material with this shader and assign it to my render feature.

:center-px:

I entered play mode and launched the frame debugger. Looks like the rendering works fine!

:center-px:

Wow, all of that just to see a square on the screen. My render feature is correctly executed by the render graph. It renders into the HeatmapCamera_RenderTexture I created, and the texture has the correct resolution and format. The draw call renders 157 instances.


Use Instance ID

Let's keep the frame debugger open and iterate on the shader. To make sure the texture is cleared each frame (it will be easier to debug), I added a texture clear at the start of the render function.

...
rasterBuilder.SetRenderFunc((PassData passData, RasterGraphContext context) =>  
{  
  // Clear the texture at the start of each frame - this change is temporary  
  context.cmd.ClearRenderTarget(false, true, Color.clear);
...

Let's use the instance ID to render instances in different places. I modified the shader to render smaller quads that are slightly offset by the instance ID to see if instancing works correctly.

// Vertex shader  
FragmentData vert(VertexData input)  
{  
   FragmentData output;
   float4 positionOS = quadVerticesOS[input.vertexID];
   float4 positionCS = positionOS.xzyw;
   positionCS.xy *= 0.05; // Render small quad  
   positionCS.xy += input.instanceID * 0.05; // Apply offset using instance ID
   output.positionCS_SV = positionCS;
   return output;  
}

Modified vertex shader code.

:center-px:

The frame debugger indicates that the texture was cleared, and then multiple quads were rendered.


Use instance buffer

Now that I know the instancing works correctly, it's time to access the renderer data of each instance and render them in the correct place in the texture. I will start by declaring the renderer data, buffer, and view-projection matrices at the beginning of the shader code. I set all those resources in my C# rendering code, so now I can access them in the shader.

// Data defined for each renderer  
struct HeatmapObjectRendererData  
{  
  float4x4 localToWorldMatrix;  
  float alpha;  
  int blendMode;  
};

// Buffer that stores the data of all instances  
StructuredBuffer<HeatmapObjectRendererData> _HeatmapObjectInstanceBuffer;

// View-projection matrices  
float4x4 _HeatmapMatrixV;  
float4x4 _HeatmapMatrixP;

Declaring instance buffer and view-projection matrices. Those properties should have the same layout and names as the properties set in the C# code.

// Vertex shader  
FragmentData vert(VertexData input)  
{  
	FragmentData output;
	
	// Access renderer data using the instance ID.  
	HeatmapObjectRendererData instanceData = _HeatmapObjectInstanceBuffer[input.instanceID];
	
	// Get vertex position in object space  
	float4 positionOS = quadVerticesOS[input.vertexID];
	
	// Convert vertex position from object space into world space using the matrix in the renderer data,  
	float4 positionWS = mul(instanceData.localToWorldMatrix, positionOS);
	
	// Convert world-space into view space using the view matrix.  
	float4 positionVS = mul(_HeatmapMatrixV, positionWS);
	
	// Convert view-space into clip space.  
	float4 positionCS = mul(_HeatmapMatrixP, positionVS);
	
	output.positionCS_SV = positionCS;
	
	return output;  
}

And in the frame debugger, I can now see that the quads are rendered in different positions. To be able to see that, I needed to make the region rendered by the camera smaller.

:center-px:


Draw texture on the screen

It would be nice to observe the content of this texture on the game screen. Let's modify the HeatmapCamera component to do that.

private void OnGUI()  
{  
	if (heatmapTexture != null)  
		GUI.DrawTexture(new Rect(0, 0, 512, 512), heatmapTexture, ScaleMode.ScaleToFit, false);  
}

I can see that the quads in the texture move like the characters on the screen. So it seems to be working properly.


Render blobs instead of quads

Let's make those quads render a smooth blob inside. I changed the blending to additive.

 
Pass  
{  
  // Changed blending to additive  
  Blend One One

Then I added an interpolator for UV.

struct FragmentData  
{  
	float4 positionCS_SV : SV_Position;
	
	// Added interpolator for UV  
	float2 uv : TEXCOORD0;  
};

And I rendered a blob using the UV calculated in the vertex shader.

FragmentData vert(VertexData input)  
{  
	FragmentData output;
	
	HeatmapObjectRendererData instanceData = _HeatmapObjectInstanceBuffer[input.instanceID];
	
	float4 positionOS = quadVerticesOS[input.vertexID];  
	float4 positionWS = mul(instanceData.localToWorldMatrix, positionOS);  
	float4 positionVS = mul(_HeatmapMatrixV, positionWS);  
	float4 positionCS = mul(_HeatmapMatrixP, positionVS);  
	output.positionCS_SV = positionCS;
	
	// Calculate UV using object space  
	// Object space is from (-0.5 to 0.5) in the XZ axis, so I just need to add 0.5 to make it 0-1 range.  
	float2 uv = positionOS.xz + 0.5f;  
	output.uv = uv;
	
	return output;  
}
float4 frag(FragmentData input) : SV_Target0  
{    
	// Calculate the blob intensity  
	float distanceToCenter = distance(input.uv, float2(0.5f, 0.5f));  
	float blob = smoothstep(0.5, 0.0, distanceToCenter);
	
	// Draw blob into red channel  
	return float4(blob, 0.0, 0.0, 0.0);  
}

Let's see the blobs in action.


Use instance alpha

Each renderer has its own alpha value, which determines the blending intensity. Let's implement it. I will forward the alpha value from the vertex shader into the fragment shader using an interpolator.

struct FragmentData  
{  
	...
	
	// Added alpha interpolator  
	float alpha : TEXCOORD1;  
};

FragmentData vert(VertexData input)  
{  
	FragmentData output;
	
	HeatmapObjectRendererData instanceData = _HeatmapObjectInstanceBuffer[input.instanceID];  
	...  
	  
	// Setting the interpolator  
	output.alpha = instanceData.alpha;  
	  
	...  
	return output;  
}
float4 frag(FragmentData input) : SV_Target0  
{    
	...
	
	// Include the alpha in the blending  
	return float4(blob * input.alpha, 0.0, 0.0, 0.0);  
}


Render 3 passes

My goal was to render trails using 3 passes. Each renderer component has a blend mode property that defines which pass it needs to use:

  1. Render into red channel - additive blending

  2. Render into red channel - multiply blending

  3. Render into green channel - additive blending

I will start by moving all the common code into a shared HLSLINCLUDE section in the shader file. This is how the shader looks right now:

Shader "Heatmap/Object"  
{  
	HLSLINCLUDE  
	 
	// Code in the HLSLINCLUDE section will be included in all the passes. I moved the vertex shader and all shader properties here.
	
	struct HeatmapObjectRendererData  
	...
	
	static const float4 quadVerticesOS[] =  
	...
	
	struct VertexData  
	...  
	 
	struct FragmentData  
	...
	
	// Renamed the vertex shader name  
	FragmentData vertShared(VertexData input)  
	{  
	   ...  
	}
	
	ENDHLSL
	
	SubShader  
	{  
		Tags { "RenderType"="Transparent" "Queue"="Transparent" }  
		LOD 100
		
		// Pass 0  
		Pass  
		{  
			Blend One One  
			ZTest Off  
			Cull Off  
			ZWrite Off
			
			HLSLPROGRAM
			
			#pragma vertex vert  
			#pragma fragment frag  
			
			#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
			
			// Moved all the code from here.
			
			float4 frag(FragmentData input) : SV_Target0  
			{    
				float distanceToCenter = distance(input.uv, float2(0.5f, 0.5f));  
				float blob = smoothstep(0.5, 0.0, distanceToCenter);
				
				return float4(blob * input.alpha, 0.0, 0.0, 0.0);  
			}
			
			ENDHLSL  
		}  
	}  
}

Now I want to modify the vertex shader to cull instances for a specific pass. I will introduce an argument in the vertex shader, int blendMode, to conditionally move vertices off the screen.

HLSLINCLUDE
...
// Added blendMode argument  
FragmentData vertShared(VertexData input, int blendMode)  
{  
	FragmentData output;
	
	...
	
	// If blendMode mismatches, move vertices outside of the screen  
	if (instanceData.blendMode != blendMode)  
		output.positionCS_SV.xyzw = float4(2.0, 2.0, 1.0, 1.0);
	
	return output;  
}  
...
ENDHLSL

The rendered clip-space area spans -1.0 to 1.0. Setting vertices to (2.0, 2.0, 1.0, 1.0) places them outside that range, so the GPU culls the triangles before they are rasterized.

However, this vertex shader is now more like a utility function. Let's use that in my pass.

Pass  
{  
...  
  
HLSLPROGRAM
#pragma vertex vert  
#pragma fragment frag  
   
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
// Here, the vertex shader only executes the shared vertex shader, but provides additional arguments.  
FragmentData vert(VertexData input)  
{  
	return vertShared(input, 0);  
}
float4 frag(FragmentData input) : SV_Target0  
{    
	float distanceToCenter = distance(input.uv, float2(0.5f, 0.5f));  
	float blob = smoothstep(0.5, 0.0, distanceToCenter);
	
	return float4(blob * input.alpha, 0.0, 0.0, 0.0);  
}
ENDHLSL  
}


Let's adjust all the passes accordingly. Additive blend into the red channel:

// Pass 0  
Pass  
{  
	Blend One One // Additive blending.  
	...  
	ColorMask R // Render only red channel
	
	HLSLPROGRAM  
	...
	
	FragmentData vert(VertexData input)  
	{  
		return vertShared(input, 0); // Render only 0 blend mode.  
	}
	
	float4 frag(FragmentData input) : SV_Target0  
	{    
		float distanceToCenter = distance(input.uv, float2(0.5f, 0.5f));  
		float blob = smoothstep(0.5, 0.0, distanceToCenter);
		
		return float4(blob * input.alpha, 0.0, 0.0, 0.0); // Render into red channel  
	}
	
	ENDHLSL  
}


Multiply blend into the red channel:

// Pass 1  
Pass  
{  
	Blend DstColor Zero // Multiply blend.  
	...  
	ColorMask R // Only red channel
	
	HLSLPROGRAM  
	...  
	
	FragmentData vert(VertexData input)  
	{  
		return vertShared(input, 1); // Render only instances with blendMode 1.  
	}
	
	float4 frag(FragmentData input) : SV_Target0  
	{    
		float distanceToCenter = distance(input.uv, float2(0.5f, 0.5f));  
		float blob = smoothstep(0.5, 0.0, distanceToCenter);
		
		// Invert the blob for the multiply blend.  
		float value = 1.0 - blob;  
		value = lerp(1.0, value, input.alpha); // Ensure that when the alpha is 0, the returned value is 1.  
		return float4(value, 1.0, 1.0, 1.0);  
	}
	
	ENDHLSL  
}


Additive blend into the green channel:

// Pass 2  
Pass  
{  
	Blend One One // Additive blending.  
	..  
	ColorMask G // Only green channel
	
	HLSLPROGRAM  
	...  
	
	FragmentData vert(VertexData input)  
	{  
		return vertShared(input, 2); // Render only instances with blendMode 2  
	}
	
	float4 frag(FragmentData input) : SV_Target0  
	{    
		float distanceToCenter = distance(input.uv, float2(0.5f, 0.5f));  
		float blob = smoothstep(0.5, 0.0, distanceToCenter);
		
		return float4(0.0, blob * input.alpha, 0.0, 0.0); // Render blob into green channel  
	}
	
	ENDHLSL  
}


Now it's time to disable the temporary texture clearing. I modified the C# rendering code to skip the texture clear at the beginning of the frame.
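In practice, that just means removing the ClearRenderTarget call added earlier. If you prefer to keep it around for debugging, it can be guarded with a flag - a small sketch, where the clearTextureEveryFrame field is my own addition to the render pass:

// Optional debug flag on the render pass - not required for the feature  
private bool clearTextureEveryFrame = false;
...
rasterBuilder.SetRenderFunc((PassData passData, RasterGraphContext context) =>  
{  
  // Only clear while debugging - trails must persist between frames  
  if (clearTextureEveryFrame)  
    context.cmd.ClearRenderTarget(false, true, Color.clear);
  ...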


___

5. Use the texture in other shaders

With the texture rendering complete, I can now sample it in the shader.

I will create a plane that covers the gameplay area, and I will display the content of this texture here:

To better visualize the feature, I will modify the debug texture display in OnGUI() of HeatmapCamera so it is only shown conditionally. I don't want the debug texture view to cover the game content now.

// Added a toggle in the component's inspector
[SerializeField] private bool displayDebugGUI;
private void OnGUI()
{
  // When the toggle is off, don't render the texture on the screen.
  if (!displayDebugGUI)
    return;
  
  if (heatmapTexture != null)
    GUI.DrawTexture(new Rect(0, 0, 512, 512), heatmapTexture, ScaleMode.ScaleToFit, false);
}

Then, I will create a shader that will render the texture content in world space. I will start from this template.

Shader "Heatmap/TextureDebug"
{
    SubShader
    {
        // Pass 0
        Pass 
        {
            // Alpha blending
            Blend SrcAlpha OneMinusSrcAlpha
            ZTest LEqual
            Cull Off
            ZWrite Off
            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
            struct FragmentData
            {
                float4 positionCS_SV : SV_Position;
                float3 positionWS : TEXCOORD0;
            };
            FragmentData vert(float3 positionOS : POSITION)
            {
                FragmentData output;
                //The vertex shader here converts the object space into world space, then into clip space.
                // World space position is used as an interpolator and forwarded to the fragment shader.
                output.positionWS = TransformObjectToWorld(positionOS.xyz);
                output.positionCS_SV = TransformWorldToHClip(output.positionWS);
                return output;
            }
            float4 frag(FragmentData input) : SV_Target0
            {   
                // Render world-space position as color
                return float4(frac(input.positionWS.xzy), 0.9);
            }
            ENDHLSL
        }
    }
}

This is how this shader looks in action. It displays the world-space position as a UV grid.

Then I need to access the heatmap matrices and the rendered texture. Let's define them in the shader code, before the vertex shader.

...
// View-projection matrices - will be used to calculate the UV
float4x4 _HeatmapMatrixV;
float4x4 _HeatmapMatrixP;
// Heatmap texture
Texture2D _HeatmapTexture;
SamplerState linearClampSampler;
...

Then, let's use that to sample the texture. I can use the view and projection matrices to convert a world-space position into a texture clip-space position. This code is in the fragment shader:

float4 frag(FragmentData input) : SV_Target0
{   
	// Convert world-space position to heatmap normalized clip-space
	float4 positionCS = mul(_HeatmapMatrixP, mul(_HeatmapMatrixV, float4(input.positionWS.xyz, 1.0)));
	positionCS /= positionCS.w;

The X and Y components of the normalized clip space are in the -1 to 1 range. I can remap those to 0-1 to create a texture UV:

// Clip space is from -1 to 1 so convert it to 0-1 to get the texture UV.
float2 uv = positionCS.xy * 0.5 + 0.5;

Then it's time to sample the texture and display its content.

// Sample the texture
float4 heatmapValue = _HeatmapTexture.SampleLevel(linearClampSampler, uv.xy, 0.0);

// And display the texture as colors with subtle transparency.
return float4(heatmapValue.rgb, 0.8);

This is how it looks in action. With that, the core of this feature is now complete.


___

What's next?

In the next article, I will use this texture to implement volumetric fog that is being dispersed by the red channel of the rendered texture.


___

Bonus - What about the performance?

Rendering additional stuff during the frame always adds some performance overhead. Let's measure it in this case.

First of all, I've added a profiler marker to see how much time per frame the CPU uses to prepare the instances:

public unsafe class HeatmapObjectRenderer : MonoBehaviour
{
    ...
    private static readonly ProfilerMarker updateInstanceProfilerMarker = new ProfilerMarker(nameof(updateInstanceProfilerMarker));
    public static void UpdateInstances()
    {
        using (updateInstanceProfilerMarker.Auto())
        {
            // Iterate through each instance
            for (int i = 0; i < allInstances.Count; i++)
                allInstances[i].UpdateData(); 
        }
    }
    ...

For the performance measurements, I created 300 HeatmapObjectRenderer instances. I assume this is a reasonable count for intensive gameplay.
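For reference, such a stress test can be generated with a small helper like this - a hypothetical script; the prefab field and area size are assumptions:

public class HeatmapStressTest : MonoBehaviour  
{  
  public GameObject heatmapObjectPrefab; // Prefab with a HeatmapObjectRenderer attached  
  public int count = 300;  
  public float areaSize = 400.0f;

  private void Start()  
  {  
    // Spawn the renderers at random positions on the XZ plane  
    for (int i = 0; i < count; i++)  
    {  
      var position = new Vector3(  
        Random.Range(-0.5f, 0.5f) * areaSize,  
        0.0f,  
        Random.Range(-0.5f, 0.5f) * areaSize);  
      Instantiate(heatmapObjectPrefab, position, Quaternion.identity);  
    }  
  }  
}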

Then, I measured the CPU usage using Unity's profiler, and it appears to use 0.29ms per frame on average.

:center-px:

0.29 ms per frame for updating 300 objects on an i5-10400F CPU. For me, this is BAD PERFORMANCE - ~300 objects per frame should update in at most 0.05 ms on such a CPU. I will fix the problem in one of the next articles.

The render feature itself adds up to 0.06ms, which is high, but reasonable.

:center-px:

Now, let's measure the GPU times. I used Nvidia Nsight and measured the rendering of 300 heatmap objects: 0.03 ms per frame for rendering into a 4K texture on an RTX 3060. Not bad, considering that all of the objects are always rendered and there is no culling :)

If you want to know more about how I profile the GPU, look at this article:

https://www.proceduralpixels.com/blog/how-to-profile-the-rendering-gpu-profiling-basics

:center-px:














Hungry for more?

I share rendering and optimization insights every week.


I write expert content on optimizing Unity games, customizing rendering pipelines, and enhancing the Unity Editor.

Copyright © 2025 Jan Mróz | Procedural Pixels
