Tutorial
Rendering into custom texture in Unity
Sep 17, 2025
30 minutes
A month ago, a friend of mine asked me if I wanted to help him improve the visuals of the game he develops in his free time. After discussing a few ideas, we decided to implement volumetric fog that would be dispersed by running characters. Additionally, we aimed to add zen gardens and create an effect where characters mess up the sand when running through. In this article series, I explain my thought process behind creating such effects. Here, I will focus only on rendering game metadata, like character trails, into a texture.
What is the goal?
This is an advanced article. Rendering custom data into a texture requires writing a lot of code and managing many moving parts.
My overall goal is to implement fog and zen gardens. Throughout this article series, I'll take this:
And I will achieve this.
:image-description:
EPIC!
___
How to approach this effect
Let's imagine that the effect above is only in my head. To render fog or the sand, I need access to data about where the characters have been. Therefore, it is intuitive for me to start implementing the effect by rendering character trails into a top-down texture.
Gameplay takes place on the XZ plane, with Y representing the upward direction.

Enemies are implemented by instancing prefabs that move on the map.

Imagine a terrain with a texture, where each enemy is represented by a quad that renders into this texture - like brushes running across a big painting canvas.

So, as an initial step, I will just implement this effect - rendering into a texture and sampling it in another shader:
This is the resulting texture.

Each character will render trails using the red and green channels. In later articles, I will use the red channel to disperse fog, and the green channel to mess up the sand.
___
How to render into a texture
To render custom content into a texture I could use a built-in camera with a texture as a target. That seems easy, no? I would simply add renderers, create another camera that renders only these, and it's done.
No. To render trails, I need the content of the texture to be preserved between frames, but Unity ignored my camera settings and always cleared the texture content at the beginning of each frame.
:center-px:

Unity does not respect this setting when rendering into a custom texture. For Unity, it is more like a suggestion than a rule.
I could solve that by:
Modifying the render pipeline source code to respect that setting.
Implementing custom rendering into the texture.
I chose the second option: custom rendering into texture. Here are my reasons:
I am implementing this feature for another team and do not want to leave them maintaining a modified copy of the render pipeline.
I want to render all objects as quads using a simple shader. This is ideal for efficient instanced rendering. The built-in camera has high overhead, while custom rendering does only what's necessary.
___
Rendering into texture - implementation
I created another article explaining what is required to render custom content into a texture and why I think this is the most important skill each technical artist can master. You can read it here:
https://www.proceduralpixels.com/blog/rendering-into-texture-the-most-important-ta-skill
TLDR: to render custom content into a texture, I need to:
Track and filter objects to render
Define a camera - create a view-projection matrix, create a texture
Execute draw calls
Prepare shaders that will render into a texture
Use the rendered texture in other shaders

To achieve this, the idea is to render in 3 passes.
The first pass will render into the red channel using additive blending. It will ensure that all the trails stay in the texture.
The second pass will render into the red channel using multiply blending. It will allow me to slowly fade the trails away.
The third pass will render into the green channel using additive blending - again, this will allow the trails to stay in the texture.
I will go through each step and implement it below. However, I warn you - this will be a lengthy journey through the code. Let's dive in!
___
1. Track and filter objects to render
:center-px:

The goal of this step is to have a single collection with all the objects that need to be rendered - preferably a list or an array of structs that can later be used to fill the graphics buffer.
I want to render objects using quads, so I need to have information about their position, rotation, and scale. Additionally, I would like to control which channel is rendered and its alpha value to adjust the trail intensity.
The position, rotation and scale can be stored in a single matrix, so I will just use that.
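A minimal sketch of such a struct (the exact field and enum names don't matter, as long as the memory layout matches the struct declared later in the shader):

```csharp
using UnityEngine;

// Which of the 3 render passes this instance uses.
public enum HeatmapBlendMode
{
    RedAdditive = 0,
    RedMultiply = 1,
    GreenAdditive = 2,
}

// Per-instance data uploaded to the GPU.
[System.Serializable]
public struct HeatmapRendererData
{
    public Matrix4x4 localToWorld; // position, rotation and scale in a single matrix
    public float alpha;            // trail intensity
    public int blendMode;          // HeatmapBlendMode stored as an int for the GPU
}
```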
This is the data of each rendered object.
Ok, let's create a component that will be responsible for tracking the objects. To follow Unity convention, I will name it HeatmapObjectRenderer. Notice that I store a static list of all renderers as well as a low-level list of renderer data. The list of renderers will be used to update the low-level data, and the low-level data will later be used to fill the graphics buffer used in the shader.
I use low-level, unsafe collections to enable the use of this code with the Burst compiler for optimization purposes, if needed.
Ok, so the renderer data layout is now ready. I will use OnEnable and OnDisable to register/unregister the renderer.
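A sketch of the component, assuming the HeatmapRendererData struct from above and the UnsafeList type from Unity.Collections:

```csharp
using System.Collections.Generic;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using UnityEngine;

public class HeatmapObjectRenderer : MonoBehaviour
{
    // All enabled renderers - used to refresh the low-level data every frame.
    public static readonly List<HeatmapObjectRenderer> Renderers = new();

    // Low-level copy of the per-instance data, later copied into a graphics buffer.
    public static UnsafeList<HeatmapRendererData> RendererData;

    public HeatmapBlendMode blendMode = HeatmapBlendMode.RedAdditive;
    [Range(0f, 1f)] public float alpha = 1f;

    void OnEnable()
    {
        if (!RendererData.IsCreated)
            RendererData = new UnsafeList<HeatmapRendererData>(64, Allocator.Persistent);
        Renderers.Add(this);
    }

    void OnDisable()
    {
        Renderers.Remove(this);
        if (Renderers.Count == 0 && RendererData.IsCreated)
            RendererData.Dispose();
    }
}
```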
When enabled, the component allocates data and registers itself in a static collection. When disabled, it unregisters and deallocates the data.
If enemies are moving, I need a function that will update the data of all instances before they are rendered. I will also implement a function that I will use to fetch the data for rendering.
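A sketch of both methods (the names are placeholders; the idea is what matters):

```csharp
// Added to HeatmapObjectRenderer - refreshes the low-level data
// from the transforms of all registered renderers.
public static void UpdateRendererData()
{
    if (!RendererData.IsCreated)
        return;

    RendererData.Clear();
    foreach (var renderer in Renderers)
    {
        RendererData.Add(new HeatmapRendererData
        {
            localToWorld = renderer.transform.localToWorldMatrix,
            alpha = renderer.alpha,
            blendMode = (int)renderer.blendMode,
        });
    }
}

// Copies the current data into a list owned by the rendering code.
public static void GetRendererData(ref UnsafeList<HeatmapRendererData> destination)
{
    UpdateRendererData();
    destination.Clear();
    for (int i = 0; i < RendererData.Length; i++)
        destination.Add(RendererData[i]);
}
```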
Perfect. Let's add this component to an enemy prefab.

The component is added, but there is no visual feedback in the scene view to show that this object renders into a texture. It is good practice to implement gizmos for such components. I assumed the quad will be drawn on the XZ plane, spanning from (-0.5, 0, -0.5) to (0.5, 0, 0.5) in object space.
This is the gizmo I came up with; it draws a red quad with a transparent circle inside:
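A sketch of that gizmo (the flattened sphere is just a cheap way to get a disc):

```csharp
// Added to HeatmapObjectRenderer - the quad spans (-0.5, 0, -0.5)..(0.5, 0, 0.5)
// in object space, so the gizmo draws exactly that.
void OnDrawGizmos()
{
    Gizmos.matrix = transform.localToWorldMatrix;

    Gizmos.color = Color.red;
    Gizmos.DrawLine(new Vector3(-0.5f, 0f, -0.5f), new Vector3( 0.5f, 0f, -0.5f));
    Gizmos.DrawLine(new Vector3( 0.5f, 0f, -0.5f), new Vector3( 0.5f, 0f,  0.5f));
    Gizmos.DrawLine(new Vector3( 0.5f, 0f,  0.5f), new Vector3(-0.5f, 0f,  0.5f));
    Gizmos.DrawLine(new Vector3(-0.5f, 0f,  0.5f), new Vector3(-0.5f, 0f, -0.5f));

    // Transparent circle inside - a sphere squashed onto the XZ plane.
    Gizmos.color = new Color(1f, 0f, 0f, 0.25f);
    Gizmos.matrix = transform.localToWorldMatrix * Matrix4x4.Scale(new Vector3(1f, 0.001f, 1f));
    Gizmos.DrawSphere(Vector3.zero, 0.5f);
}
```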
Gizmo that draws the red quad.

The red quad is now visible. This is the quad shape I want to render into the texture.
Now, all the code for tracking the objects is ready. Time to implement a custom camera.
___
2. Define a camera
:center-px:

In this step, I will implement a custom component that will represent my camera.
In this component, I want to allocate the texture and define the view and projection matrices.
The view matrix and the projection matrix will define what area of the world this camera renders.
Let's start with this snippet.
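A minimal sketch of the component (the resolution field and its default value are assumptions):

```csharp
using UnityEngine;

public class HeatmapCamera : MonoBehaviour
{
    [SerializeField] int textureResolution = 4096;

    // Lets the render feature find the active heatmap camera.
    public static HeatmapCamera Instance { get; private set; }
}
```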
Now I will allocate the target texture. Similarly to the renderers, I will allocate data in OnEnable and deallocate it in OnDisable. The texture will store half-precision RGBA content.
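A sketch of the allocation, using a square ARGBHalf render texture:

```csharp
// Added to HeatmapCamera.
public RenderTexture TargetTexture { get; private set; }

void OnEnable()
{
    Instance = this;
    TargetTexture = new RenderTexture(textureResolution, textureResolution, 0,
        RenderTextureFormat.ARGBHalf)
    {
        name = "HeatmapCamera_RenderTexture",
        filterMode = FilterMode.Bilinear,
        wrapMode = TextureWrapMode.Clamp,
    };
    TargetTexture.Create();
}

void OnDisable()
{
    if (Instance == this)
        Instance = null;

    if (TargetTexture != null)
    {
        TargetTexture.Release();
        TargetTexture = null;
    }
}
```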
Ok, the easy part of a camera is done. Now I need to create view and projection matrices.
The view matrix converts a world-space position into camera space. The role of the projection matrix is to convert this camera space into clip space, which defines the position of the objects on the screen.
I want to create a setup that:
Renders an orthographic frustum.
Everything between (-1.0, -1.0, -1.0) and (1.0, 1.0, 1.0) in camera space is rendered into the texture.
The camera renders along its Z axis: the object-space X axis is left-right, the Y axis is up-down, and the Z axis is forward.
Let's start with the projection matrix. Unity has a nice Matrix4x4.Ortho() function to create orthographic projection matrices. Let's use that.
I noticed that Matrix4x4.Ortho() can't use negative near/far plane values, so I moved the projection matrix back a little using a translation matrix.
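A sketch of that matrix, assuming an OpenGL-style view space (looking down -Z) and leaving any platform-specific conversion out of the picture:

```csharp
// Added to HeatmapCamera. Builds an orthographic projection that keeps
// everything between (-1,-1,-1) and (1,1,1) in camera (object) space.
Matrix4x4 CreateProjectionMatrix()
{
    // Ortho() does not accept negative near/far planes, so use 0..2 and shift
    // camera-space z from [-1, 1] into [-2, 0] first - the range this
    // OpenGL-style ortho matrix keeps, since it looks down -Z.
    Matrix4x4 ortho = Matrix4x4.Ortho(-1f, 1f, -1f, 1f, 0f, 2f);
    Matrix4x4 pushBack = Matrix4x4.Translate(new Vector3(0f, 0f, -1f));

    // Depending on the platform you may also need GL.GetGPUProjectionMatrix()
    // before handing this matrix to the GPU.
    return ortho * pushBack;
}
```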
Let's think about the view matrix. The view matrix is used to convert a world-space position into camera space. Therefore, if the component is attached to a transform, the object space of that transform is equivalent to my camera space.
So I can just use transform.worldToLocalMatrix as my view matrix.
Now it's time to set the parameters that will be publicly available for rendering.
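A sketch of those parameters:

```csharp
// Added to HeatmapCamera - everything the rendering code needs.
public Matrix4x4 ViewMatrix => transform.worldToLocalMatrix;
public Matrix4x4 ProjectionMatrix => CreateProjectionMatrix();
public Matrix4x4 ViewProjectionMatrix => ProjectionMatrix * ViewMatrix;
```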
Time to assign the created component. I made a prefab to store the heatmap camera and attached the component. Since the camera's object space defines the rendered content, I can use its transform to control how much of the world is rendered into the texture.

I set up the camera to render 400 units in width/height, and 200 units in depth.
However, it is currently impossible to see whether the frustum is set up properly, so I will implement a gizmo. Because the camera renders the content inside an object-space box, the gizmo can simply draw that box.
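A sketch - the rendered region is a (-1..1) box in the camera's object space, so a wire cube is enough:

```csharp
// Added to HeatmapCamera.
void OnDrawGizmosSelected()
{
    Gizmos.matrix = transform.localToWorldMatrix;
    Gizmos.color = Color.cyan;
    Gizmos.DrawWireCube(Vector3.zero, Vector3.one * 2f);
}
```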
Much better!

Ok. I have a collection of objects to render, an allocated texture, and the view and projection matrices. Time to execute some draw calls!
___
3. Execute draw calls
:center-px:

I'm working with Unity 6000.0.39f1 with URP. Therefore, I will implement a ScriptableRendererFeature that injects a custom pass into the RenderGraph.
I will start by implementing the ScriptableRendererFeature. This is boilerplate code required by Unity; the only thing it does is inject a custom pass into the render pipeline. The meat is implemented later.
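A sketch of that boilerplate, assuming a HeatmapRenderPass class that is implemented in the next steps:

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class HeatmapRendererFeature : ScriptableRendererFeature
{
    // The shader in this material renders the objects into the heatmap texture.
    [SerializeField] Material heatmapMaterial;

    HeatmapRenderPass renderPass;

    public override void Create()
    {
        renderPass = new HeatmapRenderPass(heatmapMaterial)
        {
            renderPassEvent = RenderPassEvent.BeforeRenderingOpaques
        };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        if (heatmapMaterial != null)
            renderer.EnqueuePass(renderPass);
    }
}
```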
First of all, I want to explain how I want to implement the rendering:
I don't want to render anything if there is no HeatmapCamera component or if there are no objects to be rendered.
I want to have a graphics buffer that will store the data of all objects to render.
I want to use instanced rendering to draw all objects at once.
In the first pass, I want to render objects into the red channel additively.
In the second pass, I want to render objects into the red channel with a multiply blend.
In the third pass, I want to render into the green channel additively.
Starting to implement the rendering
I will implement the whole rendering in the RecordRenderGraph method.
I mentioned that I don't want to render anything when there is no camera component present or no objects to render, so let's start by implementing that.
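A sketch of that early-out (using UnityEngine.Rendering and UnityEngine.Rendering.RenderGraphModule):

```csharp
// Inside HeatmapRenderPass : ScriptableRenderPass.
public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
{
    HeatmapCamera heatmapCamera = HeatmapCamera.Instance;

    // Nothing to do without a heatmap camera or without objects to render.
    if (heatmapCamera == null || heatmapCamera.TargetTexture == null)
        return;
    if (HeatmapObjectRenderer.Renderers.Count == 0)
        return;

    // ... the buffer upload and draw passes are added in the next steps.
}
```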
Allocate data for instances
Now I need to create a graphics buffer that will store the data. A while ago, in the HeatmapObjectRenderer, I created a method that fetches this data:
The method fills an unsafe list, so I need to create one in my render pass. I modified the render pass to store the list.
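A sketch of that change:

```csharp
// Inside HeatmapRenderPass - a persistent list that is reused every frame.
UnsafeList<HeatmapRendererData> instanceData =
    new UnsafeList<HeatmapRendererData>(64, Allocator.Persistent);

// Called by the renderer feature when the pass is no longer needed.
public void Dispose()
{
    if (instanceData.IsCreated)
        instanceData.Dispose();
}
```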
Creating graphics buffer with instance data
Now I need to modify the RecordRenderGraph function to allocate the graphics buffer.
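A sketch, assuming a transient structured buffer sized to the instance list:

```csharp
// Inside RecordRenderGraph, after the early-out.
HeatmapObjectRenderer.GetRendererData(ref instanceData);
int instanceCount = instanceData.Length;

// Structured buffer that will hold the per-instance data for this frame.
var bufferDesc = new BufferDesc(instanceCount, UnsafeUtility.SizeOf<HeatmapRendererData>())
{
    name = "HeatmapInstanceBuffer",
    target = GraphicsBuffer.Target.Structured,
};
BufferHandle instanceBuffer = renderGraph.CreateBuffer(bufferDesc);
```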
Then, it's time to fill the graphics buffer with the renderer data. I will use a compute pass to do that.
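A sketch of that pass - it only uploads the CPU data into the buffer, using the command buffer extension shown a bit further below:

```csharp
// Class-scope pass data for the upload pass.
class UploadPassData
{
    public BufferHandle instanceBuffer;
    public UnsafeList<HeatmapRendererData> instances;
}

// Inside RecordRenderGraph, right after creating the buffer.
using (var builder = renderGraph.AddComputePass<UploadPassData>("Upload Heatmap Instances", out var passData))
{
    passData.instanceBuffer = instanceBuffer;
    passData.instances = instanceData;

    builder.UseBuffer(passData.instanceBuffer, AccessFlags.Write);
    builder.AllowPassCulling(false);

    builder.SetRenderFunc((UploadPassData data, ComputeGraphContext context) =>
    {
        // SetBufferData here is the custom extension that accepts an UnsafeList.
        context.cmd.SetBufferData(data.instanceBuffer, data.instances);
    });
}
```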
And yes, this compute pass doesn't really compute anything. It is here only to prepare the buffer.
In the code above I used a custom command buffer extension that allowed me to use unsafe lists to set the buffer data. This is part of my private library, which I use to implement small utilities that make my life easier in Unity. Here is the snippet:
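Roughly, such an extension can look like this - it views the UnsafeList memory as a temporary NativeArray so a regular SetBufferData overload can be used without copying (a sketch, assuming the compute pass command buffer exposes the usual SetBufferData overloads):

```csharp
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using UnityEngine;
using UnityEngine.Rendering;

public static class HeatmapCommandBufferExtensions
{
    public static unsafe void SetBufferData<T>(this ComputeCommandBuffer cmd,
        GraphicsBuffer buffer, in UnsafeList<T> list) where T : unmanaged
    {
        // Reinterpret the UnsafeList memory as a NativeArray without copying.
        var array = NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray<T>(
            list.Ptr, list.Length, Allocator.None);
#if ENABLE_UNITY_COLLECTIONS_CHECKS
        NativeArrayUnsafeUtility.SetAtomicSafetyHandle(ref array,
            AtomicSafetyHandle.GetTempUnsafePtrSliceHandle());
#endif
        cmd.SetBufferData(buffer, array);
    }
}
```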
Executing draw calls into a custom texture
Now that the graphics buffer is ready, I will render the objects into the custom texture using a raster pass. The goal is to:
Set the previously allocated texture as a render attachment.
Set all the shader properties, like matrices and instance buffer.
Execute draw calls.
I will start by importing the previously allocated texture into the render graph, because all external resources used during rendering need to be imported into the render graph.
Then, I will create a render pass and declare the render attachment (target texture) and the graphics buffer it uses.
The RenderGraph API requires creating a separate class that holds all resources used by the render function; an instance of this class is managed internally by the RenderGraph. So I will create a class named PassData, set its resources, and use them in the render function.
To execute draw calls, I need to have access to camera matrices, instance buffer, material with the proper shader, and instance count.
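Putting those pieces together, a sketch of the pass setup (in real code the RTHandle should be cached instead of allocated every frame):

```csharp
// Class-scope in HeatmapRenderPass - holds everything the render function needs.
// Instances of this class are managed internally by the RenderGraph.
class PassData
{
    public Material material;
    public BufferHandle instanceBuffer;
    public int instanceCount;
    public Matrix4x4 viewMatrix;
    public Matrix4x4 projectionMatrix;
}

// Inside RecordRenderGraph, after the compute pass. External resources have to
// be imported before the graph can track them.
RTHandle targetHandle = RTHandles.Alloc(heatmapCamera.TargetTexture);
TextureHandle targetTexture = renderGraph.ImportTexture(targetHandle);

using (var builder = renderGraph.AddRasterRenderPass<PassData>("Render Heatmap Objects", out var passData))
{
    passData.material = heatmapMaterial; // the material passed into the pass constructor
    passData.instanceBuffer = instanceBuffer;
    passData.instanceCount = instanceCount;
    passData.viewMatrix = heatmapCamera.ViewMatrix;
    passData.projectionMatrix = heatmapCamera.ProjectionMatrix;

    builder.SetRenderAttachment(targetTexture, 0);
    builder.UseBuffer(passData.instanceBuffer, AccessFlags.Read);
    builder.AllowPassCulling(false);

    builder.SetRenderFunc((PassData data, RasterGraphContext context) => ExecutePass(data, context));
}
```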
All the resources are ready, and the target texture is set. It's finally time to write the render function. I will get a material property block and set all the properties used by the shader - the view and projection matrices and the instance buffer.
I like to store the names of shader properties in a class like this:
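For example (the property names here are placeholders - they just have to match the shader):

```csharp
using UnityEngine;

// Cached shader property IDs.
static class HeatmapShaderIDs
{
    public static readonly int ViewMatrix = Shader.PropertyToID("_HeatmapViewMatrix");
    public static readonly int ProjectionMatrix = Shader.PropertyToID("_HeatmapProjectionMatrix");
    public static readonly int InstanceBuffer = Shader.PropertyToID("_HeatmapInstances");
}
```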
Now, I will execute 3 instanced draws.
First draw call - render into the red channel with additive blending.
Second draw call - render into the red channel with multiply blending.
Third draw call - render into the green channel with additive blending.
Each object instance in the buffer has a property that defines whether it should be rendered. So I will implement instance culling in the vertex shader - by moving culled vertices out of the frustum.
All the draws are instanced. Each instance is a single quad made of 6 vertices (2 triangles). I will use the DrawProcedural function, which requires no mesh - the quad vertices will be defined in the shader code.
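A sketch of the render function, assuming the shader passes come in the order listed above:

```csharp
// Inside HeatmapRenderPass.
static MaterialPropertyBlock s_PropertyBlock;

static void ExecutePass(PassData data, RasterGraphContext context)
{
    s_PropertyBlock ??= new MaterialPropertyBlock();
    s_PropertyBlock.SetMatrix(HeatmapShaderIDs.ViewMatrix, data.viewMatrix);
    s_PropertyBlock.SetMatrix(HeatmapShaderIDs.ProjectionMatrix, data.projectionMatrix);
    s_PropertyBlock.SetBuffer(HeatmapShaderIDs.InstanceBuffer, (GraphicsBuffer)data.instanceBuffer);

    // Each instance is a quad built from 6 vertices (2 triangles) directly in
    // the shader, so no mesh is needed. Shader passes: 0 = red additive,
    // 1 = red multiply, 2 = green additive.
    int passCount = Mathf.Min(3, data.material.passCount);
    for (int pass = 0; pass < passCount; pass++)
    {
        context.cmd.DrawProcedural(Matrix4x4.identity, data.material, pass,
            MeshTopology.Triangles, 6, data.instanceCount, s_PropertyBlock);
    }
}
```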
Ok! The whole rendering code is ready... At least for the CPU.
Let's add the render feature to the renderer asset.
:center-px:

This is how the HeatmapRendererFeature looks in the renderer asset. Notice that there is a field with a material; the shader in this material is used to render objects into the texture.
But I can't render anything without a shader...
___
4. Prepare shaders that will render into a texture
:center-px:

In this section, I will write shaders that render into a texture. I will create a shader, but I want to start with something super simple - I want to ensure that the feature is working correctly.
Render anything
In the first step, I will ensure that the render texture is set correctly and that objects can be drawn into it. I will define the XZ quad in a constant array in the shader and render it to see if it fills some of the texture's pixels. The goal is simply to change some of the colors in the texture - nothing more.
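A sketch of such a minimal shader (the name is a placeholder; a single pass, no blending yet):

```hlsl
Shader "Hidden/HeatmapObjects"
{
    SubShader
    {
        Pass
        {
            ZTest Always ZWrite Off Cull Off

            HLSLPROGRAM
            #pragma vertex Vertex
            #pragma fragment Fragment

            // One quad on the XZ plane, spanning -0.5..0.5, as 2 triangles.
            static const float3 QuadPositions[6] =
            {
                float3(-0.5, 0.0, -0.5), float3(-0.5, 0.0,  0.5), float3( 0.5, 0.0,  0.5),
                float3(-0.5, 0.0, -0.5), float3( 0.5, 0.0,  0.5), float3( 0.5, 0.0, -0.5),
            };

            float4 Vertex(uint vertexID : SV_VertexID) : SV_Position
            {
                // For now, output the quad directly in clip space, just to see
                // that something gets rasterized into the texture.
                float3 positionOS = QuadPositions[vertexID];
                return float4(positionOS.xz, 0.0, 1.0);
            }

            float4 Fragment() : SV_Target
            {
                return float4(1.0, 0.0, 0.0, 1.0);
            }
            ENDHLSL
        }
    }
}
```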
Ok, now I will create the material with this shader and assign it to my render feature.
:center-px:

I entered play mode and launched the frame debugger. Looks like the rendering works fine!
:center-px:

Wow, all of that to see a square on the screen. My render feature is correctly executed by the render graph. It renders into the HeatmapCamera_RenderTexture I created, and the texture has the correct resolution and format. The draw call renders 157 instances.
Use Instance ID
Let's keep the frame debugger open and iterate on the shader. To make sure that the texture is cleared each frame (this makes debugging easier), I temporarily added a clear of the texture content to the render function.
Let's use the instance ID to render instances in different places. I modified the shader to render smaller quads that are slightly offset by the instance ID to see if instancing works correctly.
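Roughly like this - the offsets are arbitrary, just enough to tell the instances apart:

```hlsl
float4 Vertex(uint vertexID : SV_VertexID, uint instanceID : SV_InstanceID) : SV_Position
{
    // Shrink the quad and offset it by the instance ID so every instance
    // lands in a different spot of the texture.
    float3 positionOS = QuadPositions[vertexID] * 0.1;
    float2 offset = float2(instanceID % 16, instanceID / 16) * 0.12 - 0.9;
    return float4(positionOS.xz + offset, 0.0, 1.0);
}
```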
Modified vertex shader code.
:center-px:

The frame debugger indicates that the texture was cleared, and then multiple quads were rendered.
Use instance buffer
Now that I know the instancing works correctly, it's time to access the renderer data of each instance and render them in the correct place in the texture. I will start by declaring the renderer data, buffer, and view-projection matrices at the beginning of the shader code. I set all those resources in my C# rendering code, so now I can access them in the shader.
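A sketch of those declarations and the updated vertex shader (the property names match the ones from the C# side):

```hlsl
// Mirrors the C# HeatmapRendererData struct - the memory layout must be identical.
struct HeatmapRendererData
{
    float4x4 localToWorld;
    float alpha;
    int blendMode;
};

StructuredBuffer<HeatmapRendererData> _HeatmapInstances;
float4x4 _HeatmapViewMatrix;
float4x4 _HeatmapProjectionMatrix;

float4 Vertex(uint vertexID : SV_VertexID, uint instanceID : SV_InstanceID) : SV_Position
{
    HeatmapRendererData data = _HeatmapInstances[instanceID];

    // Object space -> world space -> heatmap camera space -> clip space.
    float4 positionWS = mul(data.localToWorld, float4(QuadPositions[vertexID], 1.0));
    float4 positionVS = mul(_HeatmapViewMatrix, positionWS);
    return mul(_HeatmapProjectionMatrix, positionVS);
}
```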
Declaring instance buffer and view-projection matrices. Those properties should have the same layout and names as the properties set in the C# code.
And in the frame debugger, I can now see that quads are rendered in different positions. To be able to see that, I needed to make the region rendered by the camera smaller.
:center-px:

Draw texture on the screen
It would be nice to observe the content of this texture on the game screen. Let's modify the HeatmapCamera component to do that.
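A sketch of that debug display:

```csharp
// Added to HeatmapCamera - draws the render texture in the corner of the screen.
void OnGUI()
{
    if (TargetTexture == null)
        return;

    GUI.DrawTexture(new Rect(10f, 10f, 512f, 512f), TargetTexture, ScaleMode.ScaleToFit, false);
}
```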
I can see that the quads in the texture move with the characters on the screen, so it seems to be working properly.
Render blobs instead of quads
Let's make those quads render a smooth blob inside. I changed the blending to additive.
Then I added an interpolator for UV.
And I rendered a blob using the UV calculated in the vertex shader.
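Putting those three changes together, a sketch (the blend state lives in the pass; the rest is the vertex/fragment pair):

```hlsl
// In the pass' render state:
//     Blend One One   // additive blending

struct Varyings
{
    float4 positionCS : SV_Position;
    float2 uv         : TEXCOORD0;
};

Varyings Vertex(uint vertexID : SV_VertexID, uint instanceID : SV_InstanceID)
{
    HeatmapRendererData data = _HeatmapInstances[instanceID];

    Varyings output;
    float4 positionWS = mul(data.localToWorld, float4(QuadPositions[vertexID], 1.0));
    output.positionCS = mul(_HeatmapProjectionMatrix, mul(_HeatmapViewMatrix, positionWS));
    output.uv = QuadPositions[vertexID].xz + 0.5; // -0.5..0.5 -> 0..1
    return output;
}

float4 Fragment(Varyings input) : SV_Target
{
    // Smooth radial falloff: 1 in the middle of the quad, 0 at its edge.
    float distanceToCenter = length(input.uv - 0.5) * 2.0;
    float blob = smoothstep(0.0, 1.0, saturate(1.0 - distanceToCenter));
    return float4(blob, 0.0, 0.0, 1.0);
}
```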
Let's see the blobs in action.
Use instance alpha
Each renderer has its own alpha value, which determines the blending intensity. Let's implement it. I will forward the alpha value from the vertex shader into the fragment shader using an interpolator.
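A sketch of that change:

```hlsl
// Added to Varyings:
//     float alpha : TEXCOORD1;
// Set in the vertex shader:
//     output.alpha = data.alpha;
// And used to scale the blob intensity in the fragment shader:
float4 Fragment(Varyings input) : SV_Target
{
    float blob = smoothstep(0.0, 1.0, saturate(1.0 - length(input.uv - 0.5) * 2.0));
    return float4(blob * input.alpha, 0.0, 0.0, 1.0);
}
```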
Render 3 passes
My goal was to render trails using 3 passes. Each renderer component has a blend mode property that defines which pass it needs to use:
Render into red channel - additive blending
Render into red channel - multiply blending
Render into green channel - additive blending
I will start by moving all the common code into a shared HLSLINCLUDE section in the shader file. This is how the shader looks right now:
Now I want to modify the vertex shader to cull instances for a specific pass. I will introduce an argument, int blendMode, in the vertex shader to conditionally move vertices out of the screen.
The rendered clip-space area spans -1.0 to 1.0, so setting vertices to (2.0, 2.0, 1.0, 1.0) will make the GPU cull the triangles before they are rasterized.
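A sketch of that utility (the blend mode values match the C# enum):

```hlsl
// Shared vertex function in the HLSLINCLUDE block. Each pass calls it with its
// own blendMode value; instances that use a different blend mode are pushed
// outside the clip volume, so the GPU culls them before rasterization.
Varyings HeatmapVertex(uint vertexID, uint instanceID, int blendMode)
{
    HeatmapRendererData data = _HeatmapInstances[instanceID];

    Varyings output;
    float4 positionWS = mul(data.localToWorld, float4(QuadPositions[vertexID], 1.0));
    output.positionCS = mul(_HeatmapProjectionMatrix, mul(_HeatmapViewMatrix, positionWS));
    output.uv = QuadPositions[vertexID].xz + 0.5;
    output.alpha = data.alpha;

    if (data.blendMode != blendMode)
        output.positionCS = float4(2.0, 2.0, 1.0, 1.0); // outside the -1..1 clip range

    return output;
}
```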
However, this vertex shader is now more like a utility function. Let's use it in my passes.
Let's adjust all the passes accordingly: additive blend into the red channel, multiply blend into the red channel, and additive blend into the green channel - as shown in the sketch below.
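A sketch of all three passes together - each one differs only in its blend state, color mask, and the blend mode index it passes to HeatmapVertex (the multiply-pass fade formula is an example of how the trails can be faded):

```hlsl
// Pass 0 - additive into the red channel
Pass
{
    ZTest Always ZWrite Off Cull Off
    Blend One One
    ColorMask R

    HLSLPROGRAM
    #pragma vertex Vertex
    #pragma fragment Fragment

    Varyings Vertex(uint vertexID : SV_VertexID, uint instanceID : SV_InstanceID)
    {
        return HeatmapVertex(vertexID, instanceID, 0);
    }

    float4 Fragment(Varyings input) : SV_Target
    {
        float blob = smoothstep(0.0, 1.0, saturate(1.0 - length(input.uv - 0.5) * 2.0));
        return blob * input.alpha;
    }
    ENDHLSL
}

// Pass 1 - multiply into the red channel (slowly fades the trails)
Pass
{
    ZTest Always ZWrite Off Cull Off
    Blend DstColor Zero
    ColorMask R

    HLSLPROGRAM
    #pragma vertex Vertex
    #pragma fragment Fragment

    Varyings Vertex(uint vertexID : SV_VertexID, uint instanceID : SV_InstanceID)
    {
        return HeatmapVertex(vertexID, instanceID, 1);
    }

    float4 Fragment(Varyings input) : SV_Target
    {
        float blob = smoothstep(0.0, 1.0, saturate(1.0 - length(input.uv - 0.5) * 2.0));
        return lerp(1.0, 1.0 - input.alpha, blob); // 1 = keep, below 1 = fade
    }
    ENDHLSL
}

// Pass 2 - additive into the green channel
Pass
{
    ZTest Always ZWrite Off Cull Off
    Blend One One
    ColorMask G

    HLSLPROGRAM
    #pragma vertex Vertex
    #pragma fragment Fragment

    Varyings Vertex(uint vertexID : SV_VertexID, uint instanceID : SV_InstanceID)
    {
        return HeatmapVertex(vertexID, instanceID, 2);
    }

    float4 Fragment(Varyings input) : SV_Target
    {
        float blob = smoothstep(0.0, 1.0, saturate(1.0 - length(input.uv - 0.5) * 2.0));
        return blob * input.alpha;
    }
    ENDHLSL
}
```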
Now it's time to disable temporary texture clearing. I modified the C# rendering code to skip the texture clear at the beginning of the frame.
___
5. Use the texture in other shaders
With the texture rendering complete, I can now sample it in the shader.
I will create a plane that covers the gameplay area, and I will display the content of this texture here:

To better visualize the feature, I will make the debug texture display in OnGUI() of HeatmapCamera conditional. I don’t want the debug texture view to cover the game content now.
Then, I will create a shader that will render the texture content in world space. I will start from this template.
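Something along these lines works as the template - a minimal URP unlit shader that visualizes the world-space XZ position (the name is a placeholder):

```hlsl
Shader "Hidden/HeatmapGroundDebug"
{
    SubShader
    {
        Tags { "RenderType" = "Opaque" "RenderPipeline" = "UniversalPipeline" }

        Pass
        {
            HLSLPROGRAM
            #pragma vertex Vertex
            #pragma fragment Fragment
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            struct Attributes
            {
                float3 positionOS : POSITION;
            };

            struct Varyings
            {
                float4 positionCS : SV_Position;
                float3 positionWS : TEXCOORD0;
            };

            Varyings Vertex(Attributes input)
            {
                Varyings output;
                output.positionWS = TransformObjectToWorld(input.positionOS);
                output.positionCS = TransformWorldToHClip(output.positionWS);
                return output;
            }

            float4 Fragment(Varyings input) : SV_Target
            {
                // Visualize the world-space XZ position as a repeating UV grid.
                return float4(frac(input.positionWS.xz), 0.0, 1.0);
            }
            ENDHLSL
        }
    }
}
```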

This is how this shader looks in action. It displays the world-space position as a UV grid.
Then I need to access the heatmap matrices and the rendered texture. Let's define them in the shader code, before the vertex shader.
Then, let's use that to sample the texture. I can use the view and projection matrices to convert a world-space position into a texture clip-space position. This code is in the fragment shader:
The X and Y components of the normalized clip-space position are in the -1 to 1 range. I can remap them to 0-1 to create a texture UV:
Then it's time to sample the texture and display its content.
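A sketch of the whole chain - the declarations plus the fragment shader - assuming the heatmap matrices and texture are exposed as global shader properties from C# (for example with Shader.SetGlobalMatrix and Shader.SetGlobalTexture):

```hlsl
// Declared before the vertex shader:
TEXTURE2D(_HeatmapTexture);
SAMPLER(sampler_HeatmapTexture);
float4x4 _HeatmapViewMatrix;
float4x4 _HeatmapProjectionMatrix;

float4 Fragment(Varyings input) : SV_Target
{
    // World space -> heatmap camera space -> clip space.
    float4 positionVS = mul(_HeatmapViewMatrix, float4(input.positionWS, 1.0));
    float4 positionCS = mul(_HeatmapProjectionMatrix, positionVS);
    float2 positionNDC = positionCS.xy / positionCS.w; // w is 1 for an orthographic projection

    // Remap -1..1 to 0..1 to get the texture UV. Depending on the platform and
    // projection convention, the V coordinate may need to be flipped.
    float2 heatmapUV = positionNDC * 0.5 + 0.5;

    float4 heatmap = SAMPLE_TEXTURE2D(_HeatmapTexture, sampler_HeatmapTexture, heatmapUV);
    return float4(heatmap.rgb, 1.0);
}
```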
This is how it looks in action. With that, the core of this feature is now complete.
The feature is now complete.
___
What's next?
In the next article, I will use this texture to implement volumetric fog that is being dispersed by the red channel of the rendered texture.
___
Bonus - What about the performance?
Rendering additional stuff during the frame will always add some performance overhead. Let's measure it in this case.
First of all, I've added a profiler marker to see how much time per frame the CPU uses to prepare the instances:
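Roughly like this:

```csharp
using Unity.Profiling;

// Inside HeatmapObjectRenderer.
static readonly ProfilerMarker s_UpdateMarker = new ProfilerMarker("Heatmap.UpdateRendererData");

public static void UpdateRendererData()
{
    using (s_UpdateMarker.Auto())
    {
        // ... the update loop shown earlier ...
    }
}
```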
For performance measurements, I created 300 HeatmapObjectRenderer instances. I assume this is a reasonable count for intensive gameplay.
Then, I measured the CPU usage using Unity's profiler, and it appears to use 0.29ms per frame on average.
:center-px:

0.29 ms per frame for updating 300 objects on an i5-10400F CPU. For me, it is BAD PERFORMANCE - ~300 objects per frame should update in a maximum of 0.05 ms on such a CPU. I will fix the problem in one of the next articles.
The render feature itself adds up to 0.06ms, which is high, but reasonable.
:center-px:

Now, let's measure the GPU times. I used Nvidia Nsight and measured the rendering of 300 heatmap objects: 0.03 ms per frame when rendering into a 4K texture on an RTX 3060. Not bad, considering that all of the objects are always rendered and there is no culling :)
If you want to know more about how I profile the GPU, look at this article:
https://www.proceduralpixels.com/blog/how-to-profile-the-rendering-gpu-profiling-basics
:center-px:
