Introduction
In the previous articles, I implemented a top-down camera that renders character trails into a texture.
However, the camera is fixed at a specific position. In this article, I implement a top-down camera that dynamically follows the player's position, updating its view as the player moves, and preserves the texture content in world space.

The camera will track the player's movement, but the texture content will always correspond to the same world-space locations, regardless of camera movement.
___
Current implementation overview
To implement camera movement, it is essential to understand the core implementation details of the effect.
There is a HeatmapCamera component that is responsible for rendering the trails into a texture. The camera uses worldToCameraMatrix, copied from the gameObject's transform, to transform world coordinates into camera space. The projectionMatrix defines the area the camera renders. Trails are drawn into heatmapTexture.
public struct HeatmapCameraParameters
{
    public Matrix4x4 worldToCameraMatrix;
    public Matrix4x4 projectionMatrix;
    public RenderTexture heatmapTexture;
}
Controlling the HeatmapCamera's transform directly determines what appears in the texture.

The camera renders trails by additively blending quads submitted by the HeatmapObjectRenderer component, which defines where each object is rendered. Trails are faded away with a multiply-blend pass.
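In blend-state terms (my shorthand, not the project's actual shader code): the trail pass typically uses additive blending (Blend One One), while the fade pass multiplies the existing texture content toward zero (Blend DstColor Zero).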


Rendering uses the URP RenderGraph API. Key code snippets are provided below.
...
computeBuilder.SetRenderFunc((PassData passData, ComputeGraphContext context) =>
{
    heatmapObjectRendererInstances->Clear();
    HeatmapObjectRenderer.FetchInstanceData(heatmapObjectRendererInstances);
    context.cmd.SetBufferData(heatmapObjectInstanceBuffer, heatmapObjectRendererInstances);
});
...
rasterBuilder.SetRenderFunc((PassData passData, RasterGraphContext context) =>
{
    var propertyBlock = context.renderGraphPool.GetTempMaterialPropertyBlock();
    propertyBlock.SetBuffer(Uniforms._HeatmapObjectInstanceBuffer, passData.heatmapObjectInstanceBuffer);
    propertyBlock.SetMatrix(Uniforms._HeatmapMatrixV, passData.cameraParameters.worldToCameraMatrix);
    propertyBlock.SetMatrix(Uniforms._HeatmapMatrixP, GL.GetGPUProjectionMatrix(passData.cameraParameters.projectionMatrix, true));
    context.cmd.DrawProcedural(Matrix4x4.identity, passData.renderMaterial, 0, MeshTopology.Triangles, 6, passData.instanceCount, propertyBlock);
    context.cmd.DrawProcedural(Matrix4x4.identity, passData.renderMaterial, 1, MeshTopology.Triangles, 6, passData.instanceCount, propertyBlock);
    context.cmd.DrawProcedural(Matrix4x4.identity, passData.renderMaterial, 2, MeshTopology.Triangles, 6, passData.instanceCount, propertyBlock);
});
...
The quad shaders use the HeatmapCamera's view and projection matrices to compute the correct rendering positions in the texture:
float4 positionWS = mul(instanceData.localToWorldMatrix, positionOS);
float4 positionVS = mul(_HeatmapMatrixV, positionWS);
float4 positionCS = mul(_HeatmapMatrixP, positionVS);
___
Moving the camera and preserving the texture's content
When the top-down camera moves, the resulting texture changes. Consider the following example of a 16x16 texture.

Visualized 16x16 texture rendered by the camera. The yellow dot is the camera center, and the yellow square marks the texture area. White dots represent pixel centers.
Let's move it by a few units and see how the image looks.

Notice that when the camera position changes, the texture pixels become misaligned. This is quite problematic: to preserve the content of a texture in world space, I need to store the data at the same world positions.
This is key: the camera can move only in increments equal to the pixel size in world space. Let's snap the camera position to that increment and observe the result.

Perfect: all texels remain perfectly aligned before and after moving the camera. Therefore, when moving the camera, I must snap its position to increments that match the texture pixel size in world space to maintain this alignment.
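For example (a worked case with made-up numbers, not values from the project): with a pixel size of 0.5 world units, a camera at x = 3.37 snaps to the nearest texel-aligned position:
float pixelSizeX = 0.5f;
float snappedX = Mathf.Round(3.37f / pixelSizeX) * pixelSizeX; // round(6.74) * 0.5 = 3.5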
After moving the camera, shift the texture content accordingly using a shader.
Create a new texture and assign it to the camera after movement.

Pixels overlapping with the old texture sample from it; others are initialized to default values.
___
Implementation plan
Now I know what is needed to make it work:
1. Move the camera.
2. Snap the camera's world-space position to a multiple of the texture pixel size in world space.
3. Copy the old texture content to a new texture:
   - New pixels within the bounds of the previous texture sample from the old texture.
   - New pixels outside of the bounds are initialized to a default value (black color).
4. Render the current frame data into the new texture.
To make it work, I need:
The texture's pixel size in world-space units,
A secondary texture,
A shader that moves the texture content from one texture to the other with the proper offsets.
Calculating the texture's pixel size in world-space units
I can use the inverse of the camera's view-projection matrix to calculate the texture pixel size in world coordinates.
1. Calculate the UV of the first texel and the UV of its diagonal neighbor:

2. Use the inverse view-projection matrix to transform the texel positions from UV into world space.

3. The difference between the first position and the second position is the texel size in world space:
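Putting the three steps together (my own summary of the math; $R$ is the texture resolution and $V$, $P$ are the heatmap camera's view and projection matrices):

$$uv_0 = \frac{(0.5,\ 0.5)}{R}, \qquad uv_1 = \frac{(1.5,\ 1.5)}{R}$$
$$p_i = (P\,V)^{-1}\,\big(2\,uv_i - 1,\ 0,\ 1\big)$$
$$\text{pixelSize}_{WS} = p_1 - p_0$$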

Using a double buffer for rendering
To reuse the texture content from the previous frame, I need to store references to the previously used texture and its associated parameters.
I will utilize a swap buffer pattern for that:
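As a minimal sketch of the idea (the concrete HeatmapCameraParameters fields follow in the implementation section), the two parameter sets simply trade roles each frame:
// Swap roles; CurrentParameters is then overwritten with this frame's values,
// so last frame's "current" data becomes this frame's "previous" data.
(CurrentParameters, PreviousParameters) = (PreviousParameters, CurrentParameters);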

Shader to offset the data
The shader that offsets the texture's data will render full-screen into the new texture.
It has access to both the previous and current frame data, allowing it to use the matrices to transform pixel UVs into world space and sample the previous texture at the corresponding locations.
___
Implementation
Let's implement that in the code.
Swap buffer
I will start by implementing the swap buffer. I added a secondary set of parameters that holds the values from the previous frame.
public struct HeatmapCameraParameters
{
    public Matrix4x4 worldToCameraMatrix;
    public Matrix4x4 projectionMatrix;
    public RenderTexture heatmapTexture;
}

public class HeatmapCamera : MonoBehaviour
{
    public HeatmapCameraParameters CurrentParameters { get; private set; } = default;
    public HeatmapCameraParameters PreviousParameters { get; private set; } = default;
    ...
Then I modified the OnEnable method that initializes the data:
private void OnEnable()
{
    allInstances.Add(this);

    var textureA = AllocateHeatmapTexture();
    textureA.name = gameObject.name + "_RenderTexture_A";
    var textureB = AllocateHeatmapTexture();
    textureB.name = gameObject.name + "_RenderTexture_B";

    var currentParameters = new HeatmapCameraParameters();
    currentParameters.heatmapTexture = textureA;
    CurrentParameters = currentParameters;

    var previousParameters = new HeatmapCameraParameters();
    previousParameters.heatmapTexture = textureB;
    PreviousParameters = previousParameters;
}

private RenderTexture AllocateHeatmapTexture()
{
    var texture = new RenderTexture(
        (int)resolution, (int)resolution,
        UnityEngine.Experimental.Rendering.GraphicsFormat.R16G16B16A16_SFloat,
        UnityEngine.Experimental.Rendering.GraphicsFormat.None, 0
    );
    texture.enableRandomWrite = true;
    texture.Create();
    return texture;
}
I also ensured that the textures are properly deallocated when the component is disabled:
private void OnDisable()
{
    allInstances.Remove(this);
    CurrentParameters.heatmapTexture.Release();
    PreviousParameters.heatmapTexture.Release();
    CurrentParameters = default;
    PreviousParameters = default;
}
Update matrices
Now that the matrices are no longer set during initialization, I need to update them each frame. Let's do that in the LateUpdate() method.
private void LateUpdate()
{
    (CurrentParameters, PreviousParameters) = (PreviousParameters, CurrentParameters);

    HeatmapCameraParameters parameters = CurrentParameters;
    parameters.projectionMatrix = ...;
    parameters.worldToCameraMatrix = ...;
    CurrentParameters = parameters;

    Shader.SetGlobalMatrix(HeatmapRenderPass.Uniforms._HeatmapMatrixP, GL.GetGPUProjectionMatrix(parameters.projectionMatrix, false));
    Shader.SetGlobalMatrix(HeatmapRenderPass.Uniforms._HeatmapMatrixV, parameters.worldToCameraMatrix);
}
Now the setup is in place: the current frame data and the previous frame data are stored on the CPU.

Swapping the frame data is now implemented.
Manually creating view and projection matrices
Now, I need to snap the camera's position to the pixel size. To avoid modifying the transform of this camera, I will apply the position change only in the view matrix of the camera.
The view matrix is a world-to-local transformation matrix, i.e., the inverse of localToWorld. I can create a custom localToWorld matrix using the Matrix4x4.TRS(position, rotation, scale) API.
So if I snap the position passed to Matrix4x4.TRS to the pixel size, the view matrix becomes: Matrix4x4.TRS(snappedPosition, rotation, scale).inverse
I made this function that creates a view matrix for the camera:
private Matrix4x4 GetViewMatrix()
{
    return Matrix4x4.TRS(transform.position, transform.rotation, transform.lossyScale).inverse;
}
Then I created another version, which will create a view matrix, whose position is snapped to the texture pixel size in world space:
private Matrix4x4 GetSnappedViewMatrix(float3 pixelSizeWS)
{
    float3 cameraPosition = transform.position;
    float3 snappedCameraPosition = cameraPosition;
    // Snap only when the pixel size is valid (non-zero).
    if (lengthsq(pixelSizeWS) > 0.00001f)
        snappedCameraPosition.xz = round(cameraPosition.xz / pixelSizeWS.xz) * pixelSizeWS.xz;
    Matrix4x4 localToWorldMatrix = Matrix4x4.TRS(snappedCameraPosition, transform.rotation, transform.lossyScale);
    return localToWorldMatrix.inverse;
}
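The lengthsq check guards against a degenerate (near-zero) pixel size; without it, the division by pixelSizeWS.xz could produce NaNs in the snapped position.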
Snapping ensures that the pixels in the current texture are aligned with those in the previous texture.

This is now implemented.
I then created a similar function to generate a projection matrix. It is always fixed and doesn't need to be snapped:
private static Matrix4x4 GetProjectionMatrix()
{
    Matrix4x4 projectionMatrix = Matrix4x4.Ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.00001f, 2.0f);
    projectionMatrix = projectionMatrix * Matrix4x4.Translate(new Vector3(0.0f, 0.0f, -1.0f));
    return projectionMatrix;
}
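A note on the extra translation (my reading of the math, not stated explicitly in the original): Matrix4x4.Ortho(-1, 1, -1, 1, 0.00001, 2) spans two units of depth starting at the camera origin, and multiplying by Matrix4x4.Translate(0, 0, -1) shifts the volume so it extends one unit in front of and one unit behind the camera along its view axis.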
Calculate pixel size in world space
Now I need to calculate the pixel size of a texture in world space. I explained in the previous part of this article that I can calculate UVs of diagonal pixels in the texture, convert them into world space, and use the difference to get the pixel size.
Let's start by creating a function that will convert texture UV into a world space position:
public float3 TextureUVToWorldSpacePosition(float2 uv)
{
    // Texture UV [0, 1] to clip space [-1, 1].
    float4 positionCS = float4(uv * 2.0f - 1.0f, 0.0f, 1.0f);

    // Clip space to view space; fall back to freshly built matrices
    // while the parameters are still uninitialized.
    float4 positionVS;
    if (CurrentParameters.projectionMatrix.Equals(default))
        positionVS = mul(GetProjectionMatrix().inverse, positionCS);
    else
        positionVS = mul(CurrentParameters.projectionMatrix.inverse, positionCS);

    // View space to world space.
    float4 positionWS;
    if (CurrentParameters.worldToCameraMatrix.Equals(default))
        positionWS = mul(GetViewMatrix().inverse, positionVS);
    else
        positionWS = mul(CurrentParameters.worldToCameraMatrix.inverse, positionVS);

    return positionWS.xyz / positionWS.w;
}
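The Equals(default) fallbacks cover the moment right after OnEnable, when neither parameter set holds valid matrices yet; in that case, the method rebuilds the matrices directly from the transform and the fixed projection.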

This is now implemented.
Now let's use this method to calculate pixel size:
float3 texel0WS = TextureUVToWorldSpacePosition(float2(0.5f, 0.5f) / (float)resolution);
float3 texel1WS = TextureUVToWorldSpacePosition(float2(1.5f, 1.5f) / (float)resolution);
float3 pixelSizeWS = texel1WS - texel0WS;
Matrix4x4 viewMatrix = GetSnappedViewMatrix(pixelSizeWS);
Updating camera parameters each frame
Now, let's update all the camera parameters in LateUpdate.
private void LateUpdate()
{
    (CurrentParameters, PreviousParameters) = (PreviousParameters, CurrentParameters);

    float3 texel0WS = TextureUVToWorldSpacePosition(float2(0.5f, 0.5f) / (float)resolution);
    float3 texel1WS = TextureUVToWorldSpacePosition(float2(1.5f, 1.5f) / (float)resolution);
    float3 pixelSizeWS = texel1WS - texel0WS;

    HeatmapCameraParameters parameters = CurrentParameters;
    parameters.projectionMatrix = GetProjectionMatrix();
    parameters.worldToCameraMatrix = GetSnappedViewMatrix(pixelSizeWS);
    CurrentParameters = parameters;

    Shader.SetGlobalMatrix(HeatmapRenderPass.Uniforms._HeatmapMatrixP, GL.GetGPUProjectionMatrix(parameters.projectionMatrix, false));
    Shader.SetGlobalMatrix(HeatmapRenderPass.Uniforms._HeatmapMatrixV, parameters.worldToCameraMatrix);
}
Update the gizmos
The code should currently run without issues. However, I have no way to tell whether the feature works correctly. To fix that, I will update the gizmo drawn by the component:
private void OnDrawGizmosSelected()
{
    Gizmos.color = Color.yellow;
    Gizmos.matrix = GetViewMatrix().inverse;
    Gizmos.DrawWireCube(Vector3.zero, Vector3.one * 2.0f);
    Gizmos.color = Color.yellow * new Color(1, 1, 1, 0.5f);
    Gizmos.DrawLine(new Vector3(0.0f, 0.0f, -1.0f), Vector3.zero);

    if (!CurrentParameters.worldToCameraMatrix.Equals(default))
    {
        Gizmos.color = Color.yellow * new Color(1, 0.5f, 1, 1.0f);
        Gizmos.matrix = CurrentParameters.worldToCameraMatrix.inverse;
        Gizmos.DrawWireCube(Vector3.zero, Vector3.one * 2.0f);
        Gizmos.color = Color.yellow * new Color(1, 1, 1, 0.5f) * new Color(1, 0.5f, 1, 1.0f);
        Gizmos.DrawLine(new Vector3(0.0f, 0.0f, -1.0f), Vector3.zero);
    }
}
Moving the heatmap camera to the player prefab
To make the camera follow the player, I added it to the player prefab.

Then I enabled the gizmos in world space. In the video, you can see the yellow line; this represents the original view matrix center. The orange one is the snapped one, so the feature works correctly.
Update the rendering
Now, I have both textures ready, with access to the current parameters and previous frame parameters. It's time to write the shader that will copy the texture from the previous frame into the current frame.
The plan is:
Modify the rendering to execute a single draw call that will copy the texture content from the previous frame into the current frame. The copy should be rendered before any other content.
Create the shader that executes the copy.
Modifying the renderer feature
Let's start by modifying the renderer feature. First, I need another shader to initialize the current frame texture, so I added a new material to the render pass:
public unsafe class HeatmapRenderPass : ScriptableRenderPass
{
    ...
    internal Material heatmapRenderMaterial;
    internal Material initializeTextureMaterial;

    public HeatmapRenderPass(Material heatmapRenderMaterial, Material initializeTextureMaterial)
    {
        this.heatmapRenderMaterial = heatmapRenderMaterial;
        this.initializeTextureMaterial = initializeTextureMaterial;
        ...
    }
    ...
Now I modify the RecordRenderGraph function, which handles the CPU side of the rendering. I need to import both textures into the render graph first.
public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
{
    using (Markers.recordRenderGraph.Auto())
    {
        var heatmapCamera = HeatmapCamera.MainInstance;
        ...
        if (heatmapRenderMaterial == null || initializeTextureMaterial == null)
            return;

        HeatmapCameraParameters currentParameters = heatmapCamera.CurrentParameters;
        HeatmapCameraParameters previousParameters = heatmapCamera.PreviousParameters;

        var currentHeatmapTextureRT = RTHandles.Alloc(currentParameters.heatmapTexture);
        var currentHeatmapTextureHandle = renderGraph.ImportTexture(currentHeatmapTextureRT);
        var previousHeatmapTextureRT = RTHandles.Alloc(previousParameters.heatmapTexture);
        var previousHeatmapTextureHandle = renderGraph.ImportTexture(previousHeatmapTextureRT);
Then, I can add a new raster pass that will initialize the texture.
...
var previousHeatmapTextureHandle = renderGraph.ImportTexture(previousHeatmapTextureRT);

using (Markers.initializeCurrentTexture.Auto())
using (var rasterBuilder = renderGraph.AddRasterRenderPass<InitializePassData>($"{nameof(HeatmapRenderPass)}_InitializeHeatmap", out InitializePassData passData))
{
    rasterBuilder.SetRenderAttachment(currentHeatmapTextureHandle, 0);
    rasterBuilder.UseTexture(previousHeatmapTextureHandle);
    passData.currentParameters = currentParameters;
    passData.previousParameters = previousParameters;
}
And now it's time for the draw call. I created a render function and set all the required resources: the previous texture and the matrices. I will use a procedural draw to render a full-screen quad.
using (Markers.initializeCurrentTexture.Auto())
using (var rasterBuilder = renderGraph.AddRasterRenderPass<InitializePassData>($"{nameof(HeatmapRenderPass)}_InitializeHeatmap", out InitializePassData passData))
{
    ...
    rasterBuilder.SetRenderFunc((InitializePassData passData, RasterGraphContext context) =>
    {
        var propertyBlock = context.renderGraphPool.GetTempMaterialPropertyBlock();
        propertyBlock.SetMatrix(Uniforms._HeatmapMatrixP_Inv, GL.GetGPUProjectionMatrix(passData.currentParameters.projectionMatrix, true).inverse);
        propertyBlock.SetMatrix(Uniforms._HeatmapMatrixV_Inv, passData.currentParameters.worldToCameraMatrix.inverse);
        propertyBlock.SetTexture(Uniforms._HeatmapPreviousTexture, passData.previousParameters.heatmapTexture);
        propertyBlock.SetMatrix(Uniforms._HeatmapPreviousMatrixP, GL.GetGPUProjectionMatrix(passData.previousParameters.projectionMatrix, false));
        propertyBlock.SetMatrix(Uniforms._HeatmapPreviousMatrixV, passData.previousParameters.worldToCameraMatrix);
        context.cmd.DrawProcedural(Matrix4x4.identity, initializeTextureMaterial, 0, MeshTopology.Triangles, 6, 1, propertyBlock);
    });
}
Creating a shader
Now that the CPU side of the rendering is ready, it's time to create the shader. As usual, I created a new shader file and started from this full-screen procedural draw template:
Shader "Heatmap/InitializeTexture"
{
SubShader
{
Pass
{
Blend Off
Cull Off
ZTest Always
ZWrite Off
HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag
static const float4 fullscreenQuadCS[] =
{
float4(-1.0, -1.0, 0.0, 1.0),
float4(-1.0, 1.0, 0.0, 1.0),
float4(1.0, 1.0, 0.0, 1.0),
float4(-1.0, -1.0, 0.0, 1.0),
float4(1.0, 1.0, 0.0, 1.0),
float4(1.0, -1.0, 0.0, 1.0)
};
struct FragmentData
{
float4 positionCS_SV : SV_Position;
float4 positionCS : TEXCOORD0;
};
FragmentData vert (uint vertexID : SV_VertexID)
{
FragmentData output;
output.positionCS = fullscreenQuadCS[vertexID];
output.positionCS_SV = fullscreenQuadCS[vertexID];
return output;
}
float4 frag (FragmentData input) : SV_Target
{
return float4(1.0, 0.0, 0.0, 1.0);
}
ENDHLSL
}
}
}
The whole shader code will be placed in the fragment shader. Now I need to add all the properties I set from the C# API. I will add them just above the fragment shader:
Texture2D _HeatmapPreviousTexture;
SamplerState pointClampSampler;
float4x4 _HeatmapMatrixP_Inv;
float4x4 _HeatmapMatrixV_Inv;
float4x4 _HeatmapPreviousMatrixP;
float4x4 _HeatmapPreviousMatrixV;

float4 frag (FragmentData input) : SV_Target
{
    return float4(1.0, 0.0, 0.0, 1.0);
}
Now it's time for some space transformations! The plan:
Convert the position from the current texture's clip space to world space.
Convert the world-space position into the previous texture's clip space.
For pixels outside of the previous texture's range, return black.
For pixels inside the range, convert their clip-space position into a UV and sample the previous texture.
float4 frag (FragmentData input) : SV_Target
{
    // Current texture clip space -> world space.
    float4 currentCS = input.positionCS;
    float4 currentVS = mul(_HeatmapMatrixP_Inv, currentCS);
    float4 positionWS = mul(_HeatmapMatrixV_Inv, currentVS);
    positionWS /= positionWS.w;

    // World space -> previous texture clip space.
    float4 previousVS = mul(_HeatmapPreviousMatrixV, positionWS);
    float4 previousCS = mul(_HeatmapPreviousMatrixP, previousVS);
    previousCS /= previousCS.w;

    // Outside the previous texture: initialize to the default value.
    if (any(abs(previousCS.xy) >= float2(1.0, 1.0)))
        return float4(0.0, 0.0, 0.0, 0.0);

    // Inside: convert clip space to UV and sample the previous content.
    float2 previousUV = previousCS.xy * 0.5 + 0.5;
    return _HeatmapPreviousTexture.SampleLevel(pointClampSampler, previousUV, 0.0);
}
Testing the feature
I created the material with this shader and assigned it to the renderer feature.

I also made the heatmap camera render a small portion of the world, using a lower-resolution texture:

In gameplay, everything works fine. Trails are properly preserved in world space, and, as expected, data far from the player gets lost because it falls outside of the rendered texture. Additionally, the texture pixels don't flicker, confirming that the view matrix snapping works correctly.
___
Bonus - rendering issues
Missing one important detail in the implementation can cause various bugs. It is useful to understand how each of these rendering issues manifests visually.
No pixel snapping
Here, I disabled view matrix snapping.
// Snapping disabled:
parameters.worldToCameraMatrix = GetViewMatrix();
// Original:
// parameters.worldToCameraMatrix = GetSnappedViewMatrix(pixelSizeWS);
No pixel snapping + bilinear texture filtering
Here, I disabled snapping the view matrix and switched from point filtering to bilinear filtering in the shader:
// Bilinear filtering:
Texture2D _HeatmapPreviousTexture;
SamplerState linearClampSampler;
// Original (point filtering):
// Texture2D _HeatmapPreviousTexture;
// SamplerState pointClampSampler;
Using a non-Y-flipped projection matrix
Here, I changed the boolean renderIntoTexture parameter of GL.GetGPUProjectionMatrix, which controls the Y-flip applied when rendering into a texture.
// Y-flip disabled:
propertyBlock.SetMatrix(Uniforms._HeatmapMatrixP_Inv, GL.GetGPUProjectionMatrix(passData.currentParameters.projectionMatrix, false).inverse);
// Original:
// propertyBlock.SetMatrix(Uniforms._HeatmapMatrixP_Inv, GL.GetGPUProjectionMatrix(passData.currentParameters.projectionMatrix, true).inverse);