Moving top-down camera with preserved world-space texture content

Sep 29, 2025

15 minutes

Introduction

In the previous articles, I implemented a top-down camera that renders character trails into a texture.

However, the camera is fixed at a specific position. In this article, I implement a top-down camera that dynamically follows the player's position, updating its view as the player moves, and preserves the texture content in world space.

The camera will track the player's movement, but the texture content will always correspond to the same world-space locations, regardless of camera movement.



___

Current implementation overview

To implement camera movement, it is essential to first understand the core implementation details of the effect.

There is a HeatmapCamera component that is responsible for rendering the trails into a texture.

The camera uses a worldToCameraMatrix, derived from the gameObject's transform, to transform world coordinates into camera space. The projectionMatrix defines the area the camera renders. Trails are drawn into heatmapTexture.

// Camera parameters
public struct HeatmapCameraParameters  
{
	public Matrix4x4 worldToCameraMatrix; // View matrix
	public Matrix4x4 projectionMatrix; // Projection matrix
	public RenderTexture heatmapTexture; // Target texture
}

Controlling the HeatmapCamera's transform directly determines what appears in the texture.

The camera renders trails by additively blending quads submitted by the HeatmapObjectRenderer component, which defines where each object is drawn. The trails fade away over time via a multiply-blend pass.

Rendering uses the URP RenderGraph API. Key code snippets are provided below.

...
// Using compute render pass to set instance data buffer on the GPU  
computeBuilder.SetRenderFunc((PassData passData, ComputeGraphContext context) =>  
{  
	heatmapObjectRendererInstances->Clear();  
	HeatmapObjectRenderer.FetchInstanceData(heatmapObjectRendererInstances);  
	context.cmd.SetBufferData(heatmapObjectInstanceBuffer, heatmapObjectRendererInstances);  
});
...
// Using a raster pass to render  
rasterBuilder.SetRenderFunc((PassData passData, RasterGraphContext context) =>  
{  
	// Using instance buffer and view-projection matrices to render content into texture  
	var propertyBlock = context.renderGraphPool.GetTempMaterialPropertyBlock();  
	propertyBlock.SetBuffer(Uniforms._HeatmapObjectInstanceBuffer, passData.heatmapObjectInstanceBuffer);  
	propertyBlock.SetMatrix(Uniforms._HeatmapMatrixV, passData.cameraParameters.worldToCameraMatrix);  
	propertyBlock.SetMatrix(Uniforms._HeatmapMatrixP, GL.GetGPUProjectionMatrix(passData.cameraParameters.projectionMatrix, true));
	
	// Draw additive passes  
	context.cmd.DrawProcedural(Matrix4x4.identity, passData.renderMaterial, 0, MeshTopology.Triangles, 6, passData.instanceCount, propertyBlock);
	
	// Draw multiply passes  
	context.cmd.DrawProcedural(Matrix4x4.identity, passData.renderMaterial, 1, MeshTopology.Triangles, 6, passData.instanceCount, propertyBlock);
	
	// Draw green color additive passes  
	context.cmd.DrawProcedural(Matrix4x4.identity, passData.renderMaterial, 2, MeshTopology.Triangles, 6, passData.instanceCount, propertyBlock);  
});
...
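
The heatmap material itself isn't shown in this article, but the three passes above presumably differ mainly in their blend state. As a hypothetical sketch, the corresponding ShaderLab render states could look roughly like this:

// Hypothetical blend states, inferred from the three draw calls above  
Blend One One        // Pass 0: additive accumulation of the trail quads  
Blend DstColor Zero  // Pass 1: multiplicative fade of the existing content  
Blend One One        // Pass 2: additive green color pass  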

The quad shaders use the HeatmapCamera's view and projection matrices so the content is rendered at the correct positions in the texture.

// Snippet from the vertex shader:  
float4 positionWS = mul(instanceData.localToWorldMatrix, positionOS); // Quad vertices from object space to world space  
float4 positionVS = mul(_HeatmapMatrixV, positionWS); // From world space to camera space (view space)  
float4 positionCS = mul(_HeatmapMatrixP, positionVS); // From view space to clip space  



___

Moving the camera and preserving the texture's content

When the top-down camera moves, the resulting texture changes. Consider the following texture example.

Visualized 16x16 texture rendered by the camera: the yellow dot is the camera center, the yellow square marks the texture area, and the white dots are pixel centers.

Let's move the camera by a few units and see how the image changes.

Notice that when the camera position changed, the texture pixels became misaligned. This is problematic because, to preserve the content of a texture in world space, I need to store the data at the same world-space positions.

This is key: the camera can move only in increments equal to the pixel size in world space. Let's snap the camera position to that increment and observe the result.

Perfect: all the texels remain aligned before and after moving the camera. Therefore, when moving the camera, I must snap its position to increments that match the texture pixel size in world space to maintain this alignment.

After moving the camera, shift the texture content accordingly using a shader.

Create a new texture and assign it to the camera after movement.

Pixels overlapping with the old texture sample from it; the others are initialized to default values.
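
To make the snapping concrete, here is a minimal C# sketch with made-up numbers (a texel spanning 0.5 world units is an assumption for illustration only):

// Hypothetical example: snapping one coordinate to the texel grid.  
float pixelSizeWS = 0.5f; // One texel spans 0.5 world units (made-up value)  
float cameraX = 3.7f;  
float snappedX = Mathf.Round(cameraX / pixelSizeWS) * pixelSizeWS; // 3.7 / 0.5 = 7.4 -> rounds to 7 -> 3.5  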



___

Implementation plan

Now I know what is needed to make it work:

  1. Move the camera.

  2. Snap the camera world-space position to a multiple of the texture pixel size in world space.

  3. Copy the old texture content to a new texture.

    • New pixels that are in the bounds of the previous texture should sample from the old texture.

    • New pixels that are outside of the bounds should be initialized to a default value (black color).

  4. Render the current frame data into the new texture.


To make it work, I need:

  • The texture's pixel size in world-space units,

  • A secondary texture,

  • A shader that moves the texture content from one texture to the other with the proper offsets.


Calculating the texture's pixel size in world-space units

I can utilize the inverse view-projection camera matrix to calculate the texture pixel size in world coordinates.

1. Calculate the UV of the first texel and the UV of its diagonal neighbor.

2. Use the inverse view-projection matrix to move both texel positions from UV space into world space.

3. The difference between the first and the second position is the texel size in world space.


Using a double buffer for rendering

To reuse the texture content from the previous frame, I need to store references to the previously used texture and its associated parameters.

I will utilize a swap buffer pattern for that:
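
// In C#, the swap itself boils down to a single tuple assignment,  
// as shown later in the LateUpdate() implementation:  
(CurrentParameters, PreviousParameters) = (PreviousParameters, CurrentParameters);  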


Shader to offset the data

The shader that offsets the texture's data will render a full-screen pass into the new texture.

It will have access to both the previous and the current frame data, allowing it to use the matrices to transform pixel UVs into world space and sample the previous texture at the matching location.
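
In pseudocode, the per-pixel logic of this shader looks like this (the matrix names here are placeholders; the real shader is implemented later in this article):

// Per-pixel sketch of the offset shader (placeholder names)  
float4 positionWS = mul(invViewProjCurrent, pixelClipPos); // New-texture pixel -> world space  
float4 previousCS = mul(viewProjPrevious, positionWS);     // World space -> old-texture clip space  
// If previousCS.xy is outside the [-1, 1] range, return the default value;  
// otherwise sample the old texture at previousCS.xy * 0.5 + 0.5.  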



___

Implementation

Let's implement that in the code.


Swap buffer

I will start by implementing the swap buffer. I added a secondary parameter set that contains the values from the previous frame.

// Camera parameters  
public struct HeatmapCameraParameters  
{
	public Matrix4x4 worldToCameraMatrix; // View matrix
	public Matrix4x4 projectionMatrix; // Projection matrix
	public RenderTexture heatmapTexture; // Target texture
}
public class HeatmapCamera : MonoBehaviour  
{  
	// Renamed Parameters to CurrentParameters and added secondary parameters to store data from the previous frame  
	public HeatmapCameraParameters CurrentParameters { get; private set; } = default;  
	public HeatmapCameraParameters PreviousParameters { get; private set; } = default;  
	 
	...

Then I modified the OnEnable method that was initializing the data:

private void OnEnable()  
{  
	// Register the instance  
	allInstances.Add(this);
	
	// Allocate textures  
	var textureA = AllocateHeatmapTexture();  
	textureA.name = gameObject.name + "_RenderTexture_A";  
	var textureB = AllocateHeatmapTexture();  
	textureB.name = gameObject.name + "_RenderTexture_B";
	
	// Initialize parameters for a current frame  
	var currentParameters = new HeatmapCameraParameters();  
	currentParameters.heatmapTexture = textureA;  
	CurrentParameters = currentParameters;
	
	// Initialize parameters for a previous frame  
	var previousParameters = new HeatmapCameraParameters();  
	previousParameters.heatmapTexture = textureB;  
	PreviousParameters = previousParameters;
	
	// Previously, I was calculating the matrices here, but now I want to update them each frame.  
	// I moved this part into LateUpdate().  
}
private RenderTexture AllocateHeatmapTexture()  
{  
	var texture = new RenderTexture(  
		(int)resolution, (int)resolution,  
		UnityEngine.Experimental.Rendering.GraphicsFormat.R16G16B16A16_SFloat,  
		UnityEngine.Experimental.Rendering.GraphicsFormat.None, 0  
	);  
	texture.enableRandomWrite = true;  
	texture.Create();
	
	return texture;  
}

I also ensured that the textures are properly deallocated when the component is disabled:

private void OnDisable()  
{
	allInstances.Remove(this);
	
	// Release the textures  
	CurrentParameters.heatmapTexture.Release();
	PreviousParameters.heatmapTexture.Release();
	
	// Set parameters to default
	CurrentParameters = default;
	PreviousParameters = default;
}

Update matrices

Now that the matrices are no longer set during initialization, I need to update them each frame. Let's do that in the LateUpdate() method.

private void LateUpdate()  
{  
	// Swap the parameters  
	(CurrentParameters, PreviousParameters) = (PreviousParameters, CurrentParameters);  
	
	// Update public parameters  
	HeatmapCameraParameters parameters = CurrentParameters;
	parameters.projectionMatrix = ...; // TODO: Update projection matrix;
	parameters.worldToCameraMatrix = ...; // TODO: Update view matrix.
	CurrentParameters = parameters;
	
	// Set global view and projection matrices for the heatmap.
	Shader.SetGlobalMatrix(HeatmapRenderPass.Uniforms._HeatmapMatrixP, GL.GetGPUProjectionMatrix(parameters.projectionMatrix, false));
	Shader.SetGlobalMatrix(HeatmapRenderPass.Uniforms._HeatmapMatrixV, parameters.worldToCameraMatrix);
}

Now the setup is in place: the current and previous frame data are both stored on the CPU.

Swapping the frame data is now implemented.

Manually creating view and projection matrices

Now, I need to snap the camera's position to a specific pixel size. To avoid modifying the transform of this camera, I will apply the position change only in the view matrix of the camera.

The view matrix is a world-to-local transformation matrix, i.e., the inverse of localToWorld. I can create a custom localToWorld matrix using the Matrix4x4.TRS(position, rotation, scale) API.

So if I snap the position argument of Matrix4x4.TRS(position, rotation, scale) to the pixel size, the view matrix becomes: Matrix4x4.TRS(snappedPosition, rotation, scale).inverse

I made this function that creates a view matrix for the camera:

// Creates a view matrix for the camera, without snapping  
private Matrix4x4 GetViewMatrix()  
{  
	// Create a local-to-world matrix manually and inverse it.  
	return Matrix4x4.TRS(transform.position, transform.rotation, transform.lossyScale).inverse;  
}

Then I created another version, which will create a view matrix, whose position is snapped to the texture pixel size in world space:

// Creates a view matrix for the camera and snaps its position to the texture pixel size in world space.  
private Matrix4x4 GetSnappedViewMatrix(float3 pixelSizeWS)  
{  
	// Snap the world space position to the pixel size if it's big enough  
	float3 cameraPosition = transform.position;  
	float3 snappedCameraPosition = cameraPosition;  
	if (lengthsq(pixelSizeWS) > 0.00001f)  
		snappedCameraPosition.xz = round(cameraPosition.xz / pixelSizeWS.xz) * pixelSizeWS.xz; // Snap only in the XZ axes
	
	// Create a local-to-world matrix using the snapped position  
	Matrix4x4 localToWorldMatrix = Matrix4x4.TRS(snappedCameraPosition, transform.rotation, transform.lossyScale);  
	return localToWorldMatrix.inverse; // And invert it to create the view matrix  
}

Snapping ensures that the pixels in the current texture are aligned with those in the previous texture.

This is now implemented.

I then created a similar function to generate a projection matrix. It is always fixed and doesn't need to be snapped:

private static Matrix4x4 GetProjectionMatrix()  
{  
	// Create a projection matrix that renders from -1 to 1 in object space.  
	Matrix4x4 projectionMatrix = Matrix4x4.Ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.00001f, 2.0f);  
	projectionMatrix = projectionMatrix * Matrix4x4.Translate(new Vector3(0.0f, 0.0f, -1.0f)); // Shift the view volume so it is centered on the camera's origin
	
	return projectionMatrix;  
}

Calculate pixel size in world space

Now I need to calculate the pixel size of a texture in world space. I explained in the previous part of this article that I can calculate UVs of diagonal pixels in the texture, convert them into world space, and use the difference to get the pixel size.

Let's start by creating a function that will convert texture UV into a world space position:

public float3 TextureUVToWorldSpacePosition(float2 uv)  
{  
	// Calculate the clip space position from the UV  
	float4 positionCS = float4(uv * 2.0f - 1.0f, 0.0f, 1.0f); // Assume depth 0.
	
	// Convert clip space into view space using the inverse projection matrix.  
	// Ensure that it can execute properly when matrices are not set.  
	float4 positionVS = 0;  
	if (CurrentParameters.projectionMatrix.Equals(default))  
		positionVS = mul(GetProjectionMatrix().inverse, positionCS);  
	else  
		positionVS = mul(CurrentParameters.projectionMatrix.inverse, positionCS);
	
	// Convert view space into world space using the inverse view matrix.  
	// Ensure that it can execute properly when matrices are not set.  
	float4 positionWS = 0;  
	if (CurrentParameters.worldToCameraMatrix.Equals(default))  
		positionWS = mul(GetViewMatrix().inverse, positionVS);  
	else  
		positionWS = mul(CurrentParameters.worldToCameraMatrix.inverse, positionVS);
	
	// Return the normalized 3D position  
	return positionWS.xyz / positionWS.w;  
}

This is now implemented.

Now let's use this method to calculate pixel size:

// Calculate pixel size:  
float3 texel0WS = TextureUVToWorldSpacePosition(float2(0.5f, 0.5f) / (float)resolution);  
float3 texel1WS = TextureUVToWorldSpacePosition(float2(1.5f, 1.5f) / (float)resolution);  
float3 pixelSizeWS = texel1WS - texel0WS;
// And this is how I can create a snapped view matrix out of that.  
Matrix4x4 viewMatrix = GetSnappedViewMatrix(pixelSizeWS);

Updating camera parameters each frame

Now, let's update all the camera parameters in the LateUpdate.

private void LateUpdate()  
{  
	// Swap the buffers  
	(CurrentParameters, PreviousParameters) = (PreviousParameters, CurrentParameters);
	
	// Calculate pixel size:  
	float3 texel0WS = TextureUVToWorldSpacePosition(float2(0.5f, 0.5f) / (float)resolution);  
	float3 texel1WS = TextureUVToWorldSpacePosition(float2(1.5f, 1.5f) / (float)resolution);  
	float3 pixelSizeWS = texel1WS - texel0WS;
	
	// Update public parameters  
	HeatmapCameraParameters parameters = CurrentParameters;  
	parameters.projectionMatrix = GetProjectionMatrix(); // Creating a projection matrix  
	parameters.worldToCameraMatrix = GetSnappedViewMatrix(pixelSizeWS); // Using snapped view matrix  
	CurrentParameters = parameters; // overriding current parameters
	
	// Set global view and projection matrices for the heatmap rendering  
	Shader.SetGlobalMatrix(HeatmapRenderPass.Uniforms._HeatmapMatrixP, GL.GetGPUProjectionMatrix(parameters.projectionMatrix, false));  
	Shader.SetGlobalMatrix(HeatmapRenderPass.Uniforms._HeatmapMatrixV, parameters.worldToCameraMatrix);  
}

Update the gizmos

The code should currently run without issues. However, I have no way to tell whether the feature works correctly. To fix that, I will update the gizmo drawn by the component:

private void OnDrawGizmosSelected()  
{  
	// This is the previous gizmo, where I draw the frustum without snapping  
	// Draw a yellow frustum box  
	Gizmos.color = Color.yellow;  
	Gizmos.matrix = GetViewMatrix().inverse;  
	Gizmos.DrawWireCube(Vector3.zero, Vector3.one * 2.0f);
	
	// Draw a line from the near plane to the center  
	Gizmos.color = Color.yellow * new Color(1, 1, 1, 0.5f);  
	Gizmos.DrawLine(new Vector3(0.0f, 0.0f, -1.0f), Vector3.zero);
	
	// I added the part here, where I draw the snapped version of the frustum.  
	if (!CurrentParameters.worldToCameraMatrix.Equals(default))  
	{  
		// Draw an orange frustum box  
		Gizmos.color = Color.yellow * new Color(1, 0.5f, 1, 1.0f); // Draw more orange  
		Gizmos.matrix = CurrentParameters.worldToCameraMatrix.inverse;  
		Gizmos.DrawWireCube(Vector3.zero, Vector3.one * 2.0f);
		
		// Draw a line from the near plane to the center  
		Gizmos.color = Color.yellow * new Color(1, 1, 1, 0.5f) * new Color(1, 0.5f, 1, 1.0f); // Draw more orange  
		Gizmos.DrawLine(new Vector3(0.0f, 0.0f, -1.0f), Vector3.zero);  
	}  
}

Moving the heatmap camera to the player prefab

To make the camera follow the player, I added it to the player prefab.

Then I enabled the gizmos. In the video, you can see the yellow line, which represents the original view matrix center; the orange one is the snapped version, so the feature works correctly.

Update the rendering

Now both textures are ready, along with access to the current and previous frame parameters. It's time to write the shader that will copy the texture content from the previous frame into the current frame.

The plan is:

  1. Modify the rendering to execute a single draw call that will copy the texture content from the previous frame into the current frame. The copy should be rendered before any other content.

  2. Create the shader that executes the copy.

Modifying the render feature

Let's start by modifying the rendering feature. First, I will need to use another shader to initialize the current frame texture. I added a new material to the render pass:

public unsafe class HeatmapRenderPass : ScriptableRenderPass  
{  
...  
	
	internal Material heatmapRenderMaterial;  
	internal Material initializeTextureMaterial; // Added new material here
	
	public HeatmapRenderPass(Material heatmapRenderMaterial, Material initializeTextureMaterial)  
	{  
		this.heatmapRenderMaterial = heatmapRenderMaterial;  
		this.initializeTextureMaterial = initializeTextureMaterial; // And added it to the constructor  
		...  
	}  
	...

Now I modify the RecordRenderGraph function, which handles the CPU side of the rendering. First, I need to import both textures into the render graph.

public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)  
{  
	using (Markers.recordRenderGraph.Auto())  
	{  
		var heatmapCamera = HeatmapCamera.MainInstance;  
		
		...
		
		if (heatmapRenderMaterial == null || initializeTextureMaterial == null) // added new check here  
			return;  
		
		// Accessing the heatmap camera  
		HeatmapCameraParameters currentParameters = heatmapCamera.CurrentParameters;  
		HeatmapCameraParameters previousParameters = heatmapCamera.PreviousParameters;
		
		// Importing current and previous textures into the render graph  
		var currentHeatmapTextureRT = RTHandles.Alloc(currentParameters.heatmapTexture);  
		var currentHeatmapTextureHandle = renderGraph.ImportTexture(currentHeatmapTextureRT);  
		var previousHeatmapTextureRT = RTHandles.Alloc(previousParameters.heatmapTexture);  
		var previousHeatmapTextureHandle = renderGraph.ImportTexture(previousHeatmapTextureRT);

Then, I can add a new raster pass that will initialize the texture.

...  
var previousHeatmapTextureHandle = renderGraph.ImportTexture(previousHeatmapTextureRT);
// Create a new raster pass, before anything is rendered.  
using (Markers.initializeCurrentTexture.Auto()) // Added new profiling marker  
using (var rasterBuilder = renderGraph.AddRasterRenderPass<InitializePassData>($"{nameof(HeatmapRenderPass)}_InitializeHeatmap", out InitializePassData passData))  
{  
	// Render into a current texture, and declare that the previous texture will be used.  
	rasterBuilder.SetRenderAttachment(currentHeatmapTextureHandle, 0);  
	rasterBuilder.UseTexture(previousHeatmapTextureHandle);  
	  
	// Fill the current parameters and previous parameters in the PassData object.  
	passData.currentParameters = currentParameters;  
	passData.previousParameters = previousParameters;  
}
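
The InitializePassData type isn't shown in the article; presumably it is a small container class holding everything the render function needs, roughly along these lines (a hypothetical sketch):

// Hypothetical sketch of the pass data container  
private class InitializePassData  
{  
	public HeatmapCameraParameters currentParameters;  
	public HeatmapCameraParameters previousParameters;  
}  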

And now it's time for the draw call. I created a render function and set all the required resources: the previous texture and all the matrices. I will use a procedural draw to render a full-screen quad.

using (Markers.initializeCurrentTexture.Auto())  
using (var rasterBuilder = renderGraph.AddRasterRenderPass<InitializePassData>($"{nameof(HeatmapRenderPass)}_InitializeHeatmap", out InitializePassData passData))  
{  
...
rasterBuilder.SetRenderFunc((InitializePassData passData, RasterGraphContext context) =>  
{  
	// Get property block  
	var propertyBlock = context.renderGraphPool.GetTempMaterialPropertyBlock();
	
	// Set current parameters  
	propertyBlock.SetMatrix(Uniforms._HeatmapMatrixP_Inv, GL.GetGPUProjectionMatrix(passData.currentParameters.projectionMatrix, true).inverse);  
	propertyBlock.SetMatrix(Uniforms._HeatmapMatrixV_Inv, passData.currentParameters.worldToCameraMatrix.inverse);
	
	// Set previous parameters  
	propertyBlock.SetTexture(Uniforms._HeatmapPreviousTexture, passData.previousParameters.heatmapTexture);  
	propertyBlock.SetMatrix(Uniforms._HeatmapPreviousMatrixP, GL.GetGPUProjectionMatrix(passData.previousParameters.projectionMatrix, false));  
	propertyBlock.SetMatrix(Uniforms._HeatmapPreviousMatrixV, passData.previousParameters.worldToCameraMatrix);
	
	// Draw 2 triangles  
	context.cmd.DrawProcedural(Matrix4x4.identity, initializeTextureMaterial, 0, MeshTopology.Triangles, 6, 1, propertyBlock);  
});  
}
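
The Uniforms class referenced throughout these snippets isn't shown in the article either; presumably it just caches shader property IDs. A sketch of what it might contain:

// Hypothetical sketch: cached shader property IDs  
internal static class Uniforms  
{  
	public static readonly int _HeatmapMatrixV = Shader.PropertyToID("_HeatmapMatrixV");  
	public static readonly int _HeatmapMatrixP = Shader.PropertyToID("_HeatmapMatrixP");  
	public static readonly int _HeatmapMatrixV_Inv = Shader.PropertyToID("_HeatmapMatrixV_Inv");  
	public static readonly int _HeatmapMatrixP_Inv = Shader.PropertyToID("_HeatmapMatrixP_Inv");  
	public static readonly int _HeatmapPreviousTexture = Shader.PropertyToID("_HeatmapPreviousTexture");  
	public static readonly int _HeatmapPreviousMatrixV = Shader.PropertyToID("_HeatmapPreviousMatrixV");  
	public static readonly int _HeatmapPreviousMatrixP = Shader.PropertyToID("_HeatmapPreviousMatrixP");  
	public static readonly int _HeatmapObjectInstanceBuffer = Shader.PropertyToID("_HeatmapObjectInstanceBuffer");  
}  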

Creating a shader

Now that the CPU side of the rendering is ready, it's time to create the shader. As usual, I created a new shader file and started with this full-screen procedural draw template:

Shader "Heatmap/InitializeTexture"  
{  
	SubShader  
	{  
		Pass  
		{  
			Blend Off  
			Cull Off  
			ZTest Always  
			ZWrite Off
			
			HLSLPROGRAM  
			#pragma vertex vert  
			#pragma fragment frag
			
			// Fullscreen quad vertices in clip space  
			static const float4 fullscreenQuadCS[] =  
			{  
				float4(-1.0, -1.0, 0.0, 1.0),  
				float4(-1.0, 1.0, 0.0, 1.0),  
				float4(1.0, 1.0, 0.0, 1.0),  
				float4(-1.0, -1.0, 0.0, 1.0),  
				float4(1.0, 1.0, 0.0, 1.0),  
				float4(1.0, -1.0, 0.0, 1.0)  
			};
			
			struct FragmentData  
			{  
				float4 positionCS_SV : SV_Position;
				
				// I only need to use a clip space position in the fragment shader  
				float4 positionCS : TEXCOORD0;  
			};
			
			FragmentData vert (uint vertexID : SV_VertexID)  
			{  
				FragmentData output;
				
				// The vertex shader simply outputs a full-screen quad's positions in clip space.  
				output.positionCS = fullscreenQuadCS[vertexID];  
				output.positionCS_SV = fullscreenQuadCS[vertexID];  
				return output;  
			}
			
			float4 frag (FragmentData input) : SV_Target  
			{  
				return float4(1.0, 0.0, 0.0, 1.0);  
			}  
			ENDHLSL  
		}  
	}  
}

All of the shader's logic will live in the fragment shader. First, I need to declare all the properties I set from the C# API, just above the fragment function:

// Added a previous texture  
Texture2D _HeatmapPreviousTexture;  
SamplerState pointClampSampler; // Using point-clamp sampler to avoid hardware interpolation and reading outside of the texture.
// Added current and previous frame uniforms  
float4x4 _HeatmapMatrixP_Inv;  
float4x4 _HeatmapMatrixV_Inv;  
float4x4 _HeatmapPreviousMatrixP;  
float4x4 _HeatmapPreviousMatrixV;
float4 frag (FragmentData input) : SV_Target  
{  
	return float4(1.0, 0.0, 0.0, 1.0);  
}

Now it's time for some space transformations! The plan:

  1. Convert position from the current texture's clip space to the world space.

  2. Convert world space position into the previous texture clip space.

  3. For pixels outside of the previous texture's range, return black.

  4. For pixels in the previous texture range, convert their clip space into UV and sample the previous texture.

float4 frag (FragmentData input) : SV_Target  
{  
	// 1. Convert the current texture's clip space into world space.  
	float4 currentCS = input.positionCS;  
	float4 currentVS = mul(_HeatmapMatrixP_Inv, currentCS);  
	float4 positionWS = mul(_HeatmapMatrixV_Inv, currentVS);  
	positionWS /= positionWS.w;
	
	// 2. Convert the world space position into a previous texture clip space.  
	float4 previousVS = mul(_HeatmapPreviousMatrixV, positionWS);  
	float4 previousCS = mul(_HeatmapPreviousMatrixP, previousVS);  
	previousCS /= previousCS.w;
	
	// 3. If the position is outside of the texture, reset the color to black:  
	if (any(abs(previousCS.xy) >= float2(1.0, 1.0)))  
		return float4(0.0, 0.0, 0.0, 0.0);  
	
	// 4. Otherwise, calculate the previous texture UV and sample the texture:  
	float2 previousUV = previousCS.xy * 0.5 + 0.5;  
	return _HeatmapPreviousTexture.SampleLevel(pointClampSampler, previousUV, 0.0);  
}

Testing the feature

I created the material with this shader and assigned it to the renderer feature.

Then I made the heatmap camera render a small portion of the world, using a lower-resolution texture:

In gameplay, everything works as intended. Trails are properly preserved in world space and, as expected, data far from the player is lost because it falls outside the rendered texture. Additionally, the texture pixels don't flicker, which confirms that the view matrix snapping works correctly.



___

Bonus: rendering issues

Missing a single important detail in this implementation can cause various bugs. It is essential for me to understand how each of these rendering issues manifests, so I reproduced a few of them below.


No pixel snapping

Here, I disabled view matrix snapping.

// This  
parameters.worldToCameraMatrix = GetViewMatrix();
// Instead of this:  
parameters.worldToCameraMatrix = GetSnappedViewMatrix(pixelSizeWS);


No pixel snapping + bilinear texture filtering

Here, I disabled snapping the view matrix and switched from point filtering to bilinear filtering in the shader. Unity configures inline SamplerState objects based on keywords in their names, so renaming pointClampSampler to linearClampSampler is enough to change the filter mode:

// This:  
Texture2D _HeatmapPreviousTexture;  
SamplerState linearClampSampler;
// Instead of this:  
Texture2D _HeatmapPreviousTexture;  
SamplerState pointClampSampler;


Using a non-Y-flipped projection matrix

Here, I changed the boolean parameter of GL.GetGPUProjectionMatrix that controls whether the projection matrix is Y-flipped when rendering into a texture.

// This  
propertyBlock.SetMatrix(Uniforms._HeatmapMatrixP_Inv, GL.GetGPUProjectionMatrix(passData.currentParameters.projectionMatrix, false).inverse);
// Instead of this:  
propertyBlock.SetMatrix(Uniforms._HeatmapMatrixP_Inv, GL.GetGPUProjectionMatrix(passData.currentParameters.projectionMatrix, true).inverse);



Hungry for more?

I share rendering and optimization insights every week.


I write expert content on optimizing Unity games, customizing rendering pipelines, and enhancing the Unity Editor.

Copyright © 2025 Jan Mróz | Procedural Pixels
