Tutorial

Texture Baking - Trading VRAM for GPU performance

Feb 12, 2026

10 min

In this article I show how to use texture baking to trade VRAM for GPU performance.

If a material is too expensive, but the result is not animated, I often consider baking the result into a texture - then rendering the mesh with a much simpler shader at runtime.

Here I will explain the full optimization loop that includes the texture baking:

  1. Profiling a nice-looking but costly shader.

  2. Baking the expensive part into an albedo texture, with easy step-by-step instructions.

  3. Replacing the runtime shader with a cheap one.

  4. Profiling again and confirming the speedup.


___

Procedural rock texturing

In my project, I implemented a shader that textures rocks procedurally.

A single rock uses only a normal texture.


This is the normal texture used by this rock. The rest of the visuals are derived procedurally.



I vibe-coded a custom shader graph node for bevel edge detection. I used it on a normal map to get the sharp surface edges as a grayscale mask.

Here I combined 3 passes of edge detection, each with different parameters. That is 75 normal map samples in total.
Not something I want to use in production.
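For reference, a single pass of this kind of edge detection can be sketched like so. This is my reconstruction, not the author's actual node: I assume a 5x5 kernel per pass (3 passes x 25 taps matches the 75 samples above), and the function and parameter names are illustrative:

// Sketch of one edge-detection pass over a normal map (illustrative, assumes a 5x5 kernel)
float EdgeMask(Texture2D normalMap, SamplerState ss, float2 uv, float2 texelSize, float radius)
{
	// Decode the center normal from [0, 1] storage into [-1, 1]
	float3 center = normalize(normalMap.Sample(ss, uv).xyz * 2.0 - 1.0);
	float edge = 0.0;

	[unroll]
	for (int y = -2; y <= 2; y++)
	{
		[unroll]
		for (int x = -2; x <= 2; x++)
		{
			float2 offset = texelSize * radius * float2(x, y);
			float3 n = normalize(normalMap.Sample(ss, uv + offset).xyz * 2.0 - 1.0);
			edge += 1.0 - saturate(dot(center, n)); // normals diverge near sharp edges
		}
	}

	return saturate(edge);
}

Each of the three passes would call something like this with a different radius, which is exactly why the sample count explodes.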


Then I used triplanar mapping and sampled two textures, tiled wood and tiled mud. I added them together.
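A minimal triplanar sample looks roughly like this (a sketch with illustrative names, not the project's exact graph):

// Sample a texture projected along each world axis, blended by the normal
float3 TriplanarSample(Texture2D tex, SamplerState ss, float3 positionWS, float3 normalWS, float tiling)
{
	// Blend weights derived from the world-space normal
	float3 w = abs(normalWS);
	w /= (w.x + w.y + w.z);

	// Project the texture along the X, Y, and Z axes
	float3 x = tex.Sample(ss, positionWS.yz * tiling).rgb;
	float3 y = tex.Sample(ss, positionWS.xz * tiling).rgb;
	float3 z = tex.Sample(ss, positionWS.xy * tiling).rgb;

	return x * w.x + y * w.y + z * w.z;
}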


Then I combined edge detection with the triplanar mapping to get the final result. I used a color tint for both:
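The combine step boils down to a lerp driven by the edge mask. This is a sketch; the tint parameter names are illustrative:

// Blend the triplanar surface color with the edge color using the edge mask
float3 surface = triplanarA + triplanarB; // the two triplanar samples added together
float3 albedo = lerp(surface * _SurfaceTint.rgb, _EdgeTint.rgb, edgeMask);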


The final Shader Graph looks like this - an awful monster:


___

Profiling

Let's profile this shader. I profiled this scene at 1440p on my RTX 3060:


And as expected, the shader is painfully slow.


Most units are underutilized. The GPU produces pixels at 2% efficiency (screen pipe). Rendering 3 stones took 1.36ms.

What is wrong?

The shader uses too many texture samples. High-quality edge detection needs a lot of samples, which is not a good fit for a stylized runtime shader. In Nsight, you can see the shader spends most of its time stalled waiting for texture fetches (91.46%):



Not good.


___

How to optimize?

I like the look and I want to keep it in my game. How can I optimize it so the shader is not a bottleneck?

The answer is: I can trade memory for higher performance by baking the albedo texture for this mesh.

I can bake the albedo texture using this shader. Then I can use that albedo texture with a much simpler material, so I get the same result with only a few texture samples at runtime.


___

Texture baking basics

This may sound complex, but baking into a texture takes only a few steps:

  1. Render the mesh into a custom texture

  2. Modify the vertex shader to make it render into UV space

  3. Save the rendered texture as an asset



___

1. Render into a texture

When baking into a texture, the goal is to render a single mesh into a target texture, instead of the standard camera view. So I just need a render texture as the target.

Then I render the mesh with its material into the texture:

// Allocate texture and clear it
RenderTexture renderTexture = new RenderTexture(2048, 2048, 0);
cmd.SetRenderTarget(renderTexture);
cmd.ClearRenderTarget(false, true, Color.clear);

// Render the mesh into this texture
cmd.DrawMesh(mesh, meshRenderer.transform.localToWorldMatrix, material, 0, 0);


___

2. Render UV as clip-space

Now I need to figure out how to render the object UV-unwrapped, instead of using a classic perspective projection.

The trick here is to:

  1. Ignore the perspective projection.

  2. Remap vertex UV directly into a clip space position.

  3. Disable depth tests and backface culling.

The vertex shader must output the vertex position in clip space. The GPU expects clip space in the range from -1 to 1. I can remap UV space from [0, 1] into [-1, 1].


You can do this with one line in the vertex shader. Just before it finishes, override the clip space position:

output.positionCS.xyzw = float4(input.texCoord0.xy * 2.0 - 1.0, 0.0, 1.0);
#if UNITY_UV_STARTS_AT_TOP
	output.positionCS.y *= -1.0; // Flip Y on platforms where UV.y starts at the top of the texture, not at the bottom.
#endif


___

3. Save the texture

After rendering, I save the texture into an image file. Then it is ready to use with a different shader.


___

Implementation

Here is how I implemented texture baking in my project:

  1. I created a second version of the original shader for baking.

  2. I created an editor window to run the baking from.

  3. I allocated a custom texture and rendered the mesh into it.

  4. I saved the texture into an asset.

  5. I replaced the original material with the optimized one.

  6. I fixed the remaining issues.


___

1. Creating a second version of the original shader

First, I modified the original shader to render into UV space, instead of using a perspective projection.

I want to render the unaltered albedo for this pass, so I changed the shader type from Lit to Unlit to avoid applying lighting when baking the model.



Unity does not make it easy to override clip space position in Shader Graph, so I generated the shader code from the graph and modified it.



I changed the shader name and added a _Baking suffix:

// old name
Shader "Shader Graphs/StoneShader"

// new name
Shader "Shader Graphs/StoneShader_Baking"

Then, I disabled depth testing and backface culling in the first pass:

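In the generated .shader file, this boils down to a few render-state overrides inside the baking pass. The snippet below is a sketch of the idea; exact placement depends on the generated code:

Pass
{
	Cull Off     // keep back-facing triangles, their UV islands must be baked too
	ZWrite Off
	ZTest Always // depth is meaningless when rendering in UV space

	// ...rest of the pass
}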


Then I modified the PackVaryings function to override the calculated clip-space position:

PackedVaryings PackVaryings (Varyings input)
{
	PackedVaryings output;
	ZERO_INITIALIZE(PackedVaryings, output);
	//output.positionCS = input.positionCS; // I commented out this line

	// And replaced it with custom clip-space calculation. Remapping UV into clip space:
	output.positionCS.xyzw = float4(input.texCoord0.xy * 2.0 - 1.0, 0.0, 1.0);
	#if UNITY_UV_STARTS_AT_TOP
		output.positionCS.y *= -1.0;
	#endif

	// ...the rest of the generated packing code is unchanged

Now the shader is ready. I saved the generated shader as a .shader asset in my project:



___

2. Creating the editor window

I want to run baking from an editor window. I started by creating the data needed to bake the object into a texture:

public class TextureBakingData : ScriptableObject
{
	public MeshRenderer meshRenderer; // Renderer to bake
	public Shader bakingShader; // Shader to use for baking
	public int targetResolution = 2048; // Target texture resolution
}


Then I created an editor window:

using System;
using System.IO;
using UnityEditor;
using UnityEngine;
using UnityEngine.Rendering;

public sealed class TextureBakingWindow : EditorWindow
{
	[SerializeField]
	private TextureBakingData data; // Data for this window
	private Editor dataEditor; // It will draw the inspector for the data above

	private RenderTexture bakedTexture; // Baked texture

	[MenuItem("Tools/Texture baking")]
	public static void Open()
	{
		GetWindow<TextureBakingWindow>("Texture Baking");
	}

	private void OnEnable()
	{
		// If the data doesn't exist, create one
		if (data == null)
		{
			data = CreateInstance<TextureBakingData>();
			data.hideFlags = HideFlags.DontSave;
		}

		// Create an editor that will display the data
		dataEditor = Editor.CreateEditor(data);
	}

	private void OnGUI()
	{
		// Draw the inspector for the data
		dataEditor.DrawDefaultInspector();

		// Button that will trigger the baking
		if (GUILayout.Button("Bake"))
			bakedTexture = BakeTexture();

		// If the texture is already baked
		if (bakedTexture != null)
		{
			// Button to save the texture
			if (GUILayout.Button("Save"))
				SaveTexture();

			// Draw the texture in the inspector
			var rect = EditorGUILayout.GetControlRect(false, position.width);
			GUI.DrawTexture(rect, bakedTexture);
		}
	}

	private RenderTexture BakeTexture()
	{
		// Implemented in the next section.
		return null;
	}

	private void SaveTexture()
	{
		// Implemented in the next section.
	}
}


This is what the window looks like:



___

3. Rendering the mesh into the texture

Now it's time to render the mesh into the custom texture. Let's implement the method that bakes the texture. The comments explain each step:

private RenderTexture BakeTexture()
{
	// Access the mesh filter of the renderer
	MeshFilter meshFilter = data.meshRenderer.GetComponent<MeshFilter>();
	if (meshFilter == null)
		return null;

	// Access the mesh used by the renderer
	Mesh mesh = meshFilter.sharedMesh;
	if (mesh == null)
		return null;

	// Access the material used by the renderer
	Material material = data.meshRenderer.sharedMaterial;
	if (material == null)
		return null;

	// Create a material copy
	material = new Material(material);
	material.hideFlags = HideFlags.DontSave;

	// And override the shader to use the "baking" version
	material.shader = data.bakingShader;

	// Get the command buffer that will be used for baking
	CommandBuffer cmd = CommandBufferPool.Get(nameof(TextureBakingWindow));

	// Allocate the target texture and clear it
	RenderTexture renderTexture = new RenderTexture(data.targetResolution, data.targetResolution, 0);
	cmd.SetRenderTarget(renderTexture);
	cmd.ClearRenderTarget(false, true, Color.clear);

	// Bake texture contents by rendering the mesh with a "baking" shader.
	cmd.DrawMesh(mesh, data.meshRenderer.transform.localToWorldMatrix, material, 0, 0);
	Graphics.ExecuteCommandBuffer(cmd); // Execute all the commands

	// Cleanup resources
	CommandBufferPool.Release(cmd);
	DestroyImmediate(material);

	return renderTexture;
}


And look, baking already works!


___

4. Saving the texture into an asset

Now it's time to save the texture into a file.

I open a file dialog to choose where to save it.

private void SaveTexture()
{
	// Display a Windows Explorer dialog that asks the user where to save the texture
	string path = EditorUtility.SaveFilePanel(
		"Save baked texture as PNG",
		"Assets",
		"BakedTexture.png",
		"png"
	);

	if (string.IsNullOrWhiteSpace(path))
		return;

	if (!path.EndsWith(".png", StringComparison.OrdinalIgnoreCase))
		path += ".png";

	// Texture2D always reads pixels from the RenderTexture.active texture, so I need to cache it.
	RenderTexture previous = RenderTexture.active;

	try
	{
		// Set active render texture to baked texture.
		RenderTexture.active = bakedTexture;

		// Allocate new Texture2D
		var tex2D = new Texture2D(bakedTexture.width, bakedTexture.height, TextureFormat.RGBA32, false, true);

		// Read pixels from baked texture into the allocated texture.
		tex2D.ReadPixels(new Rect(0, 0, bakedTexture.width, bakedTexture.height), 0, 0);
		tex2D.Apply(false, false);

		// Encode texture as PNG
		byte[] png = tex2D.EncodeToPNG();

		// Destroy the texture.
		DestroyImmediate(tex2D);

		// Write the PNG bytes into the file.
		File.WriteAllBytes(path, png);

		// Refresh the asset database, it will force Unity to import the texture
		AssetDatabase.Refresh();
	}
	catch (Exception e)
	{
		// If something fails, log the exception
		Debug.LogException(e);
		EditorUtility.DisplayDialog("Save Texture Failed", e.Message, "OK");
	}
	finally
	{
		// Restore the previously set render texture to not break anything.
		RenderTexture.active = previous;
	}
}


Here is saving in action:


Now I have the baked texture in the assets!


___

5. Replacing the original material with the optimized one

I made a shader that uses only the albedo texture and the normal map to render the objects.



Then I created a material that uses the baked texture.
In the video below, I toggle between the original shader and the baked texture. The baked texture is the brighter one.


There is a problem. Black pixels appear on the UV seams of the model.



___

6. Fixing the issues

Now I need to fix the black pixels. They appear because bilinear filtering mixes in neighboring texels that were never baked, and that background color bleeds into the model.


I can fix this by adding dilation. It works by flooding the non-baked pixels with the neighboring baked ones.


I used this fragment shader, applied as a blit over the baked texture, to do the dilation:

float4 frag(FragmentData input) : SV_Target0
{
	const float AlphaThreshold = 0.1;

	uint w, h;
	_MainTex.GetDimensions(w, h);
	float2 texel = 1.0 / max(float2((float)w, (float)h), 1.0.xx);

	float2 uv = input.uv;
	float4 c = _MainTex.Sample(linearRepeatSampler, uv);

	// 3x3 dilation: if the pixel is below the threshold, "flood" from the strongest neighbor above threshold.
	float4 best = c;
	if (best.a <= AlphaThreshold)
	{
		// Pick the max alpha sample in the 3x3 kernel
		[unroll]
		for (int y = -1; y <= 1; y++)
		{
			[unroll]
			for (int x = -1; x <= 1; x++)
			{
				float4 s = _MainTex.Sample(linearRepeatSampler, uv + texel * float2(x, y));
				best = (s.a > best.a) ? s : best;
			}
		}

		// Only flood if a neighbor is above threshold.
		best = (best.a > AlphaThreshold) ? best : c;
	}

	return best;
}


Then I added a dilation step to the baking algorithm:

...
Graphics.ExecuteCommandBuffer(cmd);

// After the texture is baked, allocate temporary texture
RenderTexture renderTextureTemp = new RenderTexture(data.targetResolution, data.targetResolution, 0);

// Execute 10 dilation steps
for (int i = 0; i < 10; i++)
{
	// Each step blits twice using ping-pong buffer
	Graphics.Blit(renderTexture, renderTextureTemp, data.dilateMaterial);
	Graphics.Blit(renderTextureTemp, renderTexture, data.dilateMaterial);
}

// Cleanup resources
CommandBufferPool.Release(cmd);
DestroyImmediate(material);
renderTextureTemp.Release(); // added this one


This is the difference between the dilated texture and the original one:


It fixed the seams:


___

Profiling after optimization

Now it's time to compare the performance of the original shader to the optimized one. Again, I profiled on an RTX 3060 at 1440p.

The baked texture rendered 4.5x faster in this case!
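Taking the earlier 1.36 ms measurement at face value, the arithmetic works out to roughly:

1.36 ms / 4.5 ≈ 0.30 ms for the same three stones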


The shader is no longer bound by texture fetches:



___

Summary

I optimized the shader by baking it into a texture. This lets me skip the costly computation in the shader and replace it with a single texture read.

This is a good optimization technique, but it trades memory for GPU performance. In this case, the shader renders 4.5x faster. However, it adds 5.3MB to VRAM.
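The 5.3MB figure is consistent with a 2048x2048 texture compressed at 1 byte per pixel (e.g. BC7) plus a full mip chain; the exact import settings are my assumption:

2048 x 2048 px x 1 B/px      = 4 MiB
4 MiB x 4/3 (full mip chain) ≈ 5.33 MiB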



Pros:

  • You keep the same look, but move the heavy work out of the runtime shader.

  • You reduce the number of texture samples and expensive math per pixel.

  • You get more stable performance on multiple devices, because sampling one texture has more predictable behaviour than complex algorithms.

  • Baked textures are easy to reuse across multiple instances of the same mesh.

Cons:

  • You spend VRAM and disk space for the baked textures.

  • You add an authoring step and more tooling, which can slow iteration.

  • Baking can expose UV issues (seams, padding, filtering), so you may need extra fixes like dilation.

  • If the look depends on time or dynamic inputs, baking reduces or removes that flexibility.

In short, texture baking is a good option when your shader is the bottleneck and the result can be treated as static. If you are already close to your memory budget, or your material needs to stay fully dynamic, this trade might not be worth it.


And, as a bonus, a cheatsheet:


Hungry for more?

I share rendering and optimization insights every week.


I write expert content on optimizing Unity games, customizing rendering pipelines, and enhancing the Unity Editor.

Copyright © 2026 Jan Mróz | Procedural Pixels
