Render Graph - Creative Shadowcasting

Jul 3, 2025


Read-time: 11 min

In this article, I modify my rain atmosphere feature to keep surfaces dry when they are under some form of cover.

To achieve this, I will utilize a shadowcasting technique. The roofs of the buildings will cast a shadow, which I will use to modulate the intensity of the effect.

:center-px:

:image-description:

This is the final result


___

Contents:

  1. Explaining how the effect works so far.

  2. Explaining the shadowcasting technique.

  3. Implementing shadowcasting to modulate the effect.


___

How the original effect works

The effect was implemented in the deferred rendering path by modifying the GBuffer contents before the lighting was calculated. The effect reduces the albedo and GI and then increases the smoothness to mimic the wet look.

This is how the lighting initially worked: Unity was rendering a GBuffer, which was then used by the lighting algorithm to produce a color buffer.

:center-px:

:image-description:

This is how Unity's render graph nodes look. The GBuffer is fed into the Deferred Lighting pass.

The rain atmosphere effect works by modifying the GBuffer just before the Deferred Lighting pass. It copies the GBuffer content using the InitializeRainResources node, which creates a shared resource, RainResourceData, that holds a copy of the GBuffer.

ApplyRainAtmosphere modifies the original GBuffer to increase smoothness and reduce global illumination and albedo, making surfaces appear more wet. The effect is applied using a full-screen shader, similar to a post-processing effect.

:center-px:

:image-description:

Current implementation of the effect.

:center-px:


___

Shadowcasting technique

I want to modify the effect to make the surfaces look dry when they are under any cover. I want to use the shadowcasting technique for that.

Classic shadowcasting works by rendering the depth of the scene from the light's point of view. This depth texture rendered by the light is called a shadowmap. Then, when shading a surface, the lighting algorithm compares the surface's distance to the light with the distance stored in the shadowmap to decide whether the surface is in shadow.
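The comparison at the heart of the technique is tiny. A minimal sketch in Python (illustrative names, distances in world units):

```python
def in_shadow(distance_to_light: float, shadowmap_depth: float, bias: float = 0.001) -> bool:
    """A surface is shadowed when something closer to the light already
    wrote a smaller depth into the shadowmap."""
    return distance_to_light > shadowmap_depth + bias

# An occluder at distance 5.0 from the light was rendered into the shadowmap.
assert not in_shadow(5.0, 5.0)  # the occluder itself is lit
assert in_shadow(8.0, 5.0)      # a surface behind it is in shadow
```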

In my case, I like to interpret the falling rain as a directional light that emits "wetness" instead of light. Let's see how we can implement that using this image.

:center-px:


1. Rendering depth from the light point-of-view

I will place the camera at the light source. This camera will render a depth-only scene. The created texture is called a shadowmap.

:center-px:

:image-description:

Vertical lines represent the depth values stored within the shadowmap.


2. The lighting algorithm uses shadowmap to estimate shadow

The lighting algorithm uses the shadowmap to calculate the shadows when rendering with the main camera.

When calculating the lighting for any surface, the algorithm compares the distance to the light source with the distance saved in the shadowmap.

If the distance to the light source is smaller or close to the distance saved in the shadowmap - there is no shadow.

:center-px:

:image-description:

The distance to the light source matches the distance saved in the shadowmap - the surface is lit.

When the distance from the surface to the light is larger than the distance saved in the shadowmap - there is a shadow.

:center-px:

:image-description:

The distance to the light source is larger than the distance stored in the shadowmap - there is a shadow.

To make this technique possible, we need a way to convert world space position into shadowmap UV. This can be done using the view and projection matrices of the camera that was used to render the shadowmap.

If we do this in a post-process pass, we need to reconstruct the world space position from the main camera's depth buffer, because the world space position is not stored within the GBuffer.
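To make the matrix chain concrete, here is a Python/NumPy sketch of the world-to-shadowmap-UV conversion with a made-up top-down orthographic camera (the matrices and numbers are hypothetical, not Unity's):

```python
import numpy as np

def orthographic(half_size: float, near: float, far: float) -> np.ndarray:
    # Symmetric OpenGL-style orthographic projection: view-space box -> NDC [-1, 1].
    return np.array([
        [1.0 / half_size, 0.0, 0.0, 0.0],
        [0.0, 1.0 / half_size, 0.0, 0.0],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ])

def world_to_shadowmap_uv(position_ws, view, proj):
    position_cs = proj @ view @ np.append(position_ws, 1.0)
    ndc = position_cs[:3] / position_cs[3]   # perspective divide (a no-op for ortho)
    return ndc[:2] * 0.5 + 0.5, ndc[2]       # UV in [0, 1] plus the NDC depth

# Hypothetical rain camera 10 units above the origin, looking straight down (-Y).
view = np.array([
    [1.0, 0.0, 0.0, 0.0],    # view X = world X
    [0.0, 0.0, 1.0, 0.0],    # view Y = world Z
    [0.0, 1.0, 0.0, -10.0],  # view Z = world Y - 10 (camera forward is view -Z)
    [0.0, 0.0, 0.0, 1.0],
])
proj = orthographic(20.0, 0.1, 50.0)

# A point at the world origin lands in the middle of the shadowmap.
uv, depth = world_to_shadowmap_uv(np.array([0.0, 0.0, 0.0]), view, proj)
```

The shader implementation later in this article performs the same chain with _RainCamera_MatrixV and _RainCamera_MatrixP.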


___

Implementing shadowcasting to modulate the effect

To implement the shadowcasting, I need to get this data into the shader that applies the effect:

  • Main camera depth buffer - to reconstruct world space position from depth.

  • View and projection matrices of the shadowmap camera - used to convert a world space position into shadowmap UV.

  • Shadowmap texture.

I will divide the implementation into a few steps:

  1. Configuring a camera that will render a shadowmap.

  2. Modifying the render feature - forwarding the rendered shadowmap into the shader.

  3. Implementing shadowcasting in the shader.


___

1. Configuring shadowmap camera

I created a new camera and placed it on the scene. I set the camera to render orthographic and made it render top-down, capturing most of the scene.

:center-px:

Then I created a RainCamera component. This component is responsible for creating a shadowmap texture and assigning it to the camera.

The component creates the shadowmap texture using a depth-only texture format, renders the camera into it, and saves the parameters for later use in the render feature.

The RainCameraParameters will be used in the render feature to render the effect.

using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Experimental.Rendering;

[RequireComponent(typeof(Camera))]
[ExecuteAlways]
public class RainCamera : MonoBehaviour
{
    // Keeping track of all created instances.
    public static List<RainCamera> AllInstances { get; private set; } = new();

    // Main instance - last registered one.
    public static RainCamera MainInstance => AllInstances.LastOrDefault();

    // All parameters that define the shadowmap. Those will be used in the render feature.
    public RainCameraParameters Parameters { get; private set; }

    private void OnEnable()
    {
        // Registering the new instance.
        AllInstances.Add(this);

        // Create a shadowmap texture
        RenderTexture ShadowmapTexture = new RenderTexture(1024, 1024, GraphicsFormat.None, GraphicsFormat.D32_SFloat, 0);
        ShadowmapTexture.name = $"{gameObject.name}_ShadowmapTexture"; // Assigning the name makes debugging easier.

        // Render the shadowmap
        var camera = GetComponent<Camera>();
        camera.targetTexture = ShadowmapTexture;
        camera.Render();

        // I'm assuming that the rendered environment is static, so the camera can be disabled after the first render.
        camera.enabled = false; 

        // Save parameters of the shadowmap. Those will be used in the render feature.
        Parameters = new RainCameraParameters()
        {
            // Matrices are required to read and convert the world space position to the UV of the shadowmap.
            worldToCameraMatrix = camera.worldToCameraMatrix,
            projectionMatrix = camera.nonJitteredProjectionMatrix,

            // Shadowmap will be sampled in the shader.
            shadowmapTexture = ShadowmapTexture,
        };
    }

    private void OnDisable()
    {
        // Unregistering the instance.
        AllInstances.Remove(this);

        // Release resources allocated in the OnEnable method.
        Parameters.shadowmapTexture.Release();
    }
}

// All parameters that define the rain camera. 
public struct RainCameraParameters
{
    public Matrix4x4 worldToCameraMatrix;
    public Matrix4x4 projectionMatrix;
    public RenderTexture shadowmapTexture;
}
I assigned the component to the created camera. After enabling the camera in the inspector, the shadowmap looks correct in the Frame Debugger.

:center-px:

:image-description:

Camera properly renders the scene content, checked in the Frame Debugger window.


___

2. Modifying the render feature

Now, I need to find a way to bind this shadowmap and camera depth buffer to the shader that renders the effect.

InitializeRainResourcesPass was used to prepare all the resources used by the effect, so it looks like a perfect place to set shadowmap parameters.

RainResourceData is a class that shares all the resources of the effect with other passes. I decided to add the shadowmap parameters here.

public class RainResourceData : ContextItem
{
    public TextureHandle normalSmoothnessTexture { get; internal set; }

    // Added shadowmap camera parameters here.
    public RainCameraParameters? rainCameraParameters { get; internal set; } = null;

    public override void Reset()
    {
        normalSmoothnessTexture = TextureHandle.nullHandle;

        // Resetting it here.
        rainCameraParameters = null;
    }
}

If the RainCamera component is present in the scene, I set the parameters for the other passes.

public class InitializeRainResourcesPass : ScriptableRenderPass
{
...
    public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
    {
        RainResourceData rainResources = frameData.GetOrCreate<RainResourceData>();

        var urpResources = frameData.Get<UniversalResourceData>();
        var gBuffer = urpResources.gBuffer;

        if (gBuffer == null || gBuffer.Length == 0 || gBuffer[0].Equals(TextureHandle.nullHandle))
            return;

        rainResources.normalSmoothnessTexture = renderGraph.CreateTexture(gBuffer[2].GetDescriptor(renderGraph));

        // If a rain camera is present, share its parameters with other passes.
        RainCamera mainRainCamera = RainCamera.MainInstance;
        if (mainRainCamera != null)
            rainResources.rainCameraParameters = mainRainCamera.Parameters;

        using (var builder = renderGraph.AddRasterRenderPass(GetType().Name, out PassData passData))
        {
...

Then I moved to the ApplyRainAtmospherePass, where all rendering is implemented. I need to do two things here:

  • Forward the shadowmap parameters to the shader.

  • Forward the camera depth buffer to the shader.

I modified the PassData that stores resources used in the render function.

internal class PassData
{
	internal Material material;
	internal TextureHandle gBuffer2;

	// Added camera depth handle to use it in the render function.
	internal TextureHandle cameraDepthTexture;

	// Added shadowmap parameters to use it in the render function.
	internal RainCameraParameters? rainCameraParameters;
}

Then, I modified the render pass that applies the rain effect. I accessed the shared resources and forwarded them into the render function. Only the commented lines were added.

    public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
    {
        UniversalResourceData urpResources = frameData.Get<UniversalResourceData>();
        TextureHandle[] gBuffer = urpResources.gBuffer;

        // Accessing the active depth buffer from the camera
        TextureHandle depthBuffer = urpResources.activeDepthTexture;

        if (gBuffer == null || gBuffer.Length == 0 || gBuffer[0].Equals(TextureHandle.nullHandle))
            return;

        RainResourceData rainResources = frameData.Get<RainResourceData>();

        using (var builder = renderGraph.AddRasterRenderPass(GetType().Name, out PassData passData))
        {
            builder.UseTexture(rainResources.normalSmoothnessTexture);

            // Declare the usage of the depth buffer to avoid render graph resource stripping.
            // The shadowmap is always present in memory, so its usage doesn't need to be declared.
            builder.UseTexture(depthBuffer);

            passData.gBuffer2 = rainResources.normalSmoothnessTexture;
            passData.material = material;

            // Forwarding the camera's depth texture to the render function.
            passData.cameraDepthTexture = depthBuffer;

            // Forwarding rain camera parameters to the render function
            passData.rainCameraParameters = rainResources.rainCameraParameters;

            for (int i = 0; i < gBuffer.Length; i++)
                builder.SetRenderAttachment(gBuffer[i], i, AccessFlags.Write);

            builder.SetRenderFunc((PassData passData, RasterGraphContext context) => RenderFunction(passData, context));
        }
    }

Then, I modified the render function to set all the resources in the shader. I used MaterialPropertyBlock from the render graph pool.

_CameraDepthTexture - Game camera's depth texture

_RainCamera_Shadowmap - Rendered shadowmap

_RainCamera_MatrixV, _RainCamera_MatrixP - view and projection matrices of the camera that rendered the shadowmap (required to convert a world space position to shadowmap UV).

public static class Uniforms
{
    internal static readonly int _GBuffer2 = Shader.PropertyToID(nameof(_GBuffer2));

    // I added those uniform constants.
    internal static readonly int _CameraDepthTexture = Shader.PropertyToID(nameof(_CameraDepthTexture));
    internal static readonly int _RainCamera_Shadowmap = Shader.PropertyToID(nameof(_RainCamera_Shadowmap));
    internal static readonly int _RainCamera_MatrixV = Shader.PropertyToID(nameof(_RainCamera_MatrixV));
    internal static readonly int _RainCamera_MatrixP = Shader.PropertyToID(nameof(_RainCamera_MatrixP));
}

private static void RenderFunction(PassData passData, RasterGraphContext context)
{
	MaterialPropertyBlock propertyBlock = context.renderGraphPool.GetTempMaterialPropertyBlock();
	propertyBlock.SetTexture(Uniforms._GBuffer2, passData.gBuffer2);

	// Setting camera depth buffer.
	propertyBlock.SetTexture(Uniforms._CameraDepthTexture, passData.cameraDepthTexture);

	// If a rain camera exists, set up shader uniforms.
	if (passData.rainCameraParameters.HasValue)
	{
		propertyBlock.SetTexture(Uniforms._RainCamera_Shadowmap, passData.rainCameraParameters.Value.shadowmapTexture);
		propertyBlock.SetMatrix(Uniforms._RainCamera_MatrixV, passData.rainCameraParameters.Value.worldToCameraMatrix);
		Matrix4x4 projectionMatrix = GL.GetGPUProjectionMatrix(passData.rainCameraParameters.Value.projectionMatrix, true);
		propertyBlock.SetMatrix(Uniforms._RainCamera_MatrixP, projectionMatrix);
	}
	else
	{
		// Otherwise, if the rain camera is not present, set all uniforms to default
		propertyBlock.SetTexture(Uniforms._RainCamera_Shadowmap, context.defaultResources.defaultShadowTexture);
		propertyBlock.SetMatrix(Uniforms._RainCamera_MatrixV, Matrix4x4.identity);
		propertyBlock.SetMatrix(Uniforms._RainCamera_MatrixP, Matrix4x4.identity);
	}

	context.cmd.DrawProcedural(Matrix4x4.identity, passData.material, 0, MeshTopology.Triangles, 6, 1, propertyBlock);
}


___

3. Implementing shadowcasting in the shader

After all the required matrices and textures are forwarded to the shader, I need to implement the shadowcasting inside it. There are a few steps to be done:

  1. Declare all resources and uniforms in the shader.

  2. Reconstruct world space position from the depth buffer.

  3. Calculate shadowmap UV from the world space position.

  4. Calculate the rain-camera depth value using the world space position.

  5. Compare the calculated depth with the value in the shadowmap.


___

3.1 Declare all resources in the shader

I placed all required uniforms and resources in the shader code.

SamplerState pointClampSampler;
SamplerState linearClampSampler; // Used later for shadowmap sampling.
Texture2D _GBuffer2;

// Added camera depth texture and shadowmap
Texture2D _CameraDepthTexture;
Texture2D _RainCamera_Shadowmap;

// Added rain camera matrices
float4x4 _RainCamera_MatrixV;
float4x4 _RainCamera_MatrixP;

float _AlbedoMultiplier;
float _AmbientLightMultiplier;


___

3.2 Reconstruct world space position from the depth buffer

Now, I need the world space position to use in the shadowcasting algorithm. Because the GBuffer doesn't contain world space positions, I need to derive it from the value in the depth buffer.

The goal is to sample a raw depth buffer value. Then, I will reconstruct the clip space position of the game camera using the screen UV and this raw depth value. The clip space position can then be converted into a world space position using the inverse view-projection matrix.

This is how I read the depth value from the depth buffer.

// Get raw depth buffer value
float rawScreenDepth = _CameraDepthTexture.SampleLevel(pointClampSampler, IN_uv, 0.0f).r;

Then I can reconstruct the clip space position using the UV and the raw depth value.

The screen UV range is [0, 1], while clip space XY is in [-1, 1]. I just need to remap the UV to [-1, 1] and set the Z value to the sampled depth.

// Reconstructing Clip Space position from UV and raw depth.
float4 positionCS = float4((IN_uv.xy * 2.0 - 1.0), rawScreenDepth, 1.0);
#if UNITY_UV_STARTS_AT_TOP
	positionCS.y *= -1.0; // Some rendering APIs invert the Y axis in Unity.
#endif

Next, I need to convert the clip space position into a world space position. Vertex shaders usually use the view-projection matrix to convert a world space position into clip space, so I can use the inverse view-projection matrix to reverse that transformation. Unity stores the inverse view-projection matrix in UNITY_MATRIX_I_VP.

// Use the inverse view-projection matrix to transform CS into WS.
float4 positionWS = mul(UNITY_MATRIX_I_VP, positionCS); 
positionWS /= positionWS.w; // The final value is not normalized. I normalize it by dividing it by the W component.
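The reconstruction can be sanity-checked outside the shader: project a known world point to NDC, then recover it with the inverse view-projection matrix and the w-divide, mirroring what UNITY_MATRIX_I_VP does. A NumPy sketch with hypothetical matrices:

```python
import numpy as np

def perspective(fov_y_deg: float, aspect: float, near: float, far: float) -> np.ndarray:
    # OpenGL-style perspective projection (illustrative, not Unity's exact matrix).
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

view = np.eye(4)
view[2, 3] = -5.0  # hypothetical camera 5 units back along +Z
vp = perspective(60.0, 16.0 / 9.0, 0.1, 100.0) @ view

point_ws = np.array([1.0, 2.0, -3.0, 1.0])

# Forward: world -> clip -> NDC (this is what the rasterizer and depth buffer store).
clip = vp @ point_ws
ndc = clip / clip[3]

# Backward: NDC -> world using the inverse view-projection, then the w-divide.
position_ws = np.linalg.inv(vp) @ ndc
position_ws /= position_ws[3]
```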

When making such changes, I like to print the results on the screen using just an emission buffer. It is helpful to see if the depth data is sampled properly or if the reconstructed position WS is correct.

OUT_GBuffer0.rgba = 0.0;
OUT_GBuffer2.rgba = 0.0;
OUT_GBuffer3.rgb = frac(positionWS.xyz); // Display position WS

// Or 

OUT_GBuffer3.rgba = rawScreenDepth * 200.0; // Display raw depth, rescaled to actually see some of the values on the screen

:center-100:

:image-description:

From the left: original image, displayed raw depth, displayed position WS. All of these should be distorted in accordance with the 3D shapes. This image shows correctly sampled depth and world space position.


___

3.3 Calculate shadowmap UV from the world space position

Now, I can use the world space position to calculate the UV of the shadowmap. I will use the view and projection matrices of the rain camera to calculate the clip space position.

float4 rainPositionVS = mul(_RainCamera_MatrixV, positionWS); // Convert world space to shadowmap camera view
float4 rainPositionCS = mul(_RainCamera_MatrixP, rainPositionVS); // Convert shadowmap camera view space to shadowmap clip space
rainPositionCS /= rainPositionCS.w; // Normalize the clip space to [-1, 1] values (NDC)

float2 rainUV = rainPositionCS.xy * 0.5 + 0.5; // Remap clip-space to shadowmap UV.
#if UNITY_UV_STARTS_AT_TOP
	rainUV.y = 1.0 - rainUV.y; // And flip UV.y if I'm in Australia
#endif

Again, to check if everything went correctly, I printed UV to the emission buffer:

:center-50:

:image-description:

UV of a shadowmap displayed on the screen.

It appears that the shadowmap UV is calculated correctly.


___

3.4 Calculate rain-camera depth and shadowmap sampling

It is the right time to sample the shadowmap and calculate the rain-camera depth value for the surface.

I can sample the shadowmap using the calculated UV:

float rawRainDepth = _RainCamera_Shadowmap.SampleLevel(linearClampSampler, rainUV, 0.0f).r;

And when printed to the screen, it looks correct:

:center-50:

:image-description:

Shadowmap values sampled for a previously calculated world space position. You can notice that buildings stand out because the depth value under the roofs is the same as for the roofs. This is a correct result.

I need to get the rain-camera depth value for the surface using the world space position, but...

...it is actually the Z value of the rain camera clip space. I can just use that because it is already calculated.

float surfaceRainDepth = rainPositionCS.z;

:center-50:

:image-description:

Notice that the calculated depth values look more correct. That's the key: we will compare the value in the shadowmap with the calculated value to determine whether there is a shadow.


___

3.5 Calculating the shadow

It's time to compare the calculated depth value with the value read from the shadowmap. Note that Unity uses a reversed-Z depth buffer on most platforms (larger values are closer to the camera), so the comparison is flipped relative to the classic description above: if the calculated value is smaller than the value in the shadowmap, there is a shadow.

However, the values in the shadow map are quantized and not perfect. I need to apply some offset to reduce artifacts. I tested a few different offsets, and 0.002 works fine. Later, I could polish the effect and calculate this offset on the fly instead of hardcoding it, but right now, it's fine.

The offset should depend on the rain-camera render distance and depth encoding.
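The need for the offset is easy to show with plain numbers: a quantized shadowmap value can make a surface fail the comparison against its own sample. A Python sketch (illustrative precision and values), using the same flipped comparison as the shader:

```python
def quantize(depth: float, steps: int = 1024) -> float:
    """Simulate limited shadowmap precision."""
    return round(depth * steps) / steps

def in_shadow(surface_depth: float, stored_depth: float, bias: float) -> bool:
    # Reversed-Z style: the surface is shadowed when its depth value is
    # smaller (farther from the rain camera) than the stored occluder depth.
    return surface_depth < stored_depth - bias

surface = 0.73624
stored = quantize(surface)  # the surface "sees itself" in the shadowmap

# Without a bias, quantization alone can flag the surface as shadowed (acne).
acne_possible = in_shadow(surface, stored, bias=0.0)
assert acne_possible
# With a bias larger than the quantization error, it never does.
assert not in_shadow(surface, stored, bias=0.002)
```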

float shadowMultiplier = 1.0;
if (surfaceRainDepth < rawRainDepth - 0.002)
	shadowMultiplier = 0.0;

:center-100:

:image-description:

This is how shadowMultiplier looks when displayed on the screen.

I used the calculated shadowMultiplier to modify the effect strength.

// Effect strength is now modulated by the calculated shadow multiplier
float effectStrength = _EffectStrength * shadowMultiplier;

float4 rawGBuffer2 = _GBuffer2.SampleLevel(pointClampSampler, IN_uv, 0.0);
rawGBuffer2.a = lerp(rawGBuffer2.a, min(_SmoothnessMax, rawGBuffer2.a + _SmoothnessAdd), effectStrength);

OUT_GBuffer0 = float4((float3)lerp(1.0, _AlbedoMultiplier, effectStrength), 1.0);
OUT_GBuffer2 = rawGBuffer2;
OUT_GBuffer3 = float4((float3)lerp(1.0, _AmbientLightMultiplier, effectStrength), 1.0);

:center-px:

:image-description:

This is how the effect looks when using just the simplest shadowcasting technique.

The problem is that the calculated shadowmask is super sharp. The transition between dry and wet surfaces appears too obvious. It would be nice to blur it a little.

Blurring can be implemented with a sampling kernel: an array of UV offsets. For each offset, I sample the shadowmap near the original UV, calculate the shadow multiplier, and average the results.
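This is essentially percentage-closer filtering (PCF). A Python sketch of the idea, with a toy shadowmap and kernel standing in for the real texture fetch:

```python
# Illustrative PCF sketch: average a binary shadow test over kernel offsets.
KERNEL = [(-0.5, -0.5), (0.5, -0.5), (-0.5, 0.5), (0.5, 0.5), (0.0, 0.0)]

def sample_shadowmap(u: float, v: float) -> float:
    # Toy shadowmap: an occluder covers the left half of the texture.
    return 0.9 if u < 0.5 else 0.1

def pcf_shadow(u, v, surface_depth, radius=0.1, bias=0.002):
    total = 0.0
    for du, dv in KERNEL:
        stored = sample_shadowmap(u + du * radius, v + dv * radius)
        # Reversed-Z style comparison, as in the article's shader.
        total += 0.0 if surface_depth < stored - bias else 1.0
    return total / len(KERNEL)

# Near the occluder edge the multiplier is fractional: a soft transition.
soft = pcf_shadow(0.5, 0.5, surface_depth=0.5)
```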

I asked ChatGPT to generate a 16-point sampling kernel with a Poisson-disk distribution. I will use this kernel for sampling. I put the kernel above the fragment shader:

static const int poissonDiskCount = 16;

static const float2 poissonDisk[poissonDiskCount] = 
{
	float2(-0.094184, -0.929388),
	float2( 0.143831, -0.141007),
	float2( 0.344959,  0.293877),
	float2( 0.199841,  0.786413),
	float2(-0.241888,  0.997065),
	float2(-0.382775,  0.276768),
	float2(-0.264969, -0.418930),
	float2(-0.942016, -0.399062),
	float2(-0.915885,  0.457716),
	float2(-0.814099,  0.914375),
	float2(-0.815442, -0.879124),
	float2( 0.537429, -0.473734),
	float2( 0.791975,  0.190901),
	float2( 0.974843,  0.756483),
	float2( 0.945586, -0.768907),
	float2( 0.443233, -0.975115)
};

:center-px:

:image-description:

Plotted all of the offsets within a kernel.

I iterated through all the offsets in the kernel and calculated the shadow multiplier for each offset. Then I calculated the average, which is used to modulate the effect strength. After implementing the kernel, one last thing was to play with the offsets.

// I will accumulate the shadowmask value here
float shadowMultiplierSum = 0.0;

// Iterate through each offset in the kernel
for (int i = 0; i < poissonDiskCount; i++)
{
	float2 offset = poissonDisk[i].xy * 0.01; // Scale the offset
	float rawRainDepth = _RainCamera_Shadowmap.SampleLevel(linearClampSampler, rainUV + offset, 0.0f).r; // Sample the shadowmap

	// Calculate shadow multiplier for each sample
	float shadowMultiplier = 1.0;
	if (surfaceRainDepth < rawRainDepth - 0.03)
		shadowMultiplier = 0.0;

	// Accumulate shadow multiplier
	shadowMultiplierSum += shadowMultiplier;
}

// Calculate the average
float shadowmapMultiplierAvg = shadowMultiplierSum / (float)poissonDiskCount;

// And modulate the effect based on the average
float effectStrength = _EffectStrength * shadowmapMultiplierAvg;

:center-100:

:image-description:

You can't really notice the banding in the final result.

There is a lot of banding that could be fixed, but the truth is that the player will never see the raw mask - try to spot this banding when it acts only as a mask for the effect. Finally, I will add parameters to the material to control all the offsets.

Shader "Hidden/ApplyRainAtmosphere"
{
    Properties
    {
		...

        _ShadowBlurRadius("Shadow blur radius", Range(0.0, 0.1)) = 0.007
        _ShadowBlurShape("Shadow blur shape", Range(0.0, 2.0)) = 0.65
        _MinEffectStrength("Min effect strength in shadows", Range(0.0, 1.0)) = 0.1
        _DepthOffset("Depth offset", Range(0.0, 0.1)) = 0.03
        _DepthFadeWidth("Depth fade width", Range(0.0, 0.1)) = 0.05
    }
...

This is the final fragment shader code with all the tweaks:

void frag 
(
    in float2 IN_uv : TEXCOORD0,
    out float4 OUT_GBuffer0 : SV_Target0,
    out float4 OUT_GBuffer2 : SV_Target2,
    out float4 OUT_GBuffer3 : SV_Target3
) 
{
    // Calculating world space position from depth.
    float rawScreenDepth = _CameraDepthTexture.SampleLevel(pointClampSampler, IN_uv, 0.0f).r;
    float4 positionCS = float4((IN_uv.xy * 2.0 - 1.0), rawScreenDepth, 1.0); 
    #if UNITY_UV_STARTS_AT_TOP
        positionCS.y *= -1.0;
    #endif
    float4 positionWS = mul(UNITY_MATRIX_I_VP, positionCS); 
    positionWS /= positionWS.w;

    // Calculating rain-camera clip-space position. It contains depth value.
    float4 rainPositionVS = mul(_RainCamera_MatrixV, positionWS);
    float4 rainPositionCS = mul(_RainCamera_MatrixP, rainPositionVS);
    rainPositionCS /= rainPositionCS.w;

    // Remapping rain-camera clip-space position to shadowmap UV
    float2 rainUV = rainPositionCS.xy * 0.5 + 0.5; 
    #if UNITY_UV_STARTS_AT_TOP
        rainUV.y = 1.0 - rainUV.y;
    #endif

    // Calculating the shadows.
    float surfaceRainDepth = rainPositionCS.z;
    float shadowMultiplierSum = 0.0;
    for (int i = 0; i < poissonDiskCount; i++)
    {
        float2 offset = poissonDisk[i].xy * _ShadowBlurRadius;
        float rawRainDepth = _RainCamera_Shadowmap.SampleLevel(linearClampSampler, rainUV + offset, 0.0f).r;

        // Tweaked shadow multiplier calculation to hide sharp banding
        float shadowMultiplier = smoothstep(-_DepthOffset, -_DepthOffset + _DepthFadeWidth, surfaceRainDepth - rawRainDepth);

        shadowMultiplierSum += shadowMultiplier;
    }

    float shadowMultiplierAvg = shadowMultiplierSum / (float)poissonDiskCount;

    // Calculating the final effect strength using the calculated shadows.
    // Added min strength and gamma-modifier to the calculated shadow value
    float minEffectStrength = _EffectStrength * _MinEffectStrength;
    float effectStrength = minEffectStrength + _EffectStrength * pow(shadowMultiplierAvg, _ShadowBlurShape) * (1.0 - minEffectStrength);

    float4 rawGBuffer2 = _GBuffer2.SampleLevel(pointClampSampler, IN_uv, 0.0);
    rawGBuffer2.a = lerp(rawGBuffer2.a, min(_SmoothnessMax, rawGBuffer2.a + _SmoothnessAdd), effectStrength);

    OUT_GBuffer0 = float4((float3)lerp(1.0, _AlbedoMultiplier, effectStrength), 1.0);
    OUT_GBuffer2 = rawGBuffer2;
    OUT_GBuffer3 = float4((float3)lerp(1.0, _AmbientLightMultiplier, effectStrength), 1.0);
}
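The shaping applied to the averaged shadow can be verified in isolation. A Python sketch of the smoothstep fade and the min-strength remap, using the default parameter values from the material (and assuming _EffectStrength = 1):

```python
def smoothstep(edge0, edge1, x):
    # Same shape as the HLSL smoothstep intrinsic.
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def shadow_multiplier(surface_depth, stored_depth, depth_offset=0.03, fade_width=0.05):
    # Mirrors: smoothstep(-_DepthOffset, -_DepthOffset + _DepthFadeWidth, surface - stored)
    return smoothstep(-depth_offset, -depth_offset + fade_width, surface_depth - stored_depth)

def effect_strength(shadow_avg, strength=1.0, min_strength=0.1, shape=0.65):
    min_effect = strength * min_strength
    return min_effect + strength * shadow_avg ** shape * (1.0 - min_effect)

# Deep in shadow, the strength falls to the configured minimum, never to zero.
assert abs(effect_strength(0.0) - 0.1) < 1e-9
# Fully lit, the strength reaches the maximum.
assert abs(effect_strength(1.0) - 1.0) < 1e-9
```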

This is the final effect:

:center-px:


© 2025 Jan Mróz | Procedural Pixels.

Made by Natalia Bracikowska
