Rendering
Tutorial
Render Graph - Creative Shadowcasting
Jul 3, 2025
Read-time: 11 min
In this article, I modify my rain atmosphere feature to keep surfaces dry when they are under some form of cover.
To achieve this, I will use a shadowcasting technique: the roofs of the buildings will cast a shadow, which I will use to modulate the intensity of the effect.
:center-px:

:image-description:
This is the final result
___
Contents:
How the original effect works.
The shadowcasting technique.
Implementing shadowcasting to modulate the effect.
___
How the original effect works
The effect was implemented in the deferred rendering path by modifying the GBuffer contents before the lighting is calculated. It reduces the albedo and global illumination and increases the smoothness to mimic a wet look.
This is how the lighting works without the effect: Unity renders the GBuffer, which the lighting algorithm then uses to produce the color buffer.
:center-px:

:image-description:
This is how Unity's render graph nodes look. The GBuffer is fed to the deferred lighting pass.
The rain atmosphere effect works by modifying the GBuffer just before the Deferred Lighting pass. It copies the GBuffer content using the InitializeRainResources node, which creates a shared resource, RainResourceData, that holds the copy.
ApplyRainAtmosphere then modifies the original GBuffer to increase smoothness and reduce global illumination and albedo, making surfaces appear wet. The effect is applied using a full-screen shader, similar to a post-processing effect.
:center-px:

:image-description:
Current implementation of the effect.
:center-px:

___
Shadowcasting technique
I want to modify the effect so that surfaces look dry when they are under any kind of cover, and I will use a shadowcasting technique for that.
Classic shadowcasting works by rendering the depth of the scene from the light's point of view. This depth texture is called a shadowmap. The lighting algorithm then compares each rendered surface's distance to the light with the distance stored in the shadowmap to determine whether the surface is in shadow.
In my case, I like to interpret the falling rain as a directional light that emits "wetness" instead of light. Let's see how that can be implemented using the images below.
:center-px:

1. Rendering depth from the light's point of view
I will place a camera at the light source. This camera renders a depth-only view of the scene, and the resulting texture is called a shadowmap.
:center-px:

:image-description:
Vertical lines represent the depth values stored within the shadowmap.
2. The lighting algorithm uses the shadowmap to estimate shadows
The lighting algorithm uses the shadowmap to calculate the shadows when rendering with the main camera.
When calculating the lighting for a surface, the algorithm compares the surface's distance to the light source with the distance stored in the shadowmap.
If the distance to the light source is smaller than or close to the distance stored in the shadowmap - there is no shadow.
:center-px:

:image-description:
The distance to the light source matches the distance stored in the shadowmap - the surface is lit.
When the distance from the surface to the light is larger than the distance stored in the shadowmap - there is a shadow.
:center-px:

:image-description:
The distance to the light source is larger than the distance stored in the shadowmap - there is a shadow.
To make this technique possible, we need a way to convert a world space position into a shadowmap UV. This can be done using the view and projection matrices of the camera that rendered the shadowmap.
If we do this in a post-process pass, we also need to reconstruct the world space position from the main camera's depth buffer, because the world space position is not stored within the GBuffer. The sketch below condenses the whole idea.
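Here is a minimal sketch of the idea in HLSL. Every name in it is illustrative, not the project's actual code; it only condenses the two steps above into one function:

```hlsl
// Illustrative sketch: world space position -> shadowmap UV -> depth compare.
// The matrices and textures here are placeholders, not the real uniforms.
float SampleRainShadow(float3 positionWS)
{
    // Project into the shadow camera's clip space.
    float4 positionCS = mul(_ShadowMatrixP, mul(_ShadowMatrixV, float4(positionWS, 1.0)));
    positionCS /= positionCS.w; // a no-op for an orthographic camera

    // Clip space XY is in [-1, 1]; remap to shadowmap UV in [0, 1].
    float2 shadowUV = positionCS.xy * 0.5 + 0.5;
    float occluderDepth = SAMPLE_TEXTURE2D(_Shadowmap, sampler_Shadowmap, shadowUV).r;

    // With reversed-Z (Unity's default on most platforms), a smaller depth
    // means "farther from the light", so such a surface is in shadow.
    return positionCS.z < occluderDepth ? 0.0 : 1.0;
}
```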
___
Implementing shadowcasting to modulate the effect
To implement the shadowcasting, I need to get this data into the shader that applies the effect:
Main camera depth buffer - to reconstruct world space position from depth.
The view and projection matrices used to render the shadowmap - to convert a world space position into a shadowmap UV.
Shadowmap texture.
I will divide the implementation into a few steps:
Configuring a camera that will render a shadowmap.
Modifying the render feature - forwarding the rendered shadowmap into the shader.
Implementing shadowcasting in the shader.
___
1. Configuring shadowmap camera
I created a new camera and placed it on the scene. I set the camera to orthographic projection and made it render top-down, capturing most of the scene.
:center-px:

Then I created a RainCamera component. This component is responsible for creating a shadowmap texture and assigning it to the camera.
The component creates the shadowmap texture using a depth-only texture format, renders the camera into it, and saves the parameters for further use in the render feature.
The RainCameraParameters will be used in the render feature to render the effect.
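A rough sketch of what such a component can look like. The real RainCamera and RainCameraParameters may be shaped differently; the fields, names, and calls below are my assumptions:

```csharp
using UnityEngine;

// Sketch of a RainCamera-style component: creates a depth-only shadowmap,
// assigns it to the camera, and stores the matrices for the render feature.
[RequireComponent(typeof(Camera))]
public class RainCamera : MonoBehaviour
{
    [SerializeField] private int resolution = 1024;

    public RenderTexture Shadowmap { get; private set; }
    public Matrix4x4 MatrixV { get; private set; }
    public Matrix4x4 MatrixP { get; private set; }

    private void OnEnable()
    {
        var rainCamera = GetComponent<Camera>();

        // Depth-only render texture that serves as the shadowmap.
        Shadowmap = new RenderTexture(resolution, resolution, 32, RenderTextureFormat.Depth);
        rainCamera.targetTexture = Shadowmap;

        // Save the matrices so the effect can convert world space positions
        // into shadowmap UVs later. The projection matrix goes through
        // GL.GetGPUProjectionMatrix so it matches the depth encoding of the
        // texture we render into.
        MatrixV = rainCamera.worldToCameraMatrix;
        MatrixP = GL.GetGPUProjectionMatrix(rainCamera.projectionMatrix, true);
    }

    private void OnDisable()
    {
        if (Shadowmap != null)
        {
            Shadowmap.Release();
            Shadowmap = null;
        }
    }
}
```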
I assigned the component to the created camera and used the Frame Debugger to check whether the shadowmap texture was rendered properly. After enabling the camera in the inspector, the shadowmap looks correct in the Frame Debugger.
:center-px:

:image-description:
The camera properly renders the scene content, verified in the Frame Debugger window.
___
2. Modifying the render feature
Now, I need to find a way to bind this shadowmap and camera depth buffer to the shader that renders the effect.
InitializeRainResourcesPass was used to prepare all the resources used by the effect, so it looks like the perfect place to set the shadowmap parameters.
RainResourceData is the class that shares all the resources of the effect with other passes. I decided to add the shadowmap parameters here.
If the RainCamera component is present in the scene, I set the parameters for the other passes.
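I'm assuming RainResourceData derives from the render graph's ContextItem, which is the usual way to share data between passes; the exact fields below are my guess at the addition:

```csharp
using UnityEngine;
using UnityEngine.Rendering.RenderGraphModule;

// Assumed shape of the shared resource class after the change; only the
// shadowmap-related members are new.
public class RainResourceData : ContextItem
{
    // ...existing GBuffer copies of the effect...

    // Shadowmap parameters, set by InitializeRainResourcesPass when a
    // RainCamera is present (the external RenderTexture is imported into
    // the render graph, e.g. via renderGraph.ImportTexture).
    public TextureHandle Shadowmap;
    public Matrix4x4 RainCameraMatrixV;
    public Matrix4x4 RainCameraMatrixP;
    public bool ShadowmapAvailable;

    public override void Reset()
    {
        Shadowmap = TextureHandle.nullHandle;
        ShadowmapAvailable = false;
    }
}
```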
Then I moved to the ApplyRainAtmospherePass, where all the rendering is implemented. I need to do two things here:
Forward the shadowmap parameters to the shader.
Forward the camera depth buffer to the shader.
I modified the PassData that stores the resources used in the render function.
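Something along these lines; the last four fields are the new ones, and the existing fields are only indicated:

```csharp
// Assumed pass data after the change; only the last four fields are new.
private class PassData
{
    public Material Material;              // full-screen rain material
    // ...existing GBuffer handles of the effect...

    public TextureHandle CameraDepth;      // main camera depth buffer
    public TextureHandle Shadowmap;        // rain camera shadowmap
    public Matrix4x4 RainCameraMatrixV;
    public Matrix4x4 RainCameraMatrixP;
}
```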
Then, I modified the render pass that applies the rain effect: I accessed the shared resources and forwarded them into the render function.
Then, I modified the render function to set all the resources in the shader, using a MaterialPropertyBlock from the render graph pool. These are the uniforms, with a sketch of the render function after this list:
_CameraDepthTexture - the game camera's depth texture.
_RainCamera_Shadowmap - the rendered shadowmap.
_RainCamera_MatrixV, _RainCamera_MatrixP - the view and projection matrices of the camera that rendered the shadowmap (required to convert a world space position into a shadowmap UV).
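A sketch of the render function under those names (using UnityEngine.Rendering and UnityEngine.Rendering.RenderGraphModule). The property block and draw call follow the usual render graph pattern; the exact draw call in the project may differ:

```csharp
static void ExecutePass(PassData data, RasterGraphContext context)
{
    // Temporary property block taken from the render graph pool.
    MaterialPropertyBlock properties = context.renderGraphPool.GetTempMaterialPropertyBlock();

    // Bind the textures and matrices under the names listed above.
    properties.SetTexture("_CameraDepthTexture", (RTHandle)data.CameraDepth);
    properties.SetTexture("_RainCamera_Shadowmap", (RTHandle)data.Shadowmap);
    properties.SetMatrix("_RainCamera_MatrixV", data.RainCameraMatrixV);
    properties.SetMatrix("_RainCamera_MatrixP", data.RainCameraMatrixP);

    // Full-screen triangle that applies the rain effect on the GBuffer.
    context.cmd.DrawProcedural(Matrix4x4.identity, data.Material, 0,
        MeshTopology.Triangles, 3, 1, properties);
}
```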
___
3. Implementing shadowcasting in the shader
After all the required matrices and textures are forwarded to the shader, I need to implement the shadowcasting inside it. There are a few steps to be done:
Declare all resources and uniforms in the shader.
Reconstruct world space position from the depth buffer.
Calculate shadowmap UV from the world space position.
Calculate the rain-camera depth value using the world space position.
Compare the calculated depth with the value in the shadowmap.
___
3.1 Declare all resources in the shader
I placed all the required uniforms and resources in the shader code.
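This is what those declarations can look like, using the core RP library macros. The include path and the point-clamp sampler are assumptions based on standard URP shaders:

```hlsl
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

// Textures bound by the render feature.
TEXTURE2D_X(_CameraDepthTexture);           // main camera depth
TEXTURE2D(_RainCamera_Shadowmap);           // rain camera shadowmap
SAMPLER(sampler_RainCamera_Shadowmap);
SAMPLER(sampler_PointClamp);                // inline point-clamp sampler state

// Matrices of the camera that rendered the shadowmap.
float4x4 _RainCamera_MatrixV;
float4x4 _RainCamera_MatrixP;
```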
___
3.2 Reconstruct world space position from the depth buffer
Now, I need to get a world space position that I can use in the shadowcasting algorithm. Because the GBuffer doesn't contain world space positions, I need to derive them from the depth buffer.
The plan is to sample the raw depth buffer value, reconstruct the game camera's clip space position from the screen UV and this raw depth value, and then convert the clip space position into a world space position using the inverse view-projection matrix.
This is how I read the depth value from the depth buffer.
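A minimal sketch of that read, assuming uv is the full-screen pass UV and using the point-clamp sampler declared earlier (depth should not be interpolated between texels):

```hlsl
// Raw, non-linear depth at the current pixel.
float rawDepth = SAMPLE_TEXTURE2D_X(_CameraDepthTexture, sampler_PointClamp, uv).r;
```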
Then I can reconstruct the clip space position from the UV and the raw depth value.
The screen UV range is [0, 1], while clip space XY is in [-1, 1], so I just need to remap the UV to [-1, 1] and set the Z value to the sampled depth.
Next, I need to convert the clip space position into a world space position. Vertex shaders normally use the view-projection matrix to convert world space positions into clip space, so I can use the inverse view-projection matrix to reverse the transformation. Unity stores it in UNITY_MATRIX_I_VP.
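Put together, the reconstruction can look like this; the Y flip is platform-dependent and may not be needed, depending on how the full-screen pass produces its UVs:

```hlsl
// Remap UV from [0, 1] to clip space [-1, 1] and use the raw depth as Z.
float4 positionCS = float4(uv * 2.0 - 1.0, rawDepth, 1.0);
#if UNITY_UV_STARTS_AT_TOP
positionCS.y = -positionCS.y; // platform-dependent Y flip
#endif

// Undo the view-projection transform; the divide by w finishes the job.
float4 positionWS = mul(UNITY_MATRIX_I_VP, positionCS);
positionWS /= positionWS.w;
```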
When making such changes, I like to print intermediate results on the screen using just the emission buffer. It helps to verify that the depth is sampled properly and that the reconstructed world space position is correct.
:center-100:

:image-description:
From the left: the original image, the raw depth, and the world space position. All of these visualizations should follow the 3D shapes in the scene; this image shows correctly sampled depth and a correctly reconstructed world space position.
___
3.3 Calculate shadowmap UV from the world space position
Now, I can use the world space position to calculate the UV of the shadowmap. I will use the view and projection matrices of the rain camera to calculate the clip space position, then remap it.
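In sketch form, assuming the matrices were converted on the C# side so that the clip space matches the shadowmap's depth encoding:

```hlsl
// World space -> rain camera clip space.
float4 rainCS = mul(_RainCamera_MatrixP, mul(_RainCamera_MatrixV, float4(positionWS.xyz, 1.0)));
rainCS /= rainCS.w; // a no-op for an orthographic camera, but harmless

// Clip space XY is in [-1, 1]; the shadowmap UV is in [0, 1].
float2 shadowmapUV = rainCS.xy * 0.5 + 0.5;
```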
Again, to check if everything went correctly, I printed UV to the emission buffer:
:center-50:

:image-description:
UV of a shadowmap displayed on the screen.
It appears that the shadowmap UV is calculated correctly.
___
3.4 Calculate rain-camera depth and shadowmap sampling
Now it is time to sample the shadowmap and calculate the rain camera depth value for the surface.
I can sample the shadowmap using the calculated UV.
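A minimal version of that lookup, with the texture and sampler declared earlier:

```hlsl
// Depth of the closest "rain occluder" seen from the rain camera.
float shadowmapDepth = SAMPLE_TEXTURE2D(_RainCamera_Shadowmap, sampler_RainCamera_Shadowmap, shadowmapUV).r;
```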
And when printed to the screen, it looks correct:
:center-50:

:image-description:
Shadowmap values sampled at the previously calculated world space positions. Notice that the buildings stand out because the depth value under a roof is the same as on the roof. This is the correct result.
I need to get the rain camera depth value for the surface from the world space position, but it is simply the Z value of the rain camera clip space position, which is already calculated.
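In code, that boils down to reusing rainCS from the projection step above:

```hlsl
// The surface's depth from the rain camera's point of view.
float rainCameraDepth = rainCS.z;
```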
:center-50:

:image-description:
Notice that the calculated depth values now look correct for each surface. And that's the key: we will compare the value in the shadowmap with the calculated value to determine whether there is a shadow or not.
___
3.5 Calculating the shadow
It's time to compare the calculated depth value with the value read from the shadowmap. If the calculated value is smaller than the value in the shadowmap, there is a shadow.
However, the values in the shadowmap are quantized and not perfect, so I need to apply a small offset to reduce artifacts. I tested a few different offsets, and 0.002 works well. Later, I could polish the effect and calculate this offset on the fly instead of hardcoding it, but for now, hardcoding is fine. The offset should depend on the rain camera's render distance and depth encoding.
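A sketch of the comparison, assuming the reversed-Z depth encoding Unity uses on most platforms (a smaller depth means farther from the rain camera):

```hlsl
// 0 = covered (dry), 1 = exposed to the rain (wet).
// The 0.002 bias hides quantization artifacts in the shadowmap.
float shadowMultiplier = rainCameraDepth < shadowmapDepth - 0.002 ? 0.0 : 1.0;
```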
:center-100:

:image-description:
This is how shadowMultiplier looks when displayed on the screen.
I used the calculated shadowMultiplier to modify the effect strength.
:center-px:

:image-description:
This is how the effect looks when using just the simplest shadowcasting technique.
The problem is that the calculated shadow mask is razor sharp: the transition between dry and wet surfaces appears too obvious. It would be nice to blur it a little.
Blurring can be implemented by taking many shadowmap samples from nearby UVs and averaging the resulting shadow multipliers.
I will use a sampling kernel - an array of offsets applied to the UV before sampling - to sample the shadowmap several times from different positions and calculate the average shadow mask value from all those samples.
I asked ChatGPT to generate a sampling kernel of 16 points with a Poisson-disk distribution and put the kernel above the fragment shader.
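The exact values don't matter much, as long as the points are spread reasonably evenly. A widely used 16-tap Poisson-disk set looks like this:

```hlsl
// 16-tap Poisson-disk kernel; offsets are in UV space and get scaled
// by a material parameter before sampling.
static const float2 POISSON_KERNEL[16] =
{
    float2(-0.94201624, -0.39906216), float2( 0.94558609, -0.76890725),
    float2(-0.09418410, -0.92938870), float2( 0.34495938,  0.29387760),
    float2(-0.91588581,  0.45771432), float2(-0.81544232, -0.87912464),
    float2(-0.38277543,  0.27676845), float2( 0.97484398,  0.75648379),
    float2( 0.44323325, -0.97511554), float2( 0.53742981, -0.47373420),
    float2(-0.26496911, -0.41893023), float2( 0.79197514,  0.19090188),
    float2(-0.24188840,  0.99706507), float2(-0.81409955,  0.91437590),
    float2( 0.19984126,  0.78641367), float2( 0.14383161, -0.14100790)
};
```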
:center-px:

:image-description:
Plotted all of the offsets within a kernel.
I iterated through all the offsets in the kernel, calculated the shadow multiplier for each one, and then averaged the results to modulate the effect strength. After implementing the kernel, the last thing was to play with the offsets.
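The loop, in sketch form; _ShadowBlurSpread is an assumed material parameter that controls how far the taps reach in UV space:

```hlsl
// Average the biased depth comparison over all kernel taps.
float shadowMultiplier = 0.0;
for (int i = 0; i < 16; i++)
{
    float2 offsetUV = shadowmapUV + POISSON_KERNEL[i] * _ShadowBlurSpread;
    float occluderDepth = SAMPLE_TEXTURE2D(_RainCamera_Shadowmap, sampler_RainCamera_Shadowmap, offsetUV).r;
    shadowMultiplier += rainCameraDepth < occluderDepth - 0.002 ? 0.0 : 1.0;
}
shadowMultiplier /= 16.0;
```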
:center-100:

:image-description:
You can't really notice the banding in the final result.
I can see a lot of banding that could be fixed, but the truth is that the player will never see the mask directly. Try to spot the banding once it acts as a mask for the effect. I will also add parameters to the material to control all the offsets.
This is the final fragment shader logic with all the tweaks.
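A consolidated sketch of the shadow portion, assembled from the pieces above; _ShadowBias and _ShadowBlurSpread are assumed material parameters, and the declarations and kernel from the earlier steps are required:

```hlsl
float CalculateRainShadowMultiplier(float2 uv)
{
    // 1. Reconstruct the world space position from the main camera depth.
    float rawDepth = SAMPLE_TEXTURE2D_X(_CameraDepthTexture, sampler_PointClamp, uv).r;
    float4 positionCS = float4(uv * 2.0 - 1.0, rawDepth, 1.0);
#if UNITY_UV_STARTS_AT_TOP
    positionCS.y = -positionCS.y;
#endif
    float4 positionWS = mul(UNITY_MATRIX_I_VP, positionCS);
    positionWS /= positionWS.w;

    // 2. Project into the rain camera's clip space and derive the UV.
    float4 rainCS = mul(_RainCamera_MatrixP, mul(_RainCamera_MatrixV, float4(positionWS.xyz, 1.0)));
    rainCS /= rainCS.w;
    float2 shadowmapUV = rainCS.xy * 0.5 + 0.5;

    // 3. Average the biased depth comparison over the Poisson kernel.
    float shadowMultiplier = 0.0;
    for (int i = 0; i < 16; i++)
    {
        float2 offsetUV = shadowmapUV + POISSON_KERNEL[i] * _ShadowBlurSpread;
        float occluderDepth = SAMPLE_TEXTURE2D(_RainCamera_Shadowmap, sampler_RainCamera_Shadowmap, offsetUV).r;
        shadowMultiplier += rainCS.z < occluderDepth - _ShadowBias ? 0.0 : 1.0;
    }
    return shadowMultiplier / 16.0;
}
```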
This is the final effect:
:center-px:
