Unity Shaders Update

I've been doing some more experimentation with shaders in Unity. Here's a brief update.

Combining with the base texture

The main thing I've done is add some functionality that combines the base color of the object's material with the crosshatching effect. The effect can be seen in the image below.


All four of these tori use the same shader graph, with the same input parameters to that shader graph. They appear differently shaded because they have different textures: each torus has a uniform greyscale material of differing brightness, going from pure white for the torus on the left to pure black for the torus on the right. As you can see, the shading kicks in more aggressively on objects with darker base textures.

Because this effect depends on the texture and not the shader, I can vary the intensity of the shading across objects without having to create separate meshes with different materials. All I have to do is paint the areas of the model that I want more heavily shaded with a darker color.

If a colored material is used, this is what the effect looks like:


The color still ranges from white to black, but the hue of the in-between shading matches the hue of the input color. The aggressiveness of the shading still depends on the texture's brightness. I'm not sure I'm the biggest fan of the way this effect looks in isolation. However, I think I want some way of fitting a little bit of color into each character's design, both for aesthetic purposes and as a way to distinguish multiple players using the same character.

Alright, now let's go over what I changed in the shader graph to achieve this effect.

In order to combine the shading with the base texture, I feed the RGBA value of the base texture into the shader graph, then remap it from [0, 1] to [-1, 2]. I then add it to the dot product between the light direction and the surface normal, and apply a saturate node to the result, which clamps the output to between 0 and 1.
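As a rough illustration, here's what that chain of nodes boils down to if written out as a Shader Graph custom function in HLSL (the function and parameter names here are mine, not from the actual graph):

    // Remap the base texture value and combine it with N·L.
    // Shader Graph custom functions use the _float suffix for float precision.
    void ShadingInput_float(float BaseValue, float NdotL, out float Out)
    {
        // Remap [0, 1] to [-1, 2]: lerp(-1, 2, t) = 3t - 1.
        float remapped = lerp(-1.0, 2.0, BaseValue);

        // N·L ranges from -1 to 1; saturate clamps the sum to [0, 1].
        Out = saturate(remapped + NdotL);
    }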


So, what's the result of this? Well, for greyscale textures, the RGBA value of the texture is effectively a map of the brightness, ranging from 0 to 1, with 0 corresponding to pure black and 1 corresponding to pure white. The dot product between the light direction and the normal, on the other hand, ranges from -1 to 1.

If the input from the texture is 0, it gets remapped to -1, which always results in a value of 0 or less when summed with the dot product (black). If the input from the texture is 1, it gets remapped to 2, which always results in a value of 1 or more (white). If the input is between 0 and 1, the end result may be above 1, below 0, or somewhere in between, depending on the value of the dot product at each location. For example, a mid-grey value of 0.5 remaps to 0.5, so the result saturates to pure black wherever the dot product is below -0.5 and to pure white wherever it is above 0.5, confining the in-between shading to the band in the middle. This effectively allows me to control when shading starts to happen by controlling the value of the texture. It also allows me to force parts of the object to always be pure white or pure black, no matter what kind of light is being cast on them, by painting that part of the object pure white or pure black.

While I was at it, I made some minor adjustments to the math in my shader graph. Initially, I was kind of just following the tutorial without thinking much about what the things I was operating on actually represented and what the nodes did to them. Now, I think I have a better understanding of what the nodes actually do, so I can make more informed decisions about what to change and what effect it will have on the final product.


Next Steps

Testing out the shader on a torus made me realize that the shader doesn't currently account for occlusion. When light shines sideways onto the torus, the inner edge of the ring facing the light remains bright, even though the light should be blocked. To be honest, I don't think this is the biggest deal in the world since I'm not prioritizing realism too highly, but I do think that shadows could look nice if used appropriately.

Now, there is a way to retrieve the directional light's shadow map and sample it in the shader graph. However, the Universal Render Pipeline only supports shadows from a single directional light (the "main light") at a time, for reasons that are a mystery to me.
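For reference, this is the commonly cited pattern for sampling the main light's shadow attenuation in a custom function node. This is just a sketch assuming URP's Lighting.hlsl is in scope; the function name is mine:

    // Sample the main directional light's shadow attenuation at a world position.
    void MainLightShadow_float(float3 WorldPos, out float ShadowAtten)
    {
    #ifdef SHADERGRAPH_PREVIEW
        // The Shader Graph preview window has no shadow data available.
        ShadowAtten = 1.0;
    #else
        float4 shadowCoord = TransformWorldToShadowCoord(WorldPos);
        ShadowAtten = GetMainLight(shadowCoord).shadowAttenuation;
    #endif
    }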


That's a bit of an issue. My plan was to have each character be in a separate render layer and have their own personal light that shines on them from a slightly front-facing angle, something like this:

If only one directional light can do shadows at a time, and I want multiple lights at different angles, then I have to adapt my approach to use something other than directional lights. Thankfully, it seems like I can achieve the same effect with box lights. 


Box lights seem better for my purposes anyway: directional lights affect the entire scene, while box lights only affect a set area, and I want my lights to affect each character separately. I just need to figure out how to pull them into the shader graph.

I suppose I could also switch to a traditional multi-point lighting setup, ditch the custom shader, and apply the crosshatching as a post-processing filter based on the brightness map of the final render. While that's probably a lot simpler, since most of the work would be done by Unity's built-in lighting system, it seems to me like it would change the character of the lighting and make it harder to implement effects like forcing certain parts of the model to always be pure white or pure black regardless of shading. I imagine my method is also a lot cheaper performance-wise, since I'm not aiming for realism and I'm only applying a limited set of curated lighting calculations.

This tutorial I found linked from a Reddit post seems to cover how to import specific lights into the shader graph, and this GitHub repo contains some custom HLSL functions for iterating through additional lights and importing the "shadowmask" into the shader graph, which sounds like what I want. I'll go through them and see what kind of results I get.
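From skimming those resources, the additional-light loop looks something like this. This is a sketch under the assumption that URP's Lighting.hlsl helpers (GetAdditionalLightsCount, GetAdditionalLight) are available; the diffuse accumulation is just illustrative:

    // Accumulate diffuse contributions from URP's additional (non-main) lights.
    void AdditionalLights_float(float3 WorldPos, float3 WorldNormal, out float3 Out)
    {
    #ifdef SHADERGRAPH_PREVIEW
        Out = float3(0, 0, 0);
    #else
        float3 total = float3(0, 0, 0);
        int count = GetAdditionalLightsCount();
        for (int i = 0; i < count; i++)
        {
            Light light = GetAdditionalLight(i, WorldPos);
            float ndotl = saturate(dot(WorldNormal, light.direction));
            // Fold in per-light distance and shadow attenuation. Depending on
            // the URP version, per-light shadows may require the overload that
            // also takes a shadow mask (the "shadowmask" the repo mentions).
            total += light.color * ndotl
                     * light.distanceAttenuation * light.shadowAttenuation;
        }
        Out = total;
    #endif
    }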

Now, one thing I'm consistently seeing on the discussion boards I've been going through is that real-time lighting is rather performance-heavy, and that lighting is generally "baked" into each object's texture rather than computed at runtime. In my case, I'm not planning on doing any fancy stuff like indirect lighting or ambient occlusion, which seems to be where a lot of the performance cost comes in. My main concern, I guess, is that I'm not sure how to tell whether Unity is doing computations in the background that I'm not actually using, or how to stop it from doing them if so.
