Indexing Trick
Unfortunately, I couldn't find any mention of a way to manually control the indices of the lights retrieved by GetAdditionalLights(). However, I've come up with a workaround. There are a lot of properties of my lights that don't actually affect my scene, since they just end up getting overridden by the shader graph.
Of particular note is the color: as of right now, light color has no effect on the scene. Conveniently, URP's HLSL includes provide a simple built-in way to read a light's color, and color is also one of the variable types that can be used as an input parameter for a shader graph.
Here's my idea, then: create a color input for my shader. Set each light's color to a specific RGBA value, then set the color input for each material I want that light to affect to that same value. Then, in my shader code, iterate through all of the additional lights, but only perform calculations using the ones that match the material's input color.
This method can be easily expanded. For example, if I want to have multiple lights operate on the same object in different ways, all I have to do is create an additional color input to the shader for each light I want, then create separate checks in my shader code for each of those input colors. I can also have the same light affect multiple objects. I bring this up because it may be useful for implementing visual effects on attacks. Specifically, I think it would be cool if projectiles and attack effects shed colored light on surrounding characters and the environment. It would be a very unique, striking effect, and really serve to emphasize the attack animations in the context of the limited color palette.
But I'm getting ahead of myself. For now, I think it's enough to get the "color filtering" method working for single lights.
I modified the code for my custom calculation function in the shader graph, replacing the "lightID" input parameter with a "TargetRed1" parameter that is checked against the red channel of each additional light. If the TargetRed1 parameter exactly matches the red channel of an additional light, the function calculates the normal-map shading for that light.
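Stripped down to the relevant part, the custom function ends up shaped roughly like this (the input names and the lighting math here are simplified placeholders; the important change is the color check inside the loop):

void FilteredAdditionalLights_float(float3 WorldPosition, float3 WorldNormal, float TargetRed1, out float3 Shading)
{
    Shading = 0;
#ifndef SHADERGRAPH_PREVIEW
    uint lightCount = GetAdditionalLightsCount();
    for (uint i = 0; i < lightCount; i++)
    {
        Light light = GetAdditionalLight(i, WorldPosition);
        // Only use lights whose red channel matches this material's TargetRed1 input.
        // (A tiny tolerance instead of == just to be safe with float precision.)
        if (abs(light.color.r - TargetRed1) < 0.001)
        {
            // Stand-in for the actual normal-map calculation: a basic N-dot-L term.
            Shading += saturate(dot(WorldNormal, light.direction)) * light.distanceAttenuation;
        }
    }
#endif
}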
To test my code, I set the "TargetRed1" values of each of my four test objects to different values. When I changed the red channel of the only additional light in the scene, only one of the four objects would light up at a time.
Now, here's me lighting all four test objects from different directions, as controlled by the light color:
Presumably, I'll be able to control parameters like the light color, target color on the material, etc. using code during runtime. The downside of this method is that I won't be able to use the light color for other purposes, but I don't think I'll need that for this project.
Shadows
The next issue I tackled was adding shadows to my objects. I'm kind of skeptical about whether I even want real-time shadows: I don't think they would add that much to the visuals, and I know they incur a decently heavy performance cost. However, I wanted to at least evaluate the possibility of implementing them to see if I liked the effect.
There are a lot of tutorials out there on adding shadows to custom shaders, so I mostly just followed them. This video, along with the accompanying article, was my main resource. Both were very helpful because they cleared up a lot of issues that the documentation doesn't make clear. For example, the documentation states that all objects need a "ShadowCaster shader pass" in order to cast shadows.
Apparently, this is done automatically for objects that use a lit shader, but my shader graphs are unlit, and the documentation doesn't adequately explain how to add a ShadowCaster pass in that case. According to the video tutorial, though, all you have to do is include this line at the top of your HLSL code:
#include "Packages/com.unity.render-pipelines.universal/Editor/ShaderGraph/Includes/ShaderPass.hlsl" |
In order to implement the shadows, I started a separate shader graph, just to keep things simple for now and see what the shadow effect looked like in isolation. The goal here is to extract a parameter called the "shadow attenuation," which is basically a map of all the areas in the scene where Unity determines shadows should be cast by a given light. Here's what the HLSL code to do that looks like for the main light:
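In my graph it lives in a custom function node; boiled down, it's just this (the names are mine, but TransformWorldToShadowCoord and GetMainLight are the built-in URP functions doing the work):

void MainLightShadow_float(float3 WorldPosition, out float ShadowAtten)
{
#ifdef SHADERGRAPH_PREVIEW
    ShadowAtten = 1;
#else
    // First convert the world position into shadow-map coordinates...
    float4 shadowCoord = TransformWorldToShadowCoord(WorldPosition);
    // ...then ask URP for the main light, which comes back with its shadowAttenuation filled in.
    Light mainLight = GetMainLight(shadowCoord);
    ShadowAtten = mainLight.shadowAttenuation;
#endif
}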
Note that for this to work, you first need to get something called the "shadow coordinates." According to the documentation, you don't need to do this step for additional lights: apparently, all you need for those to work are the light index and the position. I find it a little strange that only the main light needs this step, but whatever.
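For comparison, the pattern the documentation implies for additional lights is even shorter, something like:

Light light = GetAdditionalLight(lightIndex, WorldPosition);
float shadowAtten = light.shadowAttenuation; // no TransformWorldToShadowCoord step needed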
Anyway, here's what the shadow attenuation looks like using the main directional light:
The plan is to add this shadow attenuation to my existing shader, either after calculating everything else or perhaps on top of my normal map. The only real problem is that my normal map technique already takes care of most of the shadows: the only thing the shadow map adds that the normal map doesn't is shadowing in areas where the normal faces toward the light but the surface is occluded by something else in the light's path. I imagine I'll mostly be keeping my texture brightnesses in a range where the object is shaded pure black anyway once the surface normal is somewhere around 60-90 degrees or more away from the light direction, so the shadow map at least isn't adding extra shadows on the side of objects facing away from the light, where I don't want them. If it really comes down to it, I could probably isolate the areas of the object where the dot product of the normal and the light direction falls below a certain threshold, and zero out the shadow attenuation's contribution in those areas so that the attenuation only ever adds the kind of shadows I actually want.
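If I do go that route, the mask itself would be tiny, something like this (just a sketch; Threshold would be a new material property I'd have to add):

float NdotL = dot(WorldNormal, light.direction);
// Below the threshold, force the attenuation to fully lit so the shadow map
// can only darken surfaces that actually face the light.
ShadowAtten = (NdotL > Threshold) ? ShadowAtten : 1.0;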
Now, unlike the normal-map-based shading, which I implemented manually in my shader graph and which is relatively simple, the shadow attenuation is based on URP's built-in lighting system. There are a few implications of this. One is that the calculations are probably a lot more complicated than what I've been doing, and as such will incur a heavier performance burden. It also means that I can fine-tune the look of my shadows quite a lot by adjusting the parameters in my URP Asset in the Settings folder.
I'm not very familiar with what all of these settings mean. However, they do seem to have quite a significant impact on the results. In particular, the "depth bias" and "normal bias" settings claim to be "useful for avoiding false self-shadowing artifacts," which seems to be along the lines of what I'm trying to achieve.
Another implication is that the "render layers" setting actually makes a difference here. Shadows will only be cast if the light and the object casting the shadows are in the same render layer. Note that shadows can still be cast onto objects that are in a different render layer from the light.
However, the thing that's adding the shadow attenuation to the appearance of the object is my shader, not Unity's built-in lighting system. Shadow attenuation is calculated individually for each light, so I can just use the light selection method I outlined earlier in this post to have objects only calculate and add attenuation from certain lights.
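Concretely, that just means reading the shadow attenuation inside the color-matched branch of the loop from earlier, along these lines (again a sketch):

Light light = GetAdditionalLight(i, WorldPosition);
if (abs(light.color.r - TargetRed1) < 0.001)
{
    float contribution = saturate(dot(WorldNormal, light.direction)) * light.distanceAttenuation;
    // Only lights that pass the color check ever contribute their shadows.
    Shading += contribution * light.shadowAttenuation;
}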
This is important, because I want player characters to be able to cast shadows on themselves, but not on each other. I want to maintain the appearance of having the lighting directions be different for different characters, and I think that having shadows be cast between characters would be kind of jarring when combined with that style. Also, the standard practice in fighting games is to have characters be rendered in two distinct "layers" without ever clipping into each other. When character models overlap, one of their models will always be unobscured, entirely in front of the other. Generally, the character who most recently landed an attack is always in front.
I think this is a smart design decision, as it places emphasis on the attacking character and helps maintain a consistent visual profile for their attack animations. I'm not quite sure how exactly I would implement this effect in Unity, but I imagine it would either not be possible alongside inter-character shadows (as I'd have to render the characters separately in different locations with different cameras and then superimpose them), or it would look strange alongside them (the characters would appear separate in the render even though, in the actual game space, they're clipping into each other, which would greatly affect how they occlude each other's light).
In order to have separate shadows for separate characters, I'd have to implement the shadow map for additional lights rather than the main light. However, no matter what I tried, I just couldn't replicate these results with additional lights. I kept running into the same roadblock: the shadowAttenuation value for additional lights seemed to output a constant value for every point on every object within their range, without actually calculating any shadows. Despite following various tutorials and trying to work things out myself straight from the documentation, nothing fixed the issue.
Eventually, I tracked down this forum thread from March-April of this year, which seemed to be describing a similar issue to what I was having.
Incidentally, this is the same Unity staff member from my previous post who said that the Forward+ path's structure for handling additional lights was "not exactly nice."
I tried opening my project in Unity 6.2 Beta, but it told me I didn't have a valid license, even though I couldn't find anything stating you needed a specific license in order to access beta versions.
I'm currently in the process of downloading the most recent version of 6.1. Once it's done, I'll test how the additional light shadows work there. I may also try the 6.2 Beta again and see if I can work out the license issue.