Shader on a custom model + UV-based shading and more!

 Behold:

This is a 3D model I made myself, with my custom shader applied. I also made quite a lot of modifications to the shaders since my last post, which are all showcased in this screenshot. 

My initial goal for this project was to produce a unique visual style that looks good and stands out, while also not requiring too much effort on the asset production end in order to make things look acceptable. In my personal assessment, I think I've delivered on that goal successfully. 

I'm pretty happy with how my model looks, and given that I have fairly limited experience with 3D modeling and no prior experience with character modeling on this scale, I'm also pretty happy with the amount of time it took (I started the modeling process on Aug 6 and it's currently Aug 19, so about two weeks). I expect that the production process will get much faster as I get more experience, and I think I've managed to establish a good structure that will make it easy to add new features.

I'll try to go over everything I did, but I forgot to take detailed notes throughout the process because I was so locked in, so I'll probably forget some stuff.

The Model

First, the modeling process. This was my first time making a humanoid 3D model, and my first time modeling anything at this level of complexity. I don't think it's worth going into too much detail here, though, since I don't think much of what I did was particularly unique or specific to the demands of this project.

I mostly just followed along with this Maya character modeling tutorial from Alex Cheparev.


The tutorial was pretty easy to follow. 

I don't have the drafting skills needed to produce a super detailed reference image, but I made do with what I had. I sketched out the basic shape of the features I wanted mostly to give myself a sense of proportion, as I feel like I have a much better sense of proportion when drawing than when I'm working in 3D.


For much of the process, though, I was just making it up as I went along, and I was perfectly fine with breaking from the reference as I saw fit.


Initially, I was putting quite a bit of emphasis on having a finely tuned, detailed mesh. I put a lot of effort into trying to get the hands and face to look realistic, and having lots of fiddly little edges and other bits for outlines to show up on.


However, when I did my first test to see how the model actually looked in Unity, I ended up being pretty disappointed. Things just didn't look big enough. I think the outlines especially didn't play too well with the realistically-proportioned fingers: they ended up looking very chunky in comparison to the fingers and made them feel even smaller. I also realized that the shader works best with two types of features: 1. big, blocky figures with hard edges, and 2. soft-looking, "lumpy" materials with lots of small ripples for the shading to play off of.

From that point forward, I decided to keep things simple and blocky, and not worry too much about the finer details.


However, I must admit that I wasn't really thinking about or keeping track of my poly count. I made pretty liberal use of subdivisions to give myself more geometry to sculpt, and of the bevel tool to add detail to edges (which, in retrospect, is extremely unnecessary given that those edges are going to get painted over by the outline shader anyway). The poly count is definitely way higher than it has to be, and I'd like for my game to be able to run on less powerful machines:


My inexperience is probably showing here. There's a lot I can do to optimize these models, though. Instead of subdividing my models then sculpting to add details (which, in fairness, was the main method shown in the tutorial series), I should definitely be using normal maps. Normal maps would probably work quite well in this context, as the main reason I did so much subdividing in the first place was because I wanted to add more details like ridges and waves for the normal-based shading to pick up on. I remember finding this tutorial on how to bake texture maps in Maya and apply them in Unity early on in my investigation process. It's probably a good time to check that out again.

Thankfully, I'll probably have a lot of performance budget to spend on characters, as I can save a lot everywhere else: I'm skipping over a lot of lighting calculations, and for fighting games, the players don't have a lot of control over the camera and there isn't much emphasis on paying attention to things at far distances, so you can get away with less detailed environments (or even prerendered 2D backgrounds).

Color-Coded Shading

Now, let's go over some modifications I made to how the shader interacts with textures. Before, my plan was to texture my models with one color, then apply a hue shift effect to the texture in the shader graph before running calculations in order to achieve different colors.

After some consideration, though, I figured that there was probably a better method. Given that I'm planning on my characters being monochromatic, I could probably decouple the color completely from the texture, and instead use the texture to encode other parameters.

First, I added new color inputs to my shader called "ShadeColor" and "GlowColor." These will determine the colors that appear on the model.


Then, I changed how the shader graph handles the input texture. Instead of feeding the texture straight into the shading algorithm, the graph now splits it into its red, green, and blue channels.


I'm using these channels to encode different values: the red channel encodes the saturation of the input to the shading algorithm, and the green channel encodes the value. We'll talk about the blue channel later.

Unity has a built-in "Saturation" node, but it applies weights to the red, green, and blue channels in order to account for how humans perceive the brightness of different hues (i.e., it's a luminance-based desaturation).


I decided I didn't want that, but I didn't see a built-in node that modifies saturation without weighting the channels, so I had to implement the math myself:


It's not as complicated as it looks. I start with the ShadeColor input, then take the highest value between the red, blue, and green channels of the shade color. I take the difference between that highest channel and the other two channels, and add a fraction of that difference to those two other channels based on the red channel of the input texture. When the red channel of the input texture is at zero, the shade color is unmodified, and when the red channel is at its maximum value, the red, blue, and green channels will all have the same value, resulting in zero saturation and a grayscale color.
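In code form, that works out to a per-channel lerp toward the color's own maximum channel. Here's a minimal Python sketch of my reading of the node graph (the function and parameter names are my own, not actual Shader Graph node names), with channels in the 0–1 range:

```python
def desaturate(shade_color, texture_red):
    """Unweighted desaturation: lerp each channel toward the color's
    own max channel, driven by the texture's red channel.

    texture_red = 0 -> shade color unchanged
    texture_red = 1 -> all channels equal the max (grayscale)
    """
    m = max(shade_color)
    return tuple(c + texture_red * (m - c) for c in shade_color)

# A saturated orange-red, progressively desaturated:
print(desaturate((1.0, 0.25, 0.0), 0.0))  # unchanged
print(desaturate((1.0, 0.25, 0.0), 0.5))  # halfway to gray
print(desaturate((1.0, 0.25, 0.0), 1.0))  # fully gray (1.0, 1.0, 1.0)
```

Unlike the built-in node, every channel is treated identically, so no hue ends up privileged over another.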


The value modification is much simpler. I just take the green channel of the texture and multiply the shade color by it. 


If the green channel of the texture is 0, then all of the channels of the shade color will become 0, resulting in pure black. If the green channel of the texture is at its maximum, then the shade color will be unmodified.

This allows me to create a range of saturations and values. Assuming that the saturation of the shade color is maxed out by default, I can create any color ranging from pure black (by setting the texture's green channel to 0) to pure white (by setting both the red channel and the green channel to maximum). It's perhaps a little confusing that max red channel = minimum saturation, but I think it makes sense here: higher values from the texture = less shading/closer to white, lower values = more shading/closer to black.
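Putting the two modifiers together, the full texture-to-shading-input mapping can be sketched like so (again, hypothetical names; the real thing is a chain of Shader Graph nodes):

```python
def shade_input(shade_color, tex_r, tex_g):
    """Apply the texture's red channel (saturation) and green channel
    (value) to the shade color before it enters the shading algorithm."""
    m = max(shade_color)
    desaturated = [c + tex_r * (m - c) for c in shade_color]  # red channel: desaturate
    return tuple(c * tex_g for c in desaturated)              # green channel: scale value

# With a fully saturated, full-value shade color (pure green here):
print(shade_input((0.0, 1.0, 0.0), 0.0, 1.0))  # unmodified shade color
print(shade_input((0.0, 1.0, 0.0), 1.0, 1.0))  # pure white
print(shade_input((0.0, 1.0, 0.0), 0.3, 0.0))  # pure black
```

This makes the full black-to-white range above concrete: green at 0 zeroes everything out, and red and green both at max pushes every channel to 1.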

Also, note the presence of a "lightness multiplier" parameter, which gets multiplied with the red and green channels of the texture before they're used to modify the saturation and value.


This parameter allows me to control the extent of both types of shading at once (the color and the black hatching).

After applying the saturation and value modifiers to the shade color, I input the result into my shading algorithm, taking the place of the texture in previous iterations of the shader.


From here, things operate mostly the same as they did before. There's one modification, though. In previous iterations of the shader, I would take the result of combining the texture with the normal dot product map and shadows, then split it into color channels and take the average. I did this because the hatching operation requires a greyscale color input in order to work properly.


I do something similar in this version, except instead of just taking the average, I divide the sum of the color channels of the modified shade color + normal dot map + shadows by the sum of the channels of the original shade color.


I do this because not all hues have the same sum of channels given the same saturation and value. For example, pure red, blue, and green at max saturation will have one channel at 255 and the others at 0, while other hues at max saturation will have one channel at 255 and one other channel at somewhere between 0 and 255.
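The channel-sum normalization can be sketched as follows (a simplification of the graph, assuming the modified color is what comes out of combining the shade color with the normal dot map and shadows):

```python
def shading_intensity(modified_color, original_shade_color):
    """Collapse the shading result to one grayscale value by normalizing
    against the original shade color's channel sum, so the chosen hue
    doesn't change how much hatching appears."""
    return sum(modified_color) / sum(original_shade_color)

# Pure red (channel sum 1.0) and orange (channel sum 1.5), both dimmed
# to half brightness, produce the same intensity:
print(shading_intensity((0.5, 0.0, 0.0), (1.0, 0.0, 0.0)))    # 0.5
print(shading_intensity((0.5, 0.25, 0.0), (1.0, 0.5, 0.0)))   # 0.5
```

With a plain average instead, the orange character would read as brighter to the hatching step and get shaded differently, which is exactly the hue-dependence being corrected for.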


Basically, I'm envisioning a system where instead of picking alternate colors for their character (like you can in most fighting games), players instead pick a hue for their shading. I want the extent of the shading to be the same regardless of which hue the player picks. However, because the shading is currently based off of the sum of the channels of the input to the shading algorithm, and not on their saturation/value, the extent of the shading would be different unless I correct for the difference resulting from hue.

Anyway, to make a long story short, the result of all this is that now, the color of the shading can be easily controlled by a parameter in Unity, rather than having to mess with the texture file or apply a hue shift filter.


There is still the issue of apparent brightness, though: if you select a hue somewhere in between pure R, G, or B, the colors that show up will appear brighter, even if the area covered by the shading and the saturation/value distribution isn't any different. I'm personally not a big fan of this and would like to fix it.

I could probably solve that issue by scaling down the value of the color based on the sum of the channels and changing what gets used to calculate where the shading goes, but I don't feel like spending the energy right now to figure out what the math needs to look like.

Glow

Here's another modification I made to my shader graph:


As you can see, there's now a Lerp just before the output to the fragment shader. The "A" end of the Lerp is the result of the shading calculation we've already established, while the "B" end is the "GlowColor" input multiplied by another input called the "GlowMultiplier." The Lerp is driven by the blue channel of the texture (which you can't see in this image).

Essentially, what this does is make it so that, wherever the blue channel of the texture is greater than 0, the regular shading gets replaced (partially or fully) by the glow color. Also note that both the glow color and the T of the Lerp are being multiplied by a significant factor. This will result in a color with channels way over the usual maximum.

And if "Bloom" is enabled in the URP settings, then color values greater than 1 will cause objects to "glow" and appear to emit light of those colors. 
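My reading of this part of the graph, approximated in Python (an unclamped lerp, matching how I've wired it; all names are mine):

```python
def apply_glow(shaded, glow_color, glow_multiplier, tex_b):
    """Lerp from the shading result toward a boosted glow color, driven
    by the texture's blue channel. Both the glow color and the lerp T
    get multiplied up, pushing channels past 1.0 into HDR range so that
    URP's Bloom effect makes the surface appear to emit light."""
    boosted = [g * glow_multiplier for g in glow_color]
    t = tex_b * glow_multiplier
    return tuple(a + t * (b - a) for a, b in zip(shaded, boosted))

# Blue channel at 0: shading passes through untouched.
print(apply_glow((0.2, 0.3, 0.4), (0.0, 1.0, 1.0), 4.0, 0.0))
# Blue channel at 1: glow channels blow well past 1.0.
print(apply_glow((0.2, 0.2, 0.2), (0.0, 1.0, 1.0), 4.0, 1.0))
```

One thing this sketch makes visible: with an unclamped T above 1, channels that are zero in the glow color can actually get pushed negative, which is probably part of why the math still needs tweaking to play nicely with different colors.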


I don't think it's the best-looking effect in the world, and I still need to tweak the way the math works in order to make it play nicer with different colors. Still, this is a nice, easy way to implement glowing body parts, which I think will work nicely with the game's overall aesthetic.

I was envisioning a system in which players would pick two colors for their character: one for the shading, and one for a glow that would show up on select parts of the character model in order to accentuate certain features. I imagined that this would extend to the effects and UI as well: for example, the hitsparks that appear when a character lands a hit could all be in that character's glow color, adding some flair and a sense of "ownership" to the hits you land.

Actually, there's pretty much no limit to how far I can go with these "effect maps." There's no limit on the number of textures I can assign to each object, and I can do whatever I want with those textures inside the shader graph.

UV Mapping

Up until now, all of my textures have been solid single colors that apply to the entire object. But if I UV map my model, I can use my color-coded texture approach to apply different effects to different parts of the object.

Here's what my UV map ended up looking like:



As you can see, it's not particularly detailed or well thought-out. The scale is all over the place, and most of the mesh components are just in one huge chunk, with not much thought put into creating well-proportioned, flat surfaces for painting. The "goggles band" UV is particularly nasty-looking, because I somehow managed to create non-manifold geometry and I couldn't figure out how to fix it without just redoing a lot of the modeling. However, I think all of that is fine for now, because the only thing I care about at this stage is separating all of the components that I want to be shaded differently.

Now, here's my UV map with the texture painted in:


As you can see, every UV shell just has a single solid color painted over it, with most of them having approximately the same shade of pea-soup green. Some things to note: the hair and undershirt are slightly closer to yellow, indicating higher red and green channels, which results in lighter shading. The red-shaded UVs have lower green channels, which will make them closer to black. The palm is shaded blue, which will result in it having a glow effect (the soles also have some blue in them, which will result in a weaker glow).

You can see all of this reflected in my model:


Note that different parts of the model are lighter or darker than others, and also have different amounts of color in them. That's from me manually setting the saturation and value: more saturation = more color, lower value = darker/more shading. You can also see the glow effect on the palms and the soles of the boots. The glow effect on the boots is considerably weaker: the radius doesn't extend as far.

Also note that there are black outlines at the top and bottom edges of the belt, at the top and bottom of the shirt, and at the edges of the side sections of the hair. Those come from me manually painting the UVs for those areas with a lower green channel: the edge detection algorithm doesn't find depth or normal edges there because the geometry at the edge is very close together and also facing in approximately the same direction. This is nice, because it's hard to find settings for the edge detection algorithm where it detects all the edges I want and none of the ones I don't. Now that my model is UV'ed, though, I can keep the edge detection at low sensitivity and manually paint in the edges that it misses.

Overall, this approach makes it very quick and easy to apply different "materials" to different parts of my model without putting much effort into UV mapping and texturing. However, I may have to actually put some real effort into UV mapping (like, actually thinking about texture size and how I want to split up my model) once I get around to implementing normal maps, or drawing things like flat symbols or designs onto my texture. The weird color-coded approach might make creating intricate symbols or designs especially difficult to wrap my head around, but I imagine that, in this art style, those kinds of things would work better if they're kept simple and two-tone anyway, which will thankfully make them much easier to work with.

Honestly, though, I think that whether it's worth expending more effort and adding more detail will depend on how good it looks in motion. I think I'm fine with players noticing "mistakes" if they really scrutinize the models: I'm only concerned if it gets to the point where it's too distracting for them to get immersed into the gameplay. I think I'll probably want to optimize this model a little and tweak how the shaders do color math, but I think I'm mostly ready to rig this thing up and start making some animations.
