Things and stuff

I’ve been spending some time rethinking and re-implementing part of my material system to make it more versatile than it was.
Since my renderer still uses a hybrid approach (a deferred pass handles the bulk of the lighting, standard forward rendering the rest), what I can and can’t do is pretty apparent.
But. With my latest additions the renderer is far more fully featured and can be used to create a plethora of effects.
I still don’t handle transparency that well. Sorting transparent geometry is always difficult to get right, so I probably won’t try to make more of it than I have to. Basic back-to-front sorting will have to do.
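For reference, the basic back-to-front sort amounts to ordering transparent objects by their distance from the camera before drawing. A minimal sketch (the `Vec3` and `Drawable` types here are made-up stand-ins, not the engine’s actual ones):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical stand-ins for the engine's math and render types.
struct Vec3 { float x, y, z; };

struct Drawable {
    Vec3 position;  // world-space center of the object
    int  id;        // placeholder for the actual render data
};

static float distSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sort transparent drawables so the one farthest from the camera renders first.
void sortBackToFront(std::vector<Drawable>& items, const Vec3& camera) {
    std::sort(items.begin(), items.end(),
              [&](const Drawable& a, const Drawable& b) {
                  return distSq(a.position, camera) > distSq(b.position, camera);
              });
}
```

Sorting whole objects like this still breaks on intersecting or self-overlapping geometry, which is exactly why I’m not trying to make more of it than I have to.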


Here’s a boring ol’ wall tile with a texture applied.
Now how this is being rendered is of course mostly up to the material attached to that object.
We can change the material to do something else:

Here I turned the material into a mirror-shine super glossy material.

I also added a “custom” routine for effects that fall outside the reach of my generalized pipeline, so we can render those with custom shaders and blending operations.
The main problem with that is, of course, that they usually can’t take part in the scene’s lighting.
To remedy that, there’s a “super crisis-system” (this is not its actual name) that allows arbitrary shaders to access the scene’s lighting manually, so we can at least get some of the lighting onto them. Right now I’m planning on allowing access to the 4 biggest nearby lights. I’m filtering out small and faint lights since these don’t affect the geometry as much (duh).
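Picking “the 4 biggest nearby lights” comes down to scoring each light by its intensity attenuated over distance, dropping the faint ones, and keeping the top four. A sketch under assumptions (the `Light` struct, the inverse-square falloff, and the cutoff value are all illustrative, not the engine’s actual code):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical light representation.
struct Light {
    float x, y, z;
    float intensity;
};

// Return the strongest lights as seen from the object at (ox, oy, oz).
std::vector<Light> pickStrongestLights(std::vector<Light> lights,
                                       float ox, float oy, float oz,
                                       std::size_t maxCount = 4,
                                       float minScore = 0.01f) {
    auto score = [&](const Light& l) {
        float dx = l.x - ox, dy = l.y - oy, dz = l.z - oz;
        float d2 = dx * dx + dy * dy + dz * dz;
        return l.intensity / (1.0f + d2);  // simple inverse-square-ish falloff
    };
    // Filter out lights too small or faint to matter at this object.
    lights.erase(std::remove_if(lights.begin(), lights.end(),
                                [&](const Light& l) { return score(l) < minScore; }),
                 lights.end());
    // Strongest first, then truncate to the per-object budget.
    std::sort(lights.begin(), lights.end(),
              [&](const Light& a, const Light& b) { return score(a) > score(b); });
    if (lights.size() > maxCount) lights.resize(maxCount);
    return lights;
}
```

The surviving lights would then be uploaded as uniforms so the custom shader can do its lighting manually.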

I finished putting in support for the last channel of the dedicated “mask” texture that is available through the material’s texture slots.
The “mask” texture is a special texture that stores 3 textures in one. One texture per channel.
It stores Specular Gloss, Fresnel and Reflection.
I find it a tiny bit unintuitive to cram textures into the channels of other textures, but artists don’t generally seem to mind; plenty of other engines do something similar.
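The packing itself is simple: each grayscale map becomes one channel of the combined RGB image. A sketch (the channel order R = specular gloss, G = fresnel, B = reflection is my illustration here; the actual assignment may differ):

```cpp
#include <cstdint>
#include <vector>

// One RGB image holding three grayscale masks, one per channel.
struct MaskTexture {
    int width, height;
    std::vector<std::uint8_t> rgb;  // interleaved R,G,B
};

MaskTexture packMask(int w, int h,
                     const std::vector<std::uint8_t>& gloss,
                     const std::vector<std::uint8_t>& fresnel,
                     const std::vector<std::uint8_t>& reflection) {
    MaskTexture out{w, h, std::vector<std::uint8_t>(static_cast<std::size_t>(w) * h * 3)};
    for (std::size_t i = 0, n = static_cast<std::size_t>(w) * h; i < n; ++i) {
        out.rgb[i * 3 + 0] = gloss[i];       // R: specular gloss
        out.rgb[i * 3 + 1] = fresnel[i];     // G: fresnel
        out.rgb[i * 3 + 2] = reflection[i];  // B: reflection
    }
    return out;
}
```

The upside is one texture fetch in the shader instead of three, at the cost of the slightly unintuitive authoring step.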

Aaah. What else?

Haha. 😀 Just for fun I tried putting in a monster from Doom 3 to see how the texture maps translated into my engine’s material system.

Hmm. I guess that’s about it for right now. I should really document what I actually do better. I feel like I forget more than half the things.


Uhh…

Not surprisingly, I’ve run into some more “trouble”.
I mentioned briefly quite a few posts back that I was suffering from some issues with alpha testing. Most notably, it not working at all.
I dug a bit deeper into it and it seems like my OpenGL context somehow got the idea that GL_ALPHA_TEST is in fact deprecated and therefore completely unsupported.
I admit I’m not entirely sure as to why this happened just now, and why it seems to be local to my current project…

I don’t really know what to make of it.
At first I thought it was a driver bug, seeing as it stopped working suddenly. But then I updated my drivers twice without any notable effect, so that possibility is out. (Plus my older projects still work, and they use GL_ALPHA_TEST.)
Now I don’t know what to say. I can still alpha test in my shader, but I fear it may suffer performance impacts. And it already has, with my “vegetation test” a few posts back.
Though to be completely honest, it does seem like alpha testing is taking a turn toward a shader-based approach, if it hasn’t already. As far as I know, Direct3D 10 removed the fixed-function alpha test entirely, so there it has to be done in the pixel shader.
Seems reasonable that modern OpenGL versions do something similar…

Or something like that, I don’t really know. It’s kind of a mind-effer…

Edit: And yes, my OpenGL context is backwards compatible so that’s not the problem here.

Edit #2: Alright. My friend confirmed that GL_ALPHA_TEST and its kin are in fact deprecated. So I’m sticking to my manual alpha testing. It’s good to know I’m not going insane. The moment I saw that glEnable(GL_ALPHA_TEST) generated GL_INVALID_ENUM and other errors no matter what I tried, I was all but sure I’d gone off the deep end.
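For the record, the manual replacement is a one-liner in the fragment shader: sample the texture and `discard` fragments whose alpha falls below a cutoff. A sketch, with made-up uniform and varying names (the shader source is held as a C++ string constant here just for illustration), plus a tiny CPU-side reference of the same comparison:

```cpp
// GLSL fragment shader performing the alpha test that the deprecated
// GL_ALPHA_TEST / glAlphaFunc path used to do in fixed function.
const char* kAlphaTestedFragmentShader = R"(
#version 330 core
uniform sampler2D uDiffuse;
uniform float uAlphaCutoff;   // e.g. 0.5, replacing glAlphaFunc(GL_GREATER, 0.5)
in vec2 vTexCoord;
out vec4 fragColor;

void main() {
    vec4 color = texture(uDiffuse, vTexCoord);
    if (color.a < uAlphaCutoff)
        discard;              // kill the fragment, as GL_ALPHA_TEST once did
    fragColor = color;
}
)";

// CPU-side reference of the same comparison, handy for sanity checks.
bool passesAlphaTest(float alpha, float cutoff) { return alpha >= cutoff; }
```

One caveat (and likely the performance impact I saw on the vegetation test): `discard` can defeat early-Z optimizations on some hardware, so it’s worth restricting to materials that actually need it.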

CONCLUDED

OK. That is REALLY it.

I have solved the problem finally… No more light-bleeding, no more weirdness, no more nothing of that sort.

I’ve been fighting this problem for so long. So VERY long.

It never ceases to amaze me how the problem always lies in the absolute last thing you check. I feel this is an established concept, but I’m blanking on its name right now.
HOWEVER, history has also shown (in my case, at least) that the problem may still exist and that I only think I’ve solved it.
Time will tell. But I really think this is the end of this for a while at least. (I hope)

Now, onto the actual problem.
This may sound exaggerated to some, but I’ve been having this same problem for pretty much as long as I care to admit.
I estimate it at a year, almost to the day.
The error is hard to track down, however, as it sits in an area where dozens of things could be wrong.
Add to that the fact that the error doesn’t show up that often, so I’ve frequently found myself forgetting it exists.

The previous posts claimed that the deferred-rendered scenery didn’t have the same problems the forward rendering did. That’s not actually true, and I regret saying it with that level of certainty. Upon much closer inspection (debug information and such), the artifacts do exist there too, but they look very different from the forward-rendering ones and aren’t even half as conspicuous, so I overlooked them.

Anyway. The problem lies on the CPU side of the tangent-space setup (not in the shader, like I suspected): the model-loading code where I generate the vertex tangent vectors.
The problem is not the code that generates the tangent vectors; the error is in when I was generating them.
I have to process my loaded models to split up cases where vertices share normals or texture coordinates, which is a normal thing to do.
But I was generating the tangent vectors before that step, so I ended up with vertices that shared tangent vectors instead.
This explains why it only happened in some cases. I think I actually mentioned in one of the earlier posts that the errors occur in areas where the tangents vary wildly, or point away from each other.
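To make the ordering concrete, here’s a sketch of the standard per-triangle tangent generation that has to run after the vertex-splitting pass (the `Vertex` layout is a simplified stand-in for the real loader’s; a real pass would also normalize and orthogonalize afterwards):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Simplified vertex: position, UV, and the tangent to be filled in.
struct Vertex {
    float px, py, pz;   // position
    float u, v;         // texture coordinates
    float tx, ty, tz;   // tangent (accumulated below, start at zero)
};

// Per-triangle tangents from position and UV deltas, solving
// edge1 = dU1*T + dV1*B and edge2 = dU2*T + dV2*B for T.
void generateTangents(std::vector<Vertex>& verts,
                      const std::vector<unsigned>& indices) {
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const Vertex& a = verts[indices[i]];
        const Vertex& b = verts[indices[i + 1]];
        const Vertex& c = verts[indices[i + 2]];

        float e1x = b.px - a.px, e1y = b.py - a.py, e1z = b.pz - a.pz;
        float e2x = c.px - a.px, e2y = c.py - a.py, e2z = c.pz - a.pz;
        float du1 = b.u - a.u, dv1 = b.v - a.v;
        float du2 = c.u - a.u, dv2 = c.v - a.v;

        float det = du1 * dv2 - du2 * dv1;
        float r = (std::fabs(det) < 1e-8f) ? 0.0f : 1.0f / det;

        float tx = r * (dv2 * e1x - dv1 * e2x);
        float ty = r * (dv2 * e1y - dv1 * e2y);
        float tz = r * (dv2 * e1z - dv1 * e2z);

        // Accumulate onto each of the triangle's vertices.
        for (std::size_t k = 0; k < 3; ++k) {
            Vertex& vtx = verts[indices[i + k]];
            vtx.tx += tx; vtx.ty += ty; vtx.tz += tz;
        }
    }
}
```

Run this before the split, and two faces with diverging UVs accumulate into one shared tangent, which is exactly the bug: the averaged tangent is wrong for both faces wherever the tangents point away from each other.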

But. It should be good now.

Some stuff

Continuing the destructive streak that is my battle against the shaders:

OK. My stupid curiosity just won’t leave this light-bleeding bug alone, and it keeps messing with my head, so I went back.
This is where it would be terrific to tell you that it was solved and that we can all live happily ever after.
It’s not, though. It’s still crap, but less.
I put in some additional code to handle light falloff in a more predictable manner, and I managed to make the errors show up a lot less.
They’re still not gone, since I’m really not sure what’s causing them to begin with, but I’m fairly sure they won’t be that bothersome anymore and probably won’t put some artist in a tough spot where they think their model sucks when it doesn’t.

But that’s really it. I’ll just keep it “as is” and not care much about it anymore. I’m going to kick that shader right out the door the second I get deferred shadow maps working anyway, so I can deal with a few more weeks of nebulous lighting bugs.

Terrible thing to have to fight with obsessions like this…

Thoughts

OK. This is really more than any man should endure, dealing with the same goddamn stuff for what seems like years.
But what can I do? I’m kind of a masochist like that: when I’m faced with a problem in my own work I can’t let it go, or I’ll feel terrible.

This will have to be one of those times, and boy does it feel terrible. It’s bordering on depressing, really.

Anyway, what am I talking about?

This.

(the model is a test-case model of a humanoid, not important really)
This is what I’m talking about. What the hell is that? Why does that need to happen?
It’s not some minute forgotten detail like an unnormalized vector or anything like that; I’ve been through every vector, and I’ve come down on every error like a hammer.
I’m all but tearing my own hair out over that crap, that filthy artifact that pops up every now and then.

Anyway… I started thinking. The UV-mapping for that model is an absolute shame, and I understand that bad UV-mapping can break normal mapping without skipping a beat. But as I mentioned in my last post: nothing is wrong with the normal mapping in deferred rendering.
That’s the kicker right there. Now, I get that deferred rendering happens strictly per-pixel, and that may be why those results turn up fine without these banding artifacts. But I’m really wondering: is there per-pixel normal mapping for forward rendering? If so, is that the way to go?
I’ve tried moving some of the calculations over to the fragment shader, but that did absolutely nothing. In fact, it changed so little (it didn’t change at all) that I had to make sure I wasn’t editing the wrong file. I wasn’t…

This may not seem like a big thing. And I admit that with a proper normal map and models with good topology these artifacts are only visible in rare cases, if at all. That could of course be overlooked. But I can’t do it, I can’t keep going, knowing there’s something that wrong going on with that particular shader.
What’s worse is that it will show up in developer tools, meaning some poor 3D modeler will think their model was crappy, when in fact it was my problem all along.
I feel like screaming…

Still though. As I mentioned: At this point I’m willing to turn the key, walk away and have it burn to the ground behind me. I can’t really spend more time on this and I’ll just have to feel terrible about it.

Edit: Alright, OK. I just have to point out: of course I know the difference between per-vertex and per-pixel lighting. What I mean is: is there a way to perform normal mapping that is as per-pixel as deferred rendering, but in forward rendering?
I doubt that there is, or that it’s even necessary, but what the hell? I’m kind of pissed off and not thinking straight about this.

Quick update

Terrible things.

Well, I’ve known it for some time but it’s about time I trash the spotlight rendering code I have.
Which means I may have to temporarily pull all shadowing support from the engine as a result while I search for a better way to deal with things.

Anyway, the reason is still the light-bleeding bugs I’ve mentioned quite a few times in past posts, ever since I started noticing them.
The path to solving this issue has been long and thorny, and I’m not even finished yet. That’s why I’ve decided I’m not going to finish it, either.

It really doesn’t make any sense to me at all. The bug is that there are areas in the model that turn up with these weird “pinched” artifacts where the light behaves oddly and it just looks terrible.
The funny (extremely depressing) part is that my deferred-rendered lights work flawlessly and there are no lighting bugs to speak of.
This means that the problem cannot be the vertex normals, tangents or bitangents, which I suspected a while ago. This also means that it probably isn’t the constructed TBN matrix either, since the three mentioned vectors that make it up are correct.

Anyway. I’ll maybe take a few more shots at fixing it, but I feel this problem really has overstayed its welcome.
I might look into deferred shadow maps instead. Seems like a good choice since I’m already working with a deferred system.

Bye.

Detail work

That’s the term I’ve chosen for this particular endeavor. Basically, I’ve scoured through pretty much the entire engine code-base and fixed a ton of tiny things everywhere to make it run smoother in a lot of circumstances.
One of the cooler things I did was to finally (I mean FINALLY!) move the deferred rendering code to use linear depth instead of perspective depth, which sped things up like I couldn’t believe.
This also means that I lost (scrapped) an entire render target. I can now tuck my linear depth into the normal buffer’s alpha channel instead of having a unique buffer for it. My normal buffer is a 16-bit floating-point render target for precision anyway, so this was a fine opportunity.
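The idea behind the linear-depth trick is just remapping view-space depth into [0, 1] with the near and far planes, so that reconstructing positions from the G-buffer later is a single multiply-add instead of unprojecting hyperbolic depth. A sketch of the common convention (not necessarily the engine’s exact formula):

```cpp
// Map view-space depth (positive distance in front of the camera) into [0,1]
// for storage in the normal buffer's alpha channel.
float linearizeDepth(float viewZ, float nearPlane, float farPlane) {
    return (viewZ - nearPlane) / (farPlane - nearPlane);
}

// Invert the mapping when reading the G-buffer back in the lighting pass.
float reconstructViewZ(float stored, float nearPlane, float farPlane) {
    return nearPlane + stored * (farPlane - nearPlane);
}
```

Unlike post-projection depth, this value is evenly distributed over the view range, which is also friendlier to the 16-bit float channel it’s being packed into.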
So I’d say that the deferred rendering part of the engine’s renderer is feature complete and I can let that sit for a while before tearing anything out again.

I also extended the renderer a bit in what concerns the forward rendering part: I added a way of using a custom shader to display an object in the world instead of going through the usual pipeline.
This is of course only really suited for the (hopefully) few situations where it’s actually needed. I say this because this part of the renderer is slower than the rest. I’m of course going to optimize it the best I know how, but it’ll still be slow in comparison.
This is also where I plan on dealing with any and all alpha blended geometry since that has no place in deferred rendering without a few adjustments, which I will not look into right now.

I fixed an older program I made a while back that I use to compile cubemaps for the engine. All it does is pack a bunch of images into one file with a header, so it’s easier to load and carry around without separate files to deal with.
Anyway, it seemed to have stopped working, so I fixed it up and it should hopefully work properly now.

I reworked the asset pipeline a little and added support for PSD file loading, in case someone wants to see how a texture looks without going through the trouble of saving it in one of the accepted real-time formats.

Gosh. There’s just too much to list; I don’t even know what to write next…
So I’ll just leave it at that, and I’ll leave the rest of the improvements to your imagination, which I’m sure you’ll use responsibly.
I might stop by in a few days or something with some new pics… Or dare I say it- Video??
We’ll see.
Bye for now. 🙂