Programming

Piling up

Engine work continues. I’ve added some new things and improved some others. It’s going pretty smoothly at this point.
A preliminary CPU profiling session turned up no real issues, so it’s also going well as far as performance goes.

I’ve now completed the very basic animation support that I started a while back, and it’s actually kind of usable right now.
It’s of course missing important things such as morph targets and IK support. (Though I probably won’t bother with IK just yet.)
I’m also going to try to put in things such as “jiggle bones” and other secondary animation controllers, just for completeness’ sake.

Even though most of what should be working is working right now, there’s still a mountain of things to change and fix, mostly to do with how the engine’s parts communicate with each other.
The renderer right now is pretty self-sufficient. But since I decided to make it “absolute”, it’s not very customizable from the outside.
I’m fine with this “design flaw”, since it was never my intention to make a really generalized engine anyhow, and for projects that need something else I can customize the pipeline.
But all this is secondary, I feel. It’s important to have it working as intended first, and only then have it well implemented, once you know what the hell you’re doing.

Hmmm… That should be it for now. I’ll pop back in later to maybe post some screenshots.

Bye.

Awaiting the fall…

Yeah, guess I bought myself some time by fixing that animation problem. It turns out I wasn’t using the right matrix after all. I’m still not quite sure exactly what the difference is, but the point is it works now.

And by “works” I mean that it’s on par with the old exporter (for Blender 2.49b, which I’ve mentioned on several occasions) and the results it managed to spit out back in its heyday.

The rotations are nowadays stored as quaternions without the ‘W’ component, as opposed to the Euler angles they were back then. I recreate the ‘W’ component when I load the animations, mostly to save a little space in the format by storing a 3D vector instead of a 4D one.
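For the curious, this only works because the quaternion is unit-length, so ‘W’ can be recovered as a square root. A minimal sketch of the idea, assuming the exporter flips the quaternion so ‘W’ is always non-negative (the names here are made up, not my actual format):

```cpp
#include <cmath>

struct Quat { float x, y, z, w; };

// Rebuild the W component of a unit quaternion stored as (x, y, z).
// Assumes the exporter negated the quaternion whenever W was negative,
// so W can always be recovered as the positive root.
Quat unpackRotation(float x, float y, float z)
{
    float t = 1.0f - (x * x + y * y + z * z);
    float w = t > 0.0f ? std::sqrt(t) : 0.0f; // clamp against rounding error
    return Quat{ x, y, z, w };
}
```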

That’s all for now… 🙂

One step forward, two steps back…

That’s basically what’s going on right now. Just as some things started to fall into place, namely writing a renderer that’s not complete crap, other stuff started breaking down.

It’s the dreaded animation again: the Blender switch I made a while back broke something. But without a chance to really mess with it inside the engine, I didn’t know it was broken. So I just chalked it up as a smooth conversion and didn’t think about it any more.
That has changed, in a bad way. It’s completely messed up, and that’s because how I dealt with matrices in the old Blender Python API doesn’t translate that well into the new API.
Or maybe they do translate 1:1 and I just haven’t found the right matrices yet.

It really seems like I can’t catch a break with this at all. Even when I’m doing my best to logically track what’s going on with the matrices, it leaves me confused. It’s actually come to the point where I’m starting to blame the API for pulling tricks behind my back, knowing full well that that’s just a lame excuse.

So, in the worst-case scenario I get to spend a little more time with the old exporter for Blender 2.49, just until I figure that stuff out with the newer one.

I’ll figure it out somehow…

Back to it… Wheeeeee…

Edit: Forgot to mention. There’s a project I’ve been working on, sort of a toy project just to do something different alongside everything else, and I might make a sort of “announcement” post about it here. I’m just not sure if I’m going to follow through with that project to the end, since it does have a planned “end goal” type thing.
We’ll see. 🙂

Engine demo WIP

So, it’s finally time to make my hard work amount to something, and I’m in the middle of composing a small demo to show off the engine’s features and such.
The following screenshots are direct shots taken from the demo render. As is apparent, the engine only has the simple features working; I have no post-processing effects going on, since they haven’t been implemented yet (so no anti-aliasing :/).
But regardless, they should show the immediate quality of it.



They are both of the same scene, obviously, but one of them has an orange light in the shot, which currently circles around the scene.

There are some things the engine does that can’t be seen that easily in the screenshots. The metallic and glass surfaces in the scene are actually faking reflections using simple cubemapping. Since that can’t be seen in still shots like these, I’m planning on putting up a video once the demo scene is done.

So, I’ll just continue now. Bye. 🙂

Continued: The little deferred renderer that almost could

So, I went against my better judgment and tried my hand at fixing the deferred rendering again, and this time I was actually able to determine what that little “something” was that was wrong.
It was a value range that was supposed to be normalized but wasn’t… *cough*

It’s almost always something trivial, but you never expect it to be that simple, do you?
Anyhow. It works fine now. 🙂
I started working a little on some actual optimization, and so far it’s going pretty well, except for some tiny bugs creeping in, but those can probably be squashed pretty effortlessly.
In fact, the only thing that doesn’t work right now is the stencil culling routine for eliminating more of those pesky overdrawn pixels.
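For context, the routine I’m chasing is the common light-volume stencil trick: mark the pixels where scene geometry actually intersects the volume, then shade only those. A sketch in OpenGL of how that usually looks (Light and drawLightVolume are placeholders, not my actual code):

```cpp
#include <GL/glew.h>

struct Light {};                          // placeholder for the engine's light type
void drawLightVolume(const Light&) {}     // placeholder: draws the bounding mesh

void shadePointLight(const Light& light)
{
    // Pass 1: mark pixels where the scene intersects the light volume.
    glEnable(GL_STENCIL_TEST);
    glClear(GL_STENCIL_BUFFER_BIT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);              // we need both faces of the volume
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    // Back faces increment on depth fail, front faces decrement:
    // pixels inside the volume end up with a non-zero stencil value.
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);
    drawLightVolume(light);

    // Pass 2: shade only the marked pixels, skipping the overdraw.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);                 // still works when the camera is inside the volume
    drawLightVolume(light);               // same mesh, now with the lighting shader bound
    glCullFace(GL_BACK);
    glDisable(GL_STENCIL_TEST);
}
```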

I’ll just get back to it now, it’s so exciting! 😀

The little deferred renderer that almost could

Yep.

This is the part where I hit my first snag.
But regardless it’s not that big of a deal right now.

It’s once more the different spaces making things difficult for my poor brain.
And it feels a little counter-productive to have results that almost work.
Right now I try to keep everything in world space, as I find it more intuitive than other spaces such as view space. But I’ll eventually move to view space when I need to really optimize things.

So as I said, everything is in world space, except for depth, which I store as linear scene Z depth.
I then recreate a view-space position from this depth and convert it into world space to match the other data.
But this just produces an “almost right” result. There’s something I’m missing, I just don’t know what yet.
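For reference, the math I’m going for boils down to something like this (a C++/GLM sketch; it assumes the stored depth is view-space Z divided by the far plane, which is exactly the kind of range detail that can silently break this):

```cpp
#include <glm/glm.hpp>

// Rebuild a world-space position from linear depth.
// 'uv' is the pixel in [0,1]^2, 'depth' is view-space z divided by zFar
// (an assumption; adjust if the stored depth is normalized differently).
glm::vec3 positionFromDepth(glm::vec2 uv, float depth,
                            const glm::mat4& invProj,
                            const glm::mat4& invView)
{
    // Unproject the pixel onto the far plane to get a view-space ray.
    glm::vec4 clip(uv * 2.0f - 1.0f, 1.0f, 1.0f); // far plane in NDC
    glm::vec4 viewFar = invProj * clip;
    viewFar /= viewFar.w;

    // Linear depth scales the ray: at depth == 1 we sit on the far plane.
    glm::vec3 posVS = glm::vec3(viewFar) * depth;

    // Back to world space to match the rest of the GBuffer data.
    return glm::vec3(invView * glm::vec4(posVS, 1.0f));
}
```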

But like I said, I can roll with the world space position buffer a little while longer. I’m not really supposed to deal with optimizations just yet, but I’m just a human. 🙂

So anyway, it looks to me like this renderer will be a pretty sweet base for my projects, if and when I get the basic optimizations for the deferred part working.

Back to it! 🙂

Continuation

OK. There’s a few more things I need to vent just to clear my head.

Work is still going much as I expected, meaning there isn’t anything really threatening the continuation of this little renderer project I have going on. So that’s fine.
But there are a few considerations I have to take care of a bit further down the road concerning the deferred shading part of the pipeline.
The most pressing issue is of course optimization, and there are quite a number of things to do when it comes to this.
Seeing as it is completely unoptimized right now in terms of GPU work (and *mostly CPU work too…), I can’t really use it for “real” scenarios.
I need to shift the space I calculate things in to something less expensive, like view space, even though I recently read about a “view-space normal error” that I’ll probably run into and have to fix. It’s important for me to keep the GBuffer cost and execution time as low as possible.
I also need to clip my light volumes and such to keep the pixel overdraw to a minimum.
* Mostly meaning there are only a few things I’ve cared to optimize for the CPU workload so far…

There are also a few things I left out that might be necessary to include. One of them is a rim-lighting term for objects that need it, and that either adds a new pass to my forward renderer or some new code to my ambient pass shader.
I mentioned in the last post that the ambient pass shader already includes the emissive geometry term, so adding the rim lighting shouldn’t make it that much more burdening.
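The term itself is cheap, at least; something along these lines (a GLM sketch, where the color and exponent are made-up knobs, not values from my shader):

```cpp
#include <cmath>
#include <glm/glm.hpp>

// Basic rim lighting: strongest where the surface grazes the view direction.
// 'normal' and 'viewDir' are unit vectors; 'power' tightens the rim.
glm::vec3 rimTerm(glm::vec3 normal, glm::vec3 viewDir,
                  glm::vec3 rimColor, float power)
{
    float rim = 1.0f - glm::clamp(glm::dot(normal, viewDir), 0.0f, 1.0f);
    return rimColor * std::pow(rim, power);
}
```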

One of the potential problems I can think of is bridging the gap between forward and deferred rendering as seamlessly as possible. A deferred renderer essentially enforces a single material model on everything drawn with it, and in my case that has to hold for the forward rendering too. Therein lies a potential drawback, but if I play my cards right it shouldn’t really matter. I’m not looking to make that customizable an engine; one solid pipeline is good enough… *sigh*

I haven’t even touched on post-processing yet, but there are a few of those effects I’d like to have too, which adds a number of RenderToTexture passes to the pipeline.
HDR would be nice, but not entirely necessary.
And fortunately there’s a pretty cool anti-aliasing technique out now called FXAA which I’ve been dying to try out. 🙂
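Each of those passes needs an offscreen target to render into; in OpenGL that’s a framebuffer object with a texture color attachment, roughly like this (a sketch; the depth attachment and completeness checks are left out):

```cpp
#include <GL/glew.h>

// Create a color target for one RenderToTexture pass.
// 'texOut' receives the texture that later passes can sample from.
GLuint createRenderTarget(int width, int height, GLuint& texOut)
{
    glGenTextures(1, &texOut);
    glBindTexture(GL_TEXTURE_2D, texOut);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texOut, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
    return fbo;
}
```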

Now there are a few more things, but I’ll save this blog-post from being exposed to my lunatic raving.
So bye for now! 😀

Update

Hey there.

I’ve made some changes to my engine which will hopefully work out just fine, but if not I can always roll back to what I had before, even if it wasn’t that great…

Anyway, the change I made was to decouple my simple renderer into a new class designed to handle more advanced pipelines. I pulled it out of my scene manager… Yeah, I had just crammed the rendering right in there to save myself the trouble of dealing with it at the time.
But that’s all taken care of. The new renderer class is much, much better and it finally feels like I’m doing something right again. (Been a while since I felt that. :))

This time, however, I got inspired. I read online about some guy (please forgive me, guy, but I really can’t remember who it was) who had written a hybrid renderer that used a forward rendering part as well as a deferred renderer for other tasks. And this gobsmacked me, to say the least. It felt like some heavy weight was lifted off my head and I started planning.
It’s so obvious and so perfect for my needs. It offers me the best of both worlds!

There’s a reason why this suits me and my engine. Since forward rendering is still the most robust way of dealing with a few rendering issues (transparency being just one of them), it offers a pretty stable platform to stand on for lower-end rigs that might want to run my game(s).
I’m not going to go too crazy with supporting lower-end hardware though; this is just a half-measure.
So if the platform isn’t up for spending all that GPU memory on the GBuffer, it can default to my forward rendering solution.

Now this might seem obvious to some, creepy to some and maybe even a bit stupid to some.
But this is a good choice for my needs right now.

But enough of that. Let’s walk through some stuff it supports right now! 😀

Now, I’ve thought a bit differently than some regarding the hybrid approach, and I’ve chosen to offload as much as possible to a forward rendering pass, including unlit/emissive geometry.
I’ve come across a few implementations that render the emissive geometry into a chunk of the GBuffer, but that baffled me, so I instead render it in the ambient pass before the deferred rendering even gets to work. Why waste an entire chunk of a buffer on something the lighting doesn’t give a crap about? Emissive (or unlit) geometry is supposed to look like lit parts of a surface, or maybe I just have a different opinion of what emissive actually implies in this context. *shrug*

Along with this choice I also render the “sun” light after the ambient pass, just like one would in any forward renderer. I’ve chosen to include at least one global light because there are few scenes that don’t use some kind of main light, or even a super-subtle under-lighting. And that’s all available through this.
And even if we don’t want it we can turn it right off, no biggie.

There’s one last thing I do before I let the deferred renderer loose. And that is to render cubemap reflections for the parts of the scene that need it.
Keep in mind that this doesn’t work exactly right yet, or at least I’m not happy with the resulting look; it’s a bit overpowering.

And right about now is the time when I let the deferred renderer do its part. Rendering all of the smaller, non-shadow casting lights. But the good part is that I can have a TON of these! 🙂
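Put together, a frame currently flows roughly like this (a C++ outline with stand-in types and empty stubs; where exactly the GBuffer fill sits relative to the forward passes is my simplification):

```cpp
struct Scene  {};   // stand-ins; the real engine types are obviously richer
struct Camera {};

void forwardAmbientPass(const Scene&, const Camera&)    {} // ambient + unlit/emissive
void forwardSunPass(const Scene&, const Camera&)        {} // the one global light (optional)
void cubemapReflectionPass(const Scene&, const Camera&) {} // faked reflections
void fillGBuffer(const Scene&, const Camera&)           {} // geometry drawn once
void deferredLightPass(const Scene&, const Camera&)     {} // the TON of small lights

// One frame of the hybrid pipeline, in the order described above.
void renderFrame(const Scene& scene, const Camera& camera)
{
    forwardAmbientPass(scene, camera);
    forwardSunPass(scene, camera);
    cubemapReflectionPass(scene, camera);

    fillGBuffer(scene, camera);
    deferredLightPass(scene, camera);

    // Post-processing passes would slot in here, once they exist.
}
```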

During the creation of this I’ve tried to the best of my ability to keep it running fast and avoid potential bottlenecks, and the good news is that there isn’t that much for the renderer to fail on. There’s a slight chance that the forward rendering part, with all effects included, could draw the scene geometry a few more times than we’d like. But even then it’s nowhere near how many times I’d have to redraw things in a full-featured forward renderer.
And we all know that a much-liked property of a deferred renderer is that geometry is only drawn once, and that still holds here.
The forward rendering pass, however, draws the scene more than once, and that could potentially (not necessarily, but maybe) turn into a performance killer depending on scene complexity.

And that’s pretty much it. I might make a new post soon that includes some pretty pictures and some future plans, but that’s only if it keeps itself out of trouble.
But, as logic dictates, I’m very prone to being wrong and this dreamy house of cards I’ve assembled might come crashing down on me.
Let’s just hope for the best, I really want to work on games…

Bye! 😀

A few things

Well, it’s been pretty slow for a while, but the few things I have been getting into are pretty tedious matters, so there’s every reason in the world for it to be slow.

First off. I’m just re-structuring the engine a little, and putting in some new things.
I’m going to start working on scene-management pretty soon and get that out of the way.

But mostly I should invest time in getting the rendering part fully working, and maybe include deferred shading as an option for it. I’m still very much on the fence about the GBuffer’s high GPU memory requirements, even though that’s likely to be less of an issue on future cards.
But this is all in the cooking pot, and I’m also still considering using a mostly static environment.

I’ve stumbled upon a little thing that I’ve been researching regarding DirectX, and I’m considering writing a renderer for it. Right now I’m just comparing the different versions I can choose from: DirectX 9 and 10 (or 10.1, I forget), which are the two I’m interested in, or physically bound to by my graphics card.
Making the rendering part of my engine API independent would be very nice.

So, now onto something else that I thought I would share:

I’ve been working with cubemaps for a few days now and I’ve always found them a little troublesome to create.
Now, there are several good programs out there that can be used to produce cubemaps, like Terragen and a few others, I’m sure.
But I figured there has to be a way to do this in any 3D program, and the solution I found is pretty weird, but useful.
NOTE: I haven’t seen this solution anywhere else, but I won’t claim the idea as mine.

TIPS: Cubemap makin’ 101
Anyway, the idea is to build a small, simple scene in any 3D program that looks like something. Whatever you want really: urban environment, jungle, whatever, doesn’t matter.
The more effort one puts into the scene, the better the cubemap will look, obviously.
But the scene has to be enclosed, meaning there can’t be any leaks into the “void” 3D space, or whatever one wants to call it.
The reason for this isn’t a technical one; it’s just that we rarely want pixels in the cubemap that match the color of the 3D void (usually black in most programs by default).

So after this scene is complete and it’s ready to be rendered you can unwrap a cube like such:

(Chances are you have seen a cube unwrapped like this before)

And this cube needs to be situated somewhere inside the scene you created earlier, I may add. All sides need to “see” the inside of the scene.

What’s left is to give this cube a 100% reflective material and omit things such as specular highlights from it, so that it’s just a perfect mirror and nothing else.

Now you can bake the results of this material onto a texture, using the UV unwrap that the cube has. The result is a cubemap like this:

There are a few things to note here.
* When baking the result to the texture, the padding should be 0 pixels; we don’t want any pixel bleeding to occur, since chances are it will just make it harder to extract the different sides from the rendered texture.
* Also note that you could just as easily separate the six sides of the cube and render to six different textures instead; that would make it easier to remember which side each texture belongs to (they map straight onto the six cubemap faces, as sketched below).
* You can’t blur this image in a program like Photoshop to create a so-called “diffuse” cubemap; you would get seams to embarrass the very gods.
* If you really do need to blur it, most 3D programs have the option to blur the reflection, so just render the blurred reflection instead of the crystal-clear one.
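On the engine side, those six faces then upload straight into a cubemap; in OpenGL that looks roughly like this (a sketch; image loading and error checks left out):

```cpp
#include <GL/glew.h>

// Upload six separately rendered faces as one cubemap.
// 'faces' must be ordered +X, -X, +Y, -Y, +Z, -Z to match the GL targets;
// loading the pixel data is up to you (any image library will do).
GLuint createCubemap(const unsigned char* faces[6], int size)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_CUBE_MAP, tex);
    for (int i = 0; i < 6; ++i)
    {
        // The six face targets are consecutive enums starting at +X.
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB8,
                     size, size, 0, GL_RGB, GL_UNSIGNED_BYTE, faces[i]);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    return tex;
}
```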

That’s pretty much it. I use it and it works pretty well, I must admit. Granted, I mostly use it for diffuse cubemaps for image-based effects and such, but nonetheless. 🙂

Now I’m back to whatever it was I was doing. Bye~!

Little progress

Yeah. I haven’t had a lot of progress with the engine at all. Been busy with some other stuff for a while.

But the main idea still flies: I’m struggling with the mesh pipeline, animation and all.

What else? Oh, I did change my shader loader to load one file per shader instead of two separate files for the vertex and fragment shaders; I use preprocessor macros to tell the two apart at load time. So that could be good for some things. The old code is still in there; I don’t know when I might need to load separate vertex and fragment source files for efficiency or something…
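The trick, for anyone curious, is to inject a #define before compiling the shared source, which guards its stages with #ifdef blocks. A sketch of how that usually looks (the injected #version and the macro names are my assumptions, not necessarily what my loader uses):

```cpp
#include <GL/glew.h>
#include <string>

// Compile one stage from a combined source file. The file wraps its
// vertex and fragment code in #ifdef VERTEX_SHADER / FRAGMENT_SHADER
// blocks, and we prepend the matching #define before compiling.
GLuint compileStage(const std::string& source, GLenum stage)
{
    const char* define = (stage == GL_VERTEX_SHADER)
                             ? "#define VERTEX_SHADER\n"
                             : "#define FRAGMENT_SHADER\n";
    // #version must come first, so it's injected here; this assumes the
    // combined source file doesn't declare its own #version line.
    const char* parts[] = { "#version 120\n", define, source.c_str() };

    GLuint shader = glCreateShader(stage);
    glShaderSource(shader, 3, parts, nullptr);
    glCompileShader(shader);
    return shader; // error checking omitted for brevity
}
```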

Also, I’ve come to notice that it might be a… uh, let’s call it “less than good” idea to generate normals manually when parsing models… Hmm… Mostly because it’s obvious the code I have right now doesn’t work all that well, and from reading up on it online I think it’s just better to let the 3D program handle that for now. It was apparently more complex than I initially imagined.
And considering all the other stuff that’s been broken or acting up lately, it’s mostly a hassle anyway.

I still get to generate tangents manually though.
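For the curious, the per-triangle part of that boils down to the standard UV-derivative construction, something like this (a GLM sketch; averaging per vertex and orthonormalizing against the normal are left out):

```cpp
#include <glm/glm.hpp>

// Tangent of a single triangle from its positions and UVs, by solving
// for the direction in which the U coordinate increases across the face.
glm::vec3 triangleTangent(glm::vec3 p0, glm::vec3 p1, glm::vec3 p2,
                          glm::vec2 uv0, glm::vec2 uv1, glm::vec2 uv2)
{
    glm::vec3 e1 = p1 - p0, e2 = p2 - p0;
    glm::vec2 d1 = uv1 - uv0, d2 = uv2 - uv0;

    float r = 1.0f / (d1.x * d2.y - d2.x * d1.y); // degenerate UVs blow up here
    return glm::normalize((e1 * d2.y - e2 * d1.y) * r);
}
```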

So… That’s the whole onion right there.
Laters.