Programming

More Blender support

So yeah. The plan I was working on during the last post didn’t work out that great. It turned ugly very fast, and it quickly became obvious that it wasn’t going to work out the way I had imagined.

Because of that I made the decision to just include everything in the exporter like I had before: vertices, texture coordinates, triangle indices and the animation support.
Normals and tangents I still generate during the mesh optimization step, just because it’s easier to control, even though it has given me some very odd data a few times, probably due to precision issues or some other bug somewhere.

So with that situation back under control I can focus on some other stuff now. One of the things I want to explore is static light-mapping, to create some nice lighting for 3D scenes without spending all that processing power on dynamic lighting when it’s not going to change.

The thing that’s currently giving me a headache when it comes to light-mapping is the automatic unwrapping of meshes in the scene so that I can get a second set of UV-coordinates for any surface.
For very simple, blocky scenes this is no problem at all. You just look up the surface normal and project the surface onto a 2D plane, and in that way get the UV-coordinates (there’s a rough sketch of that projection below).
But for complex, or sometimes very complex, meshes and surfaces you will get tons and tons of bleeding artifacts and other discrepancies. I’m thinking right now that I will instead have to group nearby surfaces as much as possible.
There’s an unwrapping function in Blender that does close to what I’m planning to use, called “Smart UV Project”.
I’m thinking the few artifacts I will probably still have after such a routine will be fairly minor; texture stretching and similar errors aren’t often that noticeable when we’re dealing with light-mapping.
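To make the planar projection concrete, here is a minimal sketch of the idea, assuming a simple axis-aligned projection picked from the dominant component of the face normal. The names are made up and the atlas packing/scaling step is left out, so this is the idea rather than finished light-mapping code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Project a point onto the 2D plane that best matches the face normal.
// The dominant axis of the normal decides which two coordinates become the UVs.
// (Packing the result into a light-map atlas still has to happen afterwards.)
Vec2 planarProject(const Vec3& p, const Vec3& n)
{
    float ax = std::fabs(n.x), ay = std::fabs(n.y), az = std::fabs(n.z);

    if (ax >= ay && ax >= az)      return { p.y, p.z };  // face points mostly along X
    else if (ay >= ax && ay >= az) return { p.x, p.z };  // mostly along Y
    else                           return { p.x, p.y };  // mostly along Z
}
```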

There’s another way to do light-mapping that will require more preparation work, but will grant a more controlled unwrapping.
That is to unwrap the light-map yourself and store two sets of UV-coordinates per light-mapped mesh.
But I made the conscious decision to at least try to automate the process. Creating art assets is already complex enough; I can’t add more redundancy to the pipeline or the diminishing returns will become too great.

I’ll get back to it now. 🙂

Edit: OK, I understand now why my tangents and normals sometimes get messed up. The tangent gets a NaN value when a face has UV-coordinates that are weird. I mean weird in that they don’t form a triangle, like UV-coordinates should, but instead lie so close to each other that they collapse into a line segment (or a point). When that happens the UV determinant the tangent math divides by works out to zero, and the division blows up into NaN. There’s a sketch of what I mean below.
My normals are probably acting up in a similar way, since they use code similar to the tangent generation code.
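For reference, here is a sketch of the standard per-face tangent calculation (not my exact exporter code) with the degenerate-UV case guarded; this is roughly where the NaN sneaks in:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Standard per-face tangent from positions and UVs.
// If the UV triangle collapses into a line or a point, the determinant becomes
// (near) zero and the division produces NaN/Inf. Hence the guard.
bool computeTangent(const Vec3 p[3], const Vec2 uv[3], Vec3& tangent)
{
    Vec3 e1 = { p[1].x - p[0].x, p[1].y - p[0].y, p[1].z - p[0].z };
    Vec3 e2 = { p[2].x - p[0].x, p[2].y - p[0].y, p[2].z - p[0].z };
    float du1 = uv[1].u - uv[0].u, dv1 = uv[1].v - uv[0].v;
    float du2 = uv[2].u - uv[0].u, dv2 = uv[2].v - uv[0].v;

    float det = du1 * dv2 - du2 * dv1;   // zero for degenerate UVs
    if (std::fabs(det) < 1e-8f)
        return false;                    // caller falls back to some default tangent

    float r = 1.0f / det;
    tangent = { (dv2 * e1.x - dv1 * e2.x) * r,
                (dv2 * e1.y - dv1 * e2.y) * r,
                (dv2 * e1.z - dv1 * e2.z) * r };
    return true;
}
```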

Blender 2.5 support

So yeah.
I finally sat down with the intention of upgrading my older stuff to conform to the Blender 2.5 Python API, since I suspect it’s more stable now than ever and I won’t have to do one of these major rewrites again for some time.

The most pressing matter I got to first was exporting animations from Blender again. The older exporter works, but I, of course, felt that with it being tied to Blender 2.49b I should upgrade it first.
It’s also the most advanced code I have as far as exporters go.
Anyway. It’s my great pleasure to announce that it works* right now, and that it didn’t take that much energy to convert it to work under the new API.

The problem now is that the animation files which are supposed to work with the mesh files (I deal with them separately now, to make re-use of animations and rigs easier) are only connected by vertex indices. That means the vertex weights I store in the animation file need to map 1:1 with the vertex indices in the mesh file.
That, in turn, means there’s no guarantee that a mesh exported from 3D Studio Max will have indices that match the ones exported through the animation exporter in Blender.
That, of course, means the animation will map poorly onto the mesh and the result will be nothing short of garbage.
The only solution is to export both the mesh and the associated animation from the same software, just to ensure they enumerate the vertices in the same order (see the sketch below).
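To make that 1:1 assumption concrete, here is a rough sketch with made-up structures (not my actual file formats), where array position is the only thing tying the two files together:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Per-vertex skinning data as it would sit in the animation file.
struct VertexWeights {
    std::uint8_t bones[4];
    float        weights[4];
};

struct SkinnedVertex {
    float         position[3];
    VertexWeights skin;
};

// weightTable[i] is assumed to describe meshVertices[i]. If the two exporters
// enumerate vertices in a different order, the weights land on the wrong
// vertices and the skinning turns into garbage.
bool attachWeights(std::vector<SkinnedVertex>& meshVertices,
                   const std::vector<VertexWeights>& weightTable)
{
    if (meshVertices.size() != weightTable.size())
        return false;  // the files clearly weren't exported from the same mesh

    for (std::size_t i = 0; i < meshVertices.size(); ++i)
        meshVertices[i].skin = weightTable[i];  // matched purely by index

    return true;
}
```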

So now on to something else:

Now that I can (sort of) use the new API for Blender, I decided to partially turn Blender into a makeshift level editor in lieu of coding a real level editor, which I have very little motivation to do.
I won’t go to great lengths to support a lot of things; for now it will be primitive shapes, lights and maybe a few other things.
Keep in mind I will still have to produce a level editor at one point. But right now there are more urgent things to get to.

Bye for now. 🙂

*”Works” should be taken somewhat lightly, as the code hasn’t been tested that extensively and will probably need one or two fixes before it makes a full recovery, but it still “works”.

Taking care of business

Damn right. That’s what I’m doing.

What it means is that I’m sitting down and trying to finish off some of the tasks that were left at “good enough” or “meh, it works…”.

One of those things is the Resource Manager class.

I extended it to support materials this time, and model primitives (which I will get to in a minute).
In the past there was a big flaw in the engine design: when I finally decided to let the user define which shader files a material uses, instead of relying on the engine to fill that need, I ended up with a lot of duplicates of the same shader source files romping around.
I wasn’t sure how to fix this. For other resources, like textures and models, I could just make sure that no two identical textures or models were loaded at the same time. (Right now this is done by string-matching the file paths, which is slow; I believe a hash-map or something similar would be more apt. I’ll get to researching this. There’s a sketch of what I mean after this paragraph.)
But I couldn’t check the fragment, vertex and potentially geometry shaders one at a time, because that would only waste time and be cumbersome for the interface.
So I implemented a new layer to bridge this “gap”, called a “Material”. The material is responsible for making sure two shader files with the same source aren’t loaded at the same time, by making sure that the material itself isn’t loaded twice.
Note, however, that one can easily bypass this system by being careless, so it’s not that robust. But for now it’s OK.
Shader files don’t take up that much space, so a little duplicate data isn’t the end of the world. But it just makes sense to try and avoid it as much as possible.
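As a sketch of what I mean, assuming std::unordered_map as the “hash-map or something” and with all the names made up (this is not my actual Resource Manager), the material cache could look roughly like this:

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical material type; the real one would hold compiled shader handles etc.
struct Material { std::string vertexShaderPath, fragmentShaderPath; };

class ResourceManager
{
public:
    // Key the material on its own definition file. If that file has already been
    // loaded, hand back the existing instance instead of re-reading and
    // re-compiling its shaders, so identical shader sources never get duplicated.
    std::shared_ptr<Material> getMaterial(const std::string& materialPath)
    {
        auto it = m_materials.find(materialPath);   // hashed lookup instead of scanning every path
        if (it != m_materials.end())
            return it->second;

        auto material = loadMaterialFromDisk(materialPath);
        m_materials[materialPath] = material;
        return material;
    }

private:
    std::shared_ptr<Material> loadMaterialFromDisk(const std::string& path)
    {
        // ...parse the material file and compile/link its shaders here...
        (void)path;
        return std::make_shared<Material>();
    }

    std::unordered_map<std::string, std::shared_ptr<Material>> m_materials;
};
```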

The model primitives I mentioned are models that can be loaded without actually loading an external file. So one can create some basic primitive shapes without having actual models on disk.
It’s great for a lot of things, like debugging and place-holder graphics.
These are also managed through the Resource Manager interface, so only one primitive is loaded and used many times.
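A tiny sketch of that idea, with a hypothetical Mesh type and a quad for brevity: the primitive is generated in code, never read from disk, and cached so the same instance is handed out every time it is requested.

```cpp
#include <memory>
#include <vector>

// Hypothetical mesh type; in the real engine this would own GPU buffers.
struct Mesh { std::vector<float> positions; std::vector<unsigned> indices; };

// A primitive built entirely in code. The static cache means only one
// instance ever exists, no matter how many objects use it.
std::shared_ptr<Mesh> getUnitQuad()
{
    static std::shared_ptr<Mesh> cached;
    if (!cached) {
        cached = std::make_shared<Mesh>();
        cached->positions = { -0.5f,-0.5f,0,  0.5f,-0.5f,0,  0.5f,0.5f,0,  -0.5f,0.5f,0 };
        cached->indices   = { 0, 1, 2,  0, 2, 3 };
    }
    return cached;
}
```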

That’s what’s going around right now. I’ve got some other things going on that I might share pretty soon.

Ugh.

Hey.
I’m just doing some stuff here and there. Nothing that fascinating to report on yet. Though I am finally working on some things I’ve wanted to do but was on the fence about. So if that works out to some extent, I can show it off and outline the plans I have for it.

So now I’ve moved over to using OpenGL 3.x and above strictly. I’m not entirely sure if there’s still some deprecated code running about, but that will be a thing of the past eventually.
I’ve moved on to using Vertex Array Objects, which has been amazing and confusing at the very same time (there’s a sketch of the setup below).
But overall, I’m getting to understand why and how modern OpenGL works, and I find it bucketloads better, albeit confusing at times.
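For anyone in the same boat, the setup looks roughly like this. A minimal sketch assuming an interleaved position/normal/UV vertex layout; the loader header and the attribute locations (0, 1, 2) are assumptions that would have to match whatever the shaders declare:

```cpp
#include <GL/glew.h>   // or whichever loader you use for core-profile functions

GLuint createMeshVAO(const float* vertices, GLsizeiptr vertexBytes,
                     const unsigned* indices, GLsizeiptr indexBytes)
{
    GLuint vao, vbo, ibo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);                       // all following state is recorded in the VAO

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexBytes, vertices, GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);   // the element binding is part of VAO state
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBytes, indices, GL_STATIC_DRAW);

    // Interleaved layout assumed here: position (3 floats), normal (3), UV (2).
    const GLsizei stride = 8 * sizeof(float);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float)));

    glBindVertexArray(0);   // drawing later is just binding the VAO and calling glDrawElements
    return vao;
}
```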

So it’s all really a “pick up the pieces and assemble” mission on my end. I just update the older code base to work with the newer direction, and that’s pretty much it.

I’ve got some stuff to do. So… I’ll do stuff now. Bye.

Post Processing

I’m just going to make a quick post about post processing and what my plans are as far as they go.

So not too long ago I was actually coding on this, trying to make a generalized pipeline where the post-processing effect of your choice gets applied to the rendered scene. Nothing out of the ordinary.

I got some neat effects out of this and I learned a lot about just how hard it can be to get post processing effects that look smooth and great, and what the costs are, of course.
* Observe: These images are not that new; I just never posted them.

These two images display a rather shy light bloom effect in the works. There are tons of improvements one could put in, but at least it is working to a debatable extent at this point.

So the next step was to implement something else. And I’m a big fan of motion blur when it’s done right. So my attempt after the light bloom was just that. Motion blur. But only camera-based motion blur, nothing fancy…

So when you shake the camera about, the image smears in that direction (even though the direction calculation only sort of worked).
I know that from looking at the image it looks extremely heavy, but in truth you don’t notice it that much when the game is running smoothly. Well… at least I don’t notice it that much.

There’s one obvious flaw in this setup: it’s not very stable when there are unaccounted-for hiccups or stutters going on, so it can easily smear the entire screen into an ungainly pulp.

(Please forgive that I wasn’t able to focus on the cubes while shaking the camera around that violently.)
But this is of course most simply solved by implementing either a hard or a soft limit on how much the screen can get blurred at any given time.
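A rough sketch of the kind of limit I mean, done where the per-frame camera blur vector gets computed; all of the names and the exact maths are illustrative, not my renderer’s code:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Derive the screen-space blur vector from how much the view direction moved
// this frame, then clamp its length so a frame-time hiccup can't smear the
// whole screen. The clamped vector is what gets fed to the blur shader.
Vec2 computeBlurVector(Vec2 currentViewDir2D, Vec2 previousViewDir2D,
                       float blurScale, float maxBlurLength)
{
    Vec2 v = { (currentViewDir2D.x - previousViewDir2D.x) * blurScale,
               (currentViewDir2D.y - previousViewDir2D.y) * blurScale };

    float len = std::sqrt(v.x * v.x + v.y * v.y);
    if (len > maxBlurLength && len > 0.0f) {   // hard limit; a soft limit could ease in instead
        float s = maxBlurLength / len;
        v.x *= s;
        v.y *= s;
    }
    return v;
}
```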

But in a wider scope it all turned out pretty nice. There are some visual artifacts here and there, mostly due to the low number of samples in the blur kernel for both the light bloom and the motion blur, but that can be fixed. 🙂

So yeah that’s about it, then. Bye for now.

Restructuring

Yes. That is indeed what I’m doing right now.

Also, I feel it’s important to mention (after all these posts) that the projects I work on, games and tech demos et al., are very much in development in terms of concept art, game assets and documentation; it’s just that no real playable prototypes of them exist yet.
I realize that reading these posts about me basically tearing out parts of the engine and putting new stuff in makes it sound like all these projects get flattened to the ground each time I do something like that. Not so, my dear visitor!

So back to business:
The restructuring this time encompasses most of how the engine deals with game objects located on disk; more importantly, how it understands which resources to load and link to any given object when loading it.
What are these objects I speak of? Do you require the answer to a question no one asked? Very well!

An ‘object’ is the colloquial term I gave to a simple node in the scene manager.
You know, the heap of data that encapsulates models, textures, shaders and even sounds, so that the engine has an idea of how to draw it.
Now the ‘object’ on disk, as I mentioned, is an XML file that links a bunch of raw resources together.

Anyway. I need to rewrite this portion of the engine since I encountered a very nasty moment with the past implementation.
You see, there was a certain rendering technique (*cough* Deferred Shading *cough*) I wanted to implement, but I soon realized it would take too much work to put in.
In the engine-writing business we call these moments ‘warning signs’, as it’s a terrific way to tell when an engine is poorly structured.
This is what I’m trying to remedy right now.

Droll times… Droll times indeed.

Update

Hi again.
So I thought I was going to be able to cover my vague mention of “shading” from the last post, but I figure I’ll get to save that one for a later time.

Right now I’m trying to get my current (previous) OpenGL renderer up to speed with the newer OpenGL versions, 3.x and above.
I just figured that since I moved some of the code over to use newer tech, I might as well make that a “thing” for the entire engine.
Plus, OpenGL 3 and newer gives me control over some things I didn’t really like in older versions, like the combined ModelView matrix. I never found it to make much sense in my case to have those two matrices combined without my own say-so, especially because it also made shader coding a bit of a pain, coming from a primarily HLSL-based shader background.
So I just handle these matrices separately now (a small sketch of what that amounts to is below).
In truth I did it in my previous implementation as well, but OpenGL 3 practically forces (well, encourages) me to do it, so there’s less of a margin to mess up on.
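In practice it just means uploading the matrices as separate uniforms and combining them only where and when I choose to. A minimal sketch; the uniform names and the loader header are placeholders, not my actual shader interface:

```cpp
#include <GL/glew.h>   // or whichever loader you use for core-profile functions

// Upload model, view and projection separately; the shader multiplies them
// (or doesn't) on its own terms, instead of receiving a pre-baked ModelView.
void uploadMatrices(GLuint program, const float* model4x4,
                    const float* view4x4, const float* proj4x4)
{
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "u_model"),      1, GL_FALSE, model4x4);
    glUniformMatrix4fv(glGetUniformLocation(program, "u_view"),       1, GL_FALSE, view4x4);
    glUniformMatrix4fv(glGetUniformLocation(program, "u_projection"), 1, GL_FALSE, proj4x4);
}
```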

Anyway. The “shading” I mentioned in my last post was mostly going to be about “image based lighting” (IBL), or more specifically just an ambient lighting term that uses cubemaps to give objects better “grounding”.
I find that this technique gives good results in a lot of cases, capturing those fine nooks and crannies in complex scenes and simply getting them to integrate better.
Even though the ambient lighting won’t map 1:1 with the scene, as it really should in a physically correct lighting system, it still gives the scene a nice softness that you can’t get as much with flat ambient.
I’ll cover this later however.

Another thing I’m working on is deferred shading… Again…
I’m reloaded with a lot of neat info and I think I have a better shot at making it work now that I have a better understanding of the moving parts contained within this technique.
It’s probably not going to turn out full-featured enough for me to consider it a “real” alternative rendering pipeline for my engine, but I’ll give it a try.

I’m also working on a level editor at the moment. And I’m again torn between choosing GUI libraries, the option to roll my own and the option of simply having no GUI at all and relying on keyboard shortcuts. *gulp*

Yeah. I’ll get back to it. Bye!

Updates

It’s that time again. Time for some random updates and such.

Been kinda hard at work the past few days trying to implement the stuff I need and want to have in my engine so that I can continue working on my game (nope, still not saying what it is). And it’s going pretty great, actually.

Right now I’m working on the final steps of support for my model format and the shading pipeline for the engine.
The shading pipeline will be based on forward rendering; I’m not too invested in putting deferred rendering in again (even if that stays part of the future vision for the engine), and it will be pretty restricted. But I’m confident that once I get this working completely, I can easily extend it to support whatever I want. I’m trying to stay focused on making a robust system before putting anything too fancy in.

So anyway. This morning I put in the initial code for bone animation and it’s working. So now I need to implement a means of animating it. I’ve got a pretty good idea of how to make a system that supports animation layering with an arbitrary number of layers and I’m going to get to work on that soon. Morph targets will have to wait a little while.
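The layering idea, roughly: each layer evaluates its own local pose, and layers are blended bottom to top with a per-layer weight. A sketch with made-up types, and plain component lerps for brevity where a real version would use properly normalized quaternion interpolation:

```cpp
#include <cstddef>
#include <vector>

// Simplified per-bone transform; a real pose would use quaternions with
// normalized interpolation (nlerp/slerp) instead of raw component lerps.
struct BoneTransform { float tx, ty, tz; float rx, ry, rz, rw; };
using Pose = std::vector<BoneTransform>;

struct AnimationLayer {
    Pose  pose;     // this layer's evaluated pose for the current time
    float weight;   // 0..1, how strongly the layer overrides what's below it
};

// Blend an arbitrary number of layers, bottom to top.
Pose blendLayers(const std::vector<AnimationLayer>& layers)
{
    Pose result = layers.empty() ? Pose{} : layers.front().pose;
    for (std::size_t l = 1; l < layers.size(); ++l) {
        const AnimationLayer& layer = layers[l];
        for (std::size_t b = 0; b < result.size() && b < layer.pose.size(); ++b) {
            float w = layer.weight;
            BoneTransform& a = result[b];
            const BoneTransform& c = layer.pose[b];
            a.tx += (c.tx - a.tx) * w;  a.ty += (c.ty - a.ty) * w;  a.tz += (c.tz - a.tz) * w;
            a.rx += (c.rx - a.rx) * w;  a.ry += (c.ry - a.ry) * w;
            a.rz += (c.rz - a.rz) * w;  a.rw += (c.rw - a.rw) * w;
        }
    }
    return result;
}
```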

What else? Oh. Yes. I’m going to start putting static shadowing in, so that I can generate shadow and ambient occlusion maps for entire levels. I think it’s going to look cool enough. I’m going to base this on the Ambient Occlusion code I posted about a while ago.

Anyway. I’m going to get back to it. The next post I make will be about shading mostly. I think.
Bye. 🙂

Edit: Forgot to mention.
I know I talked a little about the 64-byte vertex structure I was using that was running out of space for other stuff.
I should never have stored binormal information in the vertex structure, and I’ve stopped doing that. I recreate the binormal in the vertex shader instead. So now I have some more space to toy around with.
Well, not really. Animation support still takes up a large chunk. I mean, 4 floats for bone weights and 4 indices for bones. That’s a damn lot still. I think I could do with 2 weights per vertex, however. It wouldn’t be that bad.
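For context, here is a hypothetical layout in that spirit (not my exact format) showing how quickly the bytes go even with the binormal dropped and four bone influences kept:

```cpp
#include <cstdint>

// Hypothetical skinned vertex, reconstructing the bitangent in the shader
// instead of storing it. With four bone influences this already lands at
// 64 bytes; cutting down to two weights and two indices would win back
// roughly 10 of those bytes.
struct SkinnedVertex
{
    float        position[3];    // 12 bytes
    float        normal[3];      // 12 bytes
    float        tangent[3];     // 12 bytes (bitangent = cross(normal, tangent) in the vertex shader)
    float        uv[2];          //  8 bytes
    float        boneWeights[4]; // 16 bytes
    std::uint8_t boneIndices[4]; //  4 bytes
};                               // 64 bytes total
```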

Continuing

I’m back with some new stuff pretty much connected to what I wrote about the last time.

So I did some poking around with the Blender 2.5 Collada exporter as well as the one in XSI and I’m pleased that animation is in fact kept unmodified between the two programs. This is good.
I can see value in using Collada to export animations that keep their integrity between modeling packages, so artists (and I) have more platforms to work on than just one, which right now is still Blender 2.49b. (I’m waiting on the Blender 2.5 Python support to become more stable before committing to a rewrite of the exporter.)

The other option is to write a model format converter, which isn’t that bad to be honest, but it does add an extra step that could cause problems in the modeling pipeline, which is pretty clunky as it is. And the smart move is of course to move away from the “clunkiness” the artist has to deal with, to make it as easy as possible on them.
Even though I’m not much for optimization YET, there’s still value in making it as simple as possible from the start to avoid a huge rewrite later.

Right now, though, I’m mostly interested in getting it to work before making any effort to clean it up, rather than wasting time scrutinizing everything I put in just to avoid a clean-up later, which I know is going to happen either way.

Anyway. Static models work fine now. I’m in the middle of implementing animation (again), which is part of the reason I made up my mind about ditching the planned submesh support after all. It was just making things harder than they had to be, and it made the actual animation code unnecessarily convoluted and slow.
Animation support is still planned to include the features I’ve mentioned previously such as morph targets, animation layering and all that good stuff.
I’m also getting more interested in working out a system for IK handles and other secondary animation routines to improve it further. But that will have to wait some until all the essential stuff is in place.

So.. Back to it! 🙂

Model formats

Time to take a step back and look at what I’ve created.

So as I’m sure I’ve mentioned several times on this blog, I am writing my own format to use for 3d models and it is able to do most things required by a functional model format. It’s really nice to have a custom format around simply because of the notion that you know everything there is to know about the format, since you are its sole creator.
But as you may guess, the main problem is that YOU are its sole creator… and the format lives and dies with you, unless you make it official and write tons of documents and format specifications and things of that nature. And that’s fine. I’m personally not planning on making my format official, and it could very well stay specific to my engine.
But it’s a lot more unstable that way, and needless to say it limits the artist’s freedom pretty extensively.
It also puts a huge strain on which modeling applications the artists can use, since the exporter is only written for application A and application B, but the artist uses application C, and so on and so forth.

The solution that works best is of course to use a format that is old and beaten into the rock by now, like OBJ or 3DS or any one of the older formats. And that’s fine. Totally OK. Except… animation is once more the culprit and needs special attention: OBJ doesn’t support animation (except for a few exporters that export keyframe animation), and 3DS I think supports animation, but who knows what limits it has (and I’m not about to look through the specification).

So another solution would be to use two formats, one for static models and one for animated models. And this, too, works fine. Except, there are not that many formats that support animation that well. The closest one I can think of is Collada.
Collada is great. It’s an incredible project. But it also suffers to some extent from the “application-specific” problem, and the Collada exporters vary widely from application to application.

There’s the approach I mentioned a few blog posts back used by Wolfire games.
Using a proprietary tool for animation, piped into common model formats such as OBJ… Well, at least that’s what I gathered it was. I could of course be mistaken. 🙂

There’s a lot to consider when battling different model formats.

Edit:

Forgot to mention:
I’m rethinking my choice of including submeshes in my format. I’ve come to realize that it’s clearly not worth the effort considering how little it adds. And there are other ways one can set up submeshes without having them coded into the format.
So I might be removing that functionality just to keep it simpler.