Game Development

Tools and more

Hello!

So I’ve been doing some stuff lately, and little of it has actually made it onto the blog so far… But that’s about to change!

Hark upon the tale which is: Programming tools in Win32 and OpenGL!

So yes. I noticed a while back that I'm really not any good with the existing GUI APIs out there, so I went through a few of them hoping for the best. First off, since I have Code::Blocks installed on my computer, I tried wxWidgets using the wxSmith plugin that comes included with Code::Blocks.

This had some success, but not to the point where I was content. I couldn't get OpenGL drawing working, and I found little to no help anywhere, so I decided to scrap that in the end.

Next up I “tried” GTK+ but I scrapped that too… Uh… About 15 minutes into looking at it. *Cough*

After that I went on to looking at Qt, but I didn't go for that either, even though it seems awesome enough. I'll probably use it for something in the near future.

At this point I didn't really know where to look. The Visual Studio 2008 Express Edition Windows Forms drag & drop designer isn't giving me what I need, so the only "sane" option was to code it in raw Win32 without any helpers like that drag & drop creator. And boy is it ever vague! But it's OK. I actually got used to working with it pretty quickly, even though the sheer design of it still baffles me at times.

About 8 hours into coding I knocked out a small application which works sort of like a realtime texture/mesh viewer.

This is what it looks like:

So yeah. The main idea behind this application is that you can load a 3d mesh and specify a texture that should be mapped onto it (since the mesh file already includes UV maps, this program doesn't need to care about that at all). Every time that texture is changed on disk, it gets reloaded onto the mesh. This lets the user see the texture update in real time while painting it in an image program. Makes the artist's job slightly easier and a tad more visual! 😀
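In case anyone's curious, the reload boils down to polling the file's modification time every frame and re-loading the texture when it changes. Here's a rough sketch of the idea (Texture, LoadTextureFromFile and texturePath are made-up names for this sketch, not the actual code from my tool):

#include <sys/stat.h>
#include <ctime>
#include <string>

// Hypothetical texture type and loader; the real tool's names differ.
struct Texture;
Texture* LoadTextureFromFile(const std::string& path);

time_t GetModificationTime(const std::string& path)
{
    struct stat info;
    if (stat(path.c_str(), &info) == 0)
        return info.st_mtime;
    return 0;
}

// Called once per frame: reload the texture if the file changed on disk.
void UpdateTexture(const std::string& texturePath, time_t& lastModified, Texture*& texture)
{
    time_t current = GetModificationTime(texturePath);
    if (current != lastModified)
    {
        lastModified = current;
        texture = LoadTextureFromFile(texturePath); // a real version would free the old texture first
    }
}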

Moving on…

Besides this I've been coding on my engine, which is making less sense every day, I have to say. I'm a bit lost in the general design of it so far, and I don't have any good way to visualize the architecture, which makes for a pretty dim experience. It feels like I don't know exactly what I'm doing from time to time, but I do think that if I just keep hammering at individual parts of it, it will eventually fall into place… Right?

Right now, as in this very second, I'm coding a system for the engine to load a bunch of 3d meshes from a list and store them in a so-called "Mesh Cache", which every object in a scene then references when it needs to draw a specific mesh at a specific place. This gives a pretty straightforward way to ensure any given mesh is loaded only ONCE into memory.
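At its core it's just a map from file name to loaded mesh; roughly something like this (MeshCache, Mesh and LoadMeshFromFile are illustrative names, not my actual classes):

#include <map>
#include <string>

struct Mesh;                                     // loaded vertex/index data
Mesh* LoadMeshFromFile(const std::string& path); // hypothetical loader

class MeshCache
{
public:
    // Returns the cached mesh, loading it from disk only the first time it is requested.
    Mesh* Get(const std::string& path)
    {
        std::map<std::string, Mesh*>::iterator it = meshes.find(path);
        if (it != meshes.end())
            return it->second;
        Mesh* mesh = LoadMeshFromFile(path);
        meshes[path] = mesh;
        return mesh;
    }
private:
    std::map<std::string, Mesh*> meshes;
};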

I'm planning on extending this list system to work with any type of resource the engine needs. Even so, I'll need to somehow work out a system where it only loads resources that are immediately going to be used, and then unloads the resources it's no longer using to make way for new ones and… *Sigh*

Yeah. It’s a big chunk of stuff to do. And what I mentioned barely even constitutes a fraction of it.

Back to work! Bye! 🙂

Fascinating!

Hey there!

This is a fascinating time indeed. For I have now slain the beast that is vertex skinning completely (almost anyhow). And I am free from the malicious spell that it put me under.

So let’s begin.

Sorry for the simple model, it’s not nice to look at but it gets the message across I think.

I know, the quality of the image is astounding, sorry about that, but the text says "BoneJoint" and the same-colored shapes are the bone joints. The blue dots connected by the blue lines are the "bones".

Maybe you’re wondering what this all is about (if you’re not familiar with the term “vertex skinning” that is) so let me explain briefly what’s going on.

The blue lines are the graphical representation of a "bone", which in the world of graphics is loosely named after a real bone, those odd-shaped things that hold your body together.

But what a bone really is in this case (and in my case, there are a number of ways one can implement vertex skinning) is a matrix. A 4×4 matrix that can be used in 3d graphics to represent rotation and translation. (In reality one only needs a 3×4 matrix to represent rotation and translation but I decided on 4×4.)

Now that we know that a matrix can be used in this way, it’s easier to make the connection that, since vertices are positions in 3d (or 2d) space, we can rotate a bunch of vertices (also referred to as a point cloud) around an arbitrary axis.

The arbitrary axis in this case is the root of the bone.
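To make that concrete, here's a minimal sketch of rotating a point cloud around a bone root with a 4×4 matrix (Vec3, Mat4 and ApplyBone are placeholder names for this sketch, not my engine's actual code):

#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical 4x4 matrix; only the part needed here: transforming a point.
struct Mat4
{
    float m[16]; // row-major, translation in m[3], m[7], m[11]

    Vec3 TransformPoint(const Vec3& p) const
    {
        Vec3 r;
        r.x = m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3];
        r.y = m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7];
        r.z = m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11];
        return r;
    }
};

// Rotate every vertex attached to a bone around the bone's root position.
void ApplyBone(const Vec3& boneRoot, const Mat4& boneRotation, std::vector<Vec3>& vertices)
{
    for (size_t i = 0; i < vertices.size(); ++i)
    {
        // Move into the bone's local space, rotate, then move back.
        Vec3 local = { vertices[i].x - boneRoot.x,
                       vertices[i].y - boneRoot.y,
                       vertices[i].z - boneRoot.z };
        Vec3 rotated = boneRotation.TransformPoint(local);
        vertices[i].x = rotated.x + boneRoot.x;
        vertices[i].y = rotated.y + boneRoot.y;
        vertices[i].z = rotated.z + boneRoot.z;
    }
}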

Here are a few examples of what one can do with the information I've covered so far.

*Oh, and in case someone wonders: I don't have vertex weighting in just yet, which is why the model bends so abruptly. It's on my to-do list now that this works.

And to further clarify why this is important at all: this technique (I think) has been in most games made after 1998 (or even earlier than that, I don't know for sure). Without it, you're reduced to working with animation techniques like the ones in Quake 1 and Quake 2, which work OK, but they don't offer a lot of flexibility and make things like ragdolls hard or impossible to do.

Quake 1 used a whole different set of vertices for each frame.

Quake 2, however, introduced tweening making it look so much smoother.

Still, with these techniques you can only give a character so much animation before it becomes too expensive to really be worth it. This is where vertex skinning and bone (skeletal) animation come in. Mind you, this is as far as I can gather just from playing the games, looking at the characters and reading the specifications for their 3d file formats. I can only logically assume this is what they do.

Just to give you a concrete example: I remember them talking about this technique in the videos covering the technology used in Half-Life 1. Well… I don't *really* remember them from when they were originally released, but you can find them on the web if you care to search. Oh! I did. I really recommend watching those if you're as big a nerd as me. I *love* watching stuff like that and seeing how they coped with the problems back then. Truly fascinating, to say the least.

So. Back on task.

The main issue with my implementation right now is that it's performed in software, meaning everything is done on the CPU. I should move some of the operations to a vertex shader so the GPU can take that load instead. But for now the number of vertices I'm dealing with is very low, so the difference shouldn't be that big.

The next thing I'll need to do is animate these bones so the whole thing can actually be used as a legit animation technique. After that I'll proceed to optimize it as best I know how.

I'm still unsure how to extend my Blender exporter script to cover bones and such, and how to get them into the format that my engine likes.

So in other engine related news.

In my post last month I wrote about my problems programming with SDL, and about not wanting to require users of my software or games to install the dependencies that compiling a program under certain settings brings with it.

Turns out you don't have to. I recompiled SDL with the settings I need the target program to have and it worked just fine. I'll start working SDL back into my engine.

I’m really happy that I was wrong. 🙂

Anyhow- have a great day, night, evening, whatever.

Bye.

Well damn…

Hey.

So it turns out that my ingenious (not at all) quaternion implementation of bone animation only kinda works like I want it to; that is, it doesn't work at all.

So I'm basically back to the drawing board with this stuff. And boy is it bugging me already. I read some tips around the net about this, and as expected there isn't much help to be had; you mainly have to piece together different people's observations until you get how it works, and then (if you're lucky) you'll have enough material to actually build it. You have to start somewhere, unless you're some kind of genius.

So right now I'm forced to rethink the whole animation routine, so I'm happy that I didn't spend too much time implementing it. It shouldn't be that tricky to rip it out of the engine and recode it.

This also means that I'm going to build this into my own model file format, since I basically hate writing loaders for other formats (I gave up on COLLADA .dae for now). It's such a relief to know exactly what your engine does and why, and to know you wrote it from scratch. It's quite fantastic. 🙂

I’ve made a goal that I’ll try to achieve with this engine. It’s not a complex one, since I know what happens when you get ahead of yourself.

Get this: I'm going to make it support the same features as Phantasy Star Online. That's a rather low bar by today's standards, but it just might be enough (or too much) for me.

To be clear. I’m not saying I’ll write a game like Phantasy Star Online, I’m saying that I’m going to try to replicate what the engine supports and go from there.

This, however, is not a simple task. Anyone reading this who has even a remote idea of what work goes into a 3d engine should understand that. And being a one-man army isn't making it any simpler for me. But that's the way I enjoy learning things. 🙂

Moving on…

Since I have a fully functional (almost, at least) 2d engine, I started coding a framework for a 2d RPG. (Yeah. I’m switching from the 2d side-scroller tool pipeline I blogged about a few days ago.)

This is what I’ve got so far. This is after about 2 hours of work.

I decided to first write some pretty solid code for handling interfaces and HUD elements. I made a window class that uses a set of images (corner, border, fill and titlebar) to support windows of any dimensions. Though I'm still having some problems getting the layers to update at runtime so you can switch window focus (for example).
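The idea is basically a nine-slice layout: the corners stay fixed, the borders and titlebar stretch along one axis, and the fill stretches to cover the middle. Roughly like this sketch (Image, DrawImage and DrawImageStretched are placeholder names, not my actual 2d engine calls, and a real version would flip or rotate the corner and border pieces per side):

struct Image { int width, height; /* pixel data lives elsewhere */ };

// Hypothetical draw calls provided by the 2d engine.
void DrawImage(const Image& img, int x, int y);
void DrawImageStretched(const Image& img, int x, int y, int w, int h);

// Draw a window of arbitrary size from corner/border/fill/titlebar pieces.
void DrawWindow(int x, int y, int w, int h,
                const Image& corner, const Image& border,
                const Image& fill, const Image& titlebar)
{
    int c = corner.width; // assume square corner images

    DrawImageStretched(fill, x + c, y + c, w - 2 * c, h - 2 * c);        // middle
    DrawImageStretched(titlebar, x + c, y, w - 2 * c, titlebar.height);  // title strip
    DrawImageStretched(border, x + c, y + h - c, w - 2 * c, c);          // bottom edge
    DrawImageStretched(border, x, y + c, c, h - 2 * c);                  // left edge
    DrawImageStretched(border, x + w - c, y + c, c, h - 2 * c);          // right edge

    DrawImage(corner, x, y);                  // the four corners stay unscaled
    DrawImage(corner, x + w - c, y);
    DrawImage(corner, x, y + h - c);
    DrawImage(corner, x + w - c, y + h - c);
}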

Yeah well. Back to work on the 3d renderer and bone animation. I’m really hoping to get this to work soon.

Bye!

Working on the engine

Hello there, it’s time once again.

So yeah. I’ve been away from programming in general for a while but got my spark back recently.

The reason I haven't been programming is that I've been doing some other stuff instead, like making maps for The Dark Mod (because I'm a HUGE Thief series fan and that mod is next to the best thing that has happened to me in quite some time).

Because I've been making maps with their mission editor (I believe that's what it's called), and the mod is built on Doom3, I've gotten a closer look at how Doom3's engine architecture works. And I intend to pick up some of the concepts used in that engine and adapt them into mine.

The piece of architecture that impresses me the most is how "Visportals" are used. A visportal is basically a portal from one area to another; the engine checks whether the portal itself is visible to the player, and if it isn't, nothing beyond it needs to be rendered. Basically, if cleverly placed, it means the engine doesn't render anything you can't see, which is a great way to improve performance. It's a fairly simple concept but it does the job well, I think.

If I combine this with my earlier plans to use bounding box frustum culling I think it’ll be a fairly solid system to work with.
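For the frustum culling part, the usual approach is to test each object's bounding box against the six frustum planes; here's a minimal sketch of that test (Plane, AABB and Vec3 are assumed helper types for the sketch, not my engine's actual ones):

struct Vec3 { float x, y, z; };
struct Plane { Vec3 normal; float d; }; // plane: dot(normal, p) + d = 0, normal pointing into the frustum
struct AABB  { Vec3 min, max; };

// Returns false if the box is completely outside any of the six frustum planes.
bool BoxInFrustum(const AABB& box, const Plane frustum[6])
{
    for (int i = 0; i < 6; ++i)
    {
        // Pick the corner of the box furthest along the plane normal (the "positive vertex").
        Vec3 p;
        p.x = (frustum[i].normal.x >= 0.0f) ? box.max.x : box.min.x;
        p.y = (frustum[i].normal.y >= 0.0f) ? box.max.y : box.min.y;
        p.z = (frustum[i].normal.z >= 0.0f) ? box.max.z : box.min.z;

        float dist = frustum[i].normal.x * p.x
                   + frustum[i].normal.y * p.y
                   + frustum[i].normal.z * p.z
                   + frustum[i].d;
        if (dist < 0.0f)
            return false; // even the furthest corner is behind this plane
    }
    return true; // intersecting or fully inside
}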

The other concept I'd like to implement is stencil shadowing, as in Doom3. Granted, it doesn't look that realistic in most cases and it's frowned upon for eating a lot of CPU time, but I still think it's a rather cool way to produce accurate shadows.

I think I can implement the "Visportal" system with ray/triangle intersection tests: first determine whether the "Camera" can see the "Visportal" geometry at all, and if it can't, cut away everything behind it from the render queue.
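For the ray/triangle test itself I'd probably go with the standard Möller–Trumbore algorithm; here's a self-contained sketch of it (Vec3 and its helpers are placeholder types for the sketch):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b)   { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 Cross(const Vec3& a, const Vec3& b) { Vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x }; return r; }
static float Dot(const Vec3& a, const Vec3& b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Möller–Trumbore ray/triangle intersection.
// Returns true if the ray (origin, dir) hits triangle (v0, v1, v2); t receives the hit distance.
bool RayTriangle(const Vec3& origin, const Vec3& dir,
                 const Vec3& v0, const Vec3& v1, const Vec3& v2, float& t)
{
    const float EPSILON = 1e-6f;
    Vec3 edge1 = Sub(v1, v0);
    Vec3 edge2 = Sub(v2, v0);
    Vec3 pvec  = Cross(dir, edge2);
    float det  = Dot(edge1, pvec);
    if (std::fabs(det) < EPSILON)
        return false;                 // ray is parallel to the triangle plane
    float invDet = 1.0f / det;
    Vec3 tvec = Sub(origin, v0);
    float u = Dot(tvec, pvec) * invDet;
    if (u < 0.0f || u > 1.0f)
        return false;
    Vec3 qvec = Cross(tvec, edge1);
    float v = Dot(dir, qvec) * invDet;
    if (v < 0.0f || u + v > 1.0f)
        return false;
    t = Dot(edge2, qvec) * invDet;
    return t > EPSILON;               // hit in front of the ray origin
}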

Whatever, I’ll see how that works out soon enough, I’m sure. 🙂

Oh! And it so happens that my engine supports quaternions now, or at least they appear to work as intended, which means I can now start implementing bone animation within my custom model file pipeline.

The main problem I'm having now is that I'm still not quite sure how to print out bone information in my Blender exporter script. I am able to print out the name of the armature object, which doesn't help that much. But since I can grab the armature object, I should be able to get to the bones contained within it, I think.

I'll need to expand the information stored per vertex to contain a weight and the name of the bone it is connected to. That way I can easily create a quaternion for each bone, push in all the vertices that the bone needs to move with it and display it all. 🙂

I can of course animate it by applying rotations and translations for each bone for each frame!
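Data-wise I imagine it looking roughly like this, per vertex and per bone (just a sketch with made-up names, not my actual file format structures):

#include <string>
#include <vector>

struct Vec3 { float x, y, z; };
struct Quaternion { float w, x, y, z; };

// One vertex in the skinned mesh.
struct SkinnedVertex
{
    Vec3 position;        // rest-pose position
    std::string boneName; // which bone this vertex follows
    float weight;         // how strongly that bone influences it
};

// The pose of one bone at one frame.
struct BonePose
{
    Quaternion rotation;
    Vec3 translation;
};

// A bone's animation is then just one pose per frame.
struct BoneTrack
{
    std::string name;
    std::vector<BonePose> frames;
};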

Yes. I’ll get to work on this now. 🙂

Bye!

Plans

Hello there.

I thought I might try to lay out some plans that I've been thinking about for a while but haven't done anything with yet.

To start with I’m going to go through my engine and how it’s holding up right now.

Since I decided to work with OpenGL again for a while, I just went back to my old engine code and attempted to give it an actual engine architecture, so that I could work with it more efficiently. That unfortunately didn't happen. I began to sketch out a brief plan for how I was going to do it, but my interest faded and that's where I left it. It works and all, but I need to redesign and recode pretty much everything.

Another thing that is bothering me to no end is my file format implementation. It works just great with static entities and stuff like that, but it cannot handle proper animation yet. The main problem is that I clearly don't know enough about how bone animation works in games, or how to extract that information from the bones in Blender via my exporters.

So instead of spending too much time on that, I began looking back at COLLADA, and I'm still thinking about writing a proper parser/loader for it and using that instead. I'd save myself the workload of coding the exporter, at least. I also took another brief look at the Open Asset Import Library, but that led me to nothing but misery. The damn thing wouldn't compile, though I'm sure it's completely my fault and I'm making some silly mistake.

So in conclusion, as you may guess, I can't use animated objects yet, and that limits what I can do with my own engine.

Another plan that I've had in my head for a while, and just recently begun working on, is a wrapper for OpenAL.

So far I've got a very simple class for creating a "sound object" that I can place in a 3d environment and "listen" to from an arbitrary listener position, which I set through a separate function. I plan on building this listener into a camera class so that I have both "eyes" and "ears" on the same object, much like a human head (duh).

I think I'll leave all the other neat stuff you can do with OpenAL, like the Doppler effect, fades and all that, for later, when and if the time comes that I need it. Right now I don't. I only need the sound "engine" to load a sound, play, pause and stop it, and free the loaded sound from memory. My intention is to use these sound objects as one-shot effects, like a sword hitting a wall making it clang, or a monster being decapitated making a gruesome crunch sound.
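The core of such a sound object is just an OpenAL buffer plus a source placed in 3d space; here's roughly what mine boils down to (LoadWavIntoBuffer is an assumed helper of my own, not an OpenAL call):

#include <AL/al.h>

// Assumed helper that reads a .wav file and fills an OpenAL buffer; not part of OpenAL itself.
ALuint LoadWavIntoBuffer(const char* path);

class SoundObject
{
public:
    SoundObject(const char* path, float x, float y, float z)
    {
        buffer = LoadWavIntoBuffer(path);
        alGenSources(1, &source);
        alSourcei(source, AL_BUFFER, buffer);     // attach the loaded sound data
        alSource3f(source, AL_POSITION, x, y, z); // place the sound in 3d space
    }
    ~SoundObject()
    {
        alDeleteSources(1, &source);
        alDeleteBuffers(1, &buffer);
    }
    void Play()  { alSourcePlay(source); }
    void Pause() { alSourcePause(source); }
    void Stop()  { alSourceStop(source); }
private:
    ALuint source;
    ALuint buffer;
};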

With this sound engine structure built, I can also continue working on my game where I used Ogre as the rendering engine, since the only thing holding that back was that I didn't have any sound.

And lastly, here's a small tip for getting the listener orientation to work in an isometric view; I had some issues with it.

Vector1 = (CameraTarget - CameraPosition); // CameraTarget could be your main character's position
Vector1.Normalize();
// AL_ORIENTATION takes six floats: the "at" (forward) vector followed by the "up" vector
ALfloat Direction[] = {Vector1.x, Vector1.y, Vector1.z, 0.0f, 1.0f, 0.0f};
alListenerfv(AL_ORIENTATION, Direction);

This works for me. The funny thing is that the OpenAL documentation mentions that the vectors passed to the listener don't have to be normalized in order to work, yet they do in my case.

That's it for the plans. I have more brewing in this crazy furnace I so often refer to as my brain, but there's simply too much for me to write, so I'll dish it out in smaller doses.

Furthermore, I haven't actually programmed that much these past few days; I've been working more on drawing and making 3d graphics while awaiting the release of Blender 2.5. 😉 Graphics has always been my primary go-to "toy" when I'm not feeling the programming urge.

Have a good night, day, morning, evening, whatever. Bye.

3D Engine In DirectX

Hi there!

Because OpenGL 3.x is so different from the previous versions, I've been reading up on DirectX for a few days, and so far there have been only a few small points of confusion. DirectX is, however, pretty similar to OpenGL, so it isn't that big of a jump for me.

Since I am pretty much rewriting my 3d engine in DirectX from my OpenGL prototype, I've got a clear image of the things I need to do, so I started with my model loader and tried to convert it to work in DirectX.

This is what I've got so far. Take a look:


Now this does look pretty disastrous, but it shows that my custom model-format works in DX as well as OGL! 😀

However, I am exaggerating a bit there: the texture coordinates and the normals of that model are pretty broken because the loader (so far) cannot handle them. I have yet to figure out a solution for that; in fact, I'm working on it as I type this.

I haven't got any big plans for this engine just yet; my first target is to get models from my custom .SA3 files to load properly. After that I guess I'll have a go at DX's shader capabilities and see what I can do with those. If I can manage to make my normal mapping shader from earlier work in my own engine, all the better. 😀

Furthermore, I am intrigued by how many things DX has out of the box compared to OpenGL. That said, I can also assume those things are far from the best around, but they give DX beginners a head start in their development.

That's all for now. I'm going to keep working on the model loader; I feel the solution to my (two) problems isn't that distant.

Edit: I fixed one of the problems. Fact is, it was working all this time. The problem was that I was transforming the model and rotating it 90 degrees in the exporter, which made the model's normals receive lighting even when they were facing away from the light source.

Bye!

3D Engine Update

Hello.

It's been a long time since I posted anything, and for good reasons, I assure you. Well… Maybe not. I've just been playing a bunch of games.

Anyway. The update I made for the engine is a pretty small one actually. I implemented animation into the custom model file format I wrote about a while back.

The animation is keyframe based right now, so there is no data available for bone animations and such, which sucks a little. But I've got to start somewhere.

How it works is that I first have a list of all the vertices used for each frame at the very top of the model file.

After all the frames are printed out, I print out the data that remains the same regardless of frame: the normals, uv-coordinates and faces.

With this I can simply loop through all the vertex data and display any frame in my engine.

I actually made a completely new file format for this idea since I didn't want to "ruin" my custom ".SA3" file format with vertex animation (I've got advanced plans for .SA3). So I made the ".KMD" file format, which is short for "Keyframe Model Data".
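On the renderer's side, displaying a frame is basically just indexing into one big vertex list per frame; here's a small sketch of the idea (the KMDModel layout and DrawTriangle call are illustrative, not my actual loader):

#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Illustrative in-memory layout for a keyframe model (".KMD"-style data).
struct KMDModel
{
    std::vector< std::vector<Vec3> > frames; // one full vertex list per keyframe
    std::vector<Vec2> uvs;                   // shared across all frames
    std::vector<int>  faces;                 // vertex indices, three per triangle
};

// Assumed render call provided by the engine.
void DrawTriangle(const Vec3& a, const Vec3& b, const Vec3& c,
                  const Vec2& ta, const Vec2& tb, const Vec2& tc);

// Display one keyframe by looping over the faces and pulling positions from that frame's list.
void DrawFrame(const KMDModel& model, size_t frame)
{
    const std::vector<Vec3>& verts = model.frames[frame];
    for (size_t i = 0; i + 2 < model.faces.size(); i += 3)
    {
        int i0 = model.faces[i], i1 = model.faces[i + 1], i2 = model.faces[i + 2];
        DrawTriangle(verts[i0], verts[i1], verts[i2],
                     model.uvs[i0], model.uvs[i1], model.uvs[i2]);
    }
}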

Right now, as I type this actually, I am studying the math behind quaternions, since I've been avoiding them for quite a while. I'm learning it so I can add bone animation and vertex weights to the ".SA3" file format.

I mentioned back when I first started writing about my file format that it's highly inspired by the DooM3 model file formats .mesh and .anim, since my format works much like them.

So there you have it. My engine is pretty much in the same state Quake 1 was in when it was new. Not very impressive but at least it’s my own work. 🙂

Furthermore, the engine development is going pretty shakily, for reasons that aren't my own fault for once. It's the whole OpenGL 3.x thing; I don't know yet if it's worth the effort to start working on that instead of my 2.x implementation. We'll see how it goes. In the "worst" case I'll just have to convert to DirectX, which in a way isn't that bad.

Yes, yes. Stay awesome. Bye! 😀