Planning a re-write…

… of some of the components within the engine that are working poorly/not at all at this point in time.

Scene Manager: Yeah, I’m going to rip out the old code and implement a new, much better system which actually allows for the basic things a scene manager is supposed to do. The old system was also very slow, so I’m going to focus on making the new one as fast as possible.
Frustum culling will have to be re-implemented too; it’s a tad round-about right now, so I’m interested in making it more straightforward and “tight”.
I’m still on the fence about implementing some general-purpose culling scheme. It may sound a tad crazy, but my engine isn’t inherently designed to take advantage of a schema like CSG or similar things older engines used.
The engines that support this by default have the strength of being able to freely enforce culling techniques such as PVS or portal rendering.
My engine doesn’t. Since my philosophy for making game worlds doesn’t necessarily follow the rules of these very strict concepts, I’m going to have to figure out something else that keeps the entire level from rendering at once but still doesn’t choke the scene manager up for no reason.
A lot of the time culling is counterproductive, and some games run faster by not culling at all. Of course this can’t be fully depended on, since it depends on how the game’s levels end up looking.
My thinking is, there’s got to be some kind of rough solution that can be deployed easily within the level editor, maybe making entire sections of the level cull away based on player distance or something like that. It’s not automatic and definitely a bit of a hack, but if it’s able to do the job in these cases, I’m good.
My other thought was creating some type of intricate algorithm that runs through the entire level and generates triggers which allow different parts of the level to be rendered when the “player” enters a certain area. This is kind of like PVS (which I linked earlier). The difference here is it doesn’t check polygons, but entire clusters of objects or areas. It’s of course not fully planned out, as you might gather from reading this explanation, so it’s just an idea.
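To make the first idea concrete, here’s a minimal sketch of what such a distance-based section cull could look like. All the names (Section, cullDistance, and so on) are illustrative, not actual engine code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of the hand-placed, distance-based section cull
// described above. Each Section groups a chunk of the level in the editor
// and stops rendering once the player is farther away than its cull distance.
struct Vec3 { float x, y, z; };

static float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct Section {
    Vec3  center;       // placed by hand in the level editor
    float cullDistance; // beyond this, the whole section is skipped
    bool  visible = true;
};

// Flag each section visible or not based on the player's position.
void updateSectionCulling(std::vector<Section>& sections, const Vec3& player) {
    for (Section& s : sections)
        s.visible = distance(s.center, player) <= s.cullDistance;
}
```

The trigger-based variant would just swap the distance test for an “is the player inside this area” test.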
At this point I’m not too concerned with having a solution that’s not entirely user-friendly, seeing as I’m the only user right now. 🙂

Renderer: Yes, there’s still a ton of work to be done before this is fully functional to the point where I can point at it and not feel apologetic.
The main focus I have right now is really to trim away everything redundant. This also means I’ll be removing the backscattering I posted about a few posts back. Yes, yes, turns out I don’t have that much use for it after all. For the few things that need a faked translucency, there are other options that don’t hog precious space in the GBuffer.
* CSM needs some work. I’m actually pretty happy with how well the CSM is playing with the rest of the engine right now, but it still needs some rubbing and polishing before I can relax.
* I no longer have any forward-rendered lighting at all; all the lighting is deferred now. I’m still missing deferred shadow maps for spotlights. I’m pretty sure there are cool things to be made with that support, so I’ve got no reason to trim it off.
* I need a new post-processing pipeline which actually allows for custom-made post processing. The previous one ran a fixed set of optional effects one after the other in a chain. That isn’t entirely bad, but sometimes you want to do something crazy, like set the screen on fire. The user should be able to.
* Particles. Still need to implement this in a way that isn’t terrible. Right now the depth sorting isn’t working too well so there are popping artifacts with multiple emitters. Really annoying.
It’s incredibly difficult to write a good general purpose particle system, it’s kinda crazy.
* ANIMATION! I need this so badly! I’ve just not had the energy to go back and re-implement the support yet. It’s an entire pipeline that needs double-checking, it takes time. I don’t have that much time right now.

What else… Those two are pretty much the largest problems right now. There are other things, such as the Resource Manager needing a lil’ kick in the rear to work better. But the rest is working OK at the moment.
I must add that through these years of coding engines and learning how engines work, the single biggest pain in the neck is model format support. If you back-track a loooong way in this blog, you’ll eventually start unearthing the many posts I’ve made in the past that are clad in pained wails about how it sucks to not have a model format that supports basic stuff and isn’t either closed source or in some weird format that everyone wants to make into their own.
Until the point where I said “TO HELL WITH IT, I’m making my own!” and I did.
But making your own format is actually a lot harder in the long run. When other formats have bad support or poor documentation you can take the moral high ground and belittle work someone else did, but when your own format craps out, you just march your pretty little behind to the nearest mirror and let that guy have it!

Right now my own format only supports Blender as the 3D modeling suite, but not the latest Blender version, because that broke part of the exporter code and I need more time to re-write it… again.
But my model pipeline supports some other common formats as well, such as OBJ, plus very partial FBX support. I want to add a few more, like COLLADA and ASE, because those are easy to parse. So once I get the motivation I need to make all this happen, it should be much better. Animation support will always be a very difficult subject matter, though, and unfortunately won’t see extensive support… unless Autodesk finds it in their hearts to open source FBX… Please?
Edit: I should clarify that the other formats are actually converted from their source formats to my engine’s proprietary format. Just making that clear.

A lot of plans… A lot of plans… I just need that “spark” to do it all. *sigh*

Bye for now. 🙂

Back in action (sort of)

So yeah.

I’ve some urge to code again after a pretty short break I guess (considering how arbitrarily long a break can be).
Something occurred to me a few days ago and I realized that this is actually a kind of smart thing to do.
I broke my mesh loading/processing/utilities out into its own library. It’s way easier to manage, change or extend now that it’s no longer tied to a larger code base.
I, of course, realize I could’ve done this months or even years ago, but it never struck me as “enough” code to make into its own library. It is.
I’ll be free to re-implement the animation support a bit easier now since I don’t have to lug around the rest of the engine in order to change it.

This is really a minor thing, but it’s also the only thing I’ve done the past few days as far as programming goes.

It also makes it easier to maintain other tools that are closely connected to the mesh processing such as conversion utilities, viewers and exporters.

Engine Demo

Let’s start by making it clear that this demo isn’t finished.
It’s not finished because I’ve quite the engine re-design ahead of me and this project is sort of caught in the line of fire.
BUT. Instead of just coming on here and saying that it’s not finished for the millionth time I thought I’d at least show off a little from it.
It’s not a whole lot, and the videos are mostly still-shot.

But maybe someone could find something interesting in them. *shrug*

The following videos are shot in two different locations in the demo, neither of which is finished yet, so they’re very flat and boring.

This last video is a bit special, and a little bit of a failure:

It’s a bit of a failure because the framerate was so low during that video that the verlet cloth started acting weird and fell against its safety constraints which made it slouch somewhat.
And the purpose of this video was to show off the shadowing and how it interacts with the few banners that are suspended in this demo level.

Now, I’m going to elaborate a little on why it is that this demo isn’t finished yet:
Animation is the biggest concern right now; this demo is meant to have a couple of things that are animated. The “kobold” character I showed in the last post is one of these “things”.
When I noticed that it would take a little while to finish the animation support, since the introduction of the binary model format unfortunately broke the animation somewhat, I thought, OK, I’ll just post this anyway.
I’ll say it again: it’s nowhere near finished, so see it as a work in progress. I will eventually get back to this demo and spruce it up to match my new engine design.
I’ll still keep what this demo aims to do, and what kind of a game it will be, in the shadows, so at least that will still be a surprise when it’s go time for real. 🙂

CSM Continued

OK! The cascaded shadow maps implementation is starting to show its first usable applications.

Granted, it still has some things that need adjusting to really make it complete, such as the projection calculations and how it’s integrated with the rest of the engine.

I have some quickly-thrown-together screenshots to share:
(I’m reworking the renderer so I’m without post processing. So no HDR bloom or AA.)

Here’s the result. It casts some nice shadows for the entire scene. As it should.

But, as you can see, there are problems with this. As with the many posts I made a while back about rendering shadow outlines, this is a common problem: the lack of hardware filtering ends in a brute-force approach, which makes for pretty jagged and noisy shadows at low resolution. It’s also possible in this screenshot to see the three cascades used for the shadows (where the shadow suddenly drops in quality).

Here’s another problem I’m facing. Zooming out reveals the final cascade, which is very low resolution compared to the two other cascades that show up close to the camera.
The percentage-closer filtering (PCF) doesn’t help a lot, as you can see.

And this is a “for good measure” screenshot which shows the different cascades color-coded.

So even though the basics are already in and working, there is a lot of work to be done. Part of this is to, again, explore different ways of filtering shadow maps to get softer visuals. I’m half leaning towards putting VSM in, but I’m hesitant because of the light-bleeding artifacts. There’s also the possibility of LVSM, which is similar to VSM but aims to reduce those artifacts.
*shrug* I don’t know yet.

Another part of what I need to improve is the actual calculation of the orthographic projection for the frustum slices. It’s right now at a “fudge” value, which isn’t accurate in any way, so I will have to get the bounds of that projection more in tune with the slice’s bounding sphere.
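For what it’s worth, fitting the ortho box to the slice’s bounding sphere is pretty mechanical once the sphere center is in light space: the box is just a cube of side 2r around it. A sketch, with names that are mine rather than engine code:

```cpp
#include <cassert>

// Sketch of fitting a cascade's orthographic projection to its bounding
// sphere instead of a fudge value. Assumes the sphere center (cx, cy, cz)
// has already been transformed into light space; the ortho box is then a
// cube of side 2 * radius around it. Names are illustrative.
struct OrthoBounds {
    float left, right, bottom, top, nearZ, farZ;
};

OrthoBounds fitOrthoToSphere(float cx, float cy, float cz, float radius) {
    OrthoBounds b;
    b.left   = cx - radius;
    b.right  = cx + radius;
    b.bottom = cy - radius;
    b.top    = cy + radius;
    b.nearZ  = cz - radius; // in practice, pull nearZ back further to catch casters behind the slice
    b.farZ   = cz + radius;
    return b;
}
```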

Yeah, that’s about it. 🙂

Edit: Just as a pointer: I know it’s not really necessary to render different-size shadow maps per cascade; that was just a thing I decided on, don’t take it too seriously.

Edit #2: Another one. Screen-space dithering helps with the jagged situation. Still noisy, though, but I often find it’s a worthy tradeoff: noise for reduced aliasing. 🙂


OK. I don’t want to make a new post about this since it’s pretty minor and it’s connected to this post:

This is a different setup of the CSM which uses a consistent size for the shadow maps, 4-tap PCF and screen-space dithering.
It looks a lot better now, albeit a bit noisy. Moving the camera around also shimmers the shadows some, which I don’t really know if I’ll bother fixing right now.
I think I’ll keep it as is for a while and focus on something else, like deferred shadow maps for spot lights. 🙂
(Bonus: I threw some trees in for testing how that looks.)
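As a side note, the 4-tap PCF itself is simple enough to sketch outside a shader. This is a hypothetical CPU-side version just to show the idea; the real thing runs in the fragment shader:

```cpp
#include <cassert>

// Hypothetical CPU-side sketch of 4-tap PCF, just to show the idea;
// the real version lives in the fragment shader. The shadow map is a
// w * h array of depths; the result is averaged over four neighboring taps.
float sampleDepth(const float* map, int w, int h, int x, int y) {
    // Clamp to the map edges.
    if (x < 0) x = 0;
    if (x >= w) x = w - 1;
    if (y < 0) y = 0;
    if (y >= h) y = h - 1;
    return map[y * w + x];
}

// Returns a shadow factor in [0, 1]: 1 = fully lit, 0 = fully shadowed.
float pcf4Tap(const float* map, int w, int h, int x, int y, float fragDepth) {
    float lit = 0.0f;
    lit += (fragDepth <= sampleDepth(map, w, h, x,     y    )) ? 1.0f : 0.0f;
    lit += (fragDepth <= sampleDepth(map, w, h, x + 1, y    )) ? 1.0f : 0.0f;
    lit += (fragDepth <= sampleDepth(map, w, h, x,     y + 1)) ? 1.0f : 0.0f;
    lit += (fragDepth <= sampleDepth(map, w, h, x + 1, y + 1)) ? 1.0f : 0.0f;
    return lit / 4.0f;
}
```

The screen-space dither then just offsets the tap positions per pixel so the banding between taps turns into noise.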


So. I started this implementation last night and I feel like I’m getting somewhere finally.
I have the frustum slices set up, I have the bounding spheres set up and I get some primitive shadows drawing per shadow cascade.

However, as good as that sounds, there are a few major problems.
Primarily the problems stem from incorrect projection matrices.
I can get the shadows to draw correctly using a standard perspective projection. Of course, this is not something you’d want to do, because of perspective warp: shadows move around according to the camera and it looks plain wrong.
I did this for testing purposes, of course; I’ll just mention that it indeed does “work”.
Now, we want an orthographic projection for the shadow mapping so that the different cascades get along.
This is where the bad stuff happens. I do, of course, have code for generating orthographic projection matrices, but it seems that code (written ages ago) has some problems getting along with the rendering of the shadow maps.
I will have to fix that before I get to make something cooler.

The good news is that my setup for the frustum slices and bounding spheres seems to be working correctly. 🙂
Back to work.

Edit: Initial work is done! I got the projections working and everything. I’ll make a real post about this soon! 🙂


Well. Sort of an upgrade at least.
It’s an upgrade to my deferred shading which puts it in view space. It also allows me to reconstruct the Z component of the normal buffer in order to reduce the size of the GBuffer by one channel.
And that’s pretty much it.
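The Z reconstruction mentioned above boils down to the unit-length constraint on the normal: store X and Y, rebuild Z as sqrt(1 - x² - y²). A sketch:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the normal Z reconstruction: store only X and Y of the unit
// view-space normal in the GBuffer and rebuild Z from the unit-length
// constraint. Taking the positive root is an assumption that holds for
// view-space normals of front-facing surfaces.
float reconstructNormalZ(float nx, float ny) {
    float z2 = 1.0f - nx * nx - ny * ny;
    return z2 > 0.0f ? std::sqrt(z2) : 0.0f; // clamp against float error
}
```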

So in other news I started experimenting on putting a Subsurface Scattering approximation technique into the engine pipeline which allows me to render light pass-through on organic objects such as vegetation or skin.
That said, I’m not sure whether this is worth having or not.
Surely it could make quite the visual difference in scenes that contain a lot of shrubbery or… Naked… Skin?
But none of those are exactly in focus for the engine right now.
The neat thing about it, I think, is that it’s integrated with the deferred shading so light pass-through works uniformly across the entire scene without need of any special behavior.

It wouldn’t see much use in a standard game, which is (I imagine) a big reason why it isn’t present in that many engines today. It’s also a pretty new technique for real-time rendering, which is part of the reason too.

The approximated technique is essentially the same as proposed in this document which is pretty great and is pretty close to the ideas I’ve had in the past regarding real-time subsurface scattering.
My idea was less complicated and involved multiplying a scattered color (think orange/red for skin) with the dot product of the inverse light vector and the surface normal, which effectively makes the pixels shine when the light is situated behind them. As you may gather, this is just an additional N·L (N·-L, to be precise) for the light shader and is very, very simple.
“My” approach still works for some cases, but the technique proposed by DICE also emulates the light source visible through the surface, not just the pass-through itself. It also emulates how much the surface scatters the light as it passes through, and a lot of other neat things.
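For clarity, the simple N·-L variant I described boils down to a few lines. This is a CPU-side sketch with illustrative names; in practice it’s a couple of extra lines in the light shader:

```cpp
#include <algorithm>
#include <cassert>

// CPU-side sketch of the simple N·-L pass-through term described above.
// Names are illustrative. normal and lightDir are assumed unit length,
// with lightDir pointing from the surface toward the light.
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 translucency(const Vec3& normal, const Vec3& lightDir, const Vec3& scatterColor) {
    // N·-L: peaks when the light sits directly behind the surface.
    float w = std::max(0.0f, dot(normal, Vec3{ -lightDir.x, -lightDir.y, -lightDir.z }));
    return { scatterColor.x * w, scatterColor.y * w, scatterColor.z * w };
}
```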

I think in the end I’ll make this completely optional, just so games that don’t need it aren’t forced to render it.

There’s another thing I’ve been thinking about, but don’t know if it would make much sense in the bigger picture.
I was thinking of letting ambient occlusion be automatically adjusted to the contrast of the scene’s lighting, so you’d get a very slight ambient occlusion term when your ambient values are high and a darker ambient occlusion when the ambient is low.
I’m not sure of all the possibilities of this yet, but I think it could be interesting in games where the ambient values range enough for people to notice.
It could eliminate one of the few gripes I have with ambient occlusion terms baked into the diffuse texture: in some situations they over-darken things.
You could even go as far as to let the ambient occlusion term be modulated by the scene ambient itself.
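One minimal way to express that modulation, assuming an ambient level normalized to [0, 1] (the linear blend is just my guess at a starting point):

```cpp
#include <cassert>

// Sketch of the modulation idea above: fade the ambient occlusion term
// toward 1 (no darkening) as the scene ambient rises. The linear blend
// and the normalized ambient level are assumptions, not engine code.
float modulatedAO(float ao, float ambientLevel) {
    // ambientLevel: 0 = pitch-dark ambient, 1 = fully bright ambient.
    return ao + (1.0f - ao) * ambientLevel;
}
```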

I’ll test things out and write some more junk on here if it proves to be somehow interesting. 🙂


Well. The start of it, at least.

So, now that you’ve seen what I am on about let me delve into the depths of this.

Quick notes on the video:
* Those green lines are the constraints of the cloth. It’s low resolution and doesn’t give the wide range of motion usually found in cloth, but I’ll deal with that later.
* I think we should, as a people, already be used to a lowered framerate when capturing video, so no, this program doesn’t run at just 30 frames per second. (Actually, I’m not sure the compression of the video lets you see the FPS too well. It says 30, however.)

What you’re seeing is a simple, procedural animation for this banner, which hangs on a wall.
Why is this anything to get excited over?
Well, there are a ton of things that can be made with cloth animation, even if it’s only enjoying the atmosphere it creates.
I’m planning a pretty extensive implementation of cloth and other soft body physics for my engine, mostly because of all the cool stuff one can make with it.
Right now, though, I’ve got the code for dealing with the banner above, and some pretty unfinished code for dealing with ropes that can be hung around the world.
Both are implemented using a verlet integrator.
I’m eventually hoping to move this code over to use the physics engine, since it will run faster and be more stable.
Verlet integration as it is doesn’t deal tremendously well with framerates that vary wildly.
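To show why: a basic verlet position update derives velocity implicitly from the previous position, so it has no explicit velocity to re-scale when the timestep changes. A one-axis sketch:

```cpp
#include <cassert>

// One-axis sketch of a basic (non-time-corrected) verlet position update.
// Velocity is implicit in x - oldX, which is exactly why a sudden change
// in dt between frames injects a velocity error and jerks the cloth.
struct Particle {
    float x, oldX;
};

void verletStep(Particle& p, float acceleration, float dt) {
    float temp = p.x;
    p.x = p.x + (p.x - p.oldX) + acceleration * dt * dt;
    p.oldX = temp;
}
```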
During the course of a session in most games, the framerate can be very different from one area to another.
Can we blame the programmer for this framerate inconsistency?
Well. Yes.
But. Operating systems can sometimes cause framerate in games and movies to be inconsistent too.
Can we blame the OS programmer for this framerate inconsistency?
Well, I suppose you could. But then again, maybe we should just deal with the situation for what it is and stop blaming people left and right.
Deal? Deal.

From my experience so far the worst artifacts happen when framerate drops suddenly, but picks up again which usually makes the cloth object jerk violently.
In my test case, and more serious cases, this isn’t a very big problem. The cloth simulates wind as it is, and it’s usually not that noticeable when it actually struggles.
I guess I should be thankful for the simplicity of my case.
I’m therefore not sure the situation warrants any real action right now, but I think I might try to code some kind of damper or threshold for it.

Now that we’re aware of the first problem, here’s another one:
This type of cloth isn’t able to collide with anything (I’ll explain soon why this is not entirely true), because that would mean dealing with two essentially equivalent collision worlds, and that’s just ridiculous. I will never do that.
So what I have in place instead is a simple, optional way to restrict the cloth from penetrating the wall that’s behind it. We can therefore mount these cloth objects on a wall (such as the banner in the video) and not have to worry about it sinking into the wall.
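That wall restriction can be as small as a per-particle plane clamp run after the verlet update. A sketch, with the plane given as a unit normal and offset (illustrative names, not the actual engine code):

```cpp
#include <cassert>

// Sketch of the wall restriction: after the verlet update, push any
// particle that sank behind the wall plane back onto it. The plane is
// given as a unit normal (nx, ny, nz) and offset d, so the signed
// distance of a point is n·p + d.
struct Point { float x, y, z; };

void constrainToPlane(Point& p, float nx, float ny, float nz, float d) {
    float dist = nx * p.x + ny * p.y + nz * p.z + d;
    if (dist < 0.0f) { // behind the wall: project back onto the plane
        p.x -= nx * dist;
        p.y -= ny * dist;
        p.z -= nz * dist;
    }
}
```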
So technically the cloth object is rather simple, but this bit of special behavior is actually enough to make it useful.

For a start, this is more than I could’ve expected, and it actually doesn’t suck terribly, so I’m good for now. 🙂

Edit: Guess I fudged on the promise from the last post about showing off the AO generator with some cooler models.
Truth be told, the AO doesn’t get that exciting on said cooler models, so I voted against making a new post about it. Also, I failed my first attempt at implementing a blur algorithm for it, so that sucks too. But as soon as I get the blurring working I’ll make a new post with some cool models and AO, and you can see for yourself that it doesn’t make a whole lot of difference.
I have some other pressing matters to attend to, namely animation, so I’ll be working on that right now and not the AO.