So yeah. The plan I was working on in the last post didn’t work out that great. It turned ugly very fast, and it quickly became obvious that it wasn’t going to work out the way I had imagined.
Because of that, I decided to just include everything in the exporter like I had before: vertices, texture-coordinates, triangle indices and the animation support.
Normals and tangents I still generate during the mesh optimization step, simply because it’s easier to control there, even though it has given me some very odd data a few times, probably due to precision issues or some other bug somewhere.
So with that situation under control again, I can focus on some other stuff now. One of the things I want to explore is static light-mapping: baking nice lighting for 3D scenes instead of spending all that processing power computing it dynamically when it’s never going to change.
The thing that’s currently giving me a headache with light-mapping is automatically unwrapping the meshes in the scene, so that every surface gets a second set of UV-coordinates.
For very simple, blocky scenes, this is no problem at all. You just look at the surface normal, project the surface onto a 2D plane along the normal’s dominant axis, and read the UV-coordinates from that projection.
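To make that concrete, here’s roughly what such a planar projection looks like (a minimal sketch with placeholder Vec3/Vec2 types and names, not the actual exporter code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Project a position onto the 2D plane that best matches the face normal:
// pick the normal's dominant axis and drop that coordinate. This simple
// planar mapping works fine for blocky, mostly axis-aligned geometry.
Vec2 planarProject(const Vec3& pos, const Vec3& normal)
{
    float ax = std::fabs(normal.x);
    float ay = std::fabs(normal.y);
    float az = std::fabs(normal.z);

    if (ax >= ay && ax >= az)
        return { pos.y, pos.z };   // normal points mostly along X
    if (ay >= ax && ay >= az)
        return { pos.x, pos.z };   // normal points mostly along Y
    return { pos.x, pos.y };       // normal points mostly along Z
}
```

The resulting coordinates still have to be offset and scaled into a shared light-map atlas afterwards, but that’s a separate packing step.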
But for complex, sometimes very complex, meshes and surfaces you will get tons of bleeding artifacts and other discrepancies. Right now I’m thinking I’ll instead have to group nearby surfaces that face roughly the same direction, as much as possible.
Blender has an unwrapping function called “Smart UV Project” that does something close to what I’m planning.
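As a rough sketch of the grouping idea (this is not Blender’s actual algorithm, just the bucketing-by-normal part; a real version would also require the faces in a group to be spatially connected, since “nearby” matters as much as “facing the same way”):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Greedily assign each face to the first group whose representative normal
// (the first face's) is within `maxAngle` radians; otherwise start a new
// group. Each group can then be planar-projected as one chart, so seams end
// up between charts instead of across every face. Assumes unit-length normals.
std::vector<int> groupFacesByNormal(const std::vector<Vec3>& faceNormals,
                                    float maxAngle)
{
    float minDot = std::cos(maxAngle);
    std::vector<Vec3> groupNormals;
    std::vector<int>  groupIndex(faceNormals.size());

    for (std::size_t i = 0; i < faceNormals.size(); ++i)
    {
        int found = -1;
        for (std::size_t g = 0; g < groupNormals.size(); ++g)
        {
            if (dot(faceNormals[i], groupNormals[g]) >= minDot)
            {
                found = static_cast<int>(g);
                break;
            }
        }
        if (found < 0)
        {
            groupNormals.push_back(faceNormals[i]);
            found = static_cast<int>(groupNormals.size()) - 1;
        }
        groupIndex[i] = found;
    }
    return groupIndex;
}
```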
I’m thinking whatever artifacts remain after such a routine will be fairly minimal; texture stretching and similar errors usually aren’t that noticeable when we’re dealing with light-mapping.
There’s another way to do light-mapping that requires more preparation work, but gives more control over the unwrapping.
That is to unwrap the light-map yourself and store two sets of UV-coordinates per light-mapped mesh.
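In vertex-format terms that just means something like this (a sketch; the layout and field names are hypothetical):

```cpp
// A hypothetical vertex layout for a light-mapped mesh: one UV set for the
// regular material textures, one for the hand-unwrapped light-map.
struct LightmappedVertex
{
    float position[3];
    float normal[3];
    float uvDiffuse[2];   // first UV set: diffuse/material textures
    float uvLightmap[2];  // second UV set: unique, non-overlapping charts
};
```

The important difference from the first set is that the light-map UVs have to be unique and non-overlapping, since every surface needs its own texels, while diffuse UVs are often tiled or mirrored.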
But I’ve made the conscious decision to at least try to automate the process. Creating art assets is already complex enough; I can’t add more repetitive work to the pipeline or the diminishing returns will become too great.
I’ll get back to it now. 🙂
Edit: OK, I understand now why my tangents and normals sometimes get messed up. The tangent gets a NaN value when a face has degenerate UV-coordinates. By degenerate I mean that the three coordinates don’t form a triangle, like UV-coordinates should; they lie so close together that they collapse into a line segment (or even a single point) instead. The tangent math divides by the determinant of the UV edge vectors, which is proportional to the area of the UV triangle, and a collapsed triangle has zero area, so that division blows up into NaN.
My normals are probably acting up in a similar way, since they’re generated by code that’s very similar to the tangent generation code.
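For reference, here’s a sketch of the usual per-triangle tangent computation with a guard for exactly that degenerate case (the epsilon and the fallback are arbitrary choices of mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Standard per-triangle tangent from positions p0..p2 and UVs uv0..uv2.
// The divisor is the determinant of the UV edge vectors; when the UVs are
// collinear (a line segment instead of a triangle) it goes to zero and the
// division produces Inf/NaN. Guarding against a tiny determinant and
// falling back to an arbitrary tangent keeps the mesh data sane.
bool computeTangent(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                    const Vec2& uv0, const Vec2& uv1, const Vec2& uv2,
                    Vec3& outTangent)
{
    Vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    Vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };

    float du1 = uv1.u - uv0.u, dv1 = uv1.v - uv0.v;
    float du2 = uv2.u - uv0.u, dv2 = uv2.v - uv0.v;

    float det = du1 * dv2 - du2 * dv1;   // twice the UV triangle's signed area
    if (std::fabs(det) < 1e-8f)
    {
        // Degenerate UVs: fall back instead of dividing by ~zero.
        outTangent = { 1.0f, 0.0f, 0.0f };
        return false;
    }

    float r = 1.0f / det;
    outTangent = { (dv2 * e1.x - dv1 * e2.x) * r,
                   (dv2 * e1.y - dv1 * e2.y) * r,
                   (dv2 * e1.z - dv1 * e2.z) * r };
    return true;
}
```

Normals computed from cross products fail the same way when the positions themselves are collinear (a zero-area face), so a similar length check there should cover that case too.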