Game Art Overview
Producing game art is very different from producing art for a still render or a film animation. Game art has to meet the real-time requirements of graphics cards. In this article, we will talk about the roles Blender played in producing game art for our racing game project, known as Aftershock.
Before we begin, we have to understand the technology and limitations of the current generation of graphics cards. The biggest limitation of a graphics card is the amount of RAM it has. In every game, developers and artists end up struggling with RAM usage. This limitation imposes two important constraints on the artist: polygon count and textures. Artists have to restrain themselves from producing high-poly art assets, and the UVs for any art content need to fully utilize the texture space so as to produce good texture detail at a low texture resolution. On top of that, the number of textures and polygons in a scene has to be well conserved to avoid hitting the RAM limit of lower-end cards.
Blender Technology in Games
Blender as a whole serves as both a modeling tool and a game engine. This seems like a winning solution for anybody who wants to make a game. However, we decided on a different model: we use Blender as the modeling tool and Ogre3D, with our own extensions and other libraries, as our game engine. The reason is that we are aiming for a larger-scale, graphically intensive game. The Blender Game Engine was never designed for a project of this scale and does not handle huge scenes well, though it serves as a very good platform for simple games or prototyping. On the other hand, Blender's modeling tools work very well for game art and are, in many ways, on par with popular commercial counterparts. This holds true thanks to recent features such as the tangent-space normal map baking tool for baking high-poly detail onto low-poly models, and the sculpting tool for producing high-poly models. In addition, the scripting system allowed us to extend Blender to export Blender-created content into our game.
Tying Blender with external game engines
To get Blender models and materials into our Ogre3D counterpart, we used the Ogre exporter kindly provided by the Ogre3D community. In addition, we also wrote our own prefab exporter script that generates prefab information from Blender in our game engine's prefab object format.
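As a rough illustration, a prefab exporter of this kind boils down to walking the scene objects and writing out their names, mesh references, and transforms for the engine to reassemble. The sketch below is a simplified, hypothetical version in plain Python; the field names and JSON layout are our illustration, not the actual Aftershock prefab format.

```python
import json

def export_prefabs(objects, path):
    """Write a minimal prefab manifest: one entry per scene object with
    its name, mesh reference, and world transform."""
    prefabs = []
    for obj in objects:
        prefabs.append({
            "name": obj["name"],
            "mesh": obj["mesh"],          # .mesh file produced by the Ogre exporter
            "position": obj["position"],  # [x, y, z]
            "rotation": obj["rotation"],  # quaternion [w, x, y, z]
            "scale": obj["scale"],
        })
    with open(path, "w") as f:
        json.dump({"prefabs": prefabs}, f, indent=2)
    return len(prefabs)
```

In the real script, the object list would come from Blender's scene data rather than plain dictionaries, but the shape of the output is the same: a flat list the engine can instantiate without re-parsing any Blender data.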
To produce the quality presented in Aftershock, a custom editor had to be created. The reason was that it is not possible to build a whole level of such scale in Blender. Moreover, since we are not using Blender as the rendering engine, what we see in Blender is not what we will see in the game. This posed a huge problem for the artists, because iteration is required to produce good art.
With a level editor outside of Blender, we eliminated a few problems. Firstly, artists get to preview their art content in a WYSIWYG manner. They do not need to go back and forth with a programmer to test and check their art assets. This allows them to iterate on their art, tweaking and touching up until they are satisfied with the end result.
The level editor also serves as an important tool for features that are not, or should not be, covered by Blender. Two good examples of this are terrain editing and grass/bush plotting. In any typical game engine, optimizations are made to keep terrain and grass rendering fast, so they require special data formats that are much easier to edit and modify within the game engine. Another good example is the portal zone placement system. The Aftershock levels use portals as a form of optimization to cull away meshes that can never be seen from a given area. However, as portal and zone placement is very subjective and relies a lot on how the scene is laid out, this is better done in the level editor, where it is much easier to tweak and test.
From the technical side, the level editor serves as a very good platform for implementing gameplay elements and designing level-based logic, such as trigger points and interactive scene objects, which are dependent on the game engine. Hence, the level editor served as an important intermediate tool to bridge our Blender art assets with the game.
Figure 1: A prototype level editor used in Project Aftershock
Blender is a very polished polygon modeling tool. Its features are very useful for producing low-poly art, which is essential in any real-time environment. In Project Aftershock, we utilized Blender as an object modeling tool. This helped us deal with the details of our individual objects in a practical manner: we were able to control the poly count of each object individually and produce good UVs for our objects/prefabs. In a typical scenario, a game level object should never exceed a 5,000-poly limit. As graphics cards get faster, this limit will be raised further. Even so, in practice, an artist should always keep a game asset's poly count as low as possible without degrading it into an unidentifiable lump.
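A budget check like this is easy to automate in the asset pipeline. The helper below is a hypothetical sketch, not part of our actual tools: given per-object triangle counts, it reports which objects exceed the per-object budget, worst offender first.

```python
def over_budget(poly_counts, limit=5000):
    """Given a mapping of {object_name: triangle_count}, return the
    objects that exceed the per-object polygon budget, sorted with the
    worst offender first."""
    offenders = [(name, count) for name, count in poly_counts.items()
                 if count > limit]
    return sorted(offenders, key=lambda item: item[1], reverse=True)
```

Running such a check at export time catches over-budget assets before they ever reach the engine, rather than during a profiling pass late in production.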
Materials and Textures
Materials and textures are what bring art to life. The look of an asset depends on the material, which describes the shading, and the textures, which define how the shading works. Blender has a very good material and texturing system that works very well with its internal renderer. However, for a high-end game that requires custom hardware shaders, such as Aftershock, Blender's material system falls short. To solve this problem, we extended Blender's material export with our own solution using Blender's custom ID property system. That allowed us to attach additional parameters beyond the limited selection of Blender material settings.
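Conceptually, the export side of this is just a merge: the standard material parameters plus whatever extra key/value pairs were attached as ID properties. The sketch below uses illustrative field names rather than our actual export schema, and plain dictionaries in place of Blender's property types.

```python
def build_material_export(material_name, base_params, id_properties):
    """Merge Blender's standard material parameters with custom ID
    properties (e.g. a shader name or extra texture slots) into a single
    record for the engine-side material script."""
    record = {"name": material_name}
    record.update(base_params)      # diffuse colour, specular, etc.
    # Custom ID properties extend (and may override) the built-in set.
    record.update(id_properties)
    return record
```

Because ID properties are free-form, the artist can add any parameter the engine shader understands without waiting for exporter changes; the exporter simply passes them through.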
As with any real-time application, there is a limit to texture usage that we had to observe. Older cards require texture sizes of 2^n (2 to the power of n). This means that texture dimensions must always be 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, and so on. Even though non-power-of-two textures are now technically supported, it is still better to keep within this limit for optimal rendering. To alleviate the limited GPU RAM described in the overview, textures can be exported in a compressed format known as DDS/DXT. This format reduces the memory requirement on the GPU, as the textures are stored compressed within GPU RAM itself. However, due to the lossy nature of the format, textures can show noticeable artifacts that may not look good for certain types of content. Even so, in typical usage, as we found out, the artifacts are negligible and not very obvious. This technique is used extensively in many AAA games on the market today.
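The memory arithmetic here is easy to sketch. DXT1 packs each 4x4 pixel block into 8 bytes (0.5 bytes per pixel) and DXT5 uses 16 bytes per block (1 byte per pixel), versus 4 bytes per pixel for uncompressed RGBA8. A small helper, purely illustrative, for rounding sizes up to a power of two and estimating base-level texture memory:

```python
def next_power_of_two(n):
    """Smallest power of two greater than or equal to n."""
    p = 1
    while p < n:
        p *= 2
    return p

def texture_bytes(width, height, fmt="RGBA8"):
    """Approximate GPU memory for the base mip level of a texture.
    DXT1: 8 bytes per 4x4 block = 0.5 bytes/pixel.
    DXT5: 16 bytes per 4x4 block = 1 byte/pixel.
    RGBA8: 4 bytes/pixel, uncompressed."""
    bytes_per_pixel = {"RGBA8": 4.0, "DXT1": 0.5, "DXT5": 1.0}[fmt]
    return int(width * height * bytes_per_pixel)
```

For example, a 1024x1024 texture drops from 4 MB as raw RGBA8 to 512 KB as DXT1, an 8:1 saving, which is why the format's mild artifacts are usually an acceptable trade.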
Lighting and Shadow
Lighting and shadows play an important role in Project Aftershock: lighting gives the city level its overall mood and feel, and shadows give it depth. The lighting and shadow method we used is split into two parts: (I) real-time lighting and shadow, and (II) baked ambient occlusion maps. Traditionally, most games use pre-rendered light maps, generated either in a 3D package such as Blender or in the level editor itself. The lightmap is then assigned to all models sharing the same UV layout and map channel (usually the second UV map channel, on top of the diffuse map channel).
Although it is possible to generate a lightmap per object and include it as part of the diffuse texture in a single UV map channel, lightmapping is especially important for level scenes, where most textures are tiled and different polygon faces use different materials. This makes the first UV channel unsuitable for lightmapping: the entire lightmap must fit within the UV map boundaries and is shared across different objects, each with its own unique UV layout. Hence the need for a second UV map channel specifically for the lightmap texture.
Pre-rendered lightmap textures usually come in resolutions of 1024x1024 or 2048x2048 for an entire level (they must be powers of 2 for optimal memory usage), depending on the game's target hardware. Graphics cards with more RAM can use higher-resolution lightmap textures. Generating lightmaps with radiosity effects is slow and time consuming: the lighting artist has to wait for the level editor or 3D package to finish generating the lightmap texture before viewing it and checking for artifacts (such as pixelated lightmaps). As games become more detailed and complex, and polygon counts climb, generating lightmaps may no longer be a viable choice. Newer games such as Assassin's Creed use real-time lighting and shadow with baked ambient occlusion maps. In older games the polygon count for an entire scene was much lower than in today's levels: an older game might have fewer than 50,000 polygons, whereas newer games may have 500,000 polygons per level or more. Since we are still limited to 1024 or 2048 resolution lightmaps, squeezing 500,000 polygons onto a single lightmap texture produces far more lighting artifacts than squeezing 50,000 polygons onto the same texture.
If the game artist builds an entire level in a 3D package, unwrapping also becomes a major headache and is not an optimal use of the artist's time. Imagine unwrapping 500,000 polygons for the whole level once for the diffuse textures and then again for the second UV map channel for lighting. That would take ages, not to mention arranging the UV islands on the UV map channel, which would be a complete nightmare for any level artist. This in turn would make any corrective measures slow and cumbersome.
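The arithmetic behind this point is simple: a 1024x1024 lightmap holds roughly a million texels, so the average texel budget per polygon collapses as the polygon count grows. A tiny helper makes the comparison concrete (packing margins between UV islands are ignored here):

```python
def texels_per_polygon(lightmap_size, polygon_count):
    """Average number of lightmap texels available to each polygon on a
    square lightmap, ignoring the margins lost between UV islands."""
    return (lightmap_size * lightmap_size) / polygon_count
```

On a 1024 lightmap, 50,000 polygons average about 21 texels each, while 500,000 polygons get barely 2, which is why lighting artifacts multiply so quickly at modern polygon counts.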
Figure 2: A couple of building blocks in Blender with only the first map channel's textures
Therefore, newer games handle full-scene lighting/shadow and soft shadows (ambient occlusion) separately. In Project Aftershock, each building and track has its own baked ambient occlusion lightmap, while scene lighting and shadow are done in real time in the level editor, which allows the artist to iterate and correct any problems very quickly. Here is how we generated the ambient occlusion maps in Blender. As we can see, due to the lack of soft shadows around the building corners, the model currently looks flat. First, create a new UV map channel on the textured building model for the lightmap texture, under the Editing panel.
Figure 3: Creating a new UV map channel for the ambient occlusion lightmap texture.
Press New to create a new UV texture layer and rename it to “lightmap”. While still in the lightmap texture channel, press the [TAB] key to go into edit mode. The next step is to triangulate all the faces. Still in edit mode, select all the faces to be triangulated by pressing the [A] key.
This is important because, without triangulation, Blender cannot tell how a polygon face will be split at bake time, which may cause lighting artifacts when baking lightmaps/ambient occlusion.
Figure 4: Without triangulation, Blender cannot determine the face orientation properly, causing lightmap baking artifacts where shadows are cast on faces where they are not supposed to be.
Triangulate the selected faces by pressing [CTRL-T]. Warning: it is highly recommended that the artist be thoroughly satisfied with the initial object textures before triangulating, as pressing the “Join Triangles” button under the Mesh Tools tab will mess up the building's initial UV map should the artist decide to redo the first map channel's textures.
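What [CTRL-T] does to each face can be pictured as a simple fan triangulation. The sketch below mirrors the idea on lists of vertex indices; note that Blender may pick the other diagonal for a given quad, so this is illustrative only.

```python
def triangulate_face(face):
    """Fan-triangulate a convex polygon given as a list of vertex
    indices. A quad [0, 1, 2, 3] becomes two triangles, (0, 1, 2) and
    (0, 2, 3); a triangle passes through unchanged."""
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]
```

Since each quad becomes exactly two triangles, triangulating a quad-heavy mesh roughly doubles its face count without moving any vertices, which is why the bake sees an unambiguous surface.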
Figure 5: Triangulated faces.
Once the faces are triangulated, we need to unwrap them. Press the [U] key to show the unwrapping list and select “Unwrap (smart projections)”. This unwrapping method is chosen because, unlike “Lightmap UV pack”, it distributes UV islands based on actual polygon size.
After selecting “Unwrap (smart projections)”, a menu will appear. Enable “Fill Holes”, set the fill quality to 100, enable “Selected Faces” and “Area Weight”, and set an island margin.
Figure 6: Unwrapped building using Unwrap (Smart Projections)
Still in the UV/Image Editor window, go to Image >> New to create a new texture image for the lightmap. Now we are going to set our ambient occlusion settings. Go to the “World buttons” panel and enable Ambient Occlusion. Here the artist can adjust the ambient occlusion settings to fit the model.
Go to the Scene (F10) panel to begin rendering the ambient occlusion lightmap texture. Under the “Bake” tab, select Ambient Occlusion and Normalized, and click BAKE to begin rendering.
Once the ambient occlusion lightmap render is complete, we need to save the new image file from within the UV/Image Editor window.
Figure 7: How the building model looks with the ambient occlusion map
After saving the new ambient occlusion map, it is time to clean up any rendering artifacts.
To fix the artifacts, go to Texture Paint Mode and using the “Soften” brush, paint along the jagged edges to blur the problem areas. This produces a much softer look and feel for the software shadows. Once completed, save the corrected ambient occlusion lightmap texture.
And finally, this is how the building model looks in the level editor with the diffuse map channel and the lightmap texture channel combined.
Figure 8: Building models with ambient occlusion maps.
Lighting and shadows are calculated real time within the custom built level editor.
Baking Normal Maps and Ambient Occlusion Maps Using a Temporary “Cage” Model
Figure 9: Final building models with ambient occlusion lightmap and real time lighting and shadow.
Modeling the vehicle craft for Project Aftershock requires both high and low polygon models, whereby the high-poly model provides the extra detail through normal maps and ambient occlusion maps. Here is an additional tip for generating proper normal and ambient occlusion maps.
First off, we require both a high-poly model and a low-poly model. Whether the high polygon model is built first and then optimized down to a lower polygon version, or vice versa, is entirely up to the artist. For the craft pictured, the low polygon model is approximately 8,000 polygons.
Next, make sure the high and low polygon versions are in exactly the same position, so the normals and ambient occlusion maps can be baked from the high polygon model onto the low polygon one. Then unwrap the vehicle model and create a new image to bake to.
Figure 10: Low and High poly models together with unwrapped low polygon model
The next step is to create a copy of the low polygon model. This copy, positioned to match the high polygon model, will act as a “cage”, similar to the projection modifier in 3ds Max. At this point we have two low polygon models (one temporary “cage” model and one to be used in-game) and one high polygon model.
This cage is particularly useful for adjusting only certain parts of the low polygon mesh to fit the high polygon mesh, since Blender applies the baking distance to the low polygon model as a whole. The next step is to readjust the vertices or faces of the low polygon cage model so that it covers as much of the high polygon model as possible.
Once this step is done, we can proceed to baking the normal and ambient occlusion texture maps for our low polygon cage model. To do that, select both the high polygon model and the low polygon model (with the low polygon model as the active object), then go to Scene (F10) => Bake (Normals or Ambient Occlusion) with the “Selected to Active” option turned on. Once the normal and ambient occlusion maps have been generated, reassign them to the original low-poly model, delete the temporary “cage” model, and that's it!
by Yap Chun Fei