
5.1 How to Create Optimal Maps for Crystal Space

Written by Jorrit Tyberghein, jorrit.tyberghein@gmail.com.

Note: Creating optimal maps is not very easy as there are a lot of factors to consider. Crystal Space has a lot of tools to offer (like sectors, portals, visibility cullers, etc.) but using these tools effectively is an art. In this chapter we will not talk about how to create maps. For that you use external tools like Blender, QuArK, or some other suitable tool. This chapter focuses on how you should partition your map into sectors, the kind of mesh objects you should use, the visibility cullers, packing textures for more efficient rendering, combining objects, lighting considerations, etc.

For a good discussion about sectors and visibility cullers, see Visibility Culling In Detail.

For more tips about efficient maps, see Some Tips for Efficient Maps.

Sectors and Portals

A sector is the basic building block in a map (see section Visibility Culling In Detail). When you create a map you should decide how to partition it into sectors. The easiest solution is to use a single sector, and in many cases that may even be acceptable. Here are some points to consider when deciding how to partition your map into sectors:
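To make this concrete, here is a rough sketch of a two-sector map connected by a portal, in the same XML map-file format used throughout this chapter. The sector names, geometry, and exact portal syntax are illustrative and may differ slightly between Crystal Space versions:

<sector name="hall">
  <!-- Geometry of the hall goes here (meshobj nodes, lights, etc.). -->
  ...
  <portals name="toCellar">
    <portal>
      <!-- A door-sized quad; anything crossing it ends up in 'cellar'. -->
      <v x="-1" y="0" z="10" />
      <v x="1" y="0" z="10" />
      <v x="1" y="2" z="10" />
      <v x="-1" y="2" z="10" />
      <sector>cellar</sector>
    </portal>
  </portals>
</sector>
<sector name="cellar">
  <!-- Geometry of the cellar goes here. -->
  ...
</sector>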

Visibility Cullers

Every sector has its own visibility culler. Crystal Space currently supports two kinds of visibility cullers: Frustvis and Dynavis. In the future we will also have a PVS (potentially visible set) culler, which precalculates visibility in a separate tool. Dynavis tries to cull more aggressively, but it also has more overhead. So, you should use Frustvis when you have the following kinds of maps:

On the other hand, if you have a complex map with lots of large objects then you should consider using Dynavis. If you do decide to use Dynavis for a sector you should follow these guidelines for that sector:
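Selecting the visibility culler is done per sector in the map file. Here is a minimal sketch, assuming the standard plugin names 'crystalspace.culling.frustvis' and 'crystalspace.culling.dynavis' (sector names and contents are illustrative):

<sector name="smallRoom">
  <!-- A simple sector with few, small objects: Frustvis is sufficient. -->
  <cullerp>crystalspace.culling.frustvis</cullerp>
  ...
</sector>
<sector name="city">
  <!-- A complex sector with many large occluders: Dynavis pays off. -->
  <cullerp>crystalspace.culling.dynavis</cullerp>
  ...
</sector>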

Here is an example showing how you can replace the occlusion mesh of some mesh object in a map file:

<meshobj name="complexWall">
  <plugin>genmesh</plugin>
  <params>
    ...
  </params>
  <trimesh>
    <mesh>
      <v x="-1" y="-1" z="-1" />
      <v x="1" y="-1" z="-1" />
      <v x="1" y="4" z="-1" />
      <v x="-1" y="4" z="-1" />
      <t v1="0" v2="1" v3="2" />
      <t v1="0" v2="2" v3="3" />
    </mesh>
    <id>viscull</id>
  </trimesh>
</meshobj>

And this is an example of how you can disable the occlusion mesh for an object:

<meshobj name="wallSegment">
  <plugin>genmesh</plugin>
  <params>
    ...
  </params>
  <trimesh>
    <id>viscull</id>
  </trimesh>
</meshobj>

Here is a special note about closed versus non-closed objects in Dynavis. A closed object is an object that has no holes in it (for example, a cube is closed; a cube with one of its six sides removed is not). When Dynavis writes an object to the occlusion buffer (coverage buffer) it will choose a technique based upon whether the object is closed or not.
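For reference, here is a sketch of a closed occlusion mesh (a cube) in the same trimesh format as the examples above. The geometry and triangle winding are illustrative; the important property is that every edge is shared by exactly two triangles, so the mesh has no holes:

<trimesh>
  <mesh>
    <!-- Eight corners of a 2x2x2 box centered on the origin. -->
    <v x="-1" y="-1" z="-1" />
    <v x="1" y="-1" z="-1" />
    <v x="1" y="1" z="-1" />
    <v x="-1" y="1" z="-1" />
    <v x="-1" y="-1" z="1" />
    <v x="1" y="-1" z="1" />
    <v x="1" y="1" z="1" />
    <v x="-1" y="1" z="1" />
    <!-- Twelve triangles, two per face: the cube is completely closed. -->
    <t v1="0" v2="1" v3="2" /> <t v1="0" v2="2" v3="3" />
    <t v1="5" v2="4" v3="7" /> <t v1="5" v2="7" v3="6" />
    <t v1="4" v2="0" v3="3" /> <t v1="4" v2="3" v3="7" />
    <t v1="1" v2="5" v3="6" /> <t v1="1" v2="6" v3="2" />
    <t v1="4" v2="5" v3="1" /> <t v1="4" v2="1" v3="0" />
    <t v1="3" v2="2" v3="6" /> <t v1="3" v2="6" v3="7" />
  </mesh>
  <id>viscull</id>
</trimesh>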

Here is an example which illustrates what this means in practice. Assume you have a highly tessellated sphere (a sphere mesh that has a large number of triangles). Also assume that there is another object inside the sphere. If the first technique is used (closed object) then Dynavis can update the coverage buffer very efficiently, since writing an outline is cheap. On the other hand it will not be able to cull away the object inside the sphere, since the depth buffer is set too deep (i.e. the object inside the sphere will be in front of the depth that is set in the depth buffer).

On the other hand, if the second (non-closed) technique is used then Dynavis will need to use a lot of CPU processing in order to update the coverage buffer but it will be able to cull away the object inside the sphere since the depth buffer will now contain accurate values per triangle.

We have not done any performance tests to find out which is better. If you have a really complex object (like the highly tessellated sphere) then the first technique will probably be better, since the cost of writing the object to the coverage buffer triangle by triangle will probably be higher than the cost of simply rendering the object inside the sphere. But we do not know where exactly the threshold is.

Object Types

Regardless of sector partitioning and visibility culling requirements, the choice of objects you use can also be important for performance. Crystal Space supports many mesh objects but the most important ones are:

Here are some guidelines on using and choosing these meshes:
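As a point of reference for the examples in this chapter, a 'genmesh' is normally defined as a factory plus one or more instances of that factory. Reusing one factory for many instances keeps memory usage down and makes it easy to place the same object several times. The following sketch uses illustrative names and geometry, and spells out the full loader plugin names (the short name 'genmesh' used elsewhere in this chapter assumes an alias in the <plugins> section of the map):

<meshfact name="crateFact">
  <plugin>crystalspace.mesh.loader.factory.genmesh</plugin>
  <params>
    <material>crate</material>
    <!-- A single quad for brevity; a real factory holds the full model. -->
    <v x="0" y="0" z="0" u="0" v="1" />
    <v x="1" y="0" z="0" u="1" v="1" />
    <v x="1" y="1" z="0" u="1" v="0" />
    <v x="0" y="1" z="0" u="0" v="0" />
    <t v1="0" v2="1" v3="2" />
    <t v1="0" v2="2" v3="3" />
  </params>
</meshfact>

<sector name="hall">
  <meshobj name="crate1">
    <plugin>crystalspace.mesh.loader.genmesh</plugin>
    <params>
      <factory>crateFact</factory>
    </params>
    <!-- Each instance only stores its own position/orientation. -->
    <move><v x="3" y="0" z="2" /></move>
  </meshobj>
</sector>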

Assisting the Renderer

When considering how to design your objects you should keep in mind what the renderer prefers. For the renderer a mesh is defined as a polygon or triangle mesh with a single material and/or shader. So, if you are using a ‘genmesh’ mesh that uses multiple materials then this is actually a set of different meshes for the renderer. To avoid confusion we will call the single-material mesh that the renderer uses a render-mesh.

With OpenGL, and especially if you have a 3D card that supports the VBO (Vertex Buffer Objects) extension, the renderer prefers render-meshes that have a lot of polygons. So, for the renderer it is better to use 10 render-meshes with 10000 polygons each, as opposed to 100 render-meshes with 1000 polygons each, even though the total number of polygons is the same.

On the other hand, this requirement conflicts with some of the guidelines for the visibility culler. Getting an optimal setup depends on the minimum hardware you want to support. If you are writing a game for the future and decide to require VBO support in the 3D hardware then you should use fewer but larger objects. For the current crop of 3D cards, finding a good compromise is best.

One other technique you can use to help increase the size of render-meshes is to combine several textures into a single bigger texture. For example, if you have a house with three textures (wall, roof, and doorway) then you can create a new texture that contains all three. The end result is that every house becomes a single render-mesh instead of three, which is more optimal for OpenGL (a sketch of this follows the list below). There are some disadvantages to this technique, however:

  1. You have to be able to fit the smaller textures in the big texture without too much waste. Fitting four 64x64 textures in one 128x128 texture is easy but fitting three 64x64 textures in one 128x128 texture is going to waste some precious texture memory. Of course, you could try to use the remaining 64x64 space for textures on other objects.
  2. It is possible that you have to use lower quality textures since combining them on a bigger texture may otherwise overflow hardware limitations.
  3. It is harder for the artist to create models with this technique.
  4. This technique is not possible if you have a tiling texture; i.e. a wall texture that is repeated across a large surface. There are a few workarounds for this problem. For example, you can pre-tile the texture in the combined texture, but that only works if not too many tiles are required. You can also split the polygons so that tiling is no longer required.
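Here is a hedged sketch of what the combined-texture technique can look like in a map file. The texture file, the names, and the assumption that the wall image occupies the left half of the combined texture (u in [0, 0.5]) are all illustrative:

<textures>
  <!-- One combined texture containing the wall, roof and doorway images. -->
  <texture name="house_atlas">
    <file>/mygame/textures/house_atlas.png</file>
  </texture>
</textures>
<materials>
  <!-- A single material for the whole house: one render-mesh for OpenGL. -->
  <material name="house">
    <texture>house_atlas</texture>
  </material>
</materials>

<meshfact name="houseFact">
  <plugin>crystalspace.mesh.loader.factory.genmesh</plugin>
  <params>
    <material>house</material>
    <!-- Wall polygons keep their UV coordinates inside the part of the
         combined texture that holds the wall image (assumed left half,
         so u stays in [0, 0.5]). -->
    <v x="0" y="0" z="0" u="0" v="1" />
    <v x="4" y="0" z="0" u="0.5" v="1" />
    <v x="4" y="3" z="0" u="0.5" v="0" />
    <v x="0" y="3" z="0" u="0" v="0" />
    <t v1="0" v2="1" v3="2" />
    <t v1="0" v2="2" v3="3" />
    ...
  </params>
</meshfact>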

Lighting Considerations

When designing a map you also have to think about where to place lights. If you plan to use dynamic lighting you must be careful not to exaggerate the number of lights. Runtime performance of this technique depends on the number of lights and their influence radius. For this reason you are probably better off using lightmapped lighting if you have a big map with a lot of lights.

With lightmapped lighting there is no runtime cost associated with having multiple lights (there is a slight memory cost associated with having many pseudo-dynamic lightmaps). A higher number of lights simply means that recalculating lighting will take longer.

You can also use pseudo-dynamic lights (lightmapped lights that may change color), which however introduces a new runtime cost for having multiple lights: each pseudo-dynamic light has its own influence map that is used to recalculate the lightmap from the light's current color at runtime. While there is not much computational cost associated with this (unless you change the light color often), it does increase the memory overhead, so keeping the number of pseudo-dynamic lights low is important.
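For reference, here is a sketch of a static lightmapped light next to a pseudo-dynamic one inside a sector. Positions, radii and colors are illustrative, and the <dynamic /> flag for marking a light as pseudo-dynamic is an assumption that may differ between Crystal Space versions:

<sector name="hall">
  <!-- Static light: baked into the lightmaps, no per-light runtime cost. -->
  <light name="ceiling1">
    <center x="0" y="4" z="0" />
    <radius>25</radius>
    <color red="1" green="0.95" blue="0.85" />
  </light>
  <!-- Pseudo-dynamic light: still lightmapped, but its color may change at
       runtime at the cost of an extra influence map kept in memory.
       The <dynamic /> flag shown here is an assumption. -->
  <light name="torch1">
    <center x="5" y="2" z="3" />
    <radius>8</radius>
    <color red="1" green="0.6" blue="0.2" />
    <dynamic />
  </light>
</sector>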

You may also use dynamic shadowing; however, if you plan to do so you should consider which kind of light to place. Generally the most suitable light type for dynamic shadows is the directional light, as it offers the best performance and accuracy (and is also very suited for very important lights like a "sun"). If you cannot use a directional light you should consider a spot light, which has similar performance but lower accuracy. Point lights are something you should try to avoid with dynamic shadows, as each one is turned into six spot lights for shadowing purposes and therefore poses the biggest runtime cost.

If you plan to use deferred rendering you can also use light clipping volumes, which let you specify an arbitrary (but reasonably simple) mesh that constrains the influence of a light in more detail. For example, you can use a simple box to prevent the light from "bleeding" through walls instead of using shadowing for that purpose.

For best performance you should combine all of the above techniques. For example, you may use a few dynamic lights with shadowing for the most important ones, such as the sun/moon or a flashlight/torch, while using lightmapping for the rest.


