4.12.1.1 Shader System Overview

Terminology

The term “shader” is used for a number of different things in computer graphics. In DirectX, and more recently OpenGL, “shader” names the programs running on the GPU that process vertex and fragment data. Offline rendering packages use the term to describe the complete appearance of a surface.

Shaders in Crystal Space are closer to the second meaning, as they are a generic surface description. (“Shaders” in the DirectX/OpenGL sense are appropriately called “programs” in Crystal Space.) They are generic in that a single shader describes a class of surfaces, or even several classes of surfaces, with specific aspects controlled via parameters. The benefit is that shaders are reusable: the actual values for such parameters can be provided by a material or other sources, defining the actual look of a surface. As an example, you may create a shader “cloth” emulating a cloth-like look. The shader provides a generic description of the surface, with specific parameters (e.g. the actual color or pattern, or whether it appears shiny or dull) being set as material properties. Hence you can have a lot of “cloth” materials by just changing these properties. (The mechanism for this is “shader variables”, described further below.)
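
To illustrate, a material for such a “cloth” shader could look roughly like the following world-file fragment. This is only a sketch: the shader name, shader type and variable names are invented for illustration and do not refer to shaders actually shipped with Crystal Space.

   <material name="cloth_red">
     <texture>cloth_red_pattern.png</texture>
     <!-- Attach the (hypothetical) "cloth" shader... -->
     <shader type="standard">cloth</shader>
     <!-- ...and customize its look through shader variables. -->
     <shadervar name="shininess" type="float">0.1</shadervar>
     <shadervar name="tint" type="vector3">0.8,0.2,0.2</shadervar>
   </material>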

Components of a Shader

A shader itself consists of a number of components:

Metadata

This contains information about the shader as a whole that is relevant for the engine or external tools. Currently, this consists only of the maximum number of lights a shader can handle; in the future, the metadata may be expanded with e.g. descriptions of the parameters that can be controlled.

Shader Variables

Shader variables are “parameters” for shaders and are commonly used for things that should be customizable, e.g. the diffuse texture of a surface or its shininess. A shader variable defined in the shader itself acts as a default value which can be overridden by other shader variable sources such as materials. Shader variables are described in more detail further down. A number of commonly used shader variables are listed in Shader Variables.
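
For example, a shader may declare such defaults directly in its definition, along the lines of this sketch (variable names and values are purely illustrative):

   <shader compiler="xmlshader" name="cloth">
     <!-- Defaults; a material or another variable source providing a
          variable of the same name overrides these. -->
     <shadervar name="shininess" type="float">0.5</shadervar>
     <shadervar name="tint" type="vector3">1,1,1</shadervar>
     <!-- ...metadata, techniques etc. follow... -->
   </shader>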

Techniques

A shader consists of multiple techniques. The idea is that each technique realizes the same basic effect, but in a different fashion. At runtime, a technique is selected which is supported by the current hardware and renderer. This allows a single shader to support hardware with different capabilities. Each technique is assigned a priority, and all techniques are tried in the order of their priority, highest first. Usually, the highest-priority technique is geared towards more advanced hardware, while lower priorities gradually accommodate less advanced hardware (commonly also degrading the quality, since not all effects can be achieved on all hardware).
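
Structurally this looks roughly as follows; the priority values are arbitrary examples:

   <shader compiler="xmlshader" name="example">
     <!-- Tried first: assumes more capable hardware. -->
     <technique priority="200">
       <!-- ...passes using e.g. fragment programs... -->
     </technique>
     <!-- Fallback if the technique above is unsupported; lower quality. -->
     <technique priority="100">
       <!-- ...simpler passes... -->
     </technique>
   </shader>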

Another mechanism to control technique selection is the so-called “shader tags”. Currently, a tag can either be required by a technique for that technique to be selected, or it can prevent any technique carrying it from being selected. Tags can be controlled from the configuration system, hence this mechanism offers a simple way to disable a number of techniques from a configuration utility, e.g. to allow the user to disable certain effects to gain higher performance.

Passes

Shader programs as well as buffer and texture bindings are associated with passes. During rendering, a mesh is drawn once for each pass in the shader: essentially, the same geometry is drawn multiple times with different shader programs (and hence different processing). Multiple passes can be used to overcome hardware limitations: for example, a lightmapped surface that should “glow” needs three textures (lightmap, diffuse texture, glow mask); however, implementing it as a single-pass technique is impossible on hardware that features only two texture units. Multiple passes get around the limitation: the first pass combines normal lighting and glow, while the second pass applies the diffuse texture.
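
In the shader definition this amounts to a technique containing two pass blocks, sketched below; the texture bindings, program declarations and blending setup inside each pass are omitted for brevity:

   <technique priority="100">
     <!-- Pass 1: combine lightmap lighting with the glow contribution. -->
     <pass>
       <!-- ...bind lightmap and glow textures, set up programs... -->
     </pass>
     <!-- Pass 2: apply the diffuse texture, blended with the output of
          pass 1 already in the framebuffer. -->
     <pass>
       <!-- ...bind diffuse texture, set a suitable blend mode... -->
     </pass>
   </technique>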

Multiple passes also make certain special effects possible: for example, a fur shader using fur shells, where each fur shell can be a separate pass.

Using multiple passes has its drawbacks, though: obviously, drawing geometry multiple times is potentially more expensive than drawing it once. Also, the only way to propagate data between passes is the framebuffer: you can only “propagate” data to the immediately following pass, and it can only be combined with that pass's output via fragment blending. (Also, in very dire circumstances the alpha channel may not be available for propagation.)

Shader Programs

Shader programs define how a vertex of a mesh or a fragment of a triangle is processed. Consequently, the two types of programs needed are vertex and fragment programs. Both programs are usually tuned to work together to produce a certain effect or appearance.

Shader programs in Crystal Space are abstracted into plugins. Plugins exist for e.g. ARB vertex and fragment programs, Cg vertex and fragment programs, and fixed-function pipeline setup (the latter not actually “true” programs, but provided by Crystal Space through shader program plugins as well).

A Crystal Space-specific feature grouped with shader programs is the support for “vertex processors”. These are not shader programs in the sense of being executed on the GPU; rather, they are plugins that manipulate vertex data in software. The motivation for this feature was to allow vertex manipulation that might be too complicated, or simply impossible due to hardware limitations, to realize with GPU vertex programs (the specific motivation was software lighting: some aspects of CS' light support don't map well to older hardware).

Mappings

Shader programs, like all programs, need some kind of input. For vertex programs, it is data varying per vertex, provided by vertex buffers. For fragment programs, it is textures as well as per-vertex data output by the vertex program (sometimes computed in the vertex program, sometimes just user-specified data passed through). Both also take uniform parameters, that is, data that does not change over the geometry drawn.

Vertex buffers, textures and uniform parameters are mapped to program inputs via XML statements. The actual format of the mapping destinations depends on the shader programs used; for Cg programs, e.g., the variable names used in the actual programs can be utilized as targets. Other program types may have more abstract mappings (e.g. registers for ARB programs).
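
A pass might thus contain mapping statements along the following lines. Treat this as a sketch: the buffer, variable and destination names are illustrative, and where exactly each mapping element is placed (in the pass or inside the program declaration) depends on the program plugin used.

   <pass>
     <!-- Map a mesh buffer to a per-vertex program input. -->
     <buffer source="texture coordinate 0" destination="texture coordinate 0" />
     <!-- Bind the texture held in the "tex diffuse" shader variable. -->
     <texture name="tex diffuse" destination="TexDiffuse" />
     <!-- Feed a shader variable to a uniform program parameter. -->
     <variablemap variable="tint" destination="tintColor" />
     <!-- ...vertex and fragment program declarations... -->
   </pass>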

Shader Variables

Shader variables serve as an important carrier for data related to rendering in Crystal Space. Customizing shaders is actually just a subset of what shader variables are used for; other uses include passing around data inside the engine itself.

Shader variables are identified by a name, which is a string ID obtained from the global iShaderVarStringSet. Note that a name is not unique to an instance of a variable; indeed, commonly multiple variables will have the same name. There is no single global namespace of variables; which one is chosen depends on what variables are attached to the different contexts (see below).

Shader variables can contain vectors and transforms, but also textures, vertex buffers, and even arrays of other shader variables.

Shader variables are provided by so-called “shader variable contexts”. Before rendering a mesh, the render loop collects shader variables from the contexts. There is a hardcoded order of preference: some contexts override variables from other contexts, if variables of the same name exist.

Contexts are provided by (in order of increasing preference):

Shader expressions

Crystal Space does not just support fixed values for shader variables; it also allows simple formulas to be specified so that values can be computed dynamically. Applications include transformation of values provided by the user or simple animations (e.g. a pulsating glow).

Shader conditions and processing instructions

Basic conditions

Shader conditionals allow a single shader to support an array of different features. For example, a shader might have to support both per-vertex and lightmap lighting, and optionally support a normal map and/or a glow map.

Without conditionals, basically every combination of these features would require a separate shader: obviously a rather large amount of work to begin with, and a continued maintenance headache, since a single change must be propagated to a number of shaders.

Shader conditionals allow “conditions” to be scattered throughout a shader. The possible conditions include tests for the existence of shader variables and comparisons between variables or immediate values.

This way, only the code that supports the different features needs to differ: e.g. in the case above, the lighting code could be wrapped in a condition that tests for the existence of a lightmap; if one is present, lightmap lighting is utilized, otherwise per-vertex lighting.
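
Such a condition is written as processing instructions around the affected XML, roughly like the sketch below; the exact syntax of the condition expression (here assumed to test whether the “tex lightmap” variable holds a texture) is given in the reference section mentioned further down.

   <?if vars."tex lightmap".texture ?>
     <!-- ...lightmap lighting setup... -->
   <?else?>
     <!-- ...per-vertex lighting setup... -->
   <?endif?>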

Processing instructions

A feature useful for code reduction is templates. These are blocks of XML that can be inserted at a given place via a processing instruction. Parameters are also supported: special placeholders in the XML are replaced by the values given as parameters.

Related to this is XML generation in a fashion akin to for-loops. A block of XML is repeated a number of times, with a counting variable that can be inserted into the XML via placeholders, the same way template parameters can be inserted.

Last but not least, other files can be included. This can be used to move recurring blocks of XML into a separate file, or to have the included file define a number of utility templates.
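
Put together, these facilities are used roughly as in the following sketch. The processing instruction names and the $...$ placeholder syntax are approximations, and the file path and template name are invented; the reference section mentioned below has the authoritative forms.

   <!-- Pull in a separate file, e.g. one defining common utility templates. -->
   <?Include /shader/snippets/common.inc?>

   <!-- A template with one parameter... -->
   <?Template BindLayer unit?>
     <texture name="tex layer $unit$" destination="unit $unit$" />
   <?Endtemplate?>

   <!-- ...repeated for a range of values, for-loop style. -->
   <?Generate i 0 3?>
     <?BindLayer $i$?>
   <?Endgenerate?>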

For a detailed reference of both shader conditions and processing instructions see section Shader Conditions and Processing Instructions Reference.

Caveats

There are some pitfalls in shader conditions and processing instructions that one needs to be aware of:

