We want to understand how shaders work so we can create our own.


If you want to use custom textures, there are some things you should know.

A high-level overview

What do shaders do?

You may have heard of the Vertex and Pixel (or Fragment) Shaders that make today's games look good. These are not those shaders. Quake 3 was released before such shaders existed, and its “shaders” instead describe how to layer and blend textures to achieve nice effects. Let's see how they work!


A shader consists of 1 to 8 layers, called stages. Here's an example:

Stage 1 (background.jpg) + Stage 2 (layer1.jpg) → Result (mixdown1.jpg)

  {
    map textures/shadertut/background
    blendFunc GL_ONE GL_ZERO
  }
  {
    map textures/shadertut/layer1
    blendFunc GL_ONE GL_ONE
  }

For each stage you define how it should be mixed with what's already there. In this example, Stage 1 would overwrite whatever color is already there (i.e. the things behind the surface to which it is applied, meaning it's not transparent), while stage 2 would be added onto what's already there - effectively making black transparent. (Colors are described by their Red, Green and Blue (RGB) values, 0 0 0 being black. Adding 0 doesn't change anything, so black becomes transparent.)
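Written out as a complete shader (the name here is illustrative, following the texture paths of the example), the two stages would sit inside the shader's outer braces like this:

  textures/shadertut/mixdown1
  {
    {
      map textures/shadertut/background
      blendFunc GL_ONE GL_ZERO
    }
    {
      map textures/shadertut/layer1
      blendFunc GL_ONE GL_ONE
    }
  }

The stages are drawn in order, so the additive layer1 stage blends onto the result of the background stage.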

How rendering works

In order to understand how exactly stages work, we need to know how rendering works.

There are two buffers, the Color Buffer and the Depth Buffer.

The Color Buffer contains the image that will eventually be displayed to the user, once rendering is done. The 3D scene that gets rendered consists of many triangles which get drawn in no particular order (well, not quite, but more on that later). So for each pixel of a triangle that gets drawn, we first need to know: Is there already something in front of it?

That's where the Depth Buffer comes into play. Whenever a pixel of a triangle gets drawn, its distance from the camera is saved in the Depth Buffer. This way we can simply check the Depth Buffer to see whether there's something in front of the new triangle, and if there is, we can skip it.


If you understood that, you might see a problem with transparent surfaces. If we draw a transparent surface and store its distance in the Depth buffer, no triangle behind it will be drawn anymore, so things behind the transparent object may disappear. But even if we don't write the transparent surface to the depth buffer, we'll get in trouble: things behind the surface will be drawn in front of it.

Quake 3's solution? First draw all opaque triangles, then the transparent ones. The latter don't get drawn to the depth buffer so they can't occlude each other. This works well as long as you only use additive blending - since addition is commutative, the order in which the surfaces get drawn does not matter.

There are actually more than two sort orders - here's the complete list, from drawn first to drawn last:

#  | Name       | Description
1  | portal     | Portals (including mirrors) work by having their view drawn before the rest of the level (try r_portalOnly 1 to see this), so they need to be drawn first.
2  | sky        | The sky is drawn before any geometry.
3  | opaque     | Opaque triangles get drawn before transparent ones, as described above.
6  | banner     | For transparent things that are always behind the “usual” transparent things, so there are no sorting issues - e.g. things close to walls (banners).
8  | underwater | Apparently intended for underwater foliage etc. of non-enterable water, which will thus always be “behind” the surface.
9  | additive   | “Ordinary” transparent things.
16 | nearest    | Things that should always be closest to the viewer, like muzzle flashes.

Shaders without a blendFunc (see below), or with only blendFunc GL_ONE GL_ZERO, default to opaque; everything else defaults to additive. So if you have a blendFunc other than GL_ONE GL_ZERO on a surface that is not actually transparent, you'll want to correct this by explicitly sorting it as opaque.
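For example, a multiplicative shader that darkens whatever is behind it is not really transparent, so it can be forced into the opaque pass with the sort keyword (a sketch; the texture path is made up):

  textures/shadertut/darken
  {
    sort opaque
    {
      map textures/shadertut/darkenlayer
      blendFunc GL_DST_COLOR GL_ZERO
    }
  }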

Blend Functions

So what can we do with shaders? Basically, we define how the color of the new pixels of a triangle get mixed with the color previously there. It's a simple formula:

NewColor = SourceColor * SourceFactor + DestinationColor * DestinationFactor

Where the current stage is the source (e.g. a texture or a lightmap) and the destination is the color currently in the Color Buffer. The things we can change are SourceFactor and DestinationFactor, with the following possibilities:

SourceFactor Value     | Description
GL_ONE                 | 1
GL_ZERO                | 0
GL_DST_COLOR           | DestinationColor
GL_ONE_MINUS_DST_COLOR | 1 - DestinationColor
GL_SRC_ALPHA           | The alpha channel of the source
GL_ONE_MINUS_SRC_ALPHA | 1 - the alpha channel of the source

DestinationFactor Value | Description
GL_ONE                  | 1
GL_ZERO                 | 0
GL_SRC_COLOR            | SourceColor
GL_SRC_ALPHA            | The alpha channel of the source
GL_ONE_MINUS_SRC_ALPHA  | 1 - the alpha channel of the source

So if we want only this stage's color, we set the SourceFactor to GL_ONE and the DestinationFactor to GL_ZERO:

NewColor = SourceColor * 1 + DestinationColor * 0 = SourceColor

If we want to add both together (to get a black-to-transparency effect) we'd set both SourceFactor and DestinationFactor to GL_ONE:

NewColor = SourceColor * 1 + DestinationColor * 1 = SourceColor + DestinationColor

If we want to multiply source and destination, we can set SourceFactor to GL_DST_COLOR and DestinationFactor to GL_ZERO:

NewColor = SourceColor * DestinationColor + DestinationColor * 0 = SourceColor * DestinationColor
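This multiply blend is how a texture is typically combined with the precomputed lightmap: draw one of them opaquely, then multiply the other on top. A sketch (the texture path is made up; $lightmap is explained below):

  textures/shadertut/lightmapped
  {
    {
      map $lightmap
      blendFunc GL_ONE GL_ZERO
    }
    {
      map textures/shadertut/background
      blendFunc GL_DST_COLOR GL_ZERO
    }
  }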

To correctly display an image with an alpha channel, we'd set SourceFactor to GL_SRC_ALPHA and DestinationFactor to GL_ONE_MINUS_SRC_ALPHA. Caution though: this will likely lead to sorting issues, as discussed above. One exception is decals, which sit directly on a surface and thus have a well-defined position in the draw order.
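A decal shader using this alpha blend might look as follows; the polygonOffset keyword pulls the decal slightly towards the viewer so it doesn't flicker against the surface beneath it (the path is made up):

  textures/shadertut/decal
  {
    polygonOffset
    {
      map textures/shadertut/decal
      blendFunc GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA
    }
  }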

Alpha Testing

If your image contains only fully transparent and fully opaque pixels, you can use alpha testing: pixels that fail the test are not drawn at all, so only the opaque pixels end up in the Color and Depth Buffers. The keyword is alphaFunc and the possible values are:

AlphaFunc Value | Description (alpha ranges from 0 to 255)
GT0   | Greater Than 0
LT128 | Less Than 128
GE128 | Greater than or Equal to 128

Depending on your Blend Function (i.e. if there is one and it's not GL_ONE GL_ZERO), writing to the Depth Buffer might be disabled - you'll need to re-enable it using the depthWrite keyword.
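Putting this together, an alpha-tested shader for something like a grate could look like this (a sketch with an assumed path; cull none makes it visible from both sides and is explained below):

  textures/shadertut/grate
  {
    cull none
    {
      map textures/shadertut/grate
      blendFunc GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA
      alphaFunc GE128
      depthWrite
    }
  }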

Shader Syntax

Basic Structure

// You can comment using two slashes.
// first the name:
textures/subfolder/shadername
{
  // information mostly for Q3Map2 and the Radiant goes here, with some exceptions (like sort or cull)

  // image for display in the Radiant
  qer_editorimage textures/subfolder/image
  // if there is no lightmap stage, q3map2 mustn't create one
  q3map_nolightmap

  // the first stage
  {
    // stages contain (almost) all the information related to rendering

    // the image to display
    map textures/subfolder/image
    // the blend function to use
    blendFunc GL_ONE GL_ZERO
  }
  // Additional stages would go here
}

General Settings

After the shader-starting {, you can set some general settings. These are mostly for the Radiant Level Editor, like qer_editorimage, the image to display, or for Q3Map2, like q3map_nolightmap (to indicate no lightmap is to be created), but there are some exceptions, including:

Name | Possible values             | Description
cull | front, back, none           | Which side of the triangle is to be displayed in the game (usually front)
sort | # or name from table above  | Defines the draw order as explained in the Transparency section


Stages

As you can see, a stage starts with { and ends with }. You need to define where the color comes from, usually using map, followed by a texture path (the extension can be omitted), $whiteimage (for a plain white image) or $lightmap.

You can also set the BlendFunction and other stage-specific settings.

Blend Functions

The Blend Functions are specified using the blendFunc keyword:

blendFunc <SourceFactor> <DestinationFactor>

For example:

blendFunc GL_ONE GL_ONE
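To the best of my knowledge the engine also accepts shorthand names for the most common factor combinations, which you'll see in many existing shaders:

  blendFunc add    // same as GL_ONE GL_ONE
  blendFunc blend  // same as GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA
  blendFunc filter // same as GL_DST_COLOR GL_ZERO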


That should be all you need to know to be able to understand the Q3Map2 Shader Manual, which I am not going to rewrite here.

tutorials/shadersexplained.txt · Last modified: 2012/04/30 13:42 by mrwonko