Volumetric Shadows using Quake III meshes (2002)

Description

I discuss some of the significant features of this demo in terms of development and give a brief account of their inner workings. The implementation is done in C++ and OpenGL for 3D rendering.

Camera Movement

A camera class establishes the point of view in the scene, supporting free mouse look (camera orientation) and movement such as strafing, forward and back. This is effected via a 4×4 transformation matrix (Tr) member variable maintained by the camera class, which specifies how to transform the current world coordinate system with respect to eye coordinates. The non-technical way to say this is that this matrix defines the camera position and orientation. Tr is built up by combining the translation and rotation of the camera, represented by matrices T and R respectively, which are also class member variables.
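The arrangement above can be sketched roughly as follows. This is an illustrative reconstruction, not the demo's actual code: the matrix layout (column-major, as OpenGL expects) and the member names are assumptions.

```cpp
#include <cassert>
#include <cstring>

// Sketch of the camera class described above: a translation matrix T and a
// rotation matrix R are kept as members, and the view transform Tr is rebuilt
// by combining them. Matrices are column-major float[16], OpenGL style.
struct Camera {
    float T[16];  // translation of the camera
    float R[16];  // orientation (a special orthogonal matrix)
    float Tr[16]; // combined world-to-eye transform

    Camera() { identity(T); identity(R); identity(Tr); }

    static void identity(float* m) {
        std::memset(m, 0, 16 * sizeof(float));
        m[0] = m[5] = m[10] = m[15] = 1.0f;
    }

    // out = a * b for column-major 4x4 matrices.
    static void multiply(const float* a, const float* b, float* out) {
        for (int c = 0; c < 4; ++c)
            for (int r = 0; r < 4; ++r) {
                float s = 0.0f;
                for (int k = 0; k < 4; ++k) s += a[k * 4 + r] * b[c * 4 + k];
                out[c * 4 + r] = s;
            }
    }

    // Rebuild Tr from the current translation and rotation.
    void updateTr() { multiply(R, T, Tr); }
};
```

With R left at identity, Tr reduces to the pure translation T, which is a quick sanity check on the composition.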

Free mouse look is achieved by first reading changes to the X and Y mouse coordinates, which are scaled to represent angular changes in radians. Since R is an orthogonal matrix, its rows represent the coordinates, in the original space, of unit vectors along the coordinate axes of the rotated space. This means the 1st row of R represents the X axis of the rotated space. Thus an up or down look is achieved by setting up a rotation matrix about the X axis from the mouse data and multiplying R by it. Sideways look is handled differently, since the 2nd row of R does not necessarily represent the world's up vector. To get around this, the view vector, represented by the 3rd row of R, is extracted and rotated about the world up vector (0,1,0). A new R is then derived from the transformed view vector with the aid of the world up vector as a reference. R is then ready for use in calculating Tr.
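The sideways-look step can be sketched like this: rotate the view vector about the world up axis, then rebuild the rows of R from cross products with the world up vector as the reference. The function names and the row convention are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Rotate v about the world up vector (0,1,0) by `angle` radians.
Vec3 rotateAboutWorldUp(const Vec3& v, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}

// Derive the remaining rows of R (right and up) from the transformed view
// vector, using the world up vector as the reference.
void rebuildR(const Vec3& view, Vec3& right, Vec3& up) {
    const Vec3 worldUp = { 0.0f, 1.0f, 0.0f };
    right = normalize(cross(worldUp, view));
    up    = cross(view, right); // already unit length: view and right are orthonormal
}
```

Because right is re-derived from the world up vector every time, sideways look never accumulates roll, which is the point of this approach.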

The fact that R is a special orthogonal matrix also makes it convenient when effecting relative motion, such as moving in the direction of view or strafing (moving perpendicular to the direction of view). To move in the direction of view, the view vector is extracted from R and multiplied by a speed-of-movement constant, and the result is integrated, or added, into the translation matrix T. Only the x and z components of the vector are used, so as to effect motion in the XZ plane and eliminate any upward motion. Similarly, strafing uses the vector given by the 1st row of R, which is perpendicular to the view.
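A minimal sketch of this view-relative motion, with the translation represented by its x and z components (the helper names are hypothetical):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Move along the view vector, restricted to the XZ plane: the y component of
// the view vector is simply ignored, eliminating any upward motion.
void moveInViewDirection(const Vec3& view, float speed, float& tx, float& tz) {
    tx += view.x * speed;
    tz += view.z * speed;
}

// Strafe along the right vector (the 1st row of R), also in the XZ plane.
void strafe(const Vec3& right, float speed, float& tx, float& tz) {
    tx += right.x * speed;
    tz += right.z * speed;
}
```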

Scene Models

All the models in the scene, 3D Studio MAX and Quake3, are encapsulated in classes. For the purpose of this discussion a model will be defined as a collection of geometry objects grouped together, and an object as a collection of polygons to which only a single material or texture may be mapped. So a complete Quake3 model is not really a model in this sense but a collection of models (.md3), each of which could represent the head, upper or lower body of a character, or a weapon. This design by id Software was intended to allow the motion/animation of body parts and weapons to be independent of each other. For example, it is possible to tilt back the head and upper body of a character in order to shoot down an enemy at an elevated position while the legs are running.

Both 3DS and MD3 models are derived from a base class that holds object geometry and materials, along with information such as the number of objects and materials. The vertex and polygon normals for all models are calculated during loading, as they are necessary for casting shadows, which will be explained below.
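The load-time normal calculation can be sketched as follows (the types and function names are illustrative, not the demo's actual classes): each face normal is the normalized cross product of two edges, and each vertex normal averages the normals of the faces sharing that vertex.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Face { int a, b, c; }; // indices into the vertex array, counter-clockwise

Vec3 sub(const Vec3& p, const Vec3& q) { return { p.x - q.x, p.y - q.y, p.z - q.z }; }

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{ v.x / len, v.y / len, v.z / len } : v;
}

// Compute one normal per face and one (averaged) normal per vertex.
void computeNormals(const std::vector<Vec3>& verts, const std::vector<Face>& faces,
                    std::vector<Vec3>& faceNormals, std::vector<Vec3>& vertNormals) {
    faceNormals.resize(faces.size());
    vertNormals.assign(verts.size(), { 0.0f, 0.0f, 0.0f });
    for (size_t i = 0; i < faces.size(); ++i) {
        const Face& f = faces[i];
        Vec3 n = cross(sub(verts[f.b], verts[f.a]), sub(verts[f.c], verts[f.a]));
        faceNormals[i] = normalize(n);
        for (int idx : { f.a, f.b, f.c }) { // accumulate onto each vertex of the face
            vertNormals[idx].x += faceNormals[i].x;
            vertNormals[idx].y += faceNormals[i].y;
            vertNormals[idx].z += faceNormals[i].z;
        }
    }
    for (auto& n : vertNormals) n = normalize(n); // average by renormalizing
}
```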

Shadows

In order to enable models to cast shadows it is necessary to generate a face connectivity map, as well as calculate the plane equation of each face, for each model. This is done once during initialization and is required for silhouette extraction.
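This one-time setup might look like the following sketch (structure and names are assumptions): the plane equation ax + by + cz + d = 0 is stored per face, and the connectivity map records, for each edge of each face, which neighbouring face shares it, with -1 meaning the edge is unshared.

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <utility>
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { float a, b, c, d; }; // ax + by + cz + d = 0
struct Face {
    int v[3];         // vertex indices
    int neighbour[3]; // neighbour[e] shares edge v[e]-v[(e+1)%3]; -1 if none
};

// Plane through a face from one of its vertices and its (unit) normal.
Plane planeFromFace(const Vec3& p0, const Vec3& normal) {
    return { normal.x, normal.y, normal.z,
             -(normal.x * p0.x + normal.y * p0.y + normal.z * p0.z) };
}

// Build the connectivity map: each undirected edge is keyed by its sorted
// vertex pair; the second face to touch an edge links both faces together.
void buildConnectivity(std::vector<Face>& faces) {
    std::map<std::pair<int, int>, std::pair<int, int>> edgeOwner; // edge -> (face, slot)
    for (size_t f = 0; f < faces.size(); ++f) {
        for (int e = 0; e < 3; ++e) {
            faces[f].neighbour[e] = -1;
            int i = faces[f].v[e], j = faces[f].v[(e + 1) % 3];
            auto key = std::make_pair(std::min(i, j), std::max(i, j));
            auto it = edgeOwner.find(key);
            if (it == edgeOwner.end()) {
                edgeOwner[key] = { static_cast<int>(f), e };
            } else {
                faces[f].neighbour[e] = it->second.first;
                faces[it->second.first].neighbour[it->second.second] = static_cast<int>(f);
            }
        }
    }
}
```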

During each shadow rendering pass, each model class calculates the position of the light source relative to its own local coordinate system. This is given by multiplying the light vector by the inverse of the object's transformation matrix, and is necessary because the light source can change position. The faces of an object are then tested for visibility with respect to the light source, which requires the face normal and plane equation information. Shadow edges, or silhouette edges, are determined by two rules:

1. If a face is labelled visible and it neighbours a face that is not visible, then the edge they share is shadow casting.

2. An edge of a face is also classed as shadow casting if it is not shared with any other face.
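Given the connectivity map, the visibility test and the two rules above can be sketched like this (the Face layout and function names are illustrative):

```cpp
#include <cassert>
#include <utility>
#include <vector>

struct Face {
    int v[3];         // vertex indices
    int neighbour[3]; // neighbour[e] shares edge v[e]-v[(e+1)%3]; -1 if unshared
    bool visible;     // does this face point towards the light?
};

// A face is visible to a point light (in the model's local coordinates) when
// the light lies on the positive side of the face's plane ax+by+cz+d = 0.
bool facesLight(const float plane[4], const float light[3]) {
    return plane[0] * light[0] + plane[1] * light[1]
         + plane[2] * light[2] + plane[3] > 0.0f;
}

// Collect silhouette edges: an edge is shadow casting if its face is visible
// and the neighbouring face is hidden (rule 1) or missing entirely (rule 2).
std::vector<std::pair<int, int>> silhouetteEdges(const std::vector<Face>& faces) {
    std::vector<std::pair<int, int>> edges;
    for (const Face& f : faces) {
        if (!f.visible) continue;
        for (int e = 0; e < 3; ++e) {
            int n = f.neighbour[e];
            if (n < 0 || !faces[n].visible)
                edges.push_back({ f.v[e], f.v[(e + 1) % 3] });
        }
    }
    return edges;
}
```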

Once all the shadow casting edges have been evaluated, the next step is to construct the shadow volume, that is, the volume in space absent of illumination as a result of the model occluding the light source. The shadow volume is enclosed by planes extending from the shadow edges away from the light source. It is rendered into the stencil buffer of a graphics accelerator, and the result is used as a mask to render a shadow into the colour buffer.
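The extrusion step can be sketched as pure geometry before any rendering happens (names and the fixed extrusion distance are illustrative assumptions): each silhouette edge is extended away from the light to form one quad of the volume's sides, and it is these quads that get rasterized into the stencil buffer.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Push a point away from the light source by a large distance, along the
// direction from the light through the point.
Vec3 extrude(const Vec3& p, const Vec3& light, float dist) {
    Vec3 dir = { p.x - light.x, p.y - light.y, p.z - light.z };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    return { p.x + dir.x / len * dist,
             p.y + dir.y / len * dist,
             p.z + dir.z / len * dist };
}

// Build one side quad of the shadow volume from a silhouette edge (a, b):
// the original edge plus its extruded copy.
void volumeQuad(const Vec3& a, const Vec3& b, const Vec3& light,
                float dist, Vec3 quad[4]) {
    quad[0] = a;
    quad[1] = b;
    quad[2] = extrude(b, light, dist);
    quad[3] = extrude(a, light, dist);
}
```

In the stencil pass these quads would typically be drawn twice with colour and depth writes disabled, incrementing the stencil value on front faces and decrementing on back faces, leaving non-zero stencil exactly where the scene is in shadow.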