BRender Technical Reference Manual:3 Functions (Structured Reference):Scene Modelling
The Actor
Hierarchical Relationships
Positional Relationship
The Reference Actor
The Model Actor
The Light Actor
The Camera Actor
Other Actors
Co-ordinate Spaces
Converting between Co-ordinate Spaces

Scene Modelling

The Actor

See br_actor74

Scenes are described in terms of actors*1, typically used to place models, lights and cameras. Nothing can see or be seen in a scene without actors. Of course it is still possible to perform 2D operations, such as copying a backdrop into a screen pixel map, or plotting text, but BRender's 3D effects are all obtained via actors.

The actor is, intrinsically, just a means of orienting and positioning something with respect to something else. Thus a scene is organised around a system of actors. Admittedly, most actors are used to place models (visible shapes), but they can be used for a variety of other purposes, such as placing lights and cameras, and defining common frames of reference.

The function BrActorAllocate()89 is one of the many ways of creating an actor.

Hierarchical Relationships

See br_actor74{next,prev,children,parent,depth}

A scene is described in terms of a single actor; however, that actor can be augmented by any number of other actors, which can be similarly augmented in turn. This gives rise to the tree-like hierarchy of actors describing a scene. It is often described as though it were a family tree, with terms such as parent, child and sibling used.

The br_actor74 has various members defining its hierarchical relationship. The next and prev members describe a linked list of siblings, parent and children are self-evident and depth is equivalent to the generation of the actor.

The actor hierarchy is built up using functions such as BrActorAdd()80. Naturally, there will be some occasions when the structure must be modified, and functions BrActorRemove()81 and BrActorRelink()81 are provided for this purpose.
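The sibling and parent links described above can be sketched with a toy structure. The member names mirror those of br_actor, but this is illustrative C only, not BRender's implementation; toy_actor_add and toy_actor_remove are hypothetical analogues of BrActorAdd() and BrActorRemove().

```c
#include <stddef.h>

/* Toy sketch of the hierarchy members: children is the head of a
 * doubly linked sibling list threaded through next/prev. */
typedef struct toy_actor {
    struct toy_actor *next, *prev;   /* sibling list            */
    struct toy_actor *children;      /* head of child list      */
    struct toy_actor *parent;
    int depth;                       /* generation of the actor */
} toy_actor;

/* Analogue of BrActorAdd(): link child as the first child of parent. */
toy_actor *toy_actor_add(toy_actor *parent, toy_actor *child)
{
    child->parent = parent;
    child->depth = parent->depth + 1;
    child->prev = NULL;
    child->next = parent->children;
    if (parent->children)
        parent->children->prev = child;
    parent->children = child;
    return child;
}

/* Analogue of BrActorRemove(): unlink child from its parent. */
toy_actor *toy_actor_remove(toy_actor *child)
{
    if (child->prev)
        child->prev->next = child->next;
    else
        child->parent->children = child->next;
    if (child->next)
        child->next->prev = child->prev;
    child->parent = NULL;
    child->next = child->prev = NULL;
    return child;
}
```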

Positional Relationship

See br_actor74{t}, br_transform339

Descendants, in addition to their hierarchical relationship, also have an associated positional and orientational relationship with their parent. An actor's position and orientation are defined solely with respect to its parent, rather than relative to some absolute co-ordinate space. For some, this has drawbacks, but most applications benefit from the ease with which complex systems of inter-related models can be positioned. If an absolute frame of reference is required, then actors can all be made children of the `universe' parent.

When rendering, BRender applies a general affine transformation, starting at the tree's root, between each actor and each of its children. As the tree is traversed, the transformations are accumulated. This means that a simple modification to one actor's transform will affect the position and orientation of all its descendants. Note that the root of the hierarchy has no parent, and consequently its transform has no meaning.
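The accumulation of transforms down the tree can be illustrated with a translation-only sketch. Real BRender transforms are general affine; the types and names here are hypothetical.

```c
#include <stddef.h>

typedef struct { double x, y, z; } vec3;

/* Each node's translate is relative to its parent, as with actors. */
typedef struct node {
    vec3 translate;
    struct node *parent;
} node;

/* Accumulate transforms from the actor up to the root, giving the
 * actor's position in the root's co-ordinate space. */
vec3 world_position(const node *n)
{
    vec3 p = {0, 0, 0};
    for (; n != NULL; n = n->parent) {
        p.x += n->translate.x;
        p.y += n->translate.y;
        p.z += n->translate.z;
    }
    return p;
}
```

Modifying one node's translation moves every descendant, just as modifying one actor's transform does.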

The transformation matrix that can be used to transform co-ordinates in one actor's co-ordinate space into that of another can be obtained using the function BrActorToActorMatrix34()82.


Functional Relationship

See br_actor74{type,type_data,model,material,render_style}, br_model228, br_material151

While actors should be thought of as elements in a structure of relative frames of reference, they can also be set to perform specific functions. Usually this is to act as a place holder for a model (scenery or props, say).

There are three primary types of actors: models, lights and cameras. Models are visible 3D shapes defined in terms of geometry (vertices and faces) and surface (colour and materials). Lights are invisible actors that cause lighting effects upon models' surfaces. Cameras are invisible actors that define a particular viewpoint and perspective from which to view a scene.

There are various other functions an actor can provide, typically for assisting BRender by reducing unnecessary rendering.

The smallest hierarchy that will produce an actual rendered scene consists of four actors, namely a root, a model, a camera and a light source. The root actor is patently at the root of the hierarchy, and the model, camera and light actors are its children.

The Reference Actor

See br_actor74{type}

This actor type, specified by the symbol BR_ACTOR_NONE, is typically used to assist the layout and organisation of a hierarchy. Although every actor defines its own frame of reference (co-ordinate space), for this actor, that is all it does.

In spite of its lack of specialisation, it still serves a useful purpose. For instance, it may be convenient to represent a flock of birds with each bird positioned relative to some notional position of the flock, and then simply move the flock as a whole*2. The position of the flock could be defined by a reference actor, and each bird by a child model actor.

In general this type of actor is very useful when direct and independent control is required of each stage in a complex transform that needs to be applied to a model or system of models. Rather than recalculate this transform each time, each stage (typically very few) can be represented by a separate actor. For instance a reference actor could represent a translation and rotation, while the child model actor could be solely concerned with scaling the model.

Invariably, the root of a hierarchy will be a reference actor.

The Model Actor

See br_actor74{type,type_data,model,material,render_style}, br_model228, br_material151

The model actor is what it's all about; the linchpin of all 3D rendering - no scene should be without one.

As stated earlier, models are visible 3D shapes defined in terms of geometry (vertices and faces) and surface (colour and materials). A model actor is primarily defined by its model member, but also has the material member to define a default material to be used for parts of the model's surface that don't specify a material. The render_style member can also affect things by causing the model to be rendered in different ways. Some of these can be useful for things such as selection highlights, or simple but rapid rendering, e.g. wireframe style.

A feature of model actors is that they can inherit properties such as model, material and rendering style from their ancestors. Thus a flock of birds could consist of the same model, but have varying materials. If the model were changed, then all birds would change. Note that the model, material and render_style members are still effective for defining inheritable model actor properties in other actors (even those of type BR_ACTOR_NONE).
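Inheritance can be pictured as a walk up the ancestor chain until a member is found to be set. This toy version (not BRender's resolution code; the types and the use of strings for models are illustrative) shows the flock example, where a NULL member means `inherit'.

```c
#include <stddef.h>

/* Toy actor with inheritable members: NULL means inherit from an
 * ancestor, mirroring how model/material/render_style behave. */
typedef struct inh_actor {
    const char *model;               /* NULL = inherit */
    const char *material;            /* NULL = inherit */
    struct inh_actor *parent;
} inh_actor;

/* Walk up the hierarchy to find the effective model. */
const char *effective_model(const inh_actor *a)
{
    for (; a != NULL; a = a->parent)
        if (a->model != NULL)
            return a->model;
    return NULL;
}
```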

Create a model actor using BrActorAllocate(BR_ACTOR_MODEL,Null)89, then assign a model to the actor's model member.

The Model

See br_model228{vertices,faces}, br_vertex367, br_face122

The model is defined by a list of vertices and faces. Each vertex defines a vector from the model origin to a corner of a face, typically shared by two or more faces. Each face, representing a part of the surface of a model over which a material is rendered, is defined in terms of a series of vertices. Thus a cube can consist of eight vertices and six pairs of co-planar triangular faces, with each face specifying three vertices.
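The cube example can be written out directly. The particular vertex positions, face windings and face groupings below are one illustrative choice, not data taken from BRender.

```c
/* The cube described above: eight vertices and six pairs of co-planar
 * triangular faces (twelve triangles), each face listing three vertex
 * indices into the vertex list. */
static const int cube_vertices[8][3] = {
    {-1,-1,-1}, {1,-1,-1}, {1,1,-1}, {-1,1,-1},
    {-1,-1, 1}, {1,-1, 1}, {1,1, 1}, {-1,1, 1}
};

static const int cube_faces[12][3] = {
    {0,2,1}, {0,3,2},   /* back   */
    {4,5,6}, {4,6,7},   /* front  */
    {0,1,5}, {0,5,4},   /* bottom */
    {3,6,2}, {3,7,6},   /* top    */
    {0,7,3}, {0,4,7},   /* left   */
    {1,2,6}, {1,6,5}    /* right  */
};

/* Every face index must refer to one of the eight vertices. */
int cube_faces_valid(void)
{
    int f, i;
    for (f = 0; f < 12; f++)
        for (i = 0; i < 3; i++)
            if (cube_faces[f][i] < 0 || cube_faces[f][i] > 7)
                return 0;
    return 1;
}
```

A closed mesh like this also satisfies Euler's formula: 8 vertices − 18 edges + 12 faces = 2.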

The model's geometry can be continuously modified, enabling powerful effects such as deformation and morphing. Because BRender may optimise a model's geometry, if you need to make modifications you will generally need to retain the vertex and face data as originally specified; there are flags to control this, such as BR_MODF_KEEP_ORIGINAL.

BRender maintains some useful information about models, such as their bounding radii and bounding boxes (see radius and bounds of br_model228).

Create a model using BrModelAllocate()239, ensuring that the model is added to the registry (BrModelAdd()235) before the actor hierarchy is rendered.

The Vertex

See br_vertex367{p}, br_vector3356

The vertex defines a point within a model's co-ordinate space that may be referred to by faces. If this point is changed, faces referring to it will also change. The number of vertices in a model can be specified at the time the model is allocated.

The Face

See br_face122{vertices}, br_model228, br_vertex367

The face defines itself in terms of a polygon, typically a triangle, with vertices specified by indices into the vertices list within the model. The order in which they are specified is important; it specifies the perimeter of the visible side of a face, going anti-clockwise around it. It doesn't matter which vertex is specified first.
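In 2D, the anti-clockwise rule corresponds to a positive signed area, which also shows why the starting vertex is irrelevant. This is a self-contained sketch, not BRender code.

```c
/* Signed area of a 2D triangle: positive when its vertices run
 * anti-clockwise (the visible side, in the convention above),
 * negative when clockwise. */
double signed_area(const double a[2], const double b[2], const double c[2])
{
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                - (c[0] - a[0]) * (b[1] - a[1]));
}
```

Cycling the vertex order leaves the area unchanged; reversing it flips the sign.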

The face also defines the way in which its surface should be rendered, by specifying a material.

The number of faces in a model can be specified at the time the model is allocated.

The Material

See br_material151{flags,ka,kd,ks,colour}, br_colour111

Materials define exactly how faces should be rendered. This can be anything from a single colour value to a lit environment map. It all depends upon how realistic the surface is required to be, how much it should be influenced by lighting conditions, and the complexity of the colouring.

Materials are so called because it is expected that most models represent physical materials such as wood, granite, wall-paper, etc. Predictably, the more realistic the material is required to be, the greater the processing overhead is likely to be. Nearly all the parameters of BRender materials are concerned with making a compromise between quality and performance. The simplest material is defined simply in terms of a colour value, and is not subject to any lights in the scene; this gives cartoon-like materials. Lighting effects can be added by specifying that the material is lit and smooth shaded; this is suitable for materials such as plastic and painted walls. Textured materials such as wood and marble can be represented using texture maps, and can also be affected by lighting.

Naturally, the more processing that can be done beforehand, the less that needs to be performed during rendering. For this reason BRender provides the ability to specify prelit materials. This feature can be utilised in situations such as outdoor scenes, where the sun moves relatively slowly and thus need not be involved in frame by frame lighting calculations. Of course, things frequently changing orientation in the scene will still need to be lit normally.


Lighting

A model's material can be affected by lights in a scene. The material's colours will effectively appear dimmer or brighter according to how well they are lit. This will depend upon the surface's orientation with respect to the viewer and each light in the scene.

BRender uses the Phong lighting model. The following formula shows how the lighting l of a face depends upon θ, the angle at the face between the light source and the face normal, and φ, the angle at the face between the viewer and the reflected light ray:

	l = ka + kd·cos θ + ks·(cos φ)^power

The ambient factor is the amount of light assumed to be reflected from other objects and lighting in general. Zero can produce a material whose illumination is highly dependent upon light sources, whereas higher values can give even fluorescent or luminous effects. A typical sunny scene might have most materials with a significant ambient contribution, whereas a dusk scene might have a much lower one, and a moonlit one probably zero.

The diffuse factor determines how much of the reflected light is made up of the component dependent upon the angle of the face to the direction of the light illuminating it. The closer the face comes to being perpendicular to the light source, the more light the face receives, and thus the more diffuse light that can be reflected. Zero can give a shiny surface, whereas higher values can give surfaces a more matt appearance.

The specular factor determines how much reflected light is made up of the component dependent upon the angle between the reflected light source and the direction of the viewer (naturally, if the angle is zero, the component will be at its maximum). The greater the value, the more prominent highlights will be. There is also the power of the cosine; the greater this is, the sharper any highlights will be.


The Colour

See br_colour111

BRender has an integral type dedicated to the task of completely defining a particular colour. It is used directly when specifying true colours, which is taken to mean any non-indexed colour (one not utilising a palette). Colour can be taken to mean the colour of a screen pixel, the colour of light a surface reflects, or the colour of a light source within a scene.

The colour structure is currently 32 bits, made up of three (or four if you include an alpha component) bytes. You can construct a colour using the BR_COLOUR_RGB() macro.

The Shade Table

See br_material151{flags,index_base,index_range,index_table,colour_map}, br_pixelmap272

For performance reasons (with the added benefit of lower memory requirements), textures, rendering or both can be performed using colour indices instead of colour values. Each colour index is converted into a colour value using a colour look-up table (CLUT), often called a palette. When textures are made up of indices, there is no straightforward way of lighting them. The brute-force way would be to look up each texture index in the colour table, apply lighting to it to see what shade it should be, and then search through the colour table for the index of the colour value most closely matching this shade (which may not be very close at all). Because there isn't any processing power to waste, BRender implements a scheme using a shade table, which, for a given colour index and proportion of light, gives a colour index corresponding to the shaded colour of the original index.

Being two dimensional and storing pixel values, the shade table is quite suitably represented in the form of a pixel map. It has as many columns as there are colour indices and as many rows as there are distinct shades (from unlit to fully lit).
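A shade-table lookup then amounts to a two-dimensional index: one column per colour index, one row per shade level. The row-major layout below is an assumption for illustration; a real table comes from a br_pixelmap.

```c
/* Toy shade-table lookup. `table' holds rows x columns entries,
 * row-major: row 0 is unlit, the last row fully lit. The result is
 * the palette index of the shaded colour. */
unsigned char shade_lookup(const unsigned char *table,
                           int columns,      /* number of colour indices */
                           int shade_row,    /* 0 = unlit, rows-1 = fully lit */
                           int colour_index)
{
    return table[shade_row * columns + colour_index];
}
```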

A shade table is created as a pixel map, commonly of type BR_PMT_INDEX_8 when rendering in 256 colour modes, with 256 columns (2^8 for textures of type BR_PMT_INDEX_8) and 64 rows. Use BrPixelmapAllocate(BR_PMT_INDEX_8,256,64,Null)285, not forgetting to call BrTableAdd()283 before it is used in rendering.

Tools are generally available to take the effort out of making shade tables.

The Texture Map

See br_material151{flags,colour_map,map_transform}, br_pixelmap272, br_matrix23171

For textured materials, there is the additional problem of how a model's surface should be covered. This is a bit like the reverse of laying a map of the world on a flat sheet of paper. Similar, but while textures are effectively flat, models are hardly ever spherical (or even ellipsoid). When it comes to wrapping textures around complex models, tools are essential.

When it comes to library facilities, BRender provides a general mapping transform (how the flat texture should be warped to cover each face) and texture co-ordinates at each vertex (defining the coverage of the infinitely tiled texture plane by each face). There is also an option as to whether textures should be perspectively correct or not (a compromise between performance and warping). A feature of texture maps in BRender is that the texture map transform can be continuously modified, thus providing animated surfaces at an insignificant performance overhead.

A texture can even be produced from a scene rendering. This can provide effects such as television screens and rear view mirrors. It can be extended further in combination with the environment map (a first reflection ray trace) to provide mirrored surface effects. Remember though, that fast rendering is not only a product of efficient rendering algorithms, but also of the application programmer's skill at reducing its workload. For instance, a wall mirror could have the rest of the room's reflection precomputed into a texture map to be used as an environment map. If passers-by should also be reflected, then all that's required is a rendering of just the passer-by over this map (from the mirror's point of view).

A reflective effect is often a good substitute for a reflection. Christmas tree baubles for instance can be environment mapped without really needing to reflect any movement by anything else.

A texture map is created as a pixel map, usually with dimensions that are powers of two. If an indexed texture map is to be lit, remember to supply an appropriate shade table as well. To create a texture map, use BrPixelmapAllocate()285 followed by BrMapAdd()281 before it is used by a rendered material.

The Light Actor

See br_actor74{type,type_data}, br_light145, br_material151

While models are the only visible elements of a scene, as in reality, they require light in order to be seen. For this reason, light actors may be used to position and orient lights within a scene. Lights are not visible, even if looked at directly. Of course, a brightly prelit model can be made a child of the light - thus making a fairly realistic spotlight model. Note though, that models are transparent to lights (without computationally intensive shadow processing, anyway).

If light actors are not used, there are only two ways of making surfaces visible: pre-lit textures and ambient lighting. For pre-lit textures, pre-computed lighting levels at each vertex are stored in the vertex data structure and the BR_MATF_PRELIT flag is specified with the material data structure. This is fine for relatively stationary models (such as buildings) that are primarily lit by relatively stationary light sources (such as the sun). Ambient lighting alone is really only appropriate for models that are moderately and evenly lit from all directions, but where this lighting level can change, e.g. ceilings in well-lit rooms with `dimmer' light controls, or outdoor scenes with ambient lighting affected by cloud cover or nightfall.

Like any other actor, a light actor's position and orientation can be fully controlled. This greatly facilitates situations requiring moving lights, such as roving spot lights, sun-rises and sunsets, headlights moving with a car, etc.

Light actors can be created using BrActorAllocate(BR_ACTOR_LIGHT,Null)89. A light specification will automatically be allocated and pointed to by the type_data member of the actor. Of course, an instance of a br_light145 structure can be conventionally created and supplied as the second argument instead of Null. Note that light actors will need to be enabled for their lighting to affect the scene (see BrLightEnable()85).

The Light Specification

See br_light145{type,colour}, br_colour111

Lights come in various forms. The simplest light is a direct light, which has the effect of a light a long way off, such as the sun or moon, or in some cases a flood-light. It certainly won't be very good for effects such as a candle lit scene. That sort of thing is a job for the point light, which can also be used for things such as light bulbs, fires, street lights, etc. For more precise control of light, the spot light may be used. That can be used, predictably enough, for such things as spot lights, search lights, torches, head lights, light-houses (where the beam runs along a cliff face say), etc.

Point and spot lights can also be controlled in terms of how rapidly their light diminishes over distance (attenuates). This is useful for candles and head-lights, i.e. a candle doesn't light up much more than the immediate vicinity, and objects in the path of head lights get brighter as they get nearer.
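Attenuation is commonly modelled as a reciprocal polynomial in distance, 1/(c + l·d + q·d²), with constant, linear and quadratic coefficients. Whether BRender uses exactly this form is not stated here; the sketch simply illustrates light diminishing over distance.

```c
/* A common distance-attenuation model for point and spot lights:
 * c, l and q are the constant, linear and quadratic coefficients,
 * d is the distance from the light to the surface. */
double attenuate(double c, double l, double q, double d)
{
    return 1.0 / (c + l * d + q * d * d);
}
```

A candle would use large l or q (light confined to the vicinity); a distant flood-light would be nearly constant.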

The spread of spot lights can also be controlled in terms of an inner, fully lit cone and an outer, penumbral cone, between which the light tails off. Note that spot lights are really conically limited point lights, rather than diverging (or converging) from a sized disc. This difference will need to be appreciated when it comes to implementing cylindrical lights such as lasers and search lights; the laser is probably better implemented using the BrScenePick3D()82 function, locating the face and point of illumination and then directly modifying the screen pixel (found using BrActorToScreenMatrix4()83).

Light specifications can be created automatically at the time the light actor is created, or they can be created conventionally.

The Camera Actor

See br_actor74{type,type_data}, br_camera107

There is a third ingredient to a scene: given a model to see and light to see it by, an eye is needed to see it with. Given our familiarity with photography and television, it is more useful to think of this eye as a camera that relays its image to our computer's monitor or TV screen. Then we have no problem with the possibility of having more than one camera in the scene at the same time. Each camera defines how a 2D image is produced. Each image is invariably stored in a pixel map of particular dimensions. These images are then either incorporated into the 3D scene or another 2D image. Eventually, a 2D image will be produced for display on all or part of the screen.

The camera actor, like any other actor, can be continuously positioned and oriented. This allows things such as a driver's view round a race track, a bird's eye view, a fly-on-the-wall view, and any other view you can think of. Cameras are also useful for creating reflections and mirror views, TV screens, and crude shadow and lensing effects.

Camera actors can be created using BrActorAllocate(BR_ACTOR_CAMERA,Null)89. A camera specification will automatically be allocated and pointed to by the type_data member of the actor. Of course, an instance of a br_camera107 structure can be conventionally created and supplied as the second argument instead of Null.

The Camera Specification

See br_camera107{type,field_of_view,aspect}

There are two basic types of camera: the parallel camera and the normal, perspective one. The parallel camera produces an isometric image, i.e. faces are not scaled in the image according to their distance from the viewer. The perspective camera, on the other hand, according to its field of view, can range from `almost parallel' to a fish-eye lens effect (180° field of view). Somewhere in between we get a conventional view. If you wished to consider the screen as a window onto a virtual world, the field of view would be the angle subtended at the eye by the top and bottom of the screen. Of course, it depends how far away you are, but it can typically be between 10° and 40°. Given frequent exposure to the cinema and television screen, most people are quite happy with the larger end of the scale.

Note that the aspect of the camera must be specified to ensure that what is square in a scene remains square when it is displayed on the screen. Calculating the aspect boils down to simply measuring the physical width of the output image and dividing it by the physical height. In the process of producing homogeneous screen co-ordinates, which are mapped to the full width and height of the output pixel map, the original x axis is scaled down by the camera aspect (see BrMatrix4Perspective()225). The formula to compute the aspect is thus:

	aspect = (image width in pixels × physical pixel width) ÷ (image height in pixels × physical pixel height)

So if you have a screen measuring 8" by 6", which has a horizontal resolution of 320 and a vertical resolution of 240, and you are rendering to an image 120 pixels across by 100 pixels high, then the aspect is obtained as follows:

	aspect = (120 × 8/320) ÷ (100 × 6/240) = 3.0 ÷ 2.5 = 1.2

You'll notice that if a screen has square pixels (as in the above example), the camera aspect simply needs to be the width of the output image divided by its height. The reason why BRender doesn't perform this calculation itself and have `aspect' be the ratio between the sides of a physical pixel is that the image pixel map is assumed to be the same shape as the screen - how many pixels are on each edge is, in BRender's view, simply a matter of resolution.
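The worked example above can be checked in code. This is a sketch; the function name is ours, not a BRender call.

```c
/* Camera aspect = physical width of the output image over its
 * physical height. Physical pixel size is screen size / resolution. */
double camera_aspect(double screen_w_in, double screen_h_in,
                     int screen_res_x, int screen_res_y,
                     int image_w_px, int image_h_px)
{
    double phys_w = image_w_px * (screen_w_in / screen_res_x);
    double phys_h = image_h_px * (screen_h_in / screen_res_y);
    return phys_w / phys_h;
}
```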

Most camera views need to be in perspective, but there are a few good uses for parallel views other than for viewing CAD models: projecting sun shadows onto walls, creating height fields in depth buffers (camera looking down), creating aerial maps, telescopic views, etc. Parallel views can be sized using the width and height members (not forgetting the influence of aspect).

Remember, the camera does not have to be reserved for the player's view.

Camera specifications can be created automatically at the time the camera actor is created, or they can be created conventionally.

Other Actors

See br_actor74{type,type_data}

There are three other functions actors may perform. These are primarily concerned with assisting BRender in reducing unnecessary processing. The bounds actors are ways of saving BRender from performing some of its usual `on screen' or `off screen' checks for models. The clip-plane actor is a way of selecting out model actors that, although in the view volume, are not required in the image.

These other actors are created in a similar fashion to other actors, e.g. using BrActorAllocate(BR_ACTOR_BOUNDS,Null)89. A bounds or clip-plane specification will automatically be allocated and pointed to by the type_data member of the actor. Of course, an instance of the specification structure can be conventionally created and supplied as the second argument instead of Null. Note that clip-plane actors will need to be enabled before they affect the scene (see BrClipPlaneEnable()86).

The Bounds & Bounds Correct Actors

See br_actor74{type,type_data}, br_bounds105{}, br_vector3356

Bounds actors are a way for the application to assist the renderer by removing the necessity for many on/off screen*3 checks. Bounds actors only affect rendering in terms of enabling or disabling rendering of descendant model actors; they are not a visible part of the scene*4. It is up to you to calculate the co-ordinates of the bounding box; BRender does not do this automatically, although you could possibly exploit the model's bounding box that it does calculate (upon BrModelUpdate()237). As long as you ensure that the co-ordinates of each model's bounding box are transformed into the bounds actor's co-ordinate space, they can all be accumulated as appropriate.
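Accumulating boxes `as appropriate' is a per-axis min/max merge of each model's (already transformed) bounding box. The types here are hypothetical sketches, not br_bounds itself.

```c
/* An axis-aligned bounding box: per-axis minimum and maximum. */
typedef struct { double min[3], max[3]; } bounds3;

/* Grow the accumulated box `acc' so that it also encloses `b'.
 * Both boxes must already be in the bounds actor's co-ordinate space. */
void bounds_merge(bounds3 *acc, const bounds3 *b)
{
    int i;
    for (i = 0; i < 3; i++) {
        if (b->min[i] < acc->min[i]) acc->min[i] = b->min[i];
        if (b->max[i] > acc->max[i]) acc->max[i] = b->max[i];
    }
}
```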

You should use bounds actors for complex systems of models that have relatively compact configurations, or systems of models containing models that do not need to be drawn if a certain primary model is not visible, e.g. selection controls. The difference between BR_ACTOR_BOUNDS and BR_ACTOR_BOUNDS_CORRECT is that the latter represents a guarantee to the renderer that no surface of a descendant model projects outside the bounding box. If there is any chance that a descendant actor's model may protrude, use BR_ACTOR_BOUNDS, as the undefined behaviour resulting from the use of BR_ACTOR_BOUNDS_CORRECT includes anything, not necessarily just corruption of the screen or aborting.

Note that in cases where, in spite of using bounding box actors, the renderer knows neither that a model is entirely off screen nor that it is entirely on screen, the model's bounding box is used to determine whether the model should be rendered (or its call-back called). If the model is partially or wholly on screen, each face is clipped against the viewing volume and enabled clip-plane actors.

The Clip-Plane Actor

See br_actor74{type,type_data}, br_vector4365

Clip-plane actors in single-pass rendering are really only good for simple effects such as crude cross-sectioning and selective lighting. For an example of selective lighting, consider a room with a single spot-light being the only form of illumination. It is sometimes quicker (with a lot of objects in the room) to define a conical region with three clip-planes than to rely solely on the attenuation of the spot-light to select out unlit faces. Thus only objects entering the spot-light pyramid (and the viewing volume) will be visible and lit by the spot-light.

Clip-planes really come into their own when used in multi-pass rendering. Shadows (as opposed to silhouettes) can be implemented by rendering one sectioned part of a scene with a direct light, and the other unlit (using only the ambient component), e.g. similar to the spot-light example above, a window frame could be used to define a fully lit pyramid using four clip-planes. The fully lit rendering has the planes facing inward; the shade rendering has the planes facing outward. Another lighting effect is a sunset (setting over a flat horizon such as the sea) where the tops of trees, buildings and hills are fully lit, and one or more lower sections have deeper hued and darker lights. A more obscure trick would be to use a clip plane to define the surface of a pool and render the pool and its contents slightly differently, possibly even using a different camera position to produce a refraction-like effect (though this would require rendering the pool image to an intermediate texture map).

Clip planes are not recommended as a way of pruning large actor hierarchies*5. Clip planes are intended for their clipping effects, not their pruning effects.

A clip plane is defined by a four-vector. This is made up of a three-vector, being the unit normal to the plane, and the offset of the plane from the origin (in the direction of the normal). The equation of the plane is given as the dot product of this four-vector and the homogeneous co-ordinates of a point on the plane, being equal to zero. Thus:

	np · Pp = 0

where np is the four-vector defining the plane in terms of a unit normal and offset, and Pp is a point on the plane (in homogeneous co-ordinates).

Descendants' faces are clipped against the plane, with the side defined by the normal being `in scene'.
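Evaluating the dot product of the plane four-vector with a homogeneous point classifies the point against the plane. The sign convention for the stored offset in the fourth component is an assumption here; the sketch is illustrative, not BRender's clipping code.

```c
/* Evaluate dot((nx,ny,nz,d), (x,y,z,1)). With a unit normal, the
 * result is the signed distance from the plane: positive on the
 * normal's side (`in scene'), negative on the clipped side, zero
 * exactly on the plane. */
double plane_side(const double plane[4], const double p[3])
{
    return plane[0] * p[0] + plane[1] * p[1] + plane[2] * p[2] + plane[3];
}
```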

Note that clip-plane actors will need to be enabled before they affect the scene (see BrClipPlaneEnable()86).

Co-ordinate Spaces

While this manual requires an understanding of 3D graphics, so much confusion arises out of the variety of co-ordinate spaces that a discussion is worthwhile, even in this document. BRender itself only ever really deals with three co-ordinate spaces: those of the model, the view, and the projected screen. Nevertheless, it is often useful for the 3D applications developer to have other co-ordinate spaces in mind.

During rendering, every model's co-ordinate space is transformed through accumulation of actor transforms into the view space, and thence to projected screen space: primitive vertex data, pixel and depth co-ordinates.

Of particular note to those beginning 3D graphics: BRender has no concept of a world co-ordinate space (an absolute frame of reference).

Model co-ordinate space

A geometric model consisting of vertices, and faces between them, has its own local right-handed co-ordinate system. There is an implicit origin at (0,0,0)*6. An untransformed model, as seen by an unrotated camera (translated along its positive z axis so it faces the model), will have its positive axes pointing as follows: x to the right, y upwards, and the z pointing toward the viewer. A unit of 1 in the model will remain a unit of 1 in the view space unless any intervening actor transform involves a scaling.

Actor co-ordinate space

The actor co-ordinate space is shared by its model (if it has one), but is relative to its parent actor's co-ordinate space through the use of a transform. The actor transform is defined as the transform which must be applied to points in the actor's co-ordinate space (of its model's vertices, say) for them to represent the same points in its parent actor's co-ordinate space.
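As a sketch, applying a 3×4 affine transform (a rotation/scale part plus a translation column) takes a point from an actor's co-ordinate space into its parent's. BRender's own matrix layout (br_matrix34) may differ; the types here are illustrative.

```c
/* A 3x4 affine transform: columns 0..2 rotate/scale, column 3
 * translates. */
typedef struct { double m[3][4]; } affine34;

/* Transform a point from the actor's space into its parent's. */
void to_parent(const affine34 *t, const double p[3], double out[3])
{
    int r;
    for (r = 0; r < 3; r++)
        out[r] = t->m[r][0] * p[0] + t->m[r][1] * p[1]
               + t->m[r][2] * p[2] + t->m[r][3];  /* column 3 = translation */
}
```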

World co-ordinate space

Whether there is any notion of an absolute frame of reference, or co-ordinate space, is entirely up to the application. It may be appealing to think of a root actor as defining a world co-ordinate space, but this is entirely arbitrary, as BRender treats the root actor like any other. The root actor is so called, because it represents the immediate parent of each actor supplied for rendering (e.g. using BrZbSceneRenderAdd()36), and the ancestor of the camera used for rendering. Its co-ordinate space is certainly not special. Of course, the root actor's transform is redundant as the co-ordinate space of its parent (if it has one) is never used (for a given rendering).

Camera co-ordinate space

The only actor whose co-ordinate space might be regarded as special is that of the camera actor used for a particular rendering. It is into this actor's co-ordinate space that every model is transformed*7 (during rendering). See br_model_custom_cbfn247 for details of transformation matrices and functions that can be used to convert model co-ordinates into view space, or screen space. The use of the term `view space' is preferred to `camera space'.

View space

View space is effectively the camera co-ordinate space, the co-ordinate system in which the view volume is defined. The view volume is the section of the pyramid*8 defined by the camera's field of view from hither_z to yon_z, and aspect ratio, aspect. It is further defined by the origin of the output pixel map, origin. Note that the sides of the view volume correspond with the sides of the pixel map irrespective of its own aspect, therefore the camera's aspect is the only means of ensuring a correct aspect ratio is maintained.

Homogenous screen space

An intermediate phase in the transformation between view space and projected screen space is that of homogenous screen space. This is the viewing volume transformed into a cube, still with a right-handed co-ordinate system, defined between (left, bottom, near) (-1,-1,+1) and (+1,+1,-1).

Projected screen space

Projected screen space is the homogenous screen space transformed into co-ordinates suitable for rendering to the screen. That is, the x and y limits will correspond to the bounds of the pixel map, and the z limits will be mapped to the range [-32,768,+~32,767.9]. There are various functions dealing with such values, e.g. BrZbScreenZToDepth()31, BrOriginToScreenXYZO()252.

The projected screen space is still a right-handed co-ordinate space, but the positive axes now point as follows: x right, y down, and z away from the viewer.

When `screen space' is referred to, i.e. without any qualifier, it should be assumed that `projected screen space' is intended.

Physical screen space

There isn't really a physical screen space, but it can be thought of as the projected screen co-ordinates converted into values used directly by the rendering engines, i.e. x & y converted to integer pixel map co-ordinates, and z converted to z buffer depth or z sort depth.

Converting between Co-ordinate Spaces

It is often necessary to convert from 3D space to 2D screen coordinates and vice versa. There are various functions that can assist in this. BrMatrix4Perspective()225 will produce the matrix transformation that transforms view space (of a notional camera actor) into homogenous screen space (assuming a perspective projection). It is often more convenient to have a transform between an actor's co-ordinate space and the homogenous screen space, and BrActorToScreenMatrix4()83 is provided for this purpose.

There are more extensive screen-oriented functions available for use within custom model rendering call-backs (see br_model_custom_cbfn247). These are, basically, functions to convert model co-ordinates into screen co-ordinates (BrPointToScreenXY()251), and a function to determine whether a model is on screen (BrOnScreenCheck()249).

The homogenous screen co-ordinate space is a cuboid defined between (left, bottom, near) (-1,-1,+1) and (+1,+1,-1). The projected screen co-ordinate space is defined such that the scalar 2D x and y co-ordinates range across the output pixel map, i.e. between (left, top) (-origin_x,-origin_y) and (width-1-origin_x,height-1-origin_y), and the z ordinate lies in the range (near to far) [-32,768,+~32,767.9].
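The x and y part of that mapping can be sketched as a linear stretch of the homogenous range [-1,+1] across the pixel map, with the y axis flipped (the endpoints follow the ranges quoted above; the linear interpolation itself is an assumption of this sketch):

```c
#include <assert.h>

/* Map homogenous screen x,y (each in [-1,+1], y up) to projected screen
 * co-ordinates relative to the pixel map origin (y down). The endpoints
 * match the ranges quoted in the text; this is an illustrative sketch,
 * not BRender's literal expression. */
static void hs_to_projected(float hx, float hy,
                            int width, int height,
                            int origin_x, int origin_y,
                            float *sx, float *sy)
{
    *sx = (hx + 1.0f) * 0.5f * (float)(width  - 1) - (float)origin_x;
    *sy = (1.0f - hy) * 0.5f * (float)(height - 1) - (float)origin_y;
}
```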

There is an inverse relationship between z values in the view co-ordinate space and projected screen space z values. The corresponding projected screen z ordinate z_screen is the expansion of applying the transform obtained from BrMatrix4Perspective()225 to a view z ordinate z_view, and then multiplying the result by 2^15.
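That expansion can be sketched in code. The signs below are derived from the behaviour described here (view-space z negative in front of the camera, positive hither_z/yon_z plane distances, near plane mapping to -32,768 and far plane to +~32,767.9) and should be treated as an assumption rather than BRender's literal expression:

```c
#include <assert.h>
#include <math.h>

/* Illustrative view z -> projected screen z conversion. Assumes view-space
 * z is negative in front of the camera, hither_z/yon_z are positive plane
 * distances, and the near plane maps to -32768 with the far plane at
 * +32768 (clamped in practice to ~32767.9 by the representation). */
static float view_z_to_screen_z(float zv, float hither_z, float yon_z)
{
    return 32768.0f *
        ((yon_z + hither_z) * zv + 2.0f * yon_z * hither_z) /
        ((yon_z - hither_z) * zv);
}
```

Note the inverse relationship: z_screen varies with 1/z_view, so screen z resolution is concentrated near the hither plane.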

Note that z buffer and z sort depth values are not necessarily the same as projected screen space z ordinates. Functions are available to convert between depths, projected screen space and camera co-ordinate space.


Convert screen z [-32,768,+~32,767.9] to view z

br_scalar BrScreenZToCamera(const br_actor* camera, br_scalar sz)

const br_actor* camera

Pointer to camera actor.

br_scalar sz

Screen z value, e.g. as returned by BrOriginToScreenXYZO()252.


Corresponding z value in the camera actor's co-ordinate space (view space).


Convert a point in screen space to a point in a camera actor's co-ordinate space (view space) (compare with BrPointToScreenXYZO()252).

void BrScreenXYZToCamera(br_vector3* point, const br_actor* camera, const br_pixelmap* screen_buffer, br_int_16 x, br_int_16 y, br_scalar zs)

br_vector3 * point

A non-Null pointer to the vector to hold the converted point in camera space.

const br_actor * camera

A non-Null pointer to the camera actor into whose co-ordinate space the point is to be converted.

const br_pixelmap * screen_buffer

A non-Null pointer to the screen buffer to which the x & y coordinates apply.

br_int_16 x

X co-ordinate of pixel.

br_int_16 y

Y co-ordinate of pixel.

br_scalar zs

Screen z co-ordinate.

Between BrBegin()10 & BrEnd()11. Between BrZbBegin()28 & BrZbEnd()40.

Computes the x & y co-ordinates in screen space and, together with the given z co-ordinate, applies the inverse projection transform, storing the resulting vector at point.

	br_vector3 p;
	br_uint_32 depth;
	br_scalar sz;

	/* ... obtain depth and a screen z value sz for the pixel at (x, y) ... */
	BrScreenXYZToCamera(&p, camera, screen_buffer, x, y, sz);
See Also:
BrOriginToScreenXYZO()252, BrPointToScreenXYZO()252, BrMatrix4Perspective()225.
