An introduction to vertex and pixel shaders

 

This section gives you a brief introduction to vertex and pixel shaders. You are very much encouraged to read the DirectX SDK documentation as well; it provides all the details on how vertex and pixel shaders work and how they are integrated into DirectX. German readers are also welcome to read "Programmierbare Pixel- und Vertex Shader am Beispiel des GeForce3 von NVIDIA". A fairly good amount of knowledge about vertex and pixel shaders is required when the effect details are discussed in the following section.

 

Vertex and pixel shaders - what's the use?

 

What's necessary to render a 3D object to the screen? First we need its surface representation. Most of the time this will be approximated by triangles, as they allow easy and quick processing. Nowadays a lot of modeling tools support high order surfaces to describe 3D objects, but these have to be triangulated as well, either by the CPU or the GPU (graphics processing unit), before rendering takes place. In fact, using high order surfaces has quite a positive impact on rendering performance because they can be tessellated to an arbitrary level of detail from the same compact surface description. If this tessellation is performed on the GPU, a lot of system bandwidth can be saved, thus improving rendering speed.
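
As a small illustration of the last point, the following C++ fragment sketches how a bicubic Bézier patch, one common kind of high order surface, can be tessellated to an arbitrary level of detail from the same 16 control points. It is hypothetical example code, not taken from the demo; the types and function names are made up for this illustration.

struct Vec3 { float x, y, z; };

static Vec3 Scale(const Vec3& a, float s) { Vec3 r = { a.x * s, a.y * s, a.z * s }; return r; }
static Vec3 Add(const Vec3& a, const Vec3& b) { Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }

// Cubic Bernstein basis functions for parameter t in [0, 1].
static void Bernstein(float t, float b[4])
{
    float s = 1.0f - t;
    b[0] = s * s * s;
    b[1] = 3.0f * t * s * s;
    b[2] = 3.0f * t * t * s;
    b[3] = t * t * t;
}

// Evaluate the patch defined by a 4x4 grid of control points at (u, v).
Vec3 EvalPatch(const Vec3 ctrl[4][4], float u, float v)
{
    float bu[4], bv[4];
    Bernstein(u, bu);
    Bernstein(v, bv);

    Vec3 p = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p = Add(p, Scale(ctrl[i][j], bu[i] * bv[j]));
    return p;
}

// Produce a (level + 1) x (level + 1) grid of vertices; triangles are then
// formed from neighboring grid cells. A higher level means a finer mesh,
// yet the input is always just the 16 control points.
void Tessellate(const Vec3 ctrl[4][4], int level, Vec3* outVertices)
{
    for (int y = 0; y <= level; ++y)
        for (int x = 0; x <= level; ++x)
            outVertices[y * (level + 1) + x] =
                EvalPatch(ctrl, x / float(level), y / float(level));
}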

How is each triangle processed? Each triangle has to be transformed according to its position and orientation relative to the viewer. That is, each of the three vertices the triangle is made up of is transformed to its proper view space position. At this point any triangle that lies completely outside the viewer's frustum can be culled. The next step is to light the triangle by taking the transformed vertices and applying a lighting calculation for every light defined in the scene. Finally, the triangle is projected to the screen in order to rasterize it. During rasterization the triangle is shaded and textured.
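
To make the above a bit more concrete, here is a rough C++ sketch of what happens to a single vertex along the way: transformation into view space, a simple diffuse lighting term, and projection to screen coordinates. The types and names are invented for this illustration and do not appear in the demo; the math follows the usual row vector times matrix convention used by Direct3D.

struct Vec3   { float x, y, z; };
struct Vec4   { float x, y, z, w; };
struct Matrix { float m[4][4]; };

// Row vector times 4x4 matrix.
static Vec4 Transform(const Vec4& v, const Matrix& mat)
{
    Vec4 r;
    r.x = v.x * mat.m[0][0] + v.y * mat.m[1][0] + v.z * mat.m[2][0] + v.w * mat.m[3][0];
    r.y = v.x * mat.m[0][1] + v.y * mat.m[1][1] + v.z * mat.m[2][1] + v.w * mat.m[3][1];
    r.z = v.x * mat.m[0][2] + v.y * mat.m[1][2] + v.z * mat.m[2][2] + v.w * mat.m[3][2];
    r.w = v.x * mat.m[0][3] + v.y * mat.m[1][3] + v.z * mat.m[2][3] + v.w * mat.m[3][3];
    return r;
}

struct Vertex    { Vec4 pos; Vec3 normal; };
struct OutVertex { float screenX, screenY, depth, diffuse; };

// worldView maps object space to view space, proj maps view space to clip
// space, lightDir is the normalized direction towards a directional light.
OutVertex ProcessVertex(const Vertex& in, const Matrix& worldView,
                        const Matrix& proj, const Vec3& lightDir,
                        float viewportWidth, float viewportHeight)
{
    // 1. Transform the position into view space.
    Vec4 viewPos = Transform(in.pos, worldView);

    // 2. Light: simple diffuse term, N dot L clamped to zero. (The normal is
    //    assumed to be in the same space as lightDir; a full implementation
    //    would transform it as well.)
    float nDotL = in.normal.x * lightDir.x +
                  in.normal.y * lightDir.y +
                  in.normal.z * lightDir.z;

    // 3. Project to clip space, divide by w and map to the viewport.
    Vec4 clip = Transform(viewPos, proj);
    float invW = 1.0f / clip.w;

    OutVertex out;
    out.screenX = (clip.x * invW * 0.5f + 0.5f) * viewportWidth;
    out.screenY = (0.5f - clip.y * invW * 0.5f) * viewportHeight;
    out.depth   = clip.z * invW;
    out.diffuse = nDotL > 0.0f ? nDotL : 0.0f;
    return out;
}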

Graphics processors are able to perform a certain amount of these tasks. The first generation was able to draw shaded and textured triangles in hardware, but the CPU still had the burden of feeding the graphics processor with transformed and lit vertices, triangle gradients for shading and texturing, etc. Integrating the triangle setup into the chip logic was the next step, and finally even transformation and lighting (TnL) became possible in hardware, reducing the CPU load considerably. The only drawback was that, up to this point, developers had no direct (i.e. program driven) control over transformation, lighting and pixel rendering, because all of this logic was hardwired on the chip.

Vertex and pixel shaders allow developers to code customized transformation and lighting calculations as well as pixel coloring functionality. Each shader is basically a micro program executed on the GPU to control either vertex or pixel processing. Besides the freedom they offer to developers, shaders also help to keep the 3D API simple. A fixed function pipeline has to define modes, flags, etc. for an ever increasing number of rendering features. Multiplied by the growing number of data inputs (more colors, textures, vertex streams, etc.) that comes with continuously evolving hardware, this token space becomes more and more complex in terms of both the API and the driver. A programmable pipeline, on the other hand, provides scalability and the ability to evolve: rendering features can be implemented in a more streamlined manner, and new API features can easily be exposed incrementally by adding instructions and data inputs and by raising program resource limits (e.g. the number of available registers, the maximum program size, etc.). A small example of such a micro program is given below.
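
To give a flavor of what such a micro program looks like, here is a minimal pair of DirectX 8 shaders written out as C string constants. They are illustrative examples only, not shaders taken from the demo. The vertex shader assumes that the (transposed) combined world, view and projection matrix has been loaded into constant registers c0-c3 and that the vertex declaration maps the position to input register v0 and the diffuse color to v5.

// vs.1.1: transform the position into clip space and pass the vertex
// color on to the rasterizer.
const char g_vertexShader[] =
    "vs.1.1            \n"
    "dp4 oPos.x, v0, c0\n"   // one dot product per output component
    "dp4 oPos.y, v0, c1\n"
    "dp4 oPos.z, v0, c2\n"
    "dp4 oPos.w, v0, c3\n"
    "mov oD0, v5       \n";  // diffuse color straight through

// ps.1.1: sample texture stage 0 and modulate the texel with the
// interpolated diffuse color coming from the vertex shader.
const char g_pixelShader[] =
    "ps.1.1        \n"
    "tex t0        \n"       // fetch the texel for texture coordinate set 0
    "mul r0, t0, v0\n";      // r0 is the final pixel color

At runtime the application assembles such strings (e.g. with D3DXAssembleShader) and hands the resulting token stream to the device to obtain a shader handle that can then be selected before drawing.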

 

