Displacement maps are very likely to be a feature of DirectX 9 and future generations of 3D hardware accelerators. This effect simulates a few of the things displacement maps will be able to do. Unlike bump maps, displacement maps preserve the illusion of surface detail even once a part of an object has been turned far enough for you to see its silhouette. With bump maps the illusion of scratches, dents or other surface characteristics disappears as soon as you can see the profile of an object, because these surface imperfections don't physically exist (the geometry isn't altered); they're just a texture effect. With displacement maps, however, the geometry is altered based on the height values in the displacement map.

Basically, what happens is the following. Take a triangle. For each vertex there is a normal and a 2D texture coordinate into the displacement map. When the triangle is sent to the GPU it is subdivided a couple of times. Position, normal and texture coordinates for each vertex artificially inserted by the subdivision are interpolated. Now the vertex position needs to be altered; otherwise we would just end up with a lot of small triangles representing the original one. Therefore, for each vertex the GPU fetches the height value from the displacement map at the provided 2D texture coordinate and uses it to displace the vertex position along the vertex normal.
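The displacement step above can be sketched in a few lines. This is a minimal illustration, not any real API: `Vec3`, `sampleHeight` and the `scale` parameter are made up for the example, and a trivial analytic function stands in for the actual 256 x 256 displacement texture fetch.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical height lookup: a placeholder analytic "map" standing in for
// a real displacement texture fetch at texture coordinate (u, v).
static float sampleHeight(float u, float v)
{
    return 0.5f * (u + v); // placeholder height value in [0, 1]
}

// Displace a (possibly interpolated) vertex along its normal by the height
// value found at its displacement-map texture coordinate.
static Vec3 displace(const Vec3& pos, const Vec3& normal,
                     float u, float v, float scale)
{
    float h = sampleHeight(u, v) * scale;
    return { pos.x + normal.x * h,
             pos.y + normal.y * h,
             pos.z + normal.z * h };
}
```

A vertex at the origin with normal (0, 0, 1) and height 1.0 at its texture coordinate would simply be pushed `scale` units along +z.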
Since current hardware and 3D APIs aren't capable of using displacement maps, the effect has to be simulated. To keep the subdivision workload low we limit ourselves to a single quad with a displacement map applied. This way we can simply subdivide it like a grid: there's no need to interpolate normals, and both positions and texture coordinates can be derived directly from the grid position. To get a high-quality displacement map effect the quad needs to be subdivided fairly finely, depending on the amount and size of detail in the displacement map. In Meshuggah the displacement map has a size of 256 x 256 pixels. The quad is subdivided into a grid of 256 x 256 vertices, resulting in 255 x 255 tiny quads that need to be rendered every frame. Here it is a good idea to render the quads as a triangle list. The vertex data for one row of quads is crammed into a vertex buffer and sent to the GPU.
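Deriving one row of grid vertices might look like the sketch below. The `Vertex` layout, `GRID` constant and `buildRow` helper are illustrative assumptions; the point is that position and texture coordinate both fall directly out of the grid indices, with no interpolation needed.

```cpp
#include <cassert>
#include <vector>

constexpr int GRID = 256; // vertices per side -> 255 x 255 tiny quads

struct Vertex { float x, y, u, v; };

// Build the vertices of grid row 'row' for a unit quad spanning [0,1] x [0,1].
// Position and texture coordinate coincide here because both are derived
// from the same normalized grid position.
static std::vector<Vertex> buildRow(int row)
{
    std::vector<Vertex> verts;
    verts.reserve(GRID);
    for (int i = 0; i < GRID; ++i)
    {
        float u = static_cast<float>(i)   / (GRID - 1);
        float v = static_cast<float>(row) / (GRID - 1);
        verts.push_back({ u, v, u, v });
    }
    return verts;
}
```

Two such adjacent rows provide the four corners of each tiny quad, which the triangle list then references as two triangles (six indices) per quad.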
What follows is a quick note on what the shaders do to render the displaced logo in Meshuggah. The vertex shader calculates the reflected view vector as well as the fresnel term for each vertex. In the pixel shader the reflected view vector is used to fetch a color from an environment map, which is multiplied by the material color of the logo. The fresnel term is used to blend this result with the material color, giving us the final logo color including reflections. Since we want certain parts of the logo to appear darker than others, we create a copy of that color at half intensity using the _d2 pixel shader instruction modifier. Now it is time to determine the final output color by taking the logo texture into account. Its RGB channels contain a decal value used to blend between the two colors we've just calculated. As a last step we take the value from the alpha channel of the logo texture to mask out all invisible parts of the logo.
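The shader math above can be sketched in plain C++. This is a hedged reconstruction: the reflection formula R = 2(N·V)N - V is standard, but the original fresnel formula isn't given here, so Schlick's approximation stands in for it; all helper names are made up for the example.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Reflect the view vector V about the normal N (both assumed normalized):
// R = 2 (N . V) N - V.
static Vec3 reflect(const Vec3& v, const Vec3& n)
{
    float d = 2.0f * dot(n, v);
    return { d * n.x - v.x, d * n.y - v.y, d * n.z - v.z };
}

// Schlick's approximation standing in for the unspecified fresnel term:
// F = F0 + (1 - F0) * (1 - N.V)^5.
static float fresnel(float nDotV, float f0)
{
    float t = 1.0f - nDotV;
    return f0 + (1.0f - f0) * t * t * t * t * t;
}

static float lerp(float a, float b, float t) { return a + (b - a) * t; }

// Per-channel combine following the description: environment color times
// material color, blended with the material color by the fresnel term,
// a half-intensity copy (the _d2 modifier), a decal blend between the two,
// and finally the alpha mask.
static float finalChannel(float env, float material, float f,
                          float decal, float alpha)
{
    float reflective = lerp(material, env * material, f);
    float dark       = 0.5f * reflective; // _d2: half intensity
    return lerp(dark, reflective, decal) * alpha;
}
```

With a decal value of 1 the full-intensity reflective color passes through; a decal of 0 selects the darker half-intensity copy, and alpha of 0 masks the pixel out entirely.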