## Wednesday, September 02, 2009

### OpenGL Matrix Tricks: Transforms

One of the cool things about OpenGL is that, after doing a ton of advanced math to work out your matrices, usually the end results boil down to an astoundingly small amount of C++.

#### Transforming Vectors and Normals

A vector can be defined via a point - the vector is the direction and distance from the origin to that point. Really we only need 3 coordinates. If we have a fourth coordinate, what do we store there?

My short answer is: why do you have a fourth coordinate? Don't waste space - most GPUs are faster when operating on fewer coordinates, both in shaders and in terms of bus bandwidth. But if you had to pick something, zero is probably a good choice, as we'll see below.

Now you may have read that you can't just transform your vectors the way you transform your points. The diagrams here explain why.

I have to point out that to a math professor, that "stretched" vector is exactly the expected result from applying a non-uniform scaling matrix to a vector.

The true problem is that we want the normal vector to stay normal - in other words, we really want to transform the plane that the normal vector is normal to, and then recover the normal vector. This is pretty different from actually transforming the vector itself, but you gotta know what you want.

See below for plane transformations - but note that they are expensive, because they require calculating the inverse of our transform matrix.

If we know that we don't have any scaling in our matrices (uniform scaling would change the magnitude of the vector, which can be undone, I suppose) then we can transform the normal vector directly with our transform matrix. The trick is to use only the 3x3 upper-left corner (or simply set the fourth coordinate to zero). This removes the translation part of our matrix and leaves only the rotations. (If we applied the translation to the vector, its meaning as a direction relative to the origin would be lost.) See here for more details.

Important: you might think that you can transform your vectors by transforming the vector's endpoint and the origin as points, and then subtracting. This is a very bad idea! The problem is that if the transform moves those points far from zero, you'll lose a lot of precision in your normal vectors. I had this implementation in X-Plane, and the loss of precision quickly turned into visible artifacts.

#### Transforming Planes

A plane is represented by four values (typically ABCD, as in Ax + By + Cz + Dw = 0, with w = 1 for points), and notation-wise you would think of a plane as a column. (Since a point is a row, we are saying that point dot plane = 0.) ABC is the normal vector of the plane, and D is a scalar that selects which of the infinitely many parallel planes we have.

To transform a plane, you need to transform it by the transpose of the inverse of our matrix. See here for a derivation.

It should be noted that when you want to reverse-transform a vector (e.g. go from eye coordinates to world coordinates) this works in our favor.

When transforming a plane backward, transpose(inverse(inverse(MV))) is just transpose(MV) - in other words, the inverse of our "forward transform" (e.g. the modelview matrix) cancels out the inverse needed to transform a plane equation. So we can simply turn our vector into a plane with that vector as its normal (and D = 0), quickly transform it by the transpose, and then pull the vector back out.