Monday, December 15, 2008

Transform Normals Directly

EDIT: Technique 2 (origin differencing) is, besides being a terrible idea, not the fastest way to transform normals. See this post for a much better treatment of the subject!

There are two ways to transform a normal vector N given a matrix M:
  1. Compute the inverse of M, transpose it, and use that new M' to transform the normal vector.
  2. Transform N directly by M, transform the origin (0,0,0) by M as well, then subtract the transformed origin from the transformed normal. This is like transforming the two end points of the normal.
Which is better? Well, I'd say it depends.
  • If you are transforming a lot of normals, calculate the inverse and transpose it once. Now you can transform normals directly.
  • If you are transforming only one normal, it might be cheaper to transform two points rather than invert a 4x4 matrix.
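Here's a sketch of the two techniques in plain Python (row-major 4x4 matrices; the rotation-plus-translation matrix is just an example I made up):

```python
import math

def mat_vec4(m, v):
    # Multiply a row-major 4x4 matrix by a column vector (x, y, z, w).
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def inv_transpose3(a):
    # Inverse-transpose of the upper-left 3x3 block of a 4x4 matrix.
    # The inverse is the transposed cofactor matrix over the determinant,
    # so the inverse-transpose is the cofactor matrix itself over the det.
    m = [row[:3] for row in a[:3]]
    cof = [[m[(r + 1) % 3][(c + 1) % 3] * m[(r + 2) % 3][(c + 2) % 3]
          - m[(r + 1) % 3][(c + 2) % 3] * m[(r + 2) % 3][(c + 1) % 3]
            for c in range(3)] for r in range(3)]
    det = sum(m[0][c] * cof[0][c] for c in range(3))
    return [[x / det for x in row] for row in cof]

# Example: a rotation about Z plus a translation.
t = math.radians(30.0)
cs, sn = math.cos(t), math.sin(t)
M = [[cs,  -sn, 0.0, 5.0],
     [sn,   cs, 0.0, 7.0],
     [0.0, 0.0, 1.0, 2.0],
     [0.0, 0.0, 0.0, 1.0]]
n = (0.0, 1.0, 0.0)

# Technique 1: multiply the normal by the inverse-transpose.
it = inv_transpose3(M)
n1 = tuple(sum(it[r][c] * n[c] for c in range(3)) for r in range(3))

# Technique 2: transform both end points of the normal, then difference.
p0 = mat_vec4(M, (0.0, 0.0, 0.0, 1.0))
p1 = mat_vec4(M, (n[0], n[1], n[2], 1.0))
n2 = tuple(p1[i] - p0[i] for i in range(3))

print(n1)
print(n2)
```

Both print the rotated normal here because the 3x3 part is a pure rotation; with scale or shear in the matrix the two techniques no longer agree, and only the inverse-transpose keeps the normal perpendicular to the surface.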
But...there is a danger to the "transform the points" technique: loss of precision!

If your matrix contains a large translation, then the result of transforming the normal and the origin will be two points that are close together but far from the origin. This means that you have lost some precision, and subtracting won't get it back!

The inverse/transpose method does not have this problem; the transpose moves the translation part of the matrix off into the bottom row where we don't really care about it - the remaining matrix terms should have fairly small magnitudes and not lose precision.
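The precision loss is easy to see numerically. A small sketch in plain Python doubles, using a pure translation (so the correct transformed normal is just the normal itself) with a made-up magnitude:

```python
# A translation big enough that doubles near it are spaced ~0.125 apart.
big = 1.0e15
n = (0.123456789012345, 0.987654321098765, 0.555555555555555)

# Endpoint method: both points land near (big, big, big), and each sum
# gets rounded to the coarse grid of representable doubles up there.
p0 = (big, big, big)
p1 = tuple(big + ni for ni in n)
diff = tuple(a - b for a, b in zip(p1, p0))

# Inverse/transpose method: the translation never enters the math at all
# (for a pure translation the 3x3 block is the identity), so the normal
# comes back exactly.
print("endpoint difference:", diff)
print("inverse/transpose  :", n)
```

The differenced result only recovers the normal to the nearest representable step near 1e15; the lost low-order bits are gone for good.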

5 comments:

  1. What about setting the translation components of M to (0,0,0), which then makes any precision problem disappear?

    In HLSL you just transform the normal like this:
    OUT.vNormal = mul((float3x3)im_matworld, IN.vNormal);

  2. This is kinda wrong; your "transform endpoints" method yields the same results as transforming the normal by the original matrix.

  3. Hmmm, not sure it is wrong. By transforming the two points he doesn't get the translation part.

  4. Hi Guys,

    - I can't believe I ever posted this - transforming the two end points is a really, really, really bad idea...the precision losses are unmanageable.

    - Transforming the end points is "correct" in theory...since you get the translation twice, it cancels out when you difference. The only down-side is precision.

    - Gjaegy is 100% correct about zapping the translation EXCEPT: this assumes that there is no scaling in the original matrix. There are a lot of good reasons not to have scaling, and this is the case with X-Plane, but the technique of "zapping the translation" is not fully general.

    Of course, differencing the end points has the same scaling problems!
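    To see the scaling problem concretely, here is a small sketch (plain Python, made-up scale factors): under a non-uniform scale, just multiplying the normal by the 3x3 block leaves it no longer perpendicular to a transformed surface tangent, while the inverse-transpose (which for a diagonal scale means dividing by each factor) keeps it perpendicular.

    ```python
    # Non-uniform scale: x is doubled. A surface tangent and its normal:
    tangent = (1.0, 1.0, 0.0)   # lies in the surface
    normal  = (1.0, -1.0, 0.0)  # perpendicular to the tangent

    # "Zap the translation" / cast to 3x3: just scale the components.
    sx, sy, sz = 2.0, 1.0, 1.0
    t_scaled = (sx * tangent[0], sy * tangent[1], sz * tangent[2])
    n_wrong  = (sx * normal[0],  sy * normal[1],  sz * normal[2])

    # Correct: multiply by the inverse-transpose, i.e. divide by each scale.
    n_right  = (normal[0] / sx,  normal[1] / sy,  normal[2] / sz)

    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    print(dot(t_scaled, n_wrong))   # nonzero: no longer perpendicular
    print(dot(t_scaled, n_right))   # zero: still a valid normal
    ```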
