There Are Two Ways to Get "Perspective" Texturing
"Perspective" texturing basically means that the sampling of pixels is non-uniform along a line in screen space. In OpenGL (assuming you haven't hacked the heck out of your ST coordinates per pixel in a fragment shader) this always comes from a "perspective" division somewhere in the pipeline. In fact there are two ways this can happen:
- Interpolation of per-vertex data is done in a "perspective correct" manner. In other words, if you tip your triangle over in 3-d space, interpolation of all of its attributes should "look right". In this case, the texture coordinates are varying over the pixels at the right rate to create the perspective sampling effect.
- If there is a "q" coordinate attached to your texture coordinates, it will be interpolated first, then divided. This creates non-linear sampling., since we are sampling 1/x where x is varying linearly. We are not linearly interpolating between 1/x1 and 1/x2.
So this post will present four cases of perspective correct texturing and explain why we get the right results.
Perspective Correct Varying Is More Or Less Free
Unless your GPU belongs in a museum, it almost certainly does perspective correct interpolation of per-vertex data - on modern hardware this just happens, and you can't get rid of it.
Perspective Correct Geometry
The simplest case, and the one we care about most, is perspective correct texturing on geometry that has been foreshortened by a glFrustum projection matrix. This is a case of correct interpolation of varyings, not q coordinates.
In this case, when the GPU rasterizes a triangle, it interpolates the varying values between vertices in a "perspective correct" manner - that is, to calculate the weighting of the varyings from the three triangle vertices, it interpolates the X, Y, and W coordinates separately, then does a divide per pixel. (If we interpolated based on X/W and Y/W, that is, the screen-space pixel coordinates, we would get affine sampling, which is ugly.)
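As a rough sketch of the difference - not the exact algorithm any particular GPU uses - here's one way to write affine versus perspective correct interpolation of a single varying along a span, in C:

/* Interpolate a varying 's' between two transformed vertices whose clip-space
   w coordinates are w0 and w1.  't' is the 0..1 position along the span in
   *screen space*.  Real rasterizers work per-triangle with barycentric
   weights, but the idea is the same. */

static float lerp(float a, float b, float t) { return a + (b - a) * t; }

/* Affine: interpolate the varying directly in screen space.
   Wrong for foreshortened geometry. */
float interp_affine(float s0, float s1, float t)
{
    return lerp(s0, s1, t);
}

/* Perspective correct: interpolate s/w and 1/w linearly in screen space,
   then divide per pixel. */
float interp_perspective(float s0, float w0, float s1, float w1, float t)
{
    float s_over_w   = lerp(s0 / w0, s1 / w1, t);
    float one_over_w = lerp(1.0f / w0, 1.0f / w1, t);
    return s_over_w / one_over_w;
}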
This case works as long as your perspective comes from a matrix that puts some "goo" in the 'w' coordinate for perspective divide. That's like saying "this works as long as the sun hasn't gone out", which is why perspective correct texturing isn't something game developers have to worry about anymore.
Perspective Deformation in 2-D
What if you want to make something look like it is 3-d when it isn't? That's what Plane-Maker does with its instrument warping function. You can drag the 4 corners of an instrument to make a trapezoid and we calculate the "faux perspective".
The way Plane-Maker does this is by finding a perspective matrix that correctly transforms the corners to the desired positions. As mentioned in the previous article, it turns out that such a "2-d perspective" matrix can take the form:
X1 X2 0 X4
Y1 Y2 0 Y4
0  0  1 0
W1 W2 0 1

And by some lucky bit of mathematical serendipity, that's a matrix with 8 unknowns, and we have 8 equations to plug in: the transform equation for all four corners (4 corners x 2 axes = 8 equations). Sure enough, you can "solve" this matrix with 8 unknowns and you get a "2-d perspective" matrix.
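I won't claim this is how Plane-Maker actually solves it, but one standard shortcut is Heckbert's closed-form square-to-quad mapping, which produces the same 8 coefficients directly. A sketch in C (the corner ordering and function name are mine), filling a column-major OpenGL matrix of the form shown above:

/* Build a column-major OpenGL matrix that maps the unit square
   (0,0) (1,0) (1,1) (0,1) onto the quad p0 p1 p2 p3 (counter-clockwise),
   putting the perspective terms in the 'w' row.  Degenerate quads are not
   handled here. */
void square_to_quad(const float px[4], const float py[4], float m[16])
{
    float sx = px[0] - px[1] + px[2] - px[3];
    float sy = py[0] - py[1] + py[2] - py[3];
    float a, b, c, d, e, f, g, h;

    if (sx == 0.0f && sy == 0.0f)
    {
        /* The quad is a parallelogram - plain affine mapping, no 'w' goo. */
        a = px[1] - px[0]; b = px[2] - px[1]; c = px[0];
        d = py[1] - py[0]; e = py[2] - py[1]; f = py[0];
        g = 0.0f;          h = 0.0f;
    }
    else
    {
        float dx1 = px[1] - px[2], dy1 = py[1] - py[2];
        float dx2 = px[3] - px[2], dy2 = py[3] - py[2];
        float det = dx1 * dy2 - dy1 * dx2;

        g = (sx * dy2 - sy * dx2) / det;
        h = (dx1 * sy - dy1 * sx) / det;
        a = px[1] - px[0] + g * px[1];
        b = px[3] - px[0] + h * px[3];
        c = px[0];
        d = py[1] - py[0] + g * py[1];
        e = py[3] - py[0] + h * py[3];
        f = py[0];
    }

    /* Column-major layout, matching the matrix in the text:
       row 0: a b 0 c, row 1: d e 0 f, row 2: 0 0 1 0, row 3: g h 0 1 */
    m[0] = a; m[4] = b; m[8]  = 0; m[12] = c;
    m[1] = d; m[5] = e; m[9]  = 0; m[13] = f;
    m[2] = 0; m[6] = 0; m[10] = 1; m[14] = 0;
    m[3] = g; m[7] = h; m[11] = 0; m[15] = 1;
}

If the instrument's geometry lives on the unit square, multiplying this matrix onto the stack (e.g. with glMultMatrixf) lands its corners on the four dragged corners, with the warping coming from the 'w' row.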
What this matrix is really doing is putting the right values in the 'w' coordinate to make your vertices deform. Compare this to the usual frustum matrix, which simply puts the -Z coordinate into 'w'. The above matrix "works" because perspective simply requires something in your 'w' coordinate - it doesn't have to come from depth!
Like the geometry case above, this case textures correctly because the texture coordinates will be interpolated using that 'w' coordinate.
Explicit Q Coordinates
If you change the 'q' coordinate with glTexCoord4f or something similar, any perspective effects you get come from the divide at texture sampling time. Note that for GLSL fragment shaders, you need to use the "Proj" variant of the sampling routines.
You may also be getting additional perspective from your geometry. Suppose you make a trapezoid, texture it with a square texture, and calculate the right 'q' coordinates to get sampling without distortion, and then you rotate the camera around this trapezoid: it's the 'q' coordinate that makes the texture fit the trapezoid, but varying interpolation that makes it look correct from all camera angles.
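Here's a minimal immediate-mode sketch of the first half of that - a flat trapezoid textured with a square image, with hand-picked 'q' values (the geometry and the 2:1 width ratio are made up for illustration):

#include <GL/gl.h>

/* A 2-d trapezoid whose top edge is half the width of its bottom edge.
   Treat the top edge as "farther away" by giving it a larger q - s and t are
   pre-multiplied by q so the post-divide values still land on the 0..1 square. */
static void draw_trapezoid(void)
{
    float q = 2.0f;                    /* bottom width / top width */

    glBegin(GL_QUADS);
    glTexCoord4f(0.0f,     0.0f,     0.0f, 1.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord4f(1.0f,     0.0f,     0.0f, 1.0f); glVertex2f(4.0f, 0.0f);
    glTexCoord4f(1.0f * q, 1.0f * q, 0.0f, q);    glVertex2f(3.0f, 2.0f);
    glTexCoord4f(0.0f * q, 1.0f * q, 0.0f, q);    glVertex2f(1.0f, 2.0f);
    glEnd();
}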
Projective Texturing With Perspective
There is one case where you might need the 'q' coordinate but not realize it: projective texturing with a frustum projection. Compare the matrices created by glFrustum and glOrtho. glFrustum puts your -Z (in eye coordinates) in the 'w' coordinate, while glOrtho always puts the constant 1.
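For reference, here are the matrices the two calls build (in the same row layout as the matrix above); the part that matters here is the bottom row, which is what lands in 'w':

glFrustum(l, r, b, t, n, f):

2n/(r-l)  0         (r+l)/(r-l)   0
0         2n/(t-b)  (t+b)/(t-b)   0
0         0         -(f+n)/(f-n)  -2fn/(f-n)
0         0         -1            0

glOrtho(l, r, b, t, n, f):

2/(r-l)   0         0             -(r+l)/(r-l)
0         2/(t-b)   0             -(t+b)/(t-b)
0         0         -2/(f-n)      -(f+n)/(f-n)
0         0         0             1

glFrustum's bottom row is 0 0 -1 0, so w ends up as -Z; glOrtho's is 0 0 0 1, so w is always 1.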
If you want to "project" a texture onto a model (think of a slide projector projecting a slide onto a wall, but maybe also projecting onto a chair) one simple way to do this is to use the same kinds of matrices you might use to set up the camera. (This works because both camera and texture projection matrices map 3-d coordinate inputs into 2-d screen-space outputs.) One note: you will need to rescale the post-projection matrix from the range -1..1 (which is what the screen likes) to 0..1 (which is what texture samplers like).
If the "fourth row" of your texture projection matrix (the one passed to glTexGen with GL_Q) generates something other than "1" then your projection matrix has perspective, and the perspective comes from the divide at texture time, not the interpolation of varyings.
You can observe this in a GLSL shader by changing a texture sampler Proj call to a non-Proj call. Without the "Proj" your texture coordinates will go completely haywire if you have a perspective matrix.