So if our model's vertices are encoded (into "model space") and a model positioning matrix decodes our model into world space (so we can show our model in lots of places in the world), how do we encode that model? If we have one house in world space, how do we encode it into its own model space?
One way to do so would be "basis projection". We could take the dot product of each vector in our model* with each basis vector; that measure of overlap tells us how much of each basis vector we need, i.e. that vertex's coordinate along the corresponding axis. What would a matrix for this encoding look like?
[x']   [Xx Xy Xz][x]
[y'] = [Yx Yy Yz][y]
[z']   [Zx Zy Zz][z]
where (x', y', z') is the point in model space and X, Y, and Z are the basis vectors we are encoding into, described in world coordinates. So we have put our basis vectors for our model into the rows of the matrix to encode, while last time we put them into the columns to decode.
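As a concrete sketch (in Python with NumPy, and with a made-up basis; neither comes from the original post), here is basis-projection encoding done both ways: one dot product per basis vector, and a single multiply by the matrix above with the basis in its rows.

```python
import numpy as np

# A made-up orthonormal basis for the model, described in world
# coordinates: the model is rotated 90 degrees about the world z-axis.
X = np.array([0.0, 1.0, 0.0])   # model's x-axis in world space
Y = np.array([-1.0, 0.0, 0.0])  # model's y-axis in world space
Z = np.array([0.0, 0.0, 1.0])   # model's z-axis in world space

p_world = np.array([2.0, 3.0, 5.0])  # a vertex in world space

# Encode by basis projection: one dot product per basis vector.
p_model = np.array([X @ p_world, Y @ p_world, Z @ p_world])

# Equivalently, stack the basis vectors into the ROWS of a matrix
# and multiply once.
encode = np.array([X, Y, Z])  # rows = model basis in world coords
assert np.allclose(encode @ p_world, p_model)

print(p_model)  # [ 3. -2.  5.]
```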
Wait - this isn't surprising. The encode and decode matrices are inverses, and the inverse of an orthogonal matrix is its transpose.
Putting this together, we can say that our orthogonal matrix has the old basis in the new coordinate system in its columns at the same time as it has the new basis in the old coordinate system in its rows. We can view our matrix through either the lens of decoding (via columns - each column is a "premade part") or encoding (via rows - how closely do we fit that "premade part").
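To check those claims numerically, here is a small self-contained sketch (again Python/NumPy with the same illustrative basis, not from the original post): the decode matrix is the encode matrix's transpose, the round trip is the identity, and the transpose really is the inverse.

```python
import numpy as np

# Same made-up basis as above: model rotated 90 degrees about world z.
encode = np.array([[0.0, 1.0, 0.0],    # rows = model basis in world coords
                   [-1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
decode = encode.T  # columns = those same basis vectors ("premade parts")

p_world = np.array([2.0, 3.0, 5.0])

# Round trip: encode into model space, decode back to world space.
assert np.allclose(decode @ (encode @ p_world), p_world)

# Orthogonal: the transpose is the inverse.
assert np.allclose(encode @ encode.T, np.eye(3))
```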
Why bring this up? The previous post was meant to serve two purposes:
- To justify some particularly sparse implementations of camera operations. (E.g. how did we get the code down to so few operations, and why is this legal?)
- To try to illustrate the connection between the symbols and geometric meaning of matrices and vectors.
* As in the previous article, we can think of our model as a series of vectors from the origin to the vertices in our mesh.