Monday, February 28, 2011

Order-Correct Translucency

When ATI released their order independent transparency demo, I nearly wet myself. Translucency has been the bane of X-Plane authors for years. The problem is that a translucent surface drawn out of order occludes the surfaces behind it in the depth buffer instead of blending over them, leading to artifacts. The thought of on-hardware OIT was tantalizing.

That is, until I found out how the tech works. My understanding is that OIT is implemented by "writing your own back-end" - that is, instead of shading into a framebuffer, you write fragments into a 'deep' framebuffer by hand, using compute-shader-style ops to create linked lists of fragments. (That is, fragments live in a general store and the framebuffer is really list heads.) In a post processing pass, you go through the 'buckets' (that is, the linked lists) and sort out what you drew.
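To make that concrete, here is a toy CPU model of the per-pixel linked-list idea (the structure and names are my own Python sketch, not ATI's actual implementation): fragments go into one flat store, the 'framebuffer' holds list heads, and a resolve pass sorts each pixel's list by depth and blends back-to-front.

```python
store = []          # flat fragment store: (depth, rgba, next_index)
heads = {}          # pixel -> index of most recently emitted fragment

def emit_fragment(pixel, depth, rgba):
    # On the GPU this is an atomic counter bump plus a head-pointer swap.
    store.append((depth, rgba, heads.get(pixel)))
    heads[pixel] = len(store) - 1

def resolve(pixel, background):
    # Walk the linked list, gathering this pixel's fragments.
    frags, i = [], heads.get(pixel)
    while i is not None:
        depth, rgba, i = store[i]
        frags.append((depth, rgba))
    frags.sort(reverse=True)        # farthest fragment first
    r, g, b = background
    for _, (fr, fg, fb, fa) in frags:   # classic 'over' blend, back to front
        r = fr * fa + r * (1.0 - fa)
        g = fg * fa + g * (1.0 - fa)
        b = fb * fa + b * (1.0 - fa)
    return (r, g, b)
```

Note that the fragments can be emitted in any order; correctness comes entirely from the per-pixel sort in the resolve pass, which is exactly what makes this order independent.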

That's a lot more back-end than I, a [spoiled, lazy] app developer, was hoping for - I wanted glEnable(GL_MAGIC_OIT_EXT), but no such luck. The real issue is that, since our product already does a lot of 'back end' tricks within OpenGL, the cost of getting our shaders to run in a compute-style environment might be a bit high. (This is looking less burdensome with some of the newer extensions, but it still seems to me that it would be difficult to port legacy apps to OIT-style rendering without having compute-shader features like atomic counters inside the GLSL shading environment.)

As a side note, I also looked closely at depth peeling and even hacking the blend equation (e.g. accumulate and average), and both would probably be workable for X-Plane, which tends not to have that much translucent overlap - the most common case for us is windows.

The Traditional Approach - Automated

Now the traditional approach to translucency in X-Plane goes something like this:
  • Force opaque drawing first.
  • Use one-sided drawing and order the translucent polygons so they appear from back to front from any viewpoint.
That second point is key: consider an airplane with windows. If we draw the interior facing windows first and the exterior facing windows second, then from any viewpoint, we are drawing 'back to front'. This works because whenever we see two windows at once, we are seeing the inside window behind the outside one. Isn't topology grand?

Well, it turns out that this approach can be generalized: as long as none of our triangles intersect (except at their edges and corners), given any two triangles, we can always find a draw order between them that is correct. Given a set of triangles, we can always sort the whole mesh to be appropriately back-to-front. (At least, that's my theory until someone proves me wrong.)

There are basically three cases:
  1. Triangle B is fully on one side of triangle A's plane. B goes before or after A, depending on which side it is on.
  2. Triangle A is fully on one side of triangle B's plane. A goes before or after B, depending on which side it is on.
  3. Triangles A and B are each fully on one side of the other's plane; we can use either triangle's plane to determine a correct order - they will not conflict. (That is, this is the disjoint case: either the two planes give the same answer, or the triangles face in opposite directions and thus are not visible at the same time.)
The fourth case would be two intersecting triangles - that's the case we can't necessarily get right.
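A minimal sketch of that pairwise test (my own illustration in Python, not X-Plane's actual code) might look like this:

```python
def side_of_plane(tri, plane_tri, eps=1e-9):
    """Classify tri against plane_tri's plane: +1 fully in front
    (the side the one-sided plane_tri faces), -1 fully behind, 0 split."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = plane_tri
    # Plane normal = cross product of two edges (winding defines 'front').
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    d = -(nx * ax + ny * ay + nz * az)
    signs = [nx * x + ny * y + nz * z + d for (x, y, z) in tri]
    if all(s >= -eps for s in signs):
        return +1
    if all(s <= eps for s in signs):
        return -1
    return 0

def draw_b_before_a(tri_a, tri_b):
    """True if B must be drawn before A for back-to-front blending."""
    s = side_of_plane(tri_b, tri_a)
    if s != 0:
        # Case 1: B is entirely behind A's plane -> B is farther, draw first.
        return s < 0
    s = side_of_plane(tri_a, tri_b)
    if s != 0:
        # Case 2: A is in front of B's plane -> B is behind, draw first.
        return s > 0
    # The fourth case: the triangles intersect - no correct order exists.
    raise ValueError("intersecting triangles")
```

Case 3 falls out naturally: when both triangles are cleanly on one side of each other's plane, the first test already returns a consistent answer.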

The ability to find this sort order depends on using one-sided triangles - that is what lets us decouple the sort order for the two opposite view directions. By definition, if a triangle's front is visible along a view vector V, its back side is visible along -V.

This approach of course doesn't solve all problems:
  • Animation can deform the mesh in a way that violates our correct order.
  • Multiple unrelated objects still need a relative ordering that makes sense.

Theoretical Angst

Just a touch of angst...I'm no theoretician, and I can't help but wonder if there is a screwy case that this doesn't handle. In particular, the sort order needs to be a strict weak ordering or we're going to get goofy results, and I'm not entirely sure that it is.

Saturday, February 19, 2011


I must just be late to the party, but: I just realized (approximately a decade later than I should) that C0FFEE can be spelled in hex. How have I never seen a code base use this as a 'token word' (albeit with some high-bit junk)? Actually you'd want to pad the low bits to make it odd too.

The most common cookies I've seen in production code are 0BADF00D, DEADBEEF, and FEEDFACE.
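For what it's worth, a padded variant might look like this (the exact value here is just my illustration, not a cookie from any real code base):

```python
# Illustrative only: turn the hex-speak C0FFEE into a 32-bit cookie.
# Padding the low bits with a 1 makes the value odd, so it can never be
# a valid pointer to anything with alignment >= 2 -- which is the point
# of stuffing a cookie into a pointer-sized field in the first place.
COOKIE = 0x0C0FFEE1

assert COOKIE & 1 == 1               # odd: not a plausible aligned pointer
assert "C0FFEE" in f"{COOKIE:08X}"   # the word survives in a hex dump
```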

Tuesday, February 15, 2011

Permalink Shortcode

This must exist somewhere in WordPress or as part of a plugin, so if you know what plugin I should have used, feel free to heap on the abuse in the comments section. Anyway...

I wanted a way to create a link to a WP page from inside a post or page that wouldn't need modification when the target page's parent was changed. WordPress's permalink scheme uses a hierarchy of parent/child/grandchild/ to identify pages, and this can change as a page is reparented. If the parenting scheme is meant to represent a navigational hierarchy, you could have dead links.

I ended up with this function, loosely based on snippets I found on the web:
function permalink_func( $atts, $content=null ) {
	extract( shortcode_atts( array(
		'p' => '1',
	), $atts ) );
	if ($content == null)
		$content = get_the_title($p);
	$link = get_permalink($p);
	return "<a href=\"$link\">$content</a>";
}
add_shortcode( 'prm', 'permalink_func' );

(It lives in functions.php inside a php block.)

The short code is used like this: [prm p=1] or [prm p=6]link title[/prm]. If the short code is used with no closing tag, the article's title is used to label the link. The parameter (p=17) is the ID of the page, and can be seen by mousing over the page or post in the admin interface. The URL generated by the shortcode matches the current permalink scheme.

Once again, I am amazed by how easy it is to get things done with WordPress. It doesn't seem right...

Thursday, February 10, 2011

Random Wordpress Notes

We're converting our website to WordPress (which I continue to be impressed by, but that'll be another post). One or two random notes.

If you put your news feed 'on a page', the page template is ignored - index.php is still used. I am sure this is by design, but I discovered it while creating a custom template. The page contents appear to be ignored too.

If you have an existing site and you want to merge in WordPress, you can do this:
  • Host your news feed on a specific page, rather than letting it default to 'home'.
  • Change the WordPress URL base (not install base) to your site.
  • Put a mod_rewrite rule into your site root to rewrite missing files to /wp/index.php (or wherever WP is installed).
  • If you want to replace an existing HTML page with a WP page you can use a rewrite rule from the old name to something like /wp/index.php?page_id=20 (or whatever page ID you want).
This is similar to the normal 'changing base' rules for WP, except that you don't need to create a second index.php in your root folder - your old site's home page stays in place.
RewriteEngine On
RewriteRule news.html /wp/index.php?page_id=5 [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /wp/index.php [L]
mod_rewrite is pretty cryptic. Basically what this says is:
  1. If the user asks for news.html, go to WordPress article 5.
  2. If the user asks for a missing file, let WordPress sort it out.

Friday, February 04, 2011

G-Buffer Normals, Revisited

A while ago I posted the G-buffer format for X-Plane 10, which, as of this writing, is still in development. SebH brought up CryTek's normal map compression and I hand-waved a bit and wondered to myself whether some kind of normal map goblin was going to pop up later in the development cycle.

The short answer: yes.

I will try to write up a post later describing the precision problems with normal maps in more detail, but for now I'll post the problem and its partial solution, while I still have the debug code in my shaders.

This is a Baron 58 that Tom Kyler is working on for version 10. He is probably not very happy that I'm posting pictures of it, because it's still in progress, and while I think it looks pretty good, our art guys get a lot of, um, "artsy goodness" into the models in the last few passes. (If the lighting seems a little, um, bizarre, it probably is; lord knows what state of debug the sun shader was in when I took these pics.)

The left image is the airplane, lit by an evening sun that has just barely set directly behind us; the right image is the fully reconstructed per-pixel eye-space normals. The small icons show the rough contents of the four layers of our G-Buffer.

So far things seem reasonably sane - the engine nacelle is lit from the side but not the top. But here's where things go south:

This is a wing with a light on the leading edge. The surface normal of the wing is almost perpendicular to the light direction, which really stress-tests the quality of our normal vectors. The first picture is the 'classic' G-Buffer technique: the X and Y components of the eye-space normal stored as 16-bit floats, with Z reconstructed in the shader. As you can see, it develops banding at the very low end of angle-based attenuation. (Note that this area would be super-dark if we weren't in linear space.) The second image shows the full XYZ normal (burning an extra 16-bit G-Buffer channel) - clearly this fixes the problem of reconstruction from low-precision sources, but channels are hard to come by in a G-Buffer.
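To see why the two-channel version bands, here is a small CPU-side model (my own Python sketch; the real work happens in the shaders) of reconstructing Z from half-precision X/Y: near the edge of the unit circle, a tiny quantization error in X blows up into a much larger error in the reconstructed Z.

```python
import math
import struct

def to_half(x: float) -> float:
    """Round a float through IEEE half precision (simulates a 16F channel)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def reconstruct_z(x: float, y: float) -> float:
    # Classic two-channel scheme: store eye-space normal X/Y, rebuild Z
    # assuming the normal faces the viewer (positive Z in eye space).
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

# A normal nearly perpendicular to the view/light: X close to 1, Y zero.
nx, ny = 0.9995, 0.0
z_exact = reconstruct_z(nx, ny)
z_quant = reconstruct_z(to_half(nx), to_half(ny))
# The input error is under 5e-4, but the error in Z is several times
# larger relative to Z itself -- that amplification is the banding.
```

The sign ambiguity in the square root is why the classic scheme also has to assume Z is positive, which is exactly the hand-waving the equal-area projection below avoids.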

Fortunately I found this totally awesome write-up of different normal compression schemes. The above picture on the right is a Lambert Azimuthal Equal-Area Projection using two channels.

Here are a few more pics of the gbuffer normal map, both projected and expanded:

Side benefit: the Lambert projection copes with negative eye-space Z (but not (0,0,-1), which is unlikely even with tangent-space normal maps on art assets), so no more hand-waving there.
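For reference, here is the equal-area encode/decode transcribed into Python (my transcription of the math from that write-up; the shader version is the same arithmetic in GLSL):

```python
import math

def encode(n):
    """Lambert azimuthal equal-area: unit normal -> two values in [0, 1].
    Singular only at n = (0, 0, -1), the normal pointing straight away."""
    x, y, z = n
    f = math.sqrt(8.0 * z + 8.0)
    return (x / f + 0.5, y / f + 0.5)

def decode(enc):
    """Inverse mapping: two stored channels -> full unit normal."""
    u, v = enc
    fx, fy = u * 4.0 - 2.0, v * 4.0 - 2.0
    f = fx * fx + fy * fy
    g = math.sqrt(max(0.0, 1.0 - f / 4.0))
    return (fx * g, fy * g, 1.0 - f / 2.0)
```

Note that the decode recovers Z without a sign assumption, which is why normals with negative eye-space Z round-trip correctly.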

One last thought for now: this entire post refers to the 'normal map' layer of a g-buffer, that is, the saved per-pixel normal information. Compression of 'normal map' textures for art assets is a bit of a different problem - the most immediate note is that they can be compressed off-line, so non-realtime compression techniques are fair game.

Wednesday, February 02, 2011

Losing Javadocs in Eclipse: SOLUTION

Occasionally and for reasons that I do not fully understand, Eclipse may lose track of your Javadocs. That means, when you mouse over an Android API call expecting to read about it, you'll get the dreaded "This element has no attached source and the Javadoc could not be found in the attached Javadoc" error message.

The traditional method of solving this issue is to delete the Eclipse .metadata directory. This does in fact work (I tried it), but it also requires you to re-download and set up the Android SDK, and you lose ALL of your Eclipse settings and preferences. If you're like me and have custom fonts and syntax highlighting set up, that's a nuisance.

The "right way" (and by right way I mean "this worked for me and I didn't lose data") to solve this problem is to follow these instructions:

  1. In eclipse, right click on your Android project and select Properties
  2. On the menu on the left, select "Java Build Path"
  3. On the right hand side, select the "tab" labelled "Libraries".
  4. Here you should see the Android SDK that you're targeting. For example: "Android 2.2".
  5. Click on the arrow to the left of the Android SDK to expand the sublevels.
  6. Find "Android.jar" and click on the arrow to the left of that one as well to expand it.
  7. You'll see a setting called "Javadoc location". Select that and then click on the "Edit" button.
  8. At the top, RESELECT the path to your javadocs. This is usually "path_to_android_sdk/android-sdk-mac_86/docs/reference/". I say RESELECT because even if it's right, you should browse and do it over anyway.
  9. Click on "validate". You should be all set now!