What application developers are getting at with this is that OpenGL is a rich API, and not all cards do everything at top speed. There is usually one fastest way to talk to OpenGL to get maximum throughput.
The problem is: this is ludicrous. Case in point: the GeForce 8800 in my Mac Pro, running OS X 10.5.4 and Ubuntu 8. So what is the fast path?
If I draw my terrain using stream VBOs for the geometry, with the indices not in a VBO, I get 105 fps on Linux. If I then put the indices into a stream VBO too, I get 135 fps. The fast path!
Well, not quite. The index-without-VBO case runs at 73 fps on OS X, but once those indices go into a VBO, I crash down to 25 fps. Wrong fast path.
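For the curious, the two index paths above look roughly like this. This is a minimal sketch, not my actual terrain code: the function names, the vertex format, and the use of the old fixed-function vertex arrays are all assumptions for illustration; it assumes a current GL context and that the vertex data is already in a stream VBO.

```c
#include <GL/gl.h>  /* assumes a live GL context from GLUT, SDL, etc. */

/* Path 1: vertices in a stream VBO, indices in client memory.
   (Faster on OS X in my test.) */
void draw_client_indices(GLuint vert_vbo, const GLushort *indices, GLsizei count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vert_vbo);
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glEnableClientState(GL_VERTEX_ARRAY);

    /* No element-array buffer bound, so the last argument to
       glDrawElements is a real client-memory pointer. */
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, indices);
}

/* Path 2: indices also in a stream VBO.
   (Faster on Linux in my test.) */
void draw_vbo_indices(GLuint vert_vbo, GLuint index_vbo,
                      const GLushort *indices, GLsizei count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vert_vbo);
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glEnableClientState(GL_VERTEX_ARRAY);

    /* Re-specify the index data each frame; GL_STREAM_DRAW hints that
       it is written once and drawn a small number of times. */
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_vbo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, count * sizeof(GLushort),
                 indices, GL_STREAM_DRAW);

    /* With an element-array buffer bound, the last argument is a byte
       offset into that buffer, not a pointer. */
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, (const GLvoid *)0);
}
```

The only difference between the two is where the indices live when glDrawElements runs, which is exactly why the split in performance across drivers is so frustrating.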
Simply put, you can't spec the fast path, so the spec doesn't matter. You find the fast path by writing a lot of code and trying it on a lot of hardware. I can't see there ever being another way, given how many different cards and drivers are out there.