- When an algorithm isn't naturally parallel, pipelining is often the best way to split it across cores. Message queues are great for creating the links between the pipeline stages.
- Message queues naturally solve the problem of resource ownership without extra locking. Since a message cannot be on both the send and receive sides of the queue at the same time, "owning the message = owning the resource" gives you clean resource allocation. Because there are no explicit locks, you don't have to worry about deadlocks or lock contention.
When evaluating this kind of design we have to consider two performance aspects: throughput and latency.
For throughput, message-queue designs work pretty well. We can easily scale the number of worker threads to match the number of cores and simply pile a huge number of tasks into the work queue - that'll keep all of those cores busy.
Where message-queue designs don't work as well is latency. When we queue a new task, we have no idea how long it will take to finish, but we know the answer is "longer than if we didn't have a worker thread". Besides the time for the message to reach the thread (that thread might be asleep and have to wake up and then wait for CPU time), we might also have a whole pile of messages ahead of us. Some designs use priority message queues to get preferential dispatch of messages, but even then all the threads might be busy. (Even more complexity can be added to try to address that case.)
We also pick up latency on the return side, since the main thread has to check for finished results, which might only happen once per simulation frame.
Fortunately for X-Plane, the kinds of things we push out to threads usually aren't very latency sensitive - we build up 3-d scenery, but we queue it well before we arrive at that location, so the additional latency is acceptable.