But calculating a minimum bounding sphere for a point cloud is not a fast operation - it's worse than linear time (which matters for huge meshes that are loaded while the simulator is running), and exact algorithms usually require numerically robust geometric operators.
Instead I use a bit of a heuristic - an incremental grow algorithm (sketched in code after the steps):
Initialize the sphere to radius 0, centered on the first point.
For each point until there are none left:
If the point is inside the bounding sphere, ignore it.
Otherwise, grow the bounding sphere to just barely include this point.
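In code, the grow pass might look something like this minimal sketch - all of the names here (float3, Sphere, grow_sphere) are illustrative stand-ins, not any particular engine's API:

```cpp
#include <cmath>
#include <vector>

// Bare-bones illustrative types - not any particular math library.
struct float3 { float x, y, z; };

struct Sphere { float3 center; float radius; };

static float dist(const float3& a, const float3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Incremental grow pass; assumes pts is non-empty.
Sphere grow_sphere(const std::vector<float3>& pts)
{
    Sphere s = { pts[0], 0.0f };        // radius 0, centered on the first point
    for (const float3& p : pts)
    {
        float d = dist(p, s.center);
        if (d <= s.radius)
            continue;                   // inside the sphere - ignore it
        // Grow just barely: the new sphere passes through p and stays
        // tangent to the old sphere on the side opposite p.
        float new_radius = 0.5f * (s.radius + d);
        float t = (new_radius - s.radius) / d;  // fraction to slide the center toward p
        s.center.x += (p.x - s.center.x) * t;
        s.center.y += (p.y - s.center.y) * t;
        s.center.z += (p.z - s.center.z) * t;
        s.radius = new_radius;
    }
    return s;
}
```

The grow step slides the center toward the new point just far enough that the far side of the old sphere stays on the new one - which is exactly where the problem described next comes from.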
This algorithm is pretty good but not optimal. The reason it's not optimal is that when we grow to encompass a new point P, the point that anchors the grow (staying on the sphere rather than moving) is the farthest point on the current sphere from P. That farthest point may not be an input point at all - it can be a pure artifact of the fact that we grow against the sphere instead of the original data.
(A slower but more thorough algorithm would go back over the original point data to find the true far-side limit points, giving us O(N^2) - plus we'd start to have floating-point robustness issues.)
The quality of the bounding sphere has a lot to do with the order in which we grow it - the earliest points establish the size of an intermediate sphere, and that sphere's surface then acts like a set of "phantom" points that every later grow must respect.
So my first idea was to find the longest axis between any two points in the cloud and insert those two points first, in the hope that establishing the long axis up front would keep the sphere from growing in false directions. This probably means fewer grow operations (since more points will already fall inside the sphere seeded by the long axis), but finding the longest axis costs O(N^2), so it's not really a speed win.
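As a rough sketch, the seeding could look like this - reusing the illustrative grow_sphere() from above, with long_axis_sphere a made-up name:

```cpp
#include <utility>      // std::swap

// Long-axis seeding sketch: find the farthest pair, put it first, then grow.
Sphere long_axis_sphere(std::vector<float3> pts)
{
    if (pts.size() < 2)
        return grow_sphere(pts);
    // O(N^2) scan for the farthest pair - this is the part that kills the speed win.
    size_t a = 0, b = 1;
    float best_d = dist(pts[0], pts[1]);
    for (size_t i = 0; i < pts.size(); ++i)
        for (size_t j = i + 1; j < pts.size(); ++j)
        {
            float d = dist(pts[i], pts[j]);
            if (d > best_d) { best_d = d; a = i; b = j; }
        }
    // Move the endpoints to the front so they establish the long axis first.
    // Since a < b, the first swap never disturbs point b.
    std::swap(pts[0], pts[a]);
    std::swap(pts[1], pts[b]);
    return grow_sphere(pts);
}
```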
What surprised me is that when I compared the longest-axis order against the native submit order coming out of the host program, the longest axis was better sometimes but worse other times. The "natural" order of the points seemed to produce pretty good results a lot of the time too.
What I then tried was a randomized approach (sketched in code below):
For each of N trials:
    Shuffle the data.
    Calculate the bounding sphere.
    If the sphere is smaller than our previous best (or this is the first trial), save it as our best.
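A sketch of that loop, again in terms of the hypothetical grow_sphere() from above; the trial count of 256 matches the results below, and the fixed seed is just for repeatable runs:

```cpp
#include <algorithm>    // std::shuffle
#include <random>

// Randomized search: try many shuffled insertion orders, keep the smallest sphere.
Sphere best_random_sphere(std::vector<float3> pts, int trials = 256)
{
    std::mt19937 rng(12345);            // fixed seed for repeatability
    Sphere best = grow_sphere(pts);     // trial 1: the natural submit order
    for (int i = 1; i < trials; ++i)
    {
        std::shuffle(pts.begin(), pts.end(), rng);
        Sphere s = grow_sphere(pts);
        if (s.radius < best.radius)
            best = s;                   // keep the smallest sphere found so far
    }
    return best;
}
```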
Now the quality of these bounding spheres is random (and some are quite poor), but as we increase N, we become more and more likely to stumble onto an order that is superior to any pre-determined heuristic. The shuffle takes linear time, and the number of trials is constant (and quite possibly a lot smaller than the number of points).
For N = 256, this produces better results than either the natural or long-axis-based sphere perhaps 90% of the time. And when it doesn't produce the best sphere, it is very close to optimal, usually within a few percent.