I didn't use marching cubes or anything like it. *phew* :-)
I spent some time watching the `hydromancy` screensaver on Mac OS X at a low accuracy setting to get a sense of how marching cubes works - it's a blobby-objects screensaver, one of my favorites, simple but pretty. I also read up plenty on blobby objects/metaballs and their variations, and on how the algorithm works. Not only is there a patent issue with marching cubes, it also seemed it would take too much computing time and wouldn't make it easy to produce bands of color or gradients based on several simultaneous perimeter thresholds.
I could see using it to create a simple blob shape, but I could not see a simple or fast way to draw more bands of color, and especially not antialiased ones. I wanted there to be, effectively, many nested layers of blob meshes with unique coloring for each one. I also wasn't too keen on the fact that blobby objects tend to be limited in the shapes each blob can take. Usually they are based on spheres - you can do ellipses or lines or cubes or whatever - but that still isn't varied enough, especially if you have to work out the influences with a math formula at every pixel. I wanted energy intensities to be defined on a per-pixel basis and to be animatable.
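For anyone unfamiliar with the metaball idea being rejected here, this is roughly what it looks like: each blob contributes a falloff field, the per-pixel sum is compared against one or more thresholds, and each threshold gives a nested color band. This is a minimal illustrative sketch, not code from the screensaver - the falloff formula and all names are my own assumptions.

```python
def field(x, y, blobs):
    """Sum each blob's influence at pixel (x, y).

    blobs: list of (cx, cy, radius) tuples; the falloff here is the
    simple r^2/d^2 form often used for metaballs.
    """
    total = 0.0
    for cx, cy, r in blobs:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        total += (r * r) / d2 if d2 > 0 else 1e9
    return total

def band(value, thresholds):
    """Map a field value to a color-band index (0 = outside all bands).

    thresholds must be in ascending order.
    """
    index = 0
    for i, t in enumerate(thresholds, start=1):
        if value >= t:
            index = i
    return index

blobs = [(10.0, 10.0, 4.0), (16.0, 10.0, 3.0)]
print(band(field(10.0, 10.0, blobs), [0.5, 1.0, 2.0]))  # inside all bands -> 3
```

Evaluating that field with math formulas at every pixel, for every nearby blob, is exactly the per-pixel cost complained about above.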
I think marching cubes is largely suited to generating a blobby display of objects out of polygons, and it's probably a good way to do it in 3D, but there's a lot of work involved - even in a 2D-only version. You have to take into account every blob near enough to a given blob to have an influence, which in itself is a lot of work. Then you have to keep subdividing the results until your polygons are small enough to give a smooth surface (or apply filtering passes to smooth things out), and then you're only left with a basic blob outline - not very interesting considering the effort involved. There's quite a lot of math to it and several passes over data in main memory.
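To make the comparison concrete, here is roughly the first step the 2D variant (marching squares) performs for every grid cell - purely illustrative, and only the cheapest part of the algorithm:

```python
# Illustrative marching-squares step: classify one grid cell by which of
# its four corners are above the iso-threshold. The resulting 4-bit case
# index (0-15) selects an edge configuration; a real implementation then
# interpolates the crossing points, emits line segments, and still has to
# subdivide or filter to hide the faceting mentioned above.

def cell_case(corner_values, threshold):
    """Return the 0-15 case index for one cell.

    corner_values: (bottom_left, bottom_right, top_right, top_left).
    """
    case = 0
    for bit, v in enumerate(corner_values):
        if v >= threshold:
            case |= 1 << bit
    return case

print(cell_case((0.2, 0.9, 0.9, 0.2), 0.5))  # right-hand corners inside -> 6
```

Even this classification has to run over the whole grid every frame, before any subdivision or smoothing happens.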
If I created the blobs in 3D I could certainly apply cool lighting to make them look more solid, and move things about three-dimensionally, but at a terrific expense in speed and flexibility from working out the dynamic meshes every frame. I didn't want it to look like a hard surface with a solid boundary, either. I wanted it to be representative of energy fields and auras that subtly blend into each other. Blobby objects are usually `surfaces` that wrap around several individual blobs - I wanted surfaces of a sort, but I didn't want them to be flat boundaries.
So I came up with my own algorithm that doesn't have anything to do with marching cubes or subdivision of polygons, and doesn't use the CPU at all (except of course to pass calls to the OpenGL driver).
The graphics card is doing a heck of a lot of processing. With 20 blob objects, each with an influence covering about 512x512 pixels, plus a smaller 256x256 texture applied over the top of each blob, plus several full-screen quads drawn over the whole display with various blending/filtering, I would say about 8-10 *million* texels are being considered every frame. There is no real `optimizing` going on - if I coded it on the CPU I certainly wouldn't check every single blob against every other at every single pixel - but the GPU brute-forces it, and that's necessary to allow every part of every blob to have a unique, animatable intensity level at every pixel.
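The brute-force approach described above - splat every blob's intensity texture and let the blending hardware accumulate the result, then resolve it in full-screen passes - can be mimicked on the CPU for illustration. This is a sketch under my own assumptions, not the screensaver's actual render path; the sizes, falloff, and thresholds are all made up.

```python
# CPU mock-up of the additive-splat idea: every blob adds its intensity
# sprite into a framebuffer, as GL_ONE/GL_ONE additive blending would,
# then a final "full-screen pass" maps accumulated energy to nested bands.

W, H = 8, 8
framebuffer = [[0.0] * W for _ in range(H)]

def splat(fb, cx, cy, radius, intensity):
    """Additively blend a radial falloff sprite centred at (cx, cy)."""
    for y in range(H):
        for x in range(W):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            falloff = max(0.0, 1.0 - d2 / (radius * radius))
            fb[y][x] += intensity * falloff  # accumulate, never overwrite

splat(framebuffer, 3, 3, 4, 1.0)
splat(framebuffer, 5, 3, 4, 1.0)

# Final pass: threshold the accumulated energy into nested colour bands.
bands = [[sum(v >= t for t in (0.5, 1.0, 1.5)) for v in row]
         for row in framebuffer]
```

Because the intensity comes from a texture rather than a formula, every texel of every blob can be animated independently, which is the flexibility the formula-based metaball approach couldn't offer.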
It currently runs somewhere near 60fps at 800x600 with full antialiasing. At about 10 million texels per frame, that's probably at least 600 million texels per second. I think my graphics card is rated around 1 billion texels per second, so there's still some room to maneuver. :-)
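The fill-rate estimate is easy to sanity-check:

```python
# Sanity check on the numbers in the text: ~10 million texels per frame
# at ~60 frames per second against a nominal 1-billion-texel/s card.

texels_per_frame = 10_000_000
fps = 60
texels_per_second = texels_per_frame * fps
print(texels_per_second)        # 600000000
print(texels_per_second / 1e9)  # 0.6 -> about 60% of the nominal budget
```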