For one, you're equating information with mass, which is incorrect.
And how do you plan to store the information? You have to have something there, and that something either has mass (most everything) or energy (the very few massless particles).
Second we don't need 10^66 bits of information to model a scene.
The fact that you're already calling for 0.5 molar bytes of information just to get a cubic mile of stuff at reasonably high precision is a testament to the ridiculous problems of scale.
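The order of magnitude is easy to check. Here's a back-of-the-envelope sketch; the bytes-per-voxel figure is an assumption, but any small constant gives the same ballpark:

```python
# Back-of-the-envelope: a cubic mile voxelized at 600 dpi,
# assuming a handful of bytes per voxel (the 5 is illustrative).

AVOGADRO = 6.022e23                        # units per mole
inches_per_mile = 5280 * 12                # 63,360
voxels_per_side = inches_per_mile * 600    # 600 voxels per inch
total_voxels = voxels_per_side ** 3        # ~5.5e22 voxels
bytes_per_voxel = 5                        # assumed: color + material
total_bytes = total_voxels * bytes_per_voxel
moles_of_bytes = total_bytes / AVOGADRO    # lands near half a mole
```

So even before any per-voxel payload beyond a few bytes, you're in mole territory.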
Fourth, our memory chips/boards take up more space than one cubic centimeter.
Which has precisely what to do with calling for 0.5 molar bytes of information?
Fifth, it could be sufficient to not model empty space with empty voxels.
Look at 2D sprites. Notice how the empty portion of the sprite's rectangle still has to be stored as data. Sure, when you're talking about moles of data, you have some pretty good incentives to come up with data-reduction methods. But you still have moles of data; the order of magnitude is still accurate.
Also, let's say we store this data as some kind of sparse matrix. The sparse areas just happen to be the surface elements. Which roughly approximates a sequence of texturemaps on polygons.
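To make that concrete, here's a sketch of that kind of sparse storage (all names hypothetical): only the occupied surface voxels go into a hash map, and what you're left with is data proportional to the surface, which is exactly what texture-mapped polygons already store.

```python
# Sketch: store only occupied (surface) voxels in a hash map instead
# of a dense 3D array. The "sparse" entries are the surface elements.

def sphere_shell(radius, thickness=1.0):
    """Voxelize just the shell of a sphere of the given radius,
    keeping only voxels within `thickness` of the surface."""
    voxels = {}
    r = int(radius) + 1
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            for z in range(-r, r + 1):
                d = (x * x + y * y + z * z) ** 0.5
                if abs(d - radius) < thickness:
                    voxels[(x, y, z)] = 1  # payload, e.g. a material id
    return voxels

shell = sphere_shell(10)
dense_cells = (2 * 11 + 1) ** 3  # full bounding cube, for comparison
```

The shell is a fraction of the bounding cube, but it grows with surface area, so the savings never make the voxel representation cheaper than the surface representation it approximates.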
Sixth, one does not need to hold all of LA in "local" memory (RAM in today's paradigm) at once.
It's got to go somewhere. Whether it's RAM or local harddrive storage, it has to go somewhere. Storing all of LA at 600 dpi would require truly ungodly storage.
Look closely at the pog. Does it look like a perfect cylinder?
The basic shape is. Anything more can be done given what I say at the end.
And bump mapping at this level of detail looks fake.
Displacement mapping doesn't look fake, because it isn't. I'll get to this a bit later.
You argue mathematical surfaces are sufficient for gfx. Yet mathematical representation of audio (e.g., arcs and curves), as you mentioned, is too complex; sampling was better. Applying your line of reasoning to 3D objects, one would find that voxels would become less complex than mathematical surfaces at a certain level of detail.
Yes. I would also argue that JPEG compression, while a nice format for images, is terrible for sound. Also, MP3 compression, while nice for sound, is terrible for images. Both of these statements are true. They are true because sound != images.
But he said that that's not actually that far off from today's hardware capacity.
That's funny. My computer, 5 years later, still doesn't have 3GB of RAM in it. And I'm really not going to download a 3GB level.
He was off by a bit. It's going to be quite some time before we really start having 3GB game levels. Granted, the 32-to-64-bit transition is going to be what takes the most time, but even so, it's going to be a while.
And he said it was really nice having only one data structure for the entire world--no more points versus faces versus BSPs... just one octree node (and subnodes of the same type) representing everything.
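The "one data structure for everything" idea he's describing reads something like this sketch (class and field names are illustrative, not his):

```python
# Sketch of a single-datatype world representation: one octree node
# class, where every node is either a leaf with payload data or has
# eight children of the very same type.

class OctreeNode:
    __slots__ = ("children", "material")

    def __init__(self, material=None):
        self.children = None      # None for a leaf, else 8 subnodes
        self.material = material  # leaf payload (e.g. a material id)

    def subdivide(self):
        """Split a leaf into 8 child octants of the same node type."""
        self.children = [OctreeNode(self.material) for _ in range(8)]
        self.material = None

    def depth(self):
        if self.children is None:
            return 1
        return 1 + max(c.depth() for c in self.children)

root = OctreeNode(material=0)
root.subdivide()
root.children[3].subdivide()
```

The uniformity is real: every operation recurses over one node type. The question is whether that uniformity buys you anything at scale.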
And one of my professors at CMU wrote a functioning ray tracer on a business card. The simple solution isn't always the best. Indeed, in many cases, the simple, brute-force approach tends to get progressively worse, not better.
The analogy John likes is the comparison to the switch from vector graphics (which described a 2D screen in terms of points connected by lines) to raster graphics (which described a 2D screen in terms of a semi-continuous 2D mapping of values). Vector graphics didn't take much data to start with, but broke down once 2D images got really rich.
The problem there is filling in the lines. For a 2D creature, 2D vector graphics looks just fine because no one can see inside the vector image. Such a creature sees a 1D projection of the 2D space, so they can't see around the 2D object and notice the emptiness. For us, we can see the holes. Filling those holes with something was the equivalent of a 2D rect-blit, so we may as well do that.
For 3D beings, like us, as long as nobody knows your 3D image is just a shell, it is as solid as any voxel image. There's no need for a 3D rect-blit.
Likewise, current analytic polygon/curve techniques can describe simple models with not much data, but once models get really complex, you get into trouble and you wind up wanting something that describes the 3D space in other terms.
That "trouble" can already easily be solved. Without resorting to either ray tracing or voxels. Observe.
Note: I cannot stand this technique. I really, truly, hate the idea of rendering like this. But it is likely to be the way things go in the future for game scan conversion, so here it is.
How to get microdetailing into rendered images. It is really simple, ultimately. Displacement mapping.
In the scan conversion world, displacement mapping means to take a low-poly model, use vertex/fragment (and, eventually, primitive) shaders to tessellate it and a texture to apply offsetting to it, and then feed the resulting high-poly model back into the renderer for the final render of the mesh. Of course, the displacement mapping can be done in screen space, such that you end up with a model that is tessellated to the resolution of the screen. No more, no less. One vertex per pixel (or sample, if antialiasing is used).
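As a CPU-side sketch of that tessellate-then-offset pipeline (function names and the height function are illustrative; a real implementation lives in shaders):

```python
# Minimal sketch of displacement mapping on a quad patch: tessellate
# a coarse primitive into a grid of vertices, then push each vertex
# along the normal by a sampled height ("displacement texture").

def tessellate(corners, n):
    """Subdivide a quad patch (4 corner points) into an (n+1) x (n+1)
    grid of vertices by bilinear interpolation."""
    p00, p10, p01, p11 = corners
    verts = []
    for i in range(n + 1):
        for j in range(n + 1):
            u, v = i / n, j / n
            verts.append(tuple(
                (1 - u) * (1 - v) * a + u * (1 - v) * b
                + (1 - u) * v * c + u * v * d
                for a, b, c, d in zip(p00, p10, p01, p11)))
    return verts

def displace(verts, normal, height):
    """Offset each vertex along the patch normal by a height function
    sampled at the vertex position."""
    return [tuple(p[k] + height(p) * normal[k] for k in range(3))
            for p in verts]

# A flat unit patch in the z=0 plane, displaced upward by a bump.
corners = ((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0))
verts = tessellate(corners, 8)
bumpy = displace(verts, (0, 0, 1), lambda p: 0.1 * p[0] * p[1])
```

Crank `n` up to screen resolution and you get the one-vertex-per-pixel case described above.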
Now, take this a step further. There are two competing methods for hyper-detailed mesh modelling: subdivision surfaces (taking a mesh and using a subdivision algorithm to produce a smoother one) and spline patches. Either one will work with this method. The spline patches are tessellated directly into the high-density triangles. And the subdivision surfaces just require the use of the subdivision algorithm instead of normal mesh tessellation in the displacement mapping routine, though they have a problem with subdividing irregularly.
Now, here's the important part. The displacement texture, the texture that defines how to shift each new vertex, is procedural. It can be some Perlin noise or a nice fractal or whatever else looks convincing.
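A procedural displacement texture of that sort might look like this hash-based value-noise sketch; it's purely illustrative (not production-grade noise, and not Perlin's gradient formulation), but it shows the fractal idea of summing octaves at doubling frequency and halving amplitude:

```python
import math
import random

def fbm(x, y, octaves=4, seed=0):
    """Fractal ("fBm") displacement height: sum bilinearly
    interpolated hash noise over several octaves."""
    def hash_noise(ix, iy):
        # Deterministic pseudo-random value per lattice point.
        random.seed((ix * 73856093) ^ (iy * 19349663) ^ seed)
        return random.random()

    def smooth(t):
        return t * t * (3 - 2 * t)  # smoothstep fade curve

    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        fx, fy = x * freq, y * freq
        ix, iy = math.floor(fx), math.floor(fy)
        tx, ty = smooth(fx - ix), smooth(fy - iy)
        n00, n10 = hash_noise(ix, iy), hash_noise(ix + 1, iy)
        n01, n11 = hash_noise(ix, iy + 1), hash_noise(ix + 1, iy + 1)
        nx0 = n00 + tx * (n10 - n00)
        nx1 = n01 + tx * (n11 - n01)
        total += amp * (nx0 + ty * (nx1 - nx0))
        amp *= 0.5
        freq *= 2.0
    return total
```

Because it's a function rather than stored data, the "texture" costs essentially nothing to keep around, which is the whole point against the molar-bytes approach.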
And, unlike voxels or super-high-detailed polys, you can have multiple buffers of data that you're rendering to and rendering from, so you aren't taking up significant room.
Efficient, economical, and sufficient to get the point across. The detail is there, and we didn't need a black hole's worth of memory to do it. Put a little thought into a problem, and you can find an elegant alternative to the brute-force methods. And, usually, the elegant solution is better.
BTW, about physics. Microscale particle physics doesn't add up to what you get on the macroscale. By particle physics, I don't mean hardcore atomic particle physics. I mean the physics of modelling something at 600 dpi and expecting to get, for example, friction out of it. It doesn't work. It also takes a really long time to compute. Not a particularly good mechanism for physical modelling.