2.5D game questions
Korval
Member #1,538
September 2001

Quote:

Theoretically one could store 10^66 bits of information in a cubic centimeter.

What material are you using to do this? I don't know of a material that can fit a mole of a mole of bits into a cubic centimeter. An electron (one of the least massive things we can find and control) has a mass of 9.1*10^-31 kg. Putting 10^66 bits in there would result in an object with a mass of 9.1*10^35 kg. The Sun itself only has a mass of about 2*10^30 kg. So you're not only upsetting the balance of the solar system, you're not too far from super-massive black-hole territory (a hundred billion Suns, or 2*10^41 kg). Do you see how absurd this can get?
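
To put rough numbers on it (this assumes one electron per bit, which is of course a gross simplification in the storage medium's favor):

// Back-of-the-envelope check: the mass of 10^66 bits if every bit costs you
// one electron. Purely illustrative.
#include <cstdio>

int main() {
    const double electron_mass_kg = 9.1e-31;  // electron rest mass
    const double bits             = 1e66;     // the proposed storage figure
    const double sun_mass_kg      = 2e30;     // approximate solar mass

    double total_kg = electron_mass_kg * bits;             // ~9.1e35 kg
    printf("mass of the storage: %.2e kg (%.2e solar masses)\n",
           total_kg, total_kg / sun_mass_kg);               // ~4.6e5 Suns
    return 0;
}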

Meanwhile, the vector representation of the same thing takes dozens of orders of magnitude less memory.

Quote:

Your convictions have a tone that there is a right and wrong.

Well, when you start talking about theoretical memory units with a mass greater than that of a star, it becomes difficult to see it as anything approaching possible.

Quote:

If a circle/sphere cannot accurately model anything real, what good is it?

You can model a pog virtually precisely with a textured cylinder. And spline patches (NURBS can get spheres perfect) can model lots more than just pogs.

Quote:

Imagine objects with so much detail that any mathematical model sufficient in its approximation requires data storage that is less efficient than using volumetric elements.

And yet, there is not one object around me that requires voxels to model sufficiently. Indeed, I have yet to see such an object.

Quote:

Recording a 2D image is like recording a "3D" image. For a 2D image, the digital element is called a pixel. For a 3D image, the digital element is called a voxel.

Recording a 2D image is like photography. Recording a 3D image is like holography. Neither of which has anything to do with producing one from scratch, which is what rendering is.

Quote:

No, do you? Will there ever be such a point?

Yes there will be. That is without question. All you need to do is look at current multiprocessor systems to see that. A dual-processor machine, even running a multithreaded app, isn't as fast or capable as a single-processor machine of twice the clock speed. A 4-processor system isn't even as good as a 3x clock speed increase. The returns have already diminished. It is merely a question of how many processors we get before the returns aren't significant for doing a single operation. How much can you multithread an application?
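
The usual way to quantify this is Amdahl's law. A quick sketch (the 0.8 parallelizable fraction below is purely an assumed figure for illustration):

// Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the fraction
// of the work that can run in parallel. The p = 0.8 here is only an
// assumption; real applications vary wildly.
#include <cstdio>

double speedup(double p, int processors) {
    return 1.0 / ((1.0 - p) + p / processors);
}

int main() {
    const double p = 0.8;  // assumed parallelizable fraction
    for (int n : {1, 2, 4, 8, 16})
        printf("%2d processors -> %.2fx speedup\n", n, speedup(p, n));
    // With p = 0.8, 2 CPUs give ~1.67x, 4 give 2.5x, and no number of
    // processors will ever get you past 5x.
    return 0;
}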

Quote:

My point is that we can tell the difference.

But you're wrong. For all you know, you could be plugged into the Matrix and all that is around you is just a signal being beamed into your brain. Or, as Descartes pointed out, an Evil Demon could be controlling your mind and deceiving you about the entire world. If you model a world with surfaces (rather than volumes) such that no one can tell the difference between the surface version and the volume version, there is no difference.

Quote:

Remember my blather about cones and spheroids?

Which was in relation to using different primitives for voxels: spheres rather than cubes. That's very different from modelling with CSG primitives. You aren't trying to approximate a surface with a bunch of spheres at regular intervals. Instead, you're approximating a surface by taking mathematical figures and doing CSG operations on them to create a new object. Indeed, a PolygonBall could be a CSG primitive, as long as the mesh itself is closed.

CSG, btw, happens to be another advantage of ray tracing that you missed.
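
For the curious: the reason CSG falls out of ray tracing almost for free is that each primitive hands you an entry/exit interval along the ray, and the boolean operations become interval arithmetic. A minimal sketch of the difference operation, spheres only, keeping a single interval per object (a real implementation tracks lists of intervals; all names here are illustrative):

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Interval { double t_in, t_out; bool hit; };

// Entry/exit parameters of a ray (origin o, unit direction d) against a sphere.
Interval raySphere(Vec3 o, Vec3 d, Vec3 center, double radius) {
    Vec3 oc = sub(o, center);
    double b = dot(oc, d);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - c;
    if (disc < 0.0) return {0, 0, false};
    double s = std::sqrt(disc);
    return {-b - s, -b + s, true};
}

// A minus B: keep the part of A's interval that B's interval does not cover.
// (If B carves A into two pieces, only the front piece is kept, for brevity.)
Interval csgDifference(Interval a, Interval b) {
    if (!a.hit) return a;
    if (!b.hit || b.t_out <= a.t_in || b.t_in >= a.t_out) return a;  // no overlap
    if (b.t_in  > a.t_in)  return {a.t_in, b.t_in, true};            // front piece survives
    if (b.t_out < a.t_out) return {b.t_out, a.t_out, true};          // back piece survives
    return {0, 0, false};                                            // A carved away entirely
}

int main() {
    Vec3 origin = {0, 0, -5}, dir = {0, 0, 1};
    Interval a = raySphere(origin, dir, {0, 0, 0},    1.0);  // base sphere
    Interval b = raySphere(origin, dir, {0, 0, -0.5}, 0.7);  // carving sphere
    Interval r = csgDifference(a, b);
    if (r.hit) printf("difference surface hit at t = %.3f\n", r.t_in);
    return 0;
}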

Plucky
Member #1,346
May 2001

Quote:

"Theoretically one could store 10^66 bits of information in a cubic centimeter."
What material are you using to do this? I don't know of a material that can fit a mole of a mole of bits into a cubic centimeter.

For one, you're equating information with mass, which is incorrect. Second we don't need 10^66 bits of information to model a scene. Third, I recently heard of prototype S/DRAM chips that can store 16 MB in 32 sq. mm. Since silicon transistors are less than 1 micron deep, we've theoretically achieved >4e12 bits per cubic centimeter. I don't believe we're anywhere close to what we could achieve many years from now. Fourth, our memory chips/boards take up more space than one cubic centimeter. Fifth, it could be sufficient to not model empty space with empty voxels. Sixth, one does not need to hold all of LA in "local" memory (RAM in today's paradigm) at once.
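
For the record, that figure works out like this, taking the 16 MB / 32 sq. mm / 1 micron numbers at face value:

// Sanity check of the quoted density figure. The 1 micron active depth is
// the assumption mentioned above.
#include <cstdio>

int main() {
    const double bits     = 16.0 * 8.0 * 1024 * 1024;  // 16 MB expressed in bits
    const double area_cm2 = 32.0 * 0.01;               // 32 sq. mm in sq. cm
    const double depth_cm = 1.0e-4;                    // 1 micron in cm
    printf("%.2e bits per cubic centimeter\n",
           bits / (area_cm2 * depth_cm));              // ~4.2e12
    return 0;
}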

Quote:

"If a circle/sphere cannot accurately model anything real, what good is it?"
You can model a pog virtually precisely with a textured cylinder. And spline patches (NURBS can get spheres perfect) can model lots more than just pogs.

Look closely at the pog. Does it look like a perfect cylinder? No. Sure, at a distance one could model a pog as a textured surface, maybe with some bump mapping techniques. Look at the pog up close, though, and you need more than a simple cylinder. And bump mapping at this level of detail looks fake.

Quote:

And yet, there is not one object around me that requires voxels to model sufficiently. Indeed, I have yet to see such an object.

How about a rock or a stone? (Actually, any object will do.) Look closely and you see a lot of fractal geometry. Sure, you can model it all as surfaces, but a multitude of sufficiently tiny surfaces has the same difficulties you harp on for voxels. Now take a hammer and break the rock into pieces. With voxels, this is a lot more intuitive to model physically. Furthermore, graphics and physics modeling work together rather than separately.

Quote:

Recording a 2D image is like photography. Recording a 3D image is like holography. Neither of which has anything to do with producing one from scratch. Which is what rendering is.

Apparently you got confused. I was not originally talking about rendering with this example. Digital audio is composed of samples of amplitude along a time axis. In 2D gfx, a digital sample element is called a pixel. For a 3D "image", it's called a voxel. Voxel != rendering technique. You argue mathematical surfaces are sufficient for gfx. Yet mathematical representation of audio (e.g., arcs and curves), as you mentioned, is too complex; sampling was better. Applying your line of reasoning to 3D objects, one would find that voxels would become less complex than mathematical surfaces at a certain level of detail.

Quote:

But you're wrong. For all you know, you could be plugged into the Matrix and all that is around you is just a signal being beamed into your brain. Or, as Descartes pointed out, an Evil Demon could be controlling your mind and deceiving you about the entire world. If you model a world with surfaces (rather than volumes) such that no one can tell the difference between the surface version and the volume version, there is no difference.

Of course Descartes is right... the demon could exist, just as God could exist. Yes, purely from a simplistic optical point of view (we'll ignore things like translucence and "subsurface" scattering), the world is composed of surfaces. But nobody believes the world is composed solely of surfaces, because we observe a world where the visual is not separated from the physical. This is why I conclude that we would eventually model scene physics and scene gfx as one. We have a terrific model that describes how light reaches our eyes and how objects interact. Why not use the same principles to model a scene where we strive for realism? I argue further that for sufficient realism in the future, we must use this model.

Besides, I have yet to see computer graphics in a movie where I cannot distinguish them from real objects. "Real" in the sense of a photographed actor/object/costume.

Quote:

Which was in relation to using different primitives for voxels: spheres rather than cubes. That's very different from modelling with CSG primitives. You aren't trying to approximate a surface with a bunch of spheres at regular intervals. Instead, you're approximating a surface by taking mathematical figures and doing CSG operations on them to create a new object. Indeed, a PolygonBall could be a CSG primitive, as long as the mesh itself is closed.
CSG, btw, happens to be another advantage of ray tracing that you missed.

Let me quote myself:
"... but other primitives work if you're allowed to overlap. In this sense you add more information to a volume "element". I guess I'm talking about geometric primitives rather than voxels per se. "

I didn't miss the use of primitives for RT. I think you would eventually want "primitives" small enough to handle details that are small relative to the scene but are too big when the observer is up close.

Incidentally, while googling around in relation to a recent Allegro voxel thread, I came across this page describing some thoughts John Carmack had about 3D graphics 5 years ago: http://unrealities.com/web/johncchat.html
Don't know if he still has the same opinions, but some interesting tidbits:

  • Polygons and curved surfaces are both analytic representations that have serious problems of scalability. The "number of visible polygons on screen" problem comes up if you build your world out of polys. Curved surfaces seem to help with that, but not for long... soon you run into problems of "number of visible curves on screen" and you're back to square one.

  • John's hunch is that eventually 3D hardware will be based on some kind of multiresolution representation, where the world is kept in a data structure that fundamentally supports rendering it in near-constant time from any viewpoint.

  • Voxels are one example of such a structure. John mentioned that he actually wrote a voxel renderer and converted an entire Quake 2 level to use it. It wound up being about 3 gigabytes of data! But he said that that's not actually that far off from today's hardware capacity, if you use intelligent streaming techniques to move the relevant portion of the voxel set into and out of memory. And he said it was really nice having only one data structure for the entire world--no more points versus faces versus BSPs... just one octree node (and subnodes of the same type) representing everything.

  • The analogy John likes is the comparison to the switch from vector graphics (which described a 2D screen in terms of points connected by lines) to raster graphics (which described a 2D screen in terms of a semi-continuous 2D mapping of values). Vector graphics didn't take much data to start with, but broke down once 2D images got really rich. Likewise, current analytic polygon/curve techniques can describe simple models with not much data, but once models get really complex, you get into trouble and you wind up wanting something that describes the 3D space in other terms.
    Korval
    Member #1,538
    September 2001

    Quote:

    For one, you're equating information with mass, which is incorrect.

    And how do you plan to store the information? You have to have something there, and that something either has mass (most everything) or energy (the very few massless particles).

    Quote:

    Second we don't need 10^66 bits of information to model a scene.

    The fact that you're already calling for half a mole of bytes of information just to get a cubic mile of stuff at reasonably high precision is testament to the ridiculous problems of scale.

    Quote:

    Fourth, our memory chips/boards take up more space than one cubic centimeter.

    Which has precisely what to do with calling for half a mole of bytes of information?

    Quote:

    Fifth, it could be sufficient to not model empty space with empty voxels.

    Look at 2D sprites. Notice how the empty parts of the rectangle still have to have data stored for them. Sure, when you're talking about moles of data, you have some pretty good incentives to come up with data reduction methods. But you still have moles of data; the order of magnitude is still accurate.

    Also, let's say we store this data as some kind of sparse matrix. The elements you actually end up storing just happen to be the surface elements. Which roughly approximates a sequence of texture maps on polygons.
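
    A sketch of what that sparse storage tends to look like in practice (the cell layout, payload, and hash below are purely illustrative): once the solid interior is culled, what's left is basically a shell of surface samples.

    // Sparse voxel storage that only keeps occupied cells.
    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>

    struct VoxelKey {
        int32_t x, y, z;
        bool operator==(const VoxelKey& o) const { return x == o.x && y == o.y && z == o.z; }
    };

    struct VoxelKeyHash {
        size_t operator()(const VoxelKey& k) const {
            // Simple coordinate mixing; any decent spatial hash would do.
            return (static_cast<size_t>(k.x) * 73856093u) ^
                   (static_cast<size_t>(k.y) * 19349663u) ^
                   (static_cast<size_t>(k.z) * 83492791u);
        }
    };

    struct VoxelData { uint8_t r, g, b, material; };

    using SparseVolume = std::unordered_map<VoxelKey, VoxelData, VoxelKeyHash>;

    int main() {
        SparseVolume volume;
        volume[{0, 0, 0}] = {200, 180, 150, 1};  // only surface cells ever get stored
        volume[{1, 0, 0}] = {198, 179, 149, 1};
        printf("stored voxels: %zu\n", volume.size());
        return 0;
    }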

    Quote:

    Sixth, one does not need to hold all of LA in "local" memory (RAM in today's paradigm) at once.

    It's got to go somewhere. Whether it's RAM or local harddrive storage, it has to go somewhere. Storing all of LA at 600 dpi would require truly ungodly storage.

    Quote:

    Look closely at the pog. Does it look like a perfect cylinder?

    The basic shape is. Anything more can be done given what I say at the end.

    Quote:

    And bump mapping at this level of detail looks fake.

    Displacement mapping doesn't look fake, because it isn't. I'll get to this a bit later.

    Quote:

    You argue mathematical surfaces are sufficient for gfx. Yet mathematical representation of audio (eg arcs and curves), as you mentioned, is too complex; sampling was better. Applying your line of reasoning to 3D objects, one would find that voxels would become less complex than mathematical surfaces at a certain level of detail.

    Yes. I would also argue that JPEG compression, while a nice format for images, is terrible for sound. Also, MP3 compression, while nice for sound, is terrible for images. Both of these statements are true. They are true because sound != images.

    Quote:

    But he said that that's not actually that far off from today's hardware capacity

    That's funny. My computer, 5 years later, still doesn't have 3GB of RAM in it. And I'm really not going to download a 3GB level.

    He was off by a bit. It's going to be quite some time before we really start seeing 3GB game levels. Granted, the 32-to-64-bit transition is going to be what takes the most time, but even so, it's going to be a while.

    Quote:

    And he said it was really nice having only one data structure for the entire world--no more points versus faces versus BSPs... just one octree node (and subnodes of the same type) representing everything.

    And one of my professors at CMU wrote a functioning ray tracer on a business card. The simple solution isn't always the best. Indeed, in many cases, the simple, brute-force approach tends to get progressively worse, not better.

    Quote:

    The analogy John likes is the comparison to the switch from vector graphics (which described a 2D screen in terms of points connected by lines) to raster graphics (which described a 2D screen in terms of a semi-continuous 2D mapping of values). Vector graphics didn't take much data to start with, but broke down once 2D images got really rich.

    The problem there is filling in the lines. For a 2D creature, 2D vector graphics looks just fine because no one can see inside the vector image. Such a creature sees a 1D projection of the 2D space, so they can't see around the 2D object and notice the emptiness. For us, we can see the holes. Filling those holes with something was the equivalent of a 2D rect-blit, so we may as well do that.

    For 3D beings, like us, as long as nobody knows your 3D image is just a shell, it is as solid as any voxel image. There's no need for a 3D rect-blit.

    Quote:

    Likewise, current analytic polygon/curve techniques can describe simple models with not much data, but once models get really complex, you get into trouble and you wind up wanting something that describes the 3D space in other terms.

    That "trouble" can already easily be solved. Without resorting to either ray tracing or voxels. Observe.

    Note: I cannot stand this technique. I really, truly, hate the idea of rendering like this. But it is likely to be the way things go in the future for game scan conversion, so here it is.

    How to get microdetailing into rendered images. It is really simple, ultimately. Displacement mapping.

    In the scan conversion world, displacement mapping means to take a low poly model, use vertex/fragment (and, eventually primitive) shaders to tessellate it and a texture to apply offsetting to it, and then feed the high-poly resulting model back into the renderer for the final render of the mesh. Of course, the displacement mapping can be done in screen space, such that you end up with a model that is tessellated to the resolution of the screen. No more, no less. One vertex per pixel (or sample, if antialiasing is used).

    Now, take this a step further. There are two competing methods for hyper-detailed mesh modelling: subdivision surfaces (taking a mesh and using a subdivision algorithm to produce a smoother one) and spline patches. Either one will work with this method. The spline patches are tessellated directly into the high-density triangles. And the subdivision surfaces just require the use of the subdivision algorithm instead of normal mesh tessellation in the displacement mapping routine, though they have a problem with subdividing irregularly.

    Now, here's the important part. The displacement texture, the texture that defines how to shift the new vertex, is procedural. It can be some Perlin noise, a nice fractal, or whatever else looks convincing.
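
    A minimal CPU-side sketch of just the displacement step (real pipelines do this in vertex/primitive shaders and tessellate to roughly a vertex per pixel first; the noise function below is only a cheap stand-in for Perlin noise or a proper fractal):

    #include <cmath>
    #include <vector>

    struct Vertex { float px, py, pz; float nx, ny, nz; };

    // Deterministic hash-style noise in [-1, 1]. Not real Perlin noise, just
    // a placeholder so the sketch is self-contained.
    float proceduralHeight(float x, float y, float z) {
        float n = std::sin(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
        return (n - std::floor(n)) * 2.0f - 1.0f;
    }

    // Push every tessellated vertex along its normal by the procedural height.
    void displace(std::vector<Vertex>& mesh, float amplitude) {
        for (Vertex& v : mesh) {
            float h = proceduralHeight(v.px, v.py, v.pz) * amplitude;
            v.px += v.nx * h;
            v.py += v.ny * h;
            v.pz += v.nz * h;
        }
    }

    int main() {
        std::vector<Vertex> mesh = {
            {0, 0, 0,  0, 1, 0},
            {1, 0, 0,  0, 1, 0},
        };
        displace(mesh, 0.05f);  // small amplitude keeps the silhouette intact
        return 0;
    }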

    And, unlike voxels or super-high-detailed polys, you can have multiple buffers of data that you're rendering to and rendering from, so you aren't taking up significant room.

    Efficient, economical, and sufficient to get the point across. The detail is there, and we didn't need black holes' worth of memory to do it. Put a little thought into a problem, and you can find an elegant alternative to the brute-force methods. And, usually, the elegant solution is better.

    BTW, about physics. Microscale particle physics doesn't simply add up to what you get on the macroscale. By particle physics, I don't mean hardcore atomic particle physics. I mean the physics of modelling something at 600 dpi and expecting to get, for example, friction out of it. It doesn't work. It also takes a really long time to compute. Not a particularly good mechanism for physical modelling.

    Plucky
    Member #1,346
    May 2001

    Quote:

    And how do you plan to store the information? You have to have something there, and that something either has mass (most everything) or energy (the very few massless particles).

    Still stuck in the 20th century? It was recently demonstrated that one could use individual atoms to store a qubit (a bit of information based on quantum states) each. Thus a few grams of carbon are enough to provide 10^23 bits of information. A one-cubic-centimeter diamond crystal, using solely its carbon atoms, can store this magnitude of information. And the nice property of qubits is that one could process all possible states simultaneously. Imagine 2^(10^23) states being processed at once... Yet 10^66 is still unimaginably large relative to 10^23.

    Quote:

    That's funny. My computer, 5 years later, still doesn't have 3GB of RAM in it.

    That's funny, mine at work does.

    Quote:

    Displacement mapping.... Of course, the displacement mapping can be done in screen space, such that you end up with a model that is tessellated to the resolution of the screen. No more, no less.... And, unlike voxels or super-high-detailed polys, you can have multiple buffers of data that you're rendering to and rendering from, so you aren't taking up significant room.

    I'm glad you mentioned this: of course the same principles can be (and have been) applied to voxels. For example, take a CRT monitor. Model it as a set of geometric primitives at the core, with more detailed voxels near the surface. In a physics modeling phase, subdivide the primitives into smaller elements as necessary (e.g., including "elements" that make up the modeled cathode ray tube's components, if such realism were desired; we could even do this "procedurally"). If you split the CRT in two, subdivide again as necessary. So on and so forth.

    Subdivision is not limited to polygons; it works well for voxels. Thus voxel objects would not necessarily require many orders of magnitude more information than polygon objects. As for matching the screen pixel resolution, a simple octree data structure provides the same feature for voxels. (This is not to say that one must use an octree to get this feature.)
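
    To make that concrete, here is a sketch of the "refine until a node covers about a pixel" idea (the node layout and the projected-size test are illustrative, not any particular engine's format):

    #include <array>
    #include <cstdio>
    #include <memory>

    struct OctreeNode {
        float center[3];
        float half_size;                                    // half the edge length of this cell
        std::array<std::unique_ptr<OctreeNode>, 8> child;   // null children = leaf or empty space
    };

    // Rough projected size in pixels of a node at a given distance, for a
    // pinhole camera whose focal length is expressed in pixels.
    float projectedSizePx(const OctreeNode& n, float distance, float focal_px) {
        return (2.0f * n.half_size / distance) * focal_px;
    }

    // Descend only while a node would cover more than about one pixel.
    void renderNode(const OctreeNode& n, float distance, float focal_px, int depth) {
        if (projectedSizePx(n, distance, focal_px) <= 1.0f) {
            printf("depth %d: draw as a single pixel-sized splat\n", depth);
            return;
        }
        bool refined = false;
        for (const auto& c : n.child) {
            if (c) {
                renderNode(*c, distance, focal_px, depth + 1);  // per-child distance omitted for brevity
                refined = true;
            }
        }
        if (!refined)
            printf("depth %d: no finer data, draw the node as-is\n", depth);
    }

    int main() {
        OctreeNode root{{0, 0, 0}, 8.0f, {}};
        renderNode(root, 100.0f, 800.0f, 0);
        return 0;
    }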

    One key difference between our visions is that you propose calculating or otherwise creating detail (e.g., with fractals or procedurally) to avoid additional data storage overhead. Voxels are not exempt from using this feature or process, either. However, if you want unique details for added realism, with surfaces or with voxels, you need to store more data. A unique world (i.e., no calculated details) that is a kilometer square, filled with unique objects with 40x40 micron surface sections, would also require on the order of 10^20 bits of data.

    Voxels are not the fundamental reason for requiring such large storage. Uniqueness, which provides added realism up to a certain point, is the primary reason. Take an asteroid field, for example: each asteroid is a simple spheroid, so I could procedurally create a bunch of voxels, and the size of the voxels can be scaled as necessary to match the resolution of a screen pixel. But if the asteroids (or other objects) do not lend themselves to procedural creation, or some basic uniqueness is required, then the required data storage is greater. Perhaps at a certain level (still larger than, say, a 600 dpi screen pixel), any smaller detail can be procedurally created if uniqueness is unnecessary at that scale.

    Quote:

    I mean the physics of modelling something at 600dpi and expecting to get, for example, friction out of it. It doesn't work.

    If you were trying to give a good example, you chose rather poorly. A quick google on friction and finite element analysis should allay your fears. Micron-level physics is fairly well understood. As you (and I) explained, subdivision offers a method to simplify certain physical models. Detail can be added as necessary. At the nano level, however, you're starting to approach the fuzzy realm between classical and quantum mechanics.

    Quote:

    For us, we can see the holes. Filling those holes with something was the equivalent of a 2D rect-blit, so we may as well do that.... For 3D beings, like us, as long as nobody knows your 3D image is just a shell, it is as solid as any voxel image.

    Again I'll say that for a certain level of realism, I believe that modeling the world as surfaces is insufficient, because visually the physical interaction between objects would be insufficiently accurate. We don't observe the world as shells.
