2.5D game questions
Kanzure

I'm interested in creating a 2.5D game. 2.5D, as in "voxels". Not 'pixels' or 3D objects - but in between (or something). I would like to know the following:

(1) Is Allegro a good library for creating 2.5D games?

(2) Are there any open source projects or tutorials that describe how to go about making 2.5D graphics?

(3) Know the equation for displaying a voxel?

Any help would be hot. :)

Kitty Cat

1) Yes, as long as you're willing to write your own renderer. There were tons of threads about raycasting (or raytracing, I can never get them straight) a while ago, which are in essence 2.5D engines.

2) Try and search for said raycasting/tracing threads. They practically had full-fledged engines w/ code in them.

3) For each voxel, draw voxel? Dunno.. check out the Build source.

Oscar Giner

A voxel is just a square. The size of the square is inversely proportional to the distance. You can use Allegro's 3D math routines to implement translation, rotation and the 2D projection.

Kanzure

I'll look at raycasting/raytracing. I was looking for an actual formula for displaying a single 'voxel'. For most programming languages, blit would blit a group of pixels..but what about blitting a voxel?

Kitty Cat

Just blit each voxel as a filled rectangle, and come up with a simple algorithm to cull unseeable voxels.

BTW, "voxel" is a single 3D pixel. A voxel model is made up of voxels.

Kanzure
Quote:

Just blit each voxel as a filled rectangle

Such as, get the voxel 'texture' (much like how 2D RPGs have tiles), and then rotate that image, to create the '3D' look??

Obviously, I'm in need of much help.

Kitty Cat

A voxel doesn't have a texture, much like a pixel doesn't have a texture. First, you loop through each voxel in a model, determining which ones can be seen. Then for each one that can be seen, you use rectfill (or quad3d w/ POLYTYPE_FLAT if you want tilt) to color the area of the screen that voxel takes up (the closer the voxel, the bigger it is, etc).
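Something like this, very roughly (a sketch only; it assumes the voxel has already been transformed into camera space, and all the names are made up):

    #include <allegro.h>

    /* hypothetical per-voxel data, already transformed into camera space */
    typedef struct { float x, y, z; int color; } Voxel;

    /* draw one visible voxel as a filled rectangle, scaled by 1/z */
    void draw_voxel(BITMAP *bmp, const Voxel *v, float focal, float voxel_size)
    {
        float inv_z;
        int sx, sy, half;
        if (v->z <= 0.0f) return;                        /* behind the camera */
        inv_z = 1.0f / v->z;
        sx = SCREEN_W/2 + (int)(v->x * focal * inv_z);   /* project the center */
        sy = SCREEN_H/2 - (int)(v->y * focal * inv_z);
        half = (int)(voxel_size * focal * inv_z * 0.5f); /* closer = bigger */
        if (half < 1) half = 1;
        rectfill(bmp, sx - half, sy - half, sx + half, sy + half, v->color);
    }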

Maverick
Kanzure said:

I'm interested in creating a 2.5D game. 2.5D, as in "voxels". Not 'pixels', or 3D objects

Traditionally (to my knowledge, anyways), "2.5D" is generally used to describe the rendering in Doom (and similar) engines, where it appears to be 3D, but every wall is restricted to being perpendicular to the ground plane.

Kanzure said:

(3) Know the equation for displaying a voxel?

It's the same as displaying any other 3d object: rotate, translate, and scale it from world space to camera space, project it into 2d, and draw it. As Kitty Cat said, drawing is a simple matter of blitting a quad to the screen of the appropriate size (generally, size = 1/z * original_size)

For rendering an entire scene of voxels, you'll want to sort them for back-to-front draw ordering, use z-buffering, or use deferred rendering techniques (like the horizontal span method Allegro's 3d routines used to use, but apparently don't anymore). Essentially, the same things you worry about when rendering polygons; just every polygon is square, and always faces the camera.
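For the back-to-front route, a rough sketch (assuming a flat array of camera-space voxels, and some Voxel struct and draw_voxel() like the sketch above; all names made up):

    #include <stdlib.h>

    /* sort far-to-near (descending z) so nearer voxels overdraw farther ones */
    static int cmp_voxel_z(const void *a, const void *b)
    {
        float za = ((const Voxel *)a)->z;
        float zb = ((const Voxel *)b)->z;
        return (za < zb) - (za > zb);
    }

    void draw_voxels_painter(BITMAP *bmp, Voxel *voxels, int n,
                             float focal, float voxel_size)
    {
        int i;
        qsort(voxels, n, sizeof(Voxel), cmp_voxel_z);
        for (i = 0; i < n; i++)
            draw_voxel(bmp, &voxels[i], focal, voxel_size);
    }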

-Maverick

Tobias Dammers

Generally, voxels are just the same as pixels, except they're 3D, not 2D (and they're not 2.5D, either). Which means that where a pixel is a little square on a plane (a.k.a. 2-space), a voxel is a little cube in a 3-space. Pixels are parts of bitmaps (which are usually 2-dimensional arrays of pixels), voxels are parts of voxel maps (which are then 3-dimensional arrays of voxels). The usual approaches to this are:
a) For each voxel, draw a complete cube (correctly projected to a 2D screen)
b) Use a ray tracing algorithm to find the correct voxel for each screen pixel.
Obviously, a) is better when the voxels are much larger (something around 10 times the size) than screen pixels, which is usually not the case, so b) is the way to go.
The problem with this is that to get an acceptable resolution, the voxel map must be quite large and will eat lots of memory (just give it a shot: figure out the size of a 256x256x256 voxel map with 8 bit color depth - have fun!).
This is where 2.5D kicks in: A 2.5D terrain has all voxels on each position (x|y) filled with the same value up to a certain z-height. All voxels above that are "clear". With this knowledge, we can reduce the storage to a 2D array of heights (the height map) plus a 2D array of colors (the color map or texture), which is absolutely acceptable. Please note that this representation is not a voxel one anymore, but just a heightmap. It is not capable of handling all the scenarios of a voxel map.
Now that we have a heightmap, a variant of rendering approach a) becomes more useful - we can now divide the surface into triangles instead of rendering each voxel independently, which is exactly what most 3D terrain engines do. This way, it is relatively easy to benefit from 3D hardware acceleration and such, as well as "simplify" the terrain for a performance boost. Approach b) is still good, though, especially if there is no 3D hardware, or if you can profit from some more assumptions (for example you could use a fixed camera height and pitch, allowing for a lot of pre-calculation to save precious CPU time).
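To put rough numbers on the storage comparison above (a sketch with plain static arrays, just for illustration):

    /* full voxel map: 256*256*256 cells at 8-bit color = 16 MB */
    unsigned char voxelmap[256][256][256];

    /* 2.5D heightmap version: 256*256 heights + 256*256 colors = 128 KB */
    unsigned char heightmap[256][256];   /* terrain height at (x,y) */
    unsigned char colormap[256][256];    /* color of the top surface at (x,y) */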

Oscar Giner
Quote:

Which means that where a pixel is a little square on a plane (a.k.a. 2-space), a voxel is a little cube in a 3-space.

No. A voxel is represented by a filled square, not by a cube. Haven't you played any game that uses voxels? It's clear they use filled squares.

I wouldn't use Tobias' method of saving a voxel space in memory. Just have a Voxel type, which has x, y, z and color properties. Then have a list of these objects. Draw each voxel as Maverick said.
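i.e. something like this (just a sketch):

    typedef struct Voxel {
        float x, y, z;        /* position in world space */
        int color;
        struct Voxel *next;   /* or drop this and keep them in a plain array */
    } Voxel;

(A plain array is usually friendlier to the cache than a linked list, but either works for the idea.)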

gnolam
Quote:

No. A voxel is represented by a filled square, not by a cube.

No, a voxel is indeed a cube in 3d space :) (the word even stands for "volume pixel").

Quote:

Haven't you played any game that uses voxels? It's clear they use filled squares.

Have you ever seen an MRI scan? ;)

Oscar Giner

But we're talking about implementing it for a game. I'm talking about what people understand by voxel in game development. If you're going to do it by drawing cubes, then better give up and use polygons: it will look better and run much faster.

Tobias Dammers
Quote:

No. A voxel is represented by a filled square, not by a cube. Haven't you played any game that uses voxels? It's clear they use filled squares.

I was pointing out the difference between voxels and pixels here, not the actual techniques used to render voxels. If you read my post a bit more closely, you will see that I don't really recommend rendering voxels as cubes, but rather to use ray-tracing or a terrain tessellation algo, or to render them as vertical lines or rectangles.
Voxels represent cube-shaped regions in a 3-space, pixels represent square-shaped regions in a 2-space (assuming equally sized dimensions). That does not mean you must render a voxel as a cube. Neither does it mean that you have to render a pixel (or texel) as a square (just consider trilinear filtered textures - or a traditional monitor, which makes three more or less elliptic blobs).
It is important to understand what voxels represent in order to employ techniques to render them.

Korval
Quote:

Traditionally (to my knowledge, anyways), "2.5D" is generally used to describe the rendering in Doom (and similar) engines, where it appears to be 3D, but every wall is restricted to being perpendicular to the ground plane.

Actually, the traditional concept of 2.5D is a game that is 2D in nature but 3D in graphics. Viewtiful Joe would fall into this category.

Kanzure
Quote:

will look better

Quote:

Actually, the traditional concept of 2.5D is a game that is 2D in nature but 3D in graphics.

Exactly.

So... a heightmap? But somebody said here that it won't be true 'voxels'. (?) What about displaying voxels via a linked list, for speed issues - that way it won't be just a 256x256x256 map at 8-bit color. It would be a 'theoretical' 256^2 map, or something... right? Would that be faster?

Am I confused, mucking up this entire thing? Does anybody have any suggested sites I can visit? Are voxlap/terraVox good examples?

gnolam

"True voxels" will require either projection/drawing of all visible voxels or raytracing of some kind.

I don't know about voxlap, but terraVox is a simple 4DOF (no looking up or down) wave surfing heightmapper, as described here (and that's a great tutorial BTW :)).

Korval
Quote:

Exactly.

I'm not quite sure what you're trying to say.

A Voxel is merely a method of storing (and, with a voxel renderer, rendering) 3D data. This doesn't say anything about having 2D gameplay. Hence, this has nothing to do with making 2.5D games.

That aside, voxels are generally considered a pretty bad idea, in terms of a method for representing 3D data. Especially if you want to render it.

Voxels are hard to model with, since no modelling package actually supports them. They're notoriously difficult to render with, and the results usually don't look any better than regular polygonal surfaces, and that's if they don't look far worse. And, certainly, no 3D accelerator supports them. As such, there is really little point in working with them.

Kitty Cat

Well, they are useful if you need to represent 3d objects in a software environment. As for their look, they're definitely worse than polygonal models, but I think they're kind of a mix between polygonal models and 2d sprites. They get pixelated like sprites, but are fully 3D. And at best, you can just use a series of rectfills to render them, instead of textured 3d polygons (Ken Silverman's voxel editor makes them look like Legos, though.. :o though the editor draws them as full-on lit cubes, not flat rectangles).

That said, however.. if you're going to be using 3d hardware acceleration, or if you can get your hands on a good (fast) software 3d renderer, you might do better to use polygonal models. Through the software route, it may bog the cpu a bit more, but as I said, polygonal models can look a lot better than voxels, and voxels end up taking more disk space.

The way I see it: voxels are a good replacement for 2d sprites in 2.5d worlds (Doom, Build, etc). Polygonal models are better in 3d worlds (Quake, Unreal, etc). But they are by no means restricted to those types. It depends on what's needed and how they're applied (polygonal models for objects, voxels for translucent fog, for example).

Korval
Quote:

Well, they are useful if you need to represent 3d objects in a software environment.

In what way? A decently well-optimized software engine on modern machines, or even relatively recent ones, can render quite a few polygons. Think back to the 3D games right before the Voodoo 1 came out. You had games like Earthsiege, MechWarrior 2, and X-Wing vs Tie Fighter, all of which were quite capable of pumping out a decent number of textured triangles. And those had to run on Pentium 75 systems with really crappy RAM access, not the multi-GHz monsters of today.

I wonder what a modern optimized 3D rasterizer would look like.

Kitty Cat
Quote:

A decently well-optimized software engine on modern machines, or even relatively recent ones, can render quite a few polygons.

Of course. But as I said, it depends on application. And also, rectfill'd voxels won't bog the CPU down as the resolution increases (or the models get closer) as much as polygons will, especially since even Allegro can tap into some hw accel for the rectfills, if writing direct to vram.

But I'll say again. Yes, models look better, and with a good software renderer can keep pace with voxels. But, I also don't think you should have comparatively good quality models in a rather barren/Wolf3d-like world. It would stand out way too much, imo. Even Doom would be pushing it if you had good models, although that could probably work.

Chris Katko

Well, if you want to get technical: voxels are better than polygons, right? You just need a high resolution model. It's the 3D equivalent of a bitmap. So resolution applies. Just as a low resolution bitmap won't look nice, neither will a low resolution voxel map/model.

AFAIK, medical scanners normally use voxels to build 3-D images. It wouldn't really make sense to represent a point as a polygon.

Kitty Cat

Thing is, though, voxels would take up a lot of memory/disk space (especially if animated). I'm not sure who "owns" the voxel format, but Ken's docs on it have it severely size-restrained (I think it was about 256x256x64 max.. but I don't remember), and it can only use 8-bit color.

Obviously, you're not required to abide by these restrictions. Just make your own editor and tweak the format a little, and poof. But, considering the OP, and that he probably wants to keep it simple (if I'm wrong, I sincerely apologize), that would require yet more work.

Korval
Quote:

And also, rectfill'd voxels won't bog the CPU down as the resolution increases (or the models get closer) as much as polygons will, especially since even Allegro can tap into some hw accel for the rectfills, if writing direct to vram.

But you can't rasterize a voxel image using rectfills. Remember, each voxel element is effectively a 3D cube. You cannot guarantee the angle of the camera (and if you can, why not just blit a 2D sprite and be done with it), so you can easily be looking at the cube at various angles. Most of these angles don't produce a 2D projection that is a rectangle. They produce something more akin to a polygon.

Quote:

But, I also don't think you should have comparatively good quality models in a rather barren/Wolf3d-like world. It would stand out way too much, imo. Even Doom would be pushing it if you had good models, although that could probably work.

Where does this Doom/Wolfenstein 3D stuff keep coming from? Neither of these games used voxels. For non-wall objects, they used what would be called "impostors". These are sprites that are created from various angles so that, when the camera is facing a particular direction, an appropriate sprite can be blitted that looks somewhat close to what a real 3D model would look like.

And if you're bothering to have voxels at all, then leverage that power and build the world out of them. No need to use that "ray-casting" crap.

Quote:

Well, if you want to get technical. Voxels are better than polygons, right?

No. Voxels are 3D raster images of a model; a 3D sprite, for all intents and purposes. They are digital.

Triangles are 3D vector representations of a model. They are analog, to within floating-point precision.

It is best to postpone analog-to-digital conversion for as long as possible. This allows for filtering and antialiasing techniques to be done specifically for the display in question.

For 2D, raster images are OK usually, because you mostly only look at them in one orientation. The "analog-to-digital" conversion doesn't change if you do it before the final viewing or after.

For 3D, the orientation is never guaranteed. It can be stretched and rotated in 3 dimensions. As such, a raster representation, regardless of resolution, can never be as correct as a vector representation. Coupled with the fact that vector graphics take less space and time to render, what more do you need?

Kitty Cat
Quote:

But you can't rasterize a voxel image using rectfills.

Why not? Just calculate the position of each voxel, its distance to the camera, and use rectfill to stretch it. Granted it's not the best way to rasterize it, but in software, such shortcuts are usually alright. Especially since 2.5D engines don't typically have any z-tilt, so all you'd really lose is the edge shading. And even if there is z-tilt, you could go with POLYTYPE_FLAT quad3d's, which would still be faster than POLYTYPE_?TEX*, and still scale better.
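For the quad3d route, roughly this (a sketch using Allegro's float variant quad3d_f; the projected corner positions are assumed to be computed already, and all names are made up):

    #include <allegro.h>

    /* draw one voxel as a flat-shaded quad from four projected screen-space
       corners (possibly tilted); z/u/v are unused by POLYTYPE_FLAT, and the
       texture argument should be ignored for flat shading, so NULL is passed */
    void draw_voxel_quad(BITMAP *bmp, const float px[4], const float py[4], int color)
    {
        V3D_f v[4];
        int i;
        for (i = 0; i < 4; i++) {
            v[i].x = px[i];
            v[i].y = py[i];
            v[i].z = 0;
            v[i].u = v[i].v = 0;
            v[i].c = color;
        }
        quad3d_f(bmp, POLYTYPE_FLAT, NULL, &v[0], &v[1], &v[2], &v[3]);
    }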

Quote:

Remember, each voxel element is effectively a 3D cube.

Right. But that doesn't mean it has to be rendered as such.

Quote:

Where does this Doom/Wolfenstein 3D stuff keep coming from? Neither of these games used voxels.

No, but the OP mentioned a 2.5D game, which is what Doom/Wolf-3D are, and voxels would be pretty well suited for that type of visual environment, IMO.

Niunio
Quote:

Where does this Doom/Wolfenstein 3D stuff keep coming from? Neither of these games used voxels.

But Blood and Shadow Warrior do, and both use Ken Silverman's Build Engine, also used in Duke Nukem 3D, Redneck Rampage and more.

Quote:

A voxel is represented by a filled square, not by a cube. Haven't you played any game that uses voxels? It's clear they use filled squares.

Ken is working on a 'pure' voxel engine, and from the screenshots I'm not sure whether he's using polygons or cubes...

BTW, I'm now writing a tutorial about how to write a simple landscape voxeled renderer that will be published at Pixelate. I'll try to finish it before the 14th issue.

Trezker
Quote:

Ken is working on a 'pure' voxel engine, and from the screenshots I'm not sure whether he's using polygons or cubes...

That's really cool, I want liero 3D!

Mars

Like those poxels in Worms 3d?

Krzysztof Kluczek
Ken's site said:

In June 2001, Tom Dobrowolski joined my "team" as a programmer. He's currently a student in Poland and he's been writing the game code for my voxlap engine demo in his spare time.

This world is really small. TD is my friend. ;D

Quote:

That's really cool, I want liero 3D!

Really good idea! I'll have to tell him. :)

Korval
Quote:

Why not? Just calculate the position of each voxel, its distance to the camera, and use rectfill to stretch it. Granted it's not the best way to rasterize it

It's not a good way to rasterize them. You may as well use sprite impostors; those will actually look good at certain resolutions.

And how precisely do you convert a cube into a rect so that you don't make cracks in your model?

Quote:

But that doesn't mean it has to be rendered as such.

Yes it does. If you don't, then the model will look worse than it would have normally.

Quote:

No, but the OP mentioned a 2.5D game, which is what Doom/Wolf-3D are

Doom is clearly a 3D game, gameplay-wise. The fact that it has motion along 3 dimensions is enough. While, yes, it has 3D collision issues (not being able to go over or under monsters), it still has the concept of height. As such, it is a 3D game.

Wolf-3D is a bit weirder. Clearly, the intent was to be 3D; hence the name. But they didn't have the technology at the time to make real 3D gameplay. But the intent was still there. So, technically it is 2.5D, but that was never how the game was supposed to turn out.

Quote:

But Blood and Shadow Warrior do

Then bring those games up. And, btw, they aren't 2.5D. The Build Engine is a fully 3D system, like Doom.

Plucky

Seems like a lot of discussion in want of definitions. Here's what I gathered:
There's a difference between graphical representation of the game world: 2D vs 3D.

There's a difference between how the game world space is stored as data: "2D" vs "3D". Perhaps this is where the confusion of the term 2.5D comes from. A world could be stored substantially only in 2D, but the graphics are 3D.

And then there's a difference in degrees of freedom: 4 DOF, 5 DOF, 6 DOF.

As someone said earlier, a voxel is simply a digitization of a 3D solid. For voxel representation of terrain, only the "topside" is stored, and there is no "bottom" because the earth is orders of magnitude larger than the game world. The "wavesurfing" technique takes advantage of the lack of "bottom" to reduce computation.
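The wavesurfing idea, as a rough sketch (it assumes a per-column ray direction table, camera position/height, a horizon row and the height/color maps are all set up elsewhere; every name here is made up):

    #include <allegro.h>

    #define MAP_MASK 255         /* 256x256 map, wraps around */
    #define MAX_DIST 300.0f

    /* assumed to be filled in elsewhere */
    extern unsigned char heightmap[256][256];
    extern int colormap[256][256];
    extern float cam_x, cam_y, cam_height, horizon, scale;
    extern float ray_dx[], ray_dy[];   /* one ray direction per screen column */

    void render_terrain(BITMAP *bmp)
    {
        int x;
        for (x = 0; x < SCREEN_W; x++) {
            float top = SCREEN_H;          /* lowest screen row drawn so far */
            float d;
            for (d = 1.0f; d < MAX_DIST; d += 1.0f) {
                /* step along this column's ray across the heightmap */
                int mx = (int)(cam_x + ray_dx[x] * d) & MAP_MASK;
                int my = (int)(cam_y + ray_dy[x] * d) & MAP_MASK;
                /* project the terrain height at that spot to a screen row */
                float sy = horizon + (cam_height - heightmap[my][mx]) * scale / d;
                if (sy < top) {            /* only draw what rises above what's drawn */
                    vline(bmp, x, (int)sy, (int)top, colormap[my][mx]);
                    top = sy;              /* "surf" up the wave; never look lower again */
                }
            }
        }
    }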

A voxel engine is not too different from a ray-casting engine. Both trace rays. The former looks for a digital "cube" and the latter traditionally looks for a "surface". And there are video cards out there designed solely for ray-casting. My opinion is that eventually we will get to real-time ray-casting, and polygon representation of 3D bodies would give way to voxel representations.

For games, polygons are better until technology catches back up to voxels. And voxels are catching up, albeit slowly. I've been working here and there on an Allegro terrain voxel engine. In a 440x330 window on a 1.7 GHz machine, I get ~30 fps with the following features:
32 bit color, blended (seamless) terrain textures, mipmapped textures, fog, dithered scrolling sky map, +/-45 deg pitch capability, precalculated terrain lighting, terrain mouse picking, simple 2d sprite world locating.

Korval
Quote:

And there are video cards out there designed solely for ray-casting.

Let us now make a distinction between ray casting and ray tracing.

Let us say that ray casting is the technique used by Doom and so forth to render their worlds. Let us say that ray tracing is the rendering technique that traces rays from a camera in 3D space into a scene to "sample" the scene.

Given this, ray casting hardware does not exist, for the obvious reason that it doesn't need to. Ray casting was fast enough on 486 machines, let alone Pentium quality.

Now, ray tracing hardware does exist, but not in any real, commercially viable, form. And most of them don't deal in voxels.

Quote:

My opinion is that eventually we will get to real-time ray-casting, and polygon representation of 3D bodies would give way to voxel representations.

Then your opinion is wrong. Once we get real-time ray tracing, polygons will be abandoned for spline surfaces or geometric CSG primitives. You know, things that can actually be round, not just successively approximated by squares. Heightmaps will be their own heightmap primitive, ray traced directly. Also, these things take up far less room than voxels.

The best (read: only) real use for voxels in ray tracing is for fog-banks. Something where each voxel element describes the density and color of fog within it.

Quote:

and polygon representation of 3D bodies would give way to voxel representations.

That's never going to happen. That's like saying that Photoshop is clearly a better product than Illustrator (Photoshop deals in raster images, Illustrator in vector ones). Now, Photoshop is generally the more useful one because people use raster images all the time. But the fact is, if you want to produce great images, regardless of resolution, you use Illustrator, not Photoshop.

Voxels are a bad idea for solid geometry. They aren't used in high-end CG production, nor are they going to be. Lighting with voxels is quite poor as well, since cubes don't have curve-approximating normals or faces.

Quote:

+/-45 deg pitch capability

The fact that the rendering technique restricts the pitch alone shows that it is nothing more than a hack.

Quote:

precalculated terrain lighting

We've had precalculated lighting on terrain that ran at 30 fps on chips with 2 orders of magnitude slower clock speed. Doesn't impress me.

Chris Katko
Quote:

The fact that the rendering technique restricts the pitch alone shows that it is nothing more than a hack.

Wrong. If you restrict it to +/-45 deg pitch, you can add some extra optimizations to it. Using your same logic, our textures are hacks because they need to be powers of two.

Outcast used voxels for the terrain, and it looked pretty decent (especially considering the alternatives at the time).

Plucky

Sorry I meant ray-tracing. I'm aware of the technical difference.

Well, with voxel representation I was thinking of a cross between a "tiny cube" and a complete mathematical surface. e.g. A point in space that has the following kinds of information: color, ambient light, normal vector, etc. Granted, one can argue having a normal vector is like a very small triangle plane, and so it is. But when the triangle is sufficiently small it will look like a small voxel. For a game, if the number of polygons rendered = the number of pixels on the screen, it starts to look like "backward" ray-tracing. Add millions of polygons to cull for such high resolution scenes, and ray tracing could become more efficient.
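i.e. something along the lines of (purely illustrative; the names are mine):

    /* a "surface element" style voxel: a point sample carrying shading
       data, not just a color */
    typedef struct {
        float x, y, z;      /* position */
        float nx, ny, nz;   /* unit surface normal */
        int   color;        /* base color */
        float ambient;      /* precomputed ambient/occlusion term */
    } Surfel;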

Quote:

The fact that the rendering technique restricts the pitch alone shows that it is nothing more than a hack.


I can implement full pitch if I spend more time on it. Terrain voxel engines assume up and down orientations for speed. When the pitch becomes > 45 deg, for a subset of the rays, you must switch the up/down assumption. I just haven't coded it. Perhaps you could try coding a voxel engine to see what I mean.

Quote:

We've had precalculated lighting on terrain that ran at 30 on chips with 2 orders of magnitude slower clock speed. Doesn't impress me.

The point was to show that one could add quite a number of features to a voxel engine at decent speed with current processors. Anyways, for a large map, I found it faster to precalculate each section of the map all in one go. It also looked better because it was easy to interpolate the light map rather than having to perform multiple real-time lighting calculations for each pixel.

Korval
Quote:

Add millions of polygons to cull for such high resolution scenes, and ray tracing could become more efficient.

In what way? Scan converters have certain intrinsic advantages. Chief among them is that they operate purely in immediate mode. That is, they render only the triangles they are asked to, in the order they are asked to render them. They don't have to, for each frame, build up a render list complete with space partitioning for optimized rendering. This render list takes up a lot of room.

Next, ray tracing samples this list at random locations; scan conversion is much more cache-friendly. Scan conversion is a linear process. Ray tracing is a recursive process. You can hide latency in linear processes by making deep pipelines, like modern GPU's. The only way to really accelerate ray tracing is to throw more processors at it.

The other thing is that ray tracing != voxels. Should ray tracing become faster than scan conversion, one still needs to justify voxels as opposed to spline patches (which never pixelate geometrically, since they are sampled rather than raster) or other model representations that ray tracing allows.

Plucky

Well you've separated the issues into RT vs rasterization and voxels vs spline patches.

For RT I see that the advantages include: logarithmic rendering cost vs number of triangles with the use of hierarchical structures, straightforward visibility and occlusion culling, relative ease of parallelism, and the use of arbitrary rays for true shadows, reflections and global illumination.

I agree that there's a memory coherency problem. However some of this can be currently optimized by tracing groups of rays (e.g. using SSE for 4 rays), etc. Perhaps memory on video cards can be hardwired to help this problem in the future.

A set of questions can be: (1) Does ray tracing offer superior graphics quality? (2) If so, can ray tracing become fast enough for acceptable gameplay?

A better question is: will rasterization ever match ray tracing's potential graphics quality? Perhaps a hybrid offers the best solution, though ray tracing seem to mimic physics better.

I just found an interesting short opinion paper on this subject. It describes a few opinions on the subject by industry experts: http://online.cs.nps.navy.mil/DistanceEducation/online.siggraph.org/2002/Panels/01_WhenWillRayTracingReplaceRasterization/cdrom.pdf

I never said (nor implied) ray tracing == voxels. Spline patches aren't the ultimate answer either. (e.g. getting them to fit seamlessly). Perhaps the industry would evolve to a hybrid solution: patches or polygons to describe larger "mathematically friendly" surfaces and voxels to model more intricate details. And perhaps voxels would evolve from just cubes into other primitive volumes like cones, spheroids, etc.

Korval
Quote:

logarithmic rendering cost vs number of triangles with the use of hierarchical structures

Which doesn't begin to offset the cost of doing ray tracing.

Quote:

straightforward visibility and occlusion culling

And this matters because... why?

Quote:

relative ease of parallelism

So?

Quote:

and the use of arbitrary rays for true shadows, reflections and global illumination

Which only serves to slow the scene down by requiring the casting of more rays.

Scan converters can fake reflections with environment maps, shadows with shadow maps or volumes, and global illumination with... wait. Ray tracing only gives you global specular illumination. It does nothing for diffuse global illumination. Scan conversion can get specular global illumination by using an environment map with a proper shader. You'd need radiosity, or data from a radiosity render of the scene, to get global diffuse illumination. There are various tricks you can play to approximate it (a diffuse environment map), but you can play those tricks in either ray tracing or scan conversion.

You can, technically, get global diffuse from ray tracing. But you'd have to cast a ray in every direction from an intersected surface. Hardly economical.

So, what about the deficiencies of ray tracing? Like:

1: Antialiasing. There are any number of techniques available to a scan converter for various kinds of antialiasing. From true anisotropic filtering to edge antialiasing, there are many ways for a scan converter to antialias. However, because ray tracing samples rather than scan converts the analog representation, it just doesn't have the information necessary to perform antialiasing. The only recourse, therefore, is to cast more rays. And un-antialiased ray traced renders look pretty bad.

2: Basic hardware optimization. Scan conversion is an iterative, linear process. There can be early outs, but the only question is when does something exit the pipe, not how long does it loop over steps 1-5. However, ray tracing is a recursive process. A ray must continuously be tested with the scene until a hit is determined. It has to do this loop of undetermined time.

3: Deferred rendering. Having to store (and hierarchically optimize) each frame in order to render it is a daunting task at 30+fps. And that has to be done after transformation. Storing this data and building a ray-traceable representation from it isn't cheap, in memory or performance.

4: Primitive hardware optimization. A ray tracer that can't trace arbitrary surfaces isn't a ray tracer. One of the primary purposes of using a ray tracer is that you can ray trace any surface you can define a ray-surface intersection algorithm for. In order to optimize this, hardware developers would have to develop a primitive specification language that allows the programmer to, much like modern shader languages, create primitive intersection/interpolation/sizing routines. Scan conversion doesn't need this, since it only operates on triangles.

5: Advanced hardware optimization. So, precisely which spatial subdivision scheme would you suggest be employed by the hardware? You can't use all of them; you have to pick one. You could go for octrees, but they don't handle uniform arrangements of objects too well. You could go for a grid-bag, but sparse scenes slow down ray tracing, and they take up quite a bit of memory. Maybe you could go for full-on BSP, but they can have similar problems to octrees, and they really take time to build. There is no correct answer that fits all. Obviously, scan converters don't have this issue.

6: Multipass. Scan converters always have the opportunity to multipass geometry. This is useful if the required rendering effects exceed the capabilities of the hardware. For example, on my Radeon 9500, if you wanted to use 20 independent texture coordinates, you'd need 3 passes. A hardware ray tracer would need to have similar hardware limits, but multipass is not an option. Ray tracers just don't lend themselves to it.

BTW, to be fair, you forgot to mention that ray tracing provides order-independent transparency.

Scan conversion, as a process, lends itself better to hardware acceleration than ray tracing. Which explains why many hardware ray tracers never really get out of the research room.

Quote:

will rasterization ever match ray tracing's potential graphics quality?

Gollum, not to mention most of Pixar's movies, was not ray traced, so I'd say that the answer is yes. If you put enough hacks into it, it can look really good. And still be faster than ray tracing.

Quote:

using SSE for 4 rays

That makes absolutely no sense. How can you use SSE for 4 rays, when you're using SSE for things like doing a single dot product with a ray direction and some vector? SSE can only do 1 vector operation, not 4. It does 4 scalar ops, but ray tracing requires vector ops.

Quote:

Spline patches aren't the ultimate answer either. (e.g. getting them to fit seamlessly).

Ask a professional modeller what they'd rather work in: triangles, spline patches, or voxels. Many of them probably don't even know what a voxel is. There's a reason for this.

Quote:

And perhaps voxels would evolve from just cubes into other primitive volumes like cones, spheroids, etc.

And what good is that? You can't create a flat surface (just a bumpy one), and you can't even guarantee that there are no holes between objects. At least cubes can fit perfectly together.

You seem to have this idea that voxels are a superior method for expressing 3D models. They are not. Vector representations of anything are always better than raster representations.

And how would you go about creating a skinned character? With voxels, you'd have to do sprite animation, but with meshes or spline patches, you can just do regular bone animation and weight the mesh/patches to the bones. Having bone animation is a very good thing; you get animation blending, IK, and all manner of other good animation stuff.

Richard Phipps

Korval, do you think that hardware accelerated curved (spline or otherwise) polygons will be the next big thing for the PC and consoles? To the extent that every polygon can be curved and drawn with the same speed as a flat polygon.

He's right about voxels by the way, they are not the future and are limited in application.

Plucky

First, I'm curious how it is that I try to have at least some discussion (with proposals of different ideas, perhaps hybrid systems, and so forth), and I get seemingly inflexible, dogmatic responses.

If ray-tracing was so bad and scan conversion so superior, how come Renderman and Maya both have ray tracing modules to enhance certain effects, of which I'm certain effects like Gollum used to some degree? Could it be that the "hacks" (a term which you now proudly use... yet earlier you used the same term as derogatory) are insufficient in many circumstances?

I proposed "voxels" do not have to be cubes. Sure, in cartesian space, a cube makes the most sense as 3D pixels... but other primitives work if you're allowed to overlap. In this sense you add more information to a volume "element". I guess I'm talking about geometric primitives rather than voxels per se.

SSE. Have you thought of multiple SSE registers, each containing only one component from each of the 4 vectors?

Parallelism. What if parallel computing becomes the standard?

Speed and acceleration. Frankly it doesn't matter that much if ray tracing is slower than scan conversion. It's true today. And there's no reason why Moore's law won't apply to both in the future. It doesn't matter if you need more sampling rays as long as the number is not out of control. And as you suggested, if scan conversion is allowed its hacks, so is RT. The question is not whether ray tracing a complex scene will ever get to real time. Of course it will. Could it look better, that's the real question. Apparently the jury is out.

Pretty sad if a professional modeller doesn't know what a voxel is. I guess it's fortunate that none of us are one.

I've thought about skins and complex surfaces... one reason why I first proposed a hybrid. Perhaps you missed it in the rush to deride me. Another thought is the possibility of having a mesh of voxels. Imagine the texture skin. What if each texel was a voxel? What if you can animate each voxel/texel like a node in a mesh? With specific physics and so forth? In other words, apply finite element physics modelling to voxels. Hmmm, this is very intriguing, because you can potentially model animations much more realistically rather than "hacking".

Yes I know, adding a finite element physics model would slow things down ridiculously (at the moment). But just as it seemed fantastical 20 years ago that we'll be able to real time render millions and millions of triangles...

Korval
Quote:

First, I'm curious how it is that I try to have at least some discussion (with proposals of different ideas, perhaps hybrid systems, and so forth), and I get seemingly inflexible, dogmatic responses.

Because your "different ideas" don't correlate to anything in the real world?

And my responses are not dogmatic. You will note that I fairly pointed out a (non-trivial) ray tracing advantage that you did not. Did you ever do anything similar for scan conversion?

Quote:

If ray-tracing was so bad and scan conversion so superior, how come Renderman and Maya both have ray tracing modules to enhance certain effects

Because there are some things that ray tracing does very well. Certain special cases where ray tracing comes in handy. Specular comes to mind.

Quote:

of which I'm certain effects like Gollum used to some degree?

Sheer speculation, at best. I bet you also think that Gollum was modeled with voxels too.

Quote:

Could it be that the "hacks" (a term which you now proudly use... yet earlier you used the same term as derogatory) are insufficient in many circumstances?

They are hacks; what do you expect? An environment map has limitations; it assumes that the environment is at infinity. That's the basic assumption of environment mapping. If you violate that assumption, you don't get good results. That's why it is a hack rather than a solution.

However much of a hack it is, it is a useful hack. It is always important to note that it is a hack when using it so that it is not used inappropriately. But that doesn't stop it from being useful in 90+% of cases.

And I'm not proud of resorting to hacks upon hacks for high-performance 3D. I would like to use ray tracing. However, it isn't going to happen, so there's no point pining for it.

Quote:

SSE. Have you thought of multiple SSE registers, each containing only one component from each of the 4 vectors?

Trust me; SSE doesn't work that way. You're not going to speed up 3D operations by using SSE in a different way.

SSE gives you vector operations; that's all. It doesn't give you matrix operations; you have to build them out of scalar operations.

Quote:

Parallelism. What if parallel computing becomes the standard?

The standard what?

My computer already has 3 processors. A CPU, a GPU, and an audio DSP. That's pretty parallel, if you ask me ;)

The kind of parallelism that it takes to make ray tracing work would require a large array of processors. Like 64+. We're not going to see that for a while, because programming massively multithreaded apps is both a pain and very difficult to maintain. For the relatively near future, programs are going to stay relatively single threaded. Once we get better programming languages, then we can start to see largely multithreaded applications being developed.

Quote:

Frankly it doesn't matter that much if ray tracing is slower than scan conversion. It's true today. And there's no reason why Moore's law won't apply to both in the future.

On the assumption that it applies to both equally, if ray tracing is slower than scan conversion, this means that it will always be slower. Which means that performance gains in scan conversion can easily be put into improved visual quality. Maybe you're willing to use shadow mapping or shadow volumes. Maybe you're willing to try HDR rendering. Maybe you add dynamic lighting to everything. Maybe you use BRDF functions to improve the quality of illumination on a surface. Maybe you incorporate a Fresnel term into your specular.

Ray tracing can't do these things if it's barely able to keep up. Granted, ray tracing gets shadowing for free, but it doesn't get HDR for free, nor BRDFs or Fresnel specular computations. These cost each method the same amount, but scan conversion can afford it.

BTW, scan converting GPU's have been exceeding Moore's Law.

Quote:

The question is not whether ray tracing a complex scene will ever get to real time. Of course it will. Could it look better, that's the real question. Apparently the jury is out.

By the time ray tracing reaches real-time (and this would only be basic ray tracing: a few lights and no non-shadow recursion), scan conversion will still be visually leaps and bounds ahead of it. No matter what, ray tracing is the slower performing rendering solution. It takes more time to produce a ray traced image than to produce the same one with a scan converter, at least for most visual databases that realtime apps are interested in.

Quote:

Pretty sad if a professional modeller doesn't know what a voxel is.

Isn't that kinda like saying that it's pretty sad if a programmer doesn't know how to program in FORTRAN? Both FORTRAN and voxels are part of their respective fields, but they are esoteric parts at best. Outliers that only a few specialists know, and only those specialists need to know them.

Quote:

I guess it's fortunate that none of us are one.

I work with several. Some have expressed interest in going from triangles to spline patches, but none have done so in terms of voxels.

Quote:

I've thought about skins and complex surfaces... one reason why I first proposed a hybrid.

But a hybrid of a bad idea and a good one is still a bad idea. It may not be as bad as the original one, but it is certainly less good than the good idea.

Quote:

What if you can animate each voxel/texel like a node in a mesh? With specific physics and so forth? In other words, apply finite element physics modelling to voxels. Hmmm, this is very intriguing, because you can potentially model animations much more realistically rather than "hacking".

And take forever to do so.

Quote:

But just as it seemed fantastical 20 years ago that we'll be able to real time render millions and millions of triangles...

There was never any question that we would get there, eventually. It was a question of when.

Contrary to popular belief, computing speed cannot increase forever. Eventually, you reach the speed of light, and you can't do anything more. You can fake more by going "wide" and parallelizing it, or going "deep" by pipelining. But you can't actually do anything more in a particular timeframe.

To even consider something like "finite element physics modelling" as a reasonable solution to the problem of animation is just ludicrous. It takes up so much performance and memory that it is never a reasonable solution to the problem. It just isn't worth the effort, when a couple of quick hacks that can actually be done today, and not in 50+ years, can get you 99.9% of the way there.

A "hack" becomes a practical method when it covers the majority of all important, and even outlier, cases. As long as the alternatives all place undo burden on the system, the "hack" prevails.

Once again, modern CG graphics doesn't need this; animators can just use bone animation and get exceptional results.

Plucky
Quote:

And I'm not proud of resorting to hacks upon hacks for high-performance 3D. I would like to use ray tracing. However, it isn't going to happen, so there's no point pining for it.

This is what baffles me. Of course real time quality ray tracing won't happen if no one ever pined for it. Fortunately many do. (More than just idiots like me.) You appear to know much about this subject, and you give up. Others who know at least as much as you do appear not to give up.

Quote:

On the assumption that it (Moore) applies to both equally, if ray tracing is slower than scan conversion, this means that it will always be slower.

This logic doesn't totally follow. If RT has logarithmic rendering cost wrt # of triangles, potentially RT can catch up.

Quote:

By the time ray tracing reaches real-time (and this would only be basic ray tracing. A few lights and no non-shadow recursion), scan conversion will still be visually leaps and bounds ahead of it.

I'm not talking about "basic" RT; I mean of good quality, eg Star Wars or LOTR. Seeing demos of real-time RTs, they already appear to meet your basic criteria.

Quote:

Isn't that kinda like saying that it's pretty sad if a programmer doesn't know how to program in FORTRAN?

No, it's kinda like saying it's sad to see an experienced C programmer who never heard of Fortran.

Quote:

Sheer speculation, at best.

A quick google: http://cgw.pennnet.com/Articles/Article_Display.cfm?Section=Articles&Subsection=Display&ARTICLE_ID=196304 "'We stuck with what we were doing, although Ken added a little raytracing to increase the level of detail in the ambient occlusion,' says Greg Butler, sequence supervisor for Gollum."

Quote:

I bet you also think that Gollum was modeled with voxels too.

The personal insults keep coming. Classy.

Quote:

Trust me; SSE doesn't work that way. You're not going to speed up 3D operations by using SSE in a different way.

I'm surprised that you seem unaware of the difference between structures of arrays and arrays of structures and how they apply to SIMD.
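To illustrate what I mean (a sketch of the structure-of-arrays idea only, not code from any real tracer): pack the x components of 4 rays into one register, the y components into another, and so on, and a ray/plane test then runs on all 4 rays at once.

    #include <xmmintrin.h>

    /* structure-of-arrays packet: x components of 4 rays together, etc. */
    typedef struct {
        __m128 ox, oy, oz;   /* 4 ray origins    */
        __m128 dx, dy, dz;   /* 4 ray directions */
    } RayPacket4;

    /* t values where 4 rays hit the plane dot(n, p) = dist
       (sketch: no divide-by-zero or miss handling) */
    static __m128 plane_hit_t(const RayPacket4 *r,
                              float nx, float ny, float nz, float dist)
    {
        __m128 n_x = _mm_set1_ps(nx);
        __m128 n_y = _mm_set1_ps(ny);
        __m128 n_z = _mm_set1_ps(nz);
        /* dot(n, origin) and dot(n, direction) for all 4 rays in parallel */
        __m128 no = _mm_add_ps(_mm_add_ps(_mm_mul_ps(n_x, r->ox),
                                          _mm_mul_ps(n_y, r->oy)),
                               _mm_mul_ps(n_z, r->oz));
        __m128 nd = _mm_add_ps(_mm_add_ps(_mm_mul_ps(n_x, r->dx),
                                          _mm_mul_ps(n_y, r->dy)),
                               _mm_mul_ps(n_z, r->dz));
        /* t = (dist - dot(n, o)) / dot(n, d) */
        return _mm_div_ps(_mm_sub_ps(_mm_set1_ps(dist), no), nd);
    }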

Quote:

The kind of parallelism that it takes to make ray tracing work would require a large array of processors. Like 64+. We're not going to see that for a while, because programming massively multithreaded apps is both a pain and very difficult to maintain. For the relatively near future, programs are going to stay relatively single threaded. Once we get better programming languages, then we can start to see largely multithreaded applications being developed.


RT doesn't need parallelism. It just benefits greatly from it. I see parallelism in the future as standard on "desktops". Apparently you do not.

Quote:

Contrary to popular belief, computing speed cannot increase forever.

No one here thinks this. Would computing speed be able to increase a few more orders of magnitude? I don't see why not. (And it doesn't have to be silicon/semiconductor based either.)

Quote:

Eventually, you reach the speed of light, and you can't do anything more. You can fake more by going "wide" and parallelizing it, or going "deep" by pipelining. But you can't actually do anything more in a particular timeframe.

It's not "faking". It is more per unit time. It's like saying one factory with 2 identical production lines produce no more per unit time than another factory with only one identical line. Perhaps you're trying to say, "More per unit time and per unit space"?

Quote:

To even consider something like "finite element physics modelling" as a reasonable solution to the problem of animation is just ludicrous.

Just as kinematic FEA on any computer 50 years ago was considered ludicrous. I never said real-time was possible now; I said it could be possible in the future.

Quote:

It just isn't worth the effort, when a couple of quick hacks can get you 99.9% of the way there that can actually be done today.... animators can just use bone animation and get exceptional results.

Bone/muscle/skin animation is of course physics modeling. For example, bone and simple muscle animation is simply rigid-body kinematics. I do see nice effects when an animator hand-tweaks a bulging muscle in a single frame. And it seems like the animator is using some human judgment as to how such a bulge should look. But I haven't seen muscle and skin effects in real time... perhaps real-time effects look more real with real physics behind them. And is skin modeled with physics (e.g. stretched or furrowed) or determined by human design? I can imagine that to accurately model skin (more accurately than hacking) a mesh works quite nicely. And a mesh relates well to FEA.
Imagine a vehicle crumpled up in near-infinite ways. Imagine material being ablated away by some laser. Imagine being able to examine a gash wound up close. In real time, you need rules that provide sufficient realism. For the most part, physical rules work well. Sure, physical "hacks" work, but I suspect you get to a point where their realism is insufficient.

In a way I grow tired of this discussion because it's less of a discussion and more explicit naysaying. I take your thoughts seriously; you do not return the respect. Obviously you feel that I'm a complete idiot in having ideas that RT is possible in real time. And that voxels have a place in the future. And that more accurate real-time physics modeling is possible in the future. I'll never convince you otherwise... still I was hoping for some exchange of ideas. Apparently not. :(

Tobias Dammers
Quote:

Once we get better programming languages, then we can start to see largely multithreaded applications being developed.

I don't think the problem lies in the programming languages we have so far, but rather in the interface.
Would be nice, though, to have a C/C++ compiler that somehow extends the language to make thread-safe coding easier.

Then: Moore's law. Well, of course there are barriers that seem "absolute" right now, like the speed of light. I would like to add two more facts to this discussion:
1) Human brains are still way superior to computers in terms of "intelligence" or "power". Sure, we can't add up numbers as quickly, but the more complex and non-standard the task, the more we outrun computers.
2) Ultimately, the human brain is based on real-world physical and chemical processes, just like computers. This means that the human brain has the same physical limitations a computer has (unless you believe in a non-physical soul or spirit or something like that, but I prefer to go with science on this one).

These facts imply that eventually (I'm talking like maybe hundreds of years), it is possible to reach a level of computing power that is comparable to (or at least in the vicinity of) human intelligence, though I don't believe computers will ever be "more intelligent" than humans.

There was a time when everybody said that 1 GHz could never possibly be reached, because of the speed-of-light issue; as we speak, you can buy 3 GHz machines off the shelf, and Moore's law has proven pretty accurate. There will always be barriers, but eventually, they will be broken. If we can't get more power into a single CPU, then we'll have to think other options, like: Going more parallel (we're just getting started here...), using completely different architectures, alternate ways to represent data - eventually, solutions will be found.

All of which is totally irrelevant to the topic, of course...

Korval
Quote:

This is what baffles me. Of course real time quality ray tracing won't happen if no one ever pined for it. Fortunately many do. (More than just idiots like me.) You appear to know much about this subject, and you give up. Others who know at least as much as you do appear not to give up.

The fact that one technique, no matter how clever and nice, doesn't seem to be panning out performance wise doesn't mean that I've "given up". On ray tracing as a means to achieve real-time photorealism, probably. On achieving photorealism at all? Nope. It is merely an analysis of a particular rendering technique in comparison to others.

BTW, something else occurred to me. What if you don't want photorealism? What if you're doing something like a cartoon renderer? Ray tracing doesn't handle non-reality very well at all, simply because the basic rendering mechanism is so tuned into reality that unreality becomes that much harder. Outlining, in ray tracing for example, is far harder than doing so in scan conversion (where there are numerous methods).

That others have not given up on ray tracing simply shows a willingness to stick to an idea that may well fail. It may well not pan out, and it probably won't. In the mean time, I'm going to be busy making graphics using a method that is proven to work.

My problem isn't with ray tracing; I think it is an excellent rendering system. My problem is with the absolute belief that ray tracing, real time at that, is the future. There is no guarantee that it is, and the likelihood is that it isn't.

Quote:

This logic doesn't totally follow. If RT has logarithmic rendering cost wrt # of triangles, potentially RT can catch up.

Of course, there's more to it than that.

As I pointed out, ray tracing has setup costs that scan conversion doesn't. Building that logarithmic hierarchical data structure isn't cheap. Indeed, best case, the setup time itself is O(n) (one operation for each triangle added). So the total cost of ray tracing is O(n) + O(log(n)), which makes it O(n).

Quote:

The personal insults keep coming. Classy.

Well, you are the one who believes that a fundamentally flawed technique like voxels should have anything to do with advanced modelling. You basically said that the mathematical definition of a circle isn't as good or accurate as a rasterized digital image of one, or that there is some benefit to using the digital circle over the vector one if there is a choice. It's harder to take you seriously after that one.

Quote:

It's not "faking". It is more per unit time. It's like saying one factory with 2 identical production lines produce no more per unit time than another factory with only one identical line. Perhaps you're trying to say, "More per unit time and per unit space"?

It is faking it because you aren't actually making anything faster. If you go wide and multiprocess, you're losing efficiency. Two processors aren't guaranteed to do the same work as 1 processor at twice the clock speed. If you go deep, a branch mispredict murders your performance, and you lose efficiency that way. Sooner or later, you will reach a point of diminishing returns. At some point, you will build the system that just doesn't get faster for performing a single task. You can make it more responsive for multitasking/multiprocessing. But you can't make a single application run any faster.

Yes, that's a long way from now, and likely will require at least one fundamental, PC-breaking shift in technology (no more silicon, for one). But it will happen eventually.

Quote:

Imagine a vehicle crumpled up in near infinite ways. Imagine material being ablated away by some laser.

Neither of which require voxels or massive physics simulations of very small things. A particle system and some basic macro-scale physics is good enough. Heck, in this instance, Verlet integration, the mother of all physics hacks, is probably good enough.

If I were inclined to take your route, for highly accurate modelling, I would do this in one of two ways. One way would be to dynamically break a mesh, such that pieces of it can actually fly off. I would use 3D textures to represent the interior surface. There may even be several layers of models. Alternatively, I would use CSG primitives and use CSG operations to break them. The mesh method is likely slower, but more accurate.

Quote:

And that voxels have a place in the future.

That one is, without question, false. You may not believe me, but raster images really aren't a good idea in 3D.

Quote:

And that more accurate real-time physics modeling is possible in the future.

More like hyper-accurate. Getting close in real-time, sure. Getting it perfect, or doing micro-detailed modelling of physics? No. I can find something else to do with those clock cycles.

Also, you haven't mentioned how you're going to handle the storage constraints of having to store these massive databases of information. You can just say, "well, in the future, we'll have more memory", but I can just as easily say that, "in the future, we'll have bigger databases." Both of these statements are true. We will want bigger textures. We will want more geometry. And so forth.

Bob
Quote:

I see parallelism in the future as standard on "desktops"

That prediction has been tossed around for the last 40 years or so ;)

Quote:

There was a time when everybody said that 1 GHz could never possibly be reached, because of the speed-of-light issue; as we speak, you can buy 3 GHz machines off the shelf, and Moore's law has proven pretty accurate.

Moore's "law" refers to chip complexity (~ number of transistors), and has nothing to do with clock rate. That said, problems at 1 GHz are of a different nature: Intel had great difficulty getting there (Pentium III recall), and needed a wildly different architecture and process technology to break that barrier.

Another thing is that you cannot increase clock rate and decrease feature sizes indefinitely. Eventually, the electron tunneling effect will kick in, and that's when the real problems start - what if you didn't know for sure whether you had current flowing or not?

Quote:

SSE. Have you thought of multiple SSE registers, each containing only one component from each of the 4 vectors?

Assuming the rays don't diverge, it would be appropriate to use, although a pain in the ass to write.

Plucky

Well, at least I got one good thing out of all of this: my opinions have gotten more specific... they started out a bit vague.

What I want is real-time realism, both physical and optical (graphical). We will always strive to get realism to at least the point where we can't tell the difference. For animation, accurate physics would eventually be needed; if the hack was so good that we couldn't tell the difference in real time between the hack and the real thing, then either the hack could replace that particular piece of physics, or the hack is already using accurate physics. For graphics, I follow a similar line of logic: optical theory models what we see pretty well. This is not to say ray-tracing alone is the ultimate. Ray tracing follows specific physical optical models. There are other models for light diffusion and so forth.

Thus I conclude that ray-tracing, because of its analog to physics, would be part of any graphics rendering system in the future.

I make a similar conclusion with voxels. The world is 3-dimensional (duh). Yet graphically we model the world as a bunch of surfaces; we model a 3D world with 2D objects. This limits the physical modeling that can be done to achieve better realism. There shouldn't be an artificial barrier between graphics modeling and physics modeling. One should not limit the other.

Another thought: suppose we have a surface element that is never larger than a screen pixel, say 40 microns square (~600 dpi). At this resolution, the difference between a surface element and a voxel is blurred... except that one could physically model a voxel better.

Quote:

What if you don't want photorealism?

So? Don't use photorealism techniques. Use sprites for a 2D side scroller, for example.

Quote:

As I pointed out, ray tracing has setup costs that scan conversion doesn't. Building that logarithmic hierarchical data structure isn't cheap. Indeed, best case, the setup time itself is O(n) (one operation for each triangle added). So the total cost of ray tracing is O(n) + O(log(n)), which makes it O(n).

Would we need to set up the whole data structure anew for each and every frame? I think not. (E.g. hierarchical bounding volumes for each object need not change.) Are there techniques to add an object without having to redo the whole structure? I think so.
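
For illustration, a rough C sketch of that idea (assumptions mine, not from the thread): instead of rebuilding the hierarchy each frame, the existing tree is "refitted" by recomputing bounding boxes bottom-up while the tree topology stays the same. object_bounds() is a placeholder for whatever the engine uses to get an object's current box.

typedef struct aabb { float min[3], max[3]; } aabb;

typedef struct bvh_node {
    aabb box;
    struct bvh_node *left, *right;   /* both NULL for a leaf */
    int object;                      /* leaf: index of the object */
} bvh_node;

extern aabb object_bounds(int object);   /* assumed per-object bounds query */

static aabb merge(aabb a, aabb b)
{
    for (int i = 0; i < 3; i++) {
        if (b.min[i] < a.min[i]) a.min[i] = b.min[i];
        if (b.max[i] > a.max[i]) a.max[i] = b.max[i];
    }
    return a;
}

/* Recompute boxes bottom-up; the hierarchy itself is left untouched. */
aabb refit(bvh_node *n)
{
    if (!n->left)                    /* leaf */
        n->box = object_bounds(n->object);
    else
        n->box = merge(refit(n->left), refit(n->right));
    return n->box;
}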

Quote:

You basically said that the mathematical definition of a circle isn't as good or accurate as a rasterized digital image of one, or that there is some benefit to using the digital circle over the vector one if there is a choice.

The world is not filled with perfect circles. Or other convenient mathematical shapes. Think bolder.

Why don't we describe sound waves as a string of mathematical arcs or surfaces? Why do we discretely digitize them instead? Complexity?

Quote:

It is faking it because you aren't actually making anything faster. If you go wide and multiprocess, you're losing efficiency. Two processors aren't guaranteed to do the same work as one processor at twice the clock speed. If you go deep, a branch mispredict murders your performance, and you lose efficiency that way. Sooner or later, you will reach a point of diminishing returns. At some point, you will build a system that just doesn't get faster at performing a single task. You can make it more responsive for multitasking/multiprocessing. But you can't make a single application run any faster.

Think of it this way. We have two black boxes. If you're omniscient, you would know that one has 512 parallel processors, and the other has 65536 parallel processors. But you're not all-knowing... all you know is that one black box can compute faster than the other. Efficiency is immaterial. That each processor has the same clock speed is immaterial. Diminishing returns is immaterial. Task output per unit time of each black box is the only real metric for speed. I see no reason for diminishing returns to reach zero before we can physically and graphically model real-time a system realistically.

Quote:

One way would be to dynamically break a mesh, such that pieces of it can actually fly off. I would use 3D textures to represent the interiour surface. There may even be several layers of models.

Using surfaces to model 3D is, well, superficial. A 3D gash looks more realistic up close than a simple 2D texture representation of a gash. Several layers start to imply 3D... enough of them and you're talking about a volumetric model.

Quote:

Neither of which require voxels or massive physics simulations of very small things. A particle system and some basic macro-scale physics is good enough.

I disagree. If I can tell the difference (both graphically and physically), then it's not good enough for modeling realism. As I said earlier about voxels, they represent the real world better. Surfaces are nice for graphics, but limited. A volume includes surfaces.

Quote:

Getting it perfect, or doing micro-detailed modelling of physics?

It just has to be detailed enough for it to be indistinguishable from reality. We're far from this point.

Quote:

That prediction (parallelism) has been tossed around for the last 40 years or so ;)

Sort of like predicting every few years that controlled fusion will happen in 10 years. :) Yet few people doubt that we will eventually have fusion reactors.

Korval
Quote:

Would be nice, though, to have a C/C++ compiler that somehow extends the language to make thread-safe coding easier.

That would only help a little.

To be able to take advantage of massively parallel systems, you need a compiler and a language that is inherently multithreaded. One where mutexes are transparent and race conditions can never happen. Where the compiler can analyze the code and determine which parts should be run on which thread. Stuff like that. C/C++ just aren't well designed for hardcore multithreading.

Quote:

Another thought: suppose we have a surface element that is never larger than a screen pixel, say 40 microns square (~600 dpi). At this resolution, the difference between a surface element and a voxel is blurred... except that one could physically model a voxel better.

And take up far more memory in the process.

A 100 cubic-yard object would take up no less than 43GB of room, and that's if each texel was only one byte. Of course, you're going to want normals and so forth, so you need more room than that. Maybe 12 bytes. So, you're looking at 531GB.

GTA stands in as a pretty huge game. True Crime: Streets of LA finds a way to model a pretty hefty chunk of LA, and is even bigger. We're talking miles here. One cubic mile would be 231 Tera-Texels, or (with the 12-byte voxel) 2.772 Peta-Bytes. Or 6 orders of magnitude more than what my machine has in RAM.

Note that the actual game itself is able to fit on a 1.5GB disk. No resource is so limitless that one can say, "Hey, let's spend 8 orders of magnitude more than we really have to."

Also, let's not forget that somebody has to take the time to build this massive voxel world. Voxels aren't exactly the easiest thing to model in. It is much easier to use meshes and spline patches, or even CSG primitives.

Games are already pressed to release in 2 years with a set of models and textures using a relatively friendly modelling scheme. This will only get harder and take longer as models become more detailed. Adding the layer of voxel complexity isn't going to help.

[edit] I did my computations wrong. It's actually worse. I forgot about the 600-dpi part, and only counted it as 1-dpi.

So the 100-cubic-yard area is really 8.7 Exa-Texels in size. And the cubic mile, of course, 54.7 * 10^21 Texels. You're approaching Avogadro's number here, so it's clearly outside the realm of possibility.

Voxels just aren't practical.
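
A back-of-the-envelope check of the numbers above, in C (reading "100 cubic yards" and "one cubic mile" as cubes 100 yards and 1 mile on a side, at 600 voxels per inch and 12 bytes per voxel; slightly different assumptions give slightly different totals, but the orders of magnitude match the post):

#include <stdio.h>

int main(void)
{
    const double vpi = 600.0;                        /* voxels per inch */
    const double yard_side = 100.0 * 36.0 * vpi;     /* voxels along one edge of a 100-yard cube */
    const double mile_side = 63360.0 * vpi;          /* 63360 inches per mile */

    double yard_voxels = yard_side * yard_side * yard_side;
    double mile_voxels = mile_side * mile_side * mile_side;

    printf("100-yard cube: %.2e voxels (%.2e bytes at 12 B/voxel)\n",
           yard_voxels, yard_voxels * 12.0);
    printf("1-mile cube:   %.2e voxels (%.2e bytes at 12 B/voxel)\n",
           mile_voxels, mile_voxels * 12.0);
    return 0;
}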

Quote:

So? Don't use photorealism techniques. Use sprites for a 2D side scroller, for example.

You seem to have missed what I was saying, so I'll try again.

What if you want to render something non-photorealistically? You want 3D, and you want animation, but you want each frame to look like anime or a comic book page? Or even a pencil sketch?

This is not a 2D side scroller; we're talking Zelda: Wind Waker-level stuff here. And ray tracing clearly can't get the job done. Game developers want the freedom to do whatever they want, realistic or not, so they can't use a technique that limits that freedom.

Quote:

Would we need to set up the whole data structure anew for each and every frame? I think not. (E.g. hierarchical bounding volumes for each object need not change.) Are there techniques to add an object without having to redo the whole structure? I think so.

So, you want the hardware (we are still talking about dedicated ray tracing hardware, here) to actually own and handle the meshes and primitives themselves? What happens when I'm instancing a mesh (rendering a mesh multiple times with different transforms in different places)? And you keep avoiding the memory costs of storing all of this data.

Quote:

The world is not filled with perfect circles.

If a modelling technique can't even get a pog right, what good is it?

Quote:

Why don't we describe sound waves as a string of mathematical arcs or surfaces? Why do we discretely digitize them instead? Complexity?

Because more of the human brain is devoted to sight than audio. Our ears have very good antialiasing qualities to them. Our eyes, however, pick up aliasing instantly.

Also, because sound waves change very rapidly, and it is the rapidity of the change that is vital in reproducing the sound. Any modelling technique that approximated them would quickly be larger in data than just storing the PCM data.

Also note that the analogy itself is invalid. Sound recording is analogous to image recording. That is, taking a picture and storing it on a computer (which is also done as a raster image). The equivalent of graphics rendering in audio would be something like voice synthesis or .mod music synthesis. Which is quite analogous to using real images (that may be touched up) as texturemaps in graphics; something that is done not too infrequently.

Quote:

Think of it this way. We have two black boxes. If you're omniscient, you would know that one has 512 parallel processors, and the other has 65536 parallel processors. But you're not all-knowing... all you know is that one black box can compute faster than the other. Efficiency is immaterial.

Efficiency is very material; indeed, it is vital. If box 2 with 2 orders of magnitude more complexity (not to mention cost) only outperforms box 1 by 5%, I'm sticking with box 1. Indeed, I'd probably pick up box 0.5 or 0.25.

Quote:

I see no reason for diminishing returns to reach zero before we can physically and graphically model real-time a system realistically.

Is there a basis for that statement? Do you know how many processors it takes before you reach the point of diminishing returns?

Quote:

Using surfaces to model 3D is, well, superficial.

Let's get a little Cartesian. If you can't tell that it's just a surface... does it matter if it is just a surface? If it behaves as expected, does it matter that it isn't using the actual physics to get there?

Thus, if I break a surface dynamically into pieces that are themselves surfaces, can you really tell whether or not they are just surfaces? You don't need to do the "right" thing in order to get the right results. As long as no one can tell you're using magic, magic is just fine.

Quote:

A 3D gash looks more realistic up close than a simple 2D texture representation of a gash.

Once again, you didn't read what I wrote, so I'll try it again.

One could dynamically break apart a mesh, such that pieces of it could actually fly off. One could use a 3D texture to represent the interior surface. There could even be several layers of models.

I even suggested using CSG primitives, which are solid constructs.

Bob
Quote:

Of course, you're going to want normals and so forth

Well, all you really need is color - everything else can be computed from the object itself - after all, if you have voxels at such a fine precision, then for all intents and purposes, you have the surface itself.

Plucky

I changed my mind about this discussion. Korval's commentary has been helpful in that I've grown more confident that my ideas are going in the right direction.

Quote:

So the 100-cubic-yard area is really 8.7 Exa-Texels in size. And the cubic mile, of course, 54.7 * 10^21 Texels. You're approaching Avogadro's number here, so it's clearly outside the realm of possibility.

You should be careful with the term "realm of possibility." Do you mean "possible with the technology we have" or "physically possible"? Theoretically one could store 10^66 bits of information in a cubic centimeter. The universe is estimated to have 10^100 bits of information. In theory one could store the entire universe in a sphere a tenth of a light year in diameter. Maybe holographic storage would provide the density and speed to make such ideas practical. Maybe it's the next thing after that.

I'm reminded of those who said before the invention of the integrated circuit that to perform so and so calculation would require a building the size of ____ (eg pentagon) and take ____ (eg 10) years.

Your convictions have a tone that there is a right and wrong. There isn't necessarily a right and wrong. Who knows what technology revolution will happen? What computing breakthrough is in store for us? You may give great arguments why so and so is impractical. But often it's only impractical with the technology or paradigm already in hand.

My conviction that ray tracing and voxels have a place in the future is because the artificial barrier between physics modeling and gfx modeling will eventually be eliminated when modeling a real-world scene that we can see in real time. Why are these two models separate? Limitations of current technology? Graphics will use the most realistic models that we have: physics. RT uses physical principles, and so it seems to follow that RT in some form will be present as a rendering technique in the future. Similarly, voxels fit nicely with physical modeling methods.

Quote:

Efficiency is very material; indeed, it is vital. If box 2 with 2 orders of magnitude more complexity (not to mention cost) only outperforms box 1 by 5%, I'm sticking with box 1. Indeed, I'd probably pick up box 0.5 or 0.25.

You originally said that parallelism is "faking" a speed increase because "you aren't actually making anything faster". The point of the black box example is that you could measure computational output per unit time. In taking this measurement one does not need to know what is inside the box. If parallelism provides more computational output per unit time, great. Efficiency has nothing to do with which black box produces faster results. A jet fighter guzzling gallons of fuel per second is still faster than a gas-electric hybrid car.

Quote:

If a modelling technique can't even get a pog right, what good is it?

If a circle/sphere cannot accurately model anything real, what good is it?

Quote:

You seem to have missed what I was saying, so I'll try again.... What if you want to render something non-photorealistically?

No, I got it. You missed what I'm saying. I never said RT is the only way to go to render graphics. I gave an example of a different rendering method that provides different results. In my example if you're making 2D gfx you don't want a 3D renderer. One can make the same argument with only 3D renderers and kinds of outputs as well, which was what you did. I don't think we're disagreeing. I only wanted to respond to your implication that I had thought RT was the only rendering method available.

Quote:

Also, because sound waves change very rapidly, and it is the rapidity of the change that is vital in reproducing the sound. Any modelling technique that approximated them would quickly be larger in data than just storing the PCM data.

Interesting. Using a mathematically defined surface [edit] or lines[/edit] (which includes a time dimension) is too complex? Requires more data than a digital element representation? This was the point I was making. Imagine objects with so much detail that any mathematical model sufficient in its approximation requires data storage that is less efficient than using volumetric elements.

Quote:

Also note that the analogy itself is invalid. Sound recording is analogous to image recording. That is, taking a picture and storing it on a computer (which is also done as a raster image).

Let's continue your line of logic. Recording a 2D image is like recording a "3D" image. For a 2D image, the digital element is called a pixel. For a 3D image, the digital element is called a voxel.

Quote:

Do you know how many processors it takes before you reach the point of diminishing returns?


No, do you? Will there ever be such a point? Nobody (but you apparently) knows precisely what the future of computing is.

Quote:

Let's get a little Cartesian. If you can't tell that it's just a surface... does it matter if it is just a surface? If it behaves as expected, does it matter that it isn't using the actual physics to get there? ... One could dynamically break apart a mesh, such that pieces of it could actually fly off. One could use a 3D texture to represent the interior surface. There could even be several layers of models.

My point is that we can tell the difference. Sure, you can break up a surface into little parts, and then rearrange them so they are now showing a detailed gash that reaches bone. But I argued earlier that if a "hack" represents this realistically enough, either the hack is a scientific alternative to our current physical theory, or it actually uses physical theory. And for some reason, not many of us are under the illusion that the world is solely composed of surfaces.

If you need several layers, then you're going down the slippery slope of needing many layers for sufficient realism. Enough layers and you might as well use volumetric models.

Quote:

I even suggested using CSG primitives, which are solid constructs.

I laughed at this because I suggested this concept fairly early on. I used the term "geometric primitives" rather than CSG. Remember my blather about cones and spheroids? How one could fit them or overlap them together? Remember how you derided me for it and then ignored it later?

Bob said:

after all, if you have voxels at such a fine precision, then for all intents and purposes, you have the surface itself.

I more or less mentioned a similar concept! If you have a small enough texel, for all intents and purposes you have a voxel. I'm sure the converse (your statement) is true as well.

Bob
Quote:

I more or less mentioned a similar concept! If you have a small enough texel, for all intents and purposes you have a voxel. I'm sure the converse (your statement) is true as well.

This isn't what I was referring to ;)
You don't need to store surface normals if you have a fine voxel image, because those can be computed from the image itself.

That doesn't imply anything about color or other information. It certainly doesn't imply that texels are voxels (although 3D texels are voxels).

Plucky
Quote:

You don't need to store surface normals if you have a fine voxel image, because those can be computed from the image itself.

I had suggested it because Korval said that cubes (voxels) "don't have curve-approximating normals or faces". So you're saying that if the voxels were fine enough, one could derive the normal from adjacent voxel elements (sort of like a poor-man's calculus)?

I had assumed color was self-evident as a probable voxel attribute.

Bob
Quote:

So you're saying that if the voxels were fine enough, one could derive the normal from adjacent voxel elements (sort of like a poor-man's calculus)?

Yes. This is similar to generating normal vectors from a height map or bump map.
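
A minimal sketch of that "poor-man's calculus" in C (not thread code; density() is an assumed lookup into the voxel map): the normal is estimated from central differences of the six neighboring voxel values and then normalized.

#include <math.h>

typedef struct { float x, y, z; } vec3;

extern float density(int x, int y, int z);   /* assumed voxel-map lookup */

/* Estimate the surface normal at voxel (x, y, z) from its six neighbors. */
vec3 voxel_normal(int x, int y, int z)
{
    vec3 n;
    n.x = density(x - 1, y, z) - density(x + 1, y, z);
    n.y = density(x, y - 1, z) - density(x, y + 1, z);
    n.z = density(x, y, z - 1) - density(x, y, z + 1);

    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) {
        n.x /= len;
        n.y /= len;
        n.z /= len;
    }
    return n;
}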

Korval
Quote:

Theoretically one could store 10^66 bits of information in a cubic centimeter.

What material are you using to do this? I don't know of a material that can fit into a cubic centimeter and have a mole of a mole of it. An electron (one of the least massive things we can find and control) has a mass of 9.1*10^-31 kg. Putting 10^66 bits in there would result in an object with a mass of 9.1*10^35 kg. The Sun itself only has a mass of about 2*10^30 kg. So you're not only upsetting the balance of the solar system, you're not too far from super-massive black-hole territory (a hundred billion Suns, or 2*10^41 kg). Do you see how absurd this can get?

Meanwhile, the vector representation of the same thing takes dozens of orders of magnitude less memory.

Quote:

Your convictions have a tone that there is a right and wrong.

Well, when you start talking about theoretical memory units with the mass greater than that of a star, it becomes difficult to see it as anything approaching possible.

Quote:

If a circle/sphere cannot accurately model anything real, what good is it?

You can model a pog virtually precisely with a textured cylinder. And spline patches (NURBS can get spheres perfect) can model lots more than just pogs.

Quote:

Imagine objects with so much detail to the point that any mathematical model sufficient in its approximation requires data storage that is less efficient that using volumetric elements.

And yet, there is not one object around me that requires voxels to model sufficiently. Indeed, I have yet to see such an object.

Quote:

Recording a 2D image is like recording a "3D" image. For a 2D image, the digital element is called a pixel. For a 3D image, the digital element is called a voxel.

Recording a 2D image is like photography. Recording a 3D image is like holography. Neither of which has anything to do with producing one from scratch. Which is what rendering is.

Quote:

No, do you? Will there ever be such a point?

Yes there will be. That is without question. All you need to do is look at current multiprocessor systems to see that. A dual-processor machine, even running a multithreaded app, isn't as fast or capable as a single-processor machine at twice the clock speed. A 4-processor system isn't even as good as a 3x clock speed increase. The returns have already diminished. It is merely a question of how many processors we get before the returns aren't significant for doing a single operation. How much can you multithread an application?
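
One standard way to put numbers on that question (not cited in the thread) is Amdahl's law: if a fraction p of the work parallelizes perfectly over n processors, the speedup is 1 / ((1 - p) + p / n), so the serial fraction caps the benefit no matter how many processors you add. A tiny C illustration, with p = 0.9 chosen arbitrarily:

#include <stdio.h>

int main(void)
{
    double p = 0.90;   /* assumed parallel fraction of the work */
    for (int n = 1; n <= 64; n *= 2)
        printf("%2d processors: %.2fx\n", n, 1.0 / ((1.0 - p) + p / n));
    return 0;   /* with p = 0.9 the speedup can never exceed 10x */
}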

Quote:

My point is that we can tell the difference.

But you're wrong. For all you know, you could be plugged into the Matrix and all that is around you is just a signal being beamed into your brain. Or, as Descartes pointed out, an Evil Demon could be controlling your mind and deceiving you about the entire world. If you model a world with surfaces (rather than volumes) such that no one can tell the difference between the surface version and the volume version, there is no difference.

Quote:

Remember my blather about cones and spheroids?

Which was in relation to using different primitives for voxels: spheres rather than cubes. That's very different from modelling with CSG primitives. You aren't trying to approximate a surface with a bunch of spheres at regular intervals. Instead, you're approximating a surface by taking mathematical figures and doing CSG operations on them to create a new object. Indeed, a PolygonBall could be a CSG primitive, as long as the mesh itself is closed.

CSG, btw, happens to be another advantage of ray tracing that you missed.

Plucky
Quote:

"Theoretically one could store 10^66 bits of information in a cubic centimeter."
What material are you using to do this? I don't know of a material that can fit into a cubic centimeter and have a mole of a mole of it.

For one, you're equating information with mass, which is incorrect. Second, we don't need 10^66 bits of information to model a scene. Third, I recently heard of prototype S/DRAM chips that can store 16 MB in 32 sq. mm. Since silicon transistors are less than 1 micron deep, we've theoretically achieved >4e12 bits per cubic centimeter, and I don't believe we're anywhere close to what we could achieve many years from now. Fourth, our memory chips/boards take up more space than one cubic centimeter. Fifth, it could be sufficient to not model empty space with empty voxels. Sixth, one does not need to hold all of LA in "local" memory (RAM in today's paradigm) at once.

Quote:

"If a circle/sphere cannot accurately model anything real, what good is it?"
You can model a pog virtually precisely with a textured cylinder. And spline patches (NURBS can get spheres perfect) can model lots more than just pogs.

Look closely at the pog. Does it look like a perfect cylinder? No. Sure at a distance one could model a pog as a textured surface. Maybe add some bump mapping techniques. Look at the pog closely and you need more than a simple cylinder. And bump mapping at this level of detail looks fake.

Quote:

And yet, there is not one object around me that requires voxels to model sufficiently. Indeed, I have yet to see such an object.

How about a rock or stone? (Actually, any object will do.) Look closely and you see a lot of fractal geometry. Sure, you can model it all as surfaces, but a multitude of sufficiently tiny surfaces has the same difficulties you harp on for voxels. Now take a hammer and break the rock into pieces. With voxels, this is a lot more intuitive to model physically. Furthermore, graphics and physics modeling work together rather than separately.

Quote:

Recording a 2D image is like photography. Recording a 3D image is like holography. Neither of which has anything to do with producing one from scratch. Which is what rendering is.

Apparently you got confused. I was not originally talking about rendering with this example. Digital audio is composed of samples of amplitude along a time axis. In 2D gfx, a digital sample element is called a pixel. For a 3D "image", it's called a voxel. Voxel != rendering technique. You argue mathematical surfaces are sufficient for gfx. Yet mathematical representation of audio (eg arcs and curves), as you mentioned, is too complex; sampling was better. Applying your line of reasoning to 3D objects, one would find that voxels would become less complex than mathematical surfaces at a certain level of detail.

Quote:

But you're wrong. For all you know, you could be plugged into the Matrix and all that is around you is just a signal being beamed into your brain. Or, as Descartes pointed out, an Evil Demon could be controlling your mind and deceiving you about the entire world. If you model a world with surfaces (rather than volumes) such that no one can tell the difference between the surface version and the volume version, there is no difference.

Of course Descartes is right... the demon could exist, just as God could exist. Yes, purely from a simplistic optical point of view (we'll ignore things like translucence and "subsurface" scattering), the world is composed of surfaces. But nobody believes the world is composed solely of surfaces, because we observe a world where the visual is not separated from the physical. This is why I conclude that we will eventually model scene physics and scene gfx as one. We have a terrific model that describes how light reaches our eyes and how objects interact. Why not use the same principles to model a scene where we strive for realism? I argue further that for sufficient realism in the future, we must use this model.

Besides, I have yet to see computer graphics in a movie where I cannot distinguish them from real objects. "Real" in the sense of a photographed actor/object/costume.

Quote:

Which was in relation to using different primitives for voxels: spheres rather than cubes. That's very different from modelling with CSG primitives. You aren't trying to approximate a surface with a bunch of spheres at regular intervals. Instead, you're approximating a surface by taking mathematical figures and doing CSG operations on them to create a new object. Indeed, a PolygonBall could be a CSG primitive, as long as the mesh itself is closed.
CSG, btw, happens to be another advantage of ray tracing that you missed.

Let me quote myself:
"... but other primitives work if you're allowed to overlap. In this sense you add more information to a volume "element". I guess I'm talking about geometric primitives rather than voxels per se. "

I didn't miss the use of primitives for RT. I think you would eventually want "primitives" small enough to handle details that are small relative to the scene but are too big when the observer is up close.

Incidentally, while googling around related to a recent Allegro voxel thread, I came across this page describing some thoughts John Carmack had about 3D graphics 5 years ago: http://unrealities.com/web/johncchat.html
Don't know if he still has the same opinions, but some interesting tidbits:

  • Polygons and curved surfaces are both analytic representations that have serious problems of scalability. The "number of visible polygons on screen" problem comes up if you build your world out of polys. Curved surfaces seem to help with that, but not for long... soon you run into problems of "number of visible curves on screen" and you're back to square one.

  • John's hunch is that eventually 3D hardware will be based on some kind of multiresolution representation, where the world is kept in a data structure that fundamentally supports rendering it in near-constant time from any viewpoint.

  • Voxels are one example of such a structure. John mentioned that he actually wrote a voxel renderer and converted an entire Quake 2 level to use it. It wound up being about 3 gigabytes of data! But he said that that's not actually that far off from today's hardware capacity, if you use intelligent streaming techniques to move the relevant portion of the voxel set into and out of memory. And he said it was really nice having only one data structure for the entire world--no more points versus faces versus BSPs... just one octree node (and subnodes of the same type) representing everything.

  • The analogy John likes is the comparison to the switch from vector graphics (which described a 2D screen in terms of points connected by lines) to raster graphics (which described a 2D screen in terms of a semi-continuous 2D mapping of values). Vector graphics didn't take much data to start with, but broke down once 2D images got really rich. Likewise, current analytic polygon/curve techniques can describe simple models with not much data, but once models get really complex, you get into trouble and you wind up wanting something that describes the 3D space in other terms.
    Korval
    Quote:

    For one, you're equating information with mass, which is incorrect.

    And how do you plan to store the information? You have to have something there, and that something either has mass (most everything) or energy (the very few massless particles).

    Quote:

    Second, we don't need 10^66 bits of information to model a scene.

    The fact that you're already calling for 0.5 molar bytes of information just to get a cubic mile of stuff at reasonably high precision is a testament to the ridiculous problems of scale.

    Quote:

    Fourth, our memory chips/boards take up more space than one cubic centimeter.

    Which has precisely what to do with calling for a 0.5 molar bytes of information?

    Quote:

    Fifth, it could be sufficient to not model empty space with empty voxels.

    Look at 2D sprites. Notice how the empty space of the rectangle still has to have data for the empty space. Sure, when you're talking about moles of data, you have some pretty good incentives to come up with data reduction methods. But you still have moles of data; the order of magnitude is still accurate.

    Also, let's say we store this data as some kind of sparse matrix. The sparse areas just happen to be the surface elements. Which roughly approximates a sequence of texturemaps on polygons.

    Quote:

    Sixth, one does not need to hold all of LA in "local" memory (RAM in today's paradigm) at once.

    It's got to go somewhere. Whether it's RAM or local harddrive storage, it has to go somewhere. Storing all of LA at 600 dpi would require truly ungodly storage.

    Quote:

    Look closely at the pog. Does it look like a perfect cylinder?

    The basic shape is. Anything more can be done given what I say at the end.

    Quote:

    And bump mapping at this level of detail looks fake.

    Displacement mapping doesn't look fake, because it isn't. I'll get to this a bit later.

    Quote:

    You argue mathematical surfaces are sufficient for gfx. Yet mathematical representation of audio (eg arcs and curves), as you mentioned, is too complex; sampling was better. Applying your line of reasoning to 3D objects, one would find that voxels would become less complex than mathematical surfaces at a certain level of detail.

    Yes. I would also argue that JPEG compression, while a nice format for images, is terrible for sound. Also, MP3 compression, while nice for sound, is terrible for images. Both of these statements are true. They are true because sound != images.

    Quote:

    But he said that that's not actually that far off from today's hardware capacity

    That's funny. My computer, 5 years later, still doesn't have 3GB of RAM in it. And I'm really not going to download a 3GB level.

    He was off by a bit. It's going to be quite some time before we really start having 3GB levels in games. Granted, the 32-to-64-bit transition is going to be what takes the most time, but even so, it's going to be a while.

    Quote:

    And he said it was really nice having only one data structure for the entire world--no more points versus faces versus BSPs... just one octree node (and subnodes of the same type) representing everything.

    And one of my professors at CMU wrote a functioning ray tracer on a business card. The simple solution isn't always the best. Indeed, in many cases, the simple, brute force approach tends to get progressively worse, not better.

    Quote:

    The analogy John likes is the comparison to the switch from vector graphics (which described a 2D screen in terms of points connected by lines) to raster graphics (which described a 2D screen in terms of a semi-continuous 2D mapping of values). Vector graphics didn't take much data to start with, but broke down once 2D images got really rich.

    The problem there is filling in the lines. For a 2D creature, 2D vector graphics looks just fine because no one can see inside the vector image. Such a creature sees a 1D projection of the 2D space, so they can't see around the 2D object and notice the emptiness. For us, we can see the holes. Filling those holes with something was the equivalent of a 2D rect-blit, so we may as well do that.

    For 3D beings, like us, as long as nobody knows your 3D image is just a shell, it is as solid as any voxel image. There's no need for a 3D rect-blit.

    Quote:

    Likewise, current analytic polygon/curve techniques can describe simple models with not much data, but once models get really complex, you get into trouble and you wind up wanting something that describes the 3D space in other terms.

    That "trouble" can already easily be solved. Without resorting to either ray tracing or voxels. Observe.

    Note: I cannot stand this technique. I really, truly, hate the idea of rendering like this. But it is likely to be the way things go in the future for game scan conversion, so here it is.

    How to get microdetailing into rendered images. It is really simple, ultimately. Displacement mapping.

    In the scan conversion world, displacement mapping means to take a low poly model, use vertex/fragment (and, eventually primitive) shaders to tessellate it and a texture to apply offsetting to it, and then feed the high-poly resulting model back into the renderer for the final render of the mesh. Of course, the displacement mapping can be done in screen space, such that you end up with a model that is tessellated to the resolution of the screen. No more, no less. One vertex per pixel (or sample, if antialiasing is used).

    Now, take this a step further. There are two competing methods for hyper-detailed mesh modelling: subdivision surfaces (taking a mesh and using a subdivision algorithm to produce a smoother one) and spline patches. Either one will work with this method. The spline patches are tessellated directly into the high-density triangles. And the subdivision surfaces just require the use of the subdivision algorithm instead of normal mesh tessellation in the displacement mapping routine, though they have a problem with subdividing irregularly.

    Now, here's the important part. The displacement texture, the texture that defines how to shift the new vertex, is procedural. It can be some Perlin noise or a nice fractal or whatever else looks convincing.

    And, unlike voxels or super-high-detailed polys, you can have multiple buffers of data that you're rendering to and rendering from, so you aren't taking up significant room.

    Efficient, economical, and sufficient to get the point across. The detail is there, and we didn't need to make black holes worth of memory to do it. Put a little thought into a problem, and you can find an elegant solution to the brute-force methods. And, usually, the elegant solution is better.
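
    A minimal sketch of the displacement step in C (not Korval's code; noise3() stands in for whatever Perlin-style noise source is used): each tessellated vertex is pushed along its normal by a procedurally generated amount.

    typedef struct { float x, y, z; } vec3;

    extern float noise3(float x, float y, float z);   /* assumed procedural noise source */

    /* Offset a tessellated vertex p along its normal n by a procedural amount. */
    vec3 displace_vertex(vec3 p, vec3 n, float amplitude, float frequency)
    {
        float d = amplitude * noise3(p.x * frequency, p.y * frequency, p.z * frequency);
        vec3 out = { p.x + n.x * d, p.y + n.y * d, p.z + n.z * d };
        return out;
    }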

    BTW, about physics. Microscale particle physics doesn't equal out to what you get on the macroscale. By particle physics, I don't mean hardcore atomic particle physics. I mean the physics of modelling something at 600dpi and expecting to get, for example, friction out of it. It doesn't work. It, also, takes a really long time to do. Not a particularly good mechanism for physical modelling.

    Plucky
    Quote:

    And how do you plan to store the information? You have to have something there, and that something either has mass (most everything) or energy (the very few massless particles).

    Still stuck in the 20th century? It was recently demonstrated that one could use individual atoms to store a qubit (a bit of information based on quantum states) each. Thus a few grams of carbon are enough to provide 10^23 bits of information. A one-cubic-centimeter diamond crystal, solely using carbon atoms, can store this magnitude of information. And the nice property of qubits is that one could process all possible states simultaneously. Imagine 2^(10^23) states being processed at once. .... Yet 10^66 is still unimaginably large relative to 10^23.

    Quote:

    That's funny. My computer, 5 years later, still doesn't have 3GB of RAM in it.

    That's funny, mine at work does.

    Quote:

    Displacement mapping.... Of course, the displacement mapping can be done in screen space, such that you end up with a model that is tessellated to the resolution of the screen. No more, no less.... And, unlike voxels or super-high-detailed polys, you can have multiple buffers of data that you're rendering to and rendering from, so you aren't taking up significant room.

    I'm glad you mentioned this: of course the same principles can be (and have been) applied to voxels. For example, take a CRT monitor. Model it as a set of geometric primitives at the core, and more detailed voxels near the surface. In a physics modeling phase, subdivide the primitives into smaller elements as necessary (e.g. including "elements" that make up the modeled cathode ray tube components, if such realism were desired; we could even do this "procedurally"). If you split the CRT in two, subdivide again as necessary. So on and so forth.

    Subdivision is not limited to polygons; it works well for voxels. Thus voxel objects would not necessarily require many orders of magnitude more information than polygon objects. As for matching the screen pixel resolution, a simple octree data structure provides the same feature for voxels. (This is not to say that one must use an octree to get this feature.)
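
    As a rough illustration of that octree idea in C (assumptions mine; draw_cube() and projected_size() are hypothetical helpers, not Allegro functions): descend into a node's children only while its cube projects to more than about a pixel, so the traversal depth automatically matches the screen resolution.

    typedef struct voxel_node {
        unsigned char color[3];        /* averaged color for this cube */
        int is_leaf;                   /* 1 if this node has no children */
        struct voxel_node *child[8];   /* NULL = empty octant */
    } voxel_node;

    extern float projected_size(float x, float y, float z, float size);   /* assumed: cube size in screen pixels */
    extern void draw_cube(const voxel_node *n, float x, float y, float z, float size);   /* assumed: project and fill */

    void render(const voxel_node *n, float x, float y, float z, float size)
    {
        if (!n)
            return;
        /* Stop descending once the cube is about one pixel, or has no children. */
        if (n->is_leaf || projected_size(x, y, z, size) <= 1.0f) {
            draw_cube(n, x, y, z, size);
            return;
        }
        float h = size * 0.5f;
        for (int i = 0; i < 8; i++)
            render(n->child[i], x + (i & 1) * h, y + ((i >> 1) & 1) * h,
                   z + ((i >> 2) & 1) * h, h);
    }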

    One key difference between our visions is that you propose calculating or otherwise creating detail (e.g. with fractals or procedurally) to avoid additional data storage overhead. However, voxels are not exempt from using this feature or process either. But if you want unique details for added realism, with surfaces or with voxels, you need to store more data. A unique world (i.e. no calculated details) that is a kilometer square, filled with unique objects with 40x40 micron surface sections, would also require ~10^20 bits of data.

    Voxels are not the fundamental reason for such large storage requirements. Uniqueness, which provides added realism up to a point, is the primary reason. E.g. an asteroid field. Each asteroid is a simple spheroid. I could procedurally create a bunch of voxels. The size of the voxels can be scaled as necessary to match the resolution of a screen pixel. But if the asteroids (or other objects) do not lend themselves to procedural creation, or some basic uniqueness is required, then the required data storage is greater. Perhaps at a certain level (still larger than, say, a 600 dpi screen pixel), any smaller detail can be procedurally created if uniqueness is unnecessary at that scale.

    Quote:

    I mean the physics of modelling something at 600dpi and expecting to get, for example, friction out of it. It doesn't work.

    If you were trying to give a good example, you chose rather poorly. A quick google on friction and finite element analysis should allay your fears. Micro (i.e. micron) level physics is fairly well understood. As you (and I) explained, subdivision offers a method to simplify certain physical models. Detail can be added as necessary. At the nano level, however, you're starting to approach the fuzzy realm between classical and quantum mechanics.

    Quote:

    For us, we can see the holes. Filling those holes with something was the equivalent of a 2D rect-blit, so we may as well do that.... For 3D beings, like us, as long as nobody knows your 3D image is just a shell, it is as solid as any voxel image.

    Again I'll say that for a certain level of realism, I believe that modeling the world as surfaces is insufficient because, visually, the physical interaction between objects would be insufficiently accurate. We don't observe the world as shells.

    Thread #337929. Printed from Allegro.cc