2.5D game questions
Trezker
Member #1,739
December 2001
avatar

Quote:

Ken is working on a 'pure' voxel engine, and seeing the screenshots I'm not sure if he's using polygons or cubes...

That's really cool, I want liero 3D!

Mars
Member #971
February 2001
avatar

Like those poxels in Worms 3D?

--
This posting is a natural product. The slight variations in spelling and grammar enhance its individual character and beauty and in no way are to be considered flaws or defects.

Krzysztof Kluczek
Member #4,191
January 2004
avatar

Ken's site said:

In June 2001, Tom Dobrowolski joined my "team" as a programmer. He's currently a student in Poland and he's been writing the game code for my voxlap engine demo in his spare time.

This world is really small. TD is my friend. ;D

Quote:

That's really cool, I want liero 3D!

Really good idea! I'll have to tell him. :)

Korval
Member #1,538
September 2001
avatar

Quote:

Why not? Just calculate the position of each voxel, its distance to the camera, and use rectfill to stretch it. Granted it's not the best way to rasterize it

It's not a good way to rasterize them. You may as well use sprite impostors; those will actually look good at certain resolutions.

And how precisely do you convert a cube into a rect so that you don't make cracks in your model?
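(For concreteness, the quoted rectfill approach boils down to something like the following Allegro 4 sketch. The Voxel struct, the focal length, and the assumption that the voxels arrive sorted back-to-front are illustrative, not anyone's actual code; the cracks mentioned above appear exactly where neighbouring cubes round to non-adjacent rects.)

```c
/* A minimal sketch of the quoted rectfill technique (Allegro 4 API).
   Assumes camera-space voxel positions, sorted back-to-front. */
#include <allegro.h>

typedef struct { float x, y, z; int color; } Voxel;

void draw_voxels(BITMAP *bmp, const Voxel *v, int count, float focal)
{
    for (int i = 0; i < count; i++) {
        if (v[i].z <= 0.0f) continue;              /* behind the camera */
        float inv_z = 1.0f / v[i].z;
        int sx = bmp->w / 2 + (int)(v[i].x * focal * inv_z);
        int sy = bmp->h / 2 - (int)(v[i].y * focal * inv_z);
        int half = (int)(0.5f * focal * inv_z);    /* projected half-size of a unit cube */
        if (half < 1) half = 1;                    /* never smaller than one pixel */
        rectfill(bmp, sx - half, sy - half, sx + half, sy + half, v[i].color);
    }
}
```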

Quote:

But that doesn't mean it has to be rendered as such.

Yes it does. If you don't, then the model will look worse than it would have normally.

Quote:

No, but the OP mentioned a 2.5D game, which is what Doom/Wolf-3D are

Doom is clearly a 3D game, gameplay-wise. The fact that it has motion along 3 dimensions is enough. While, yes, it has 3D collision issues (not being able to go over or under monsters), it still has the concept of height. As such, it is a 3D game.

Wolf-3D is a bit weirder. Clearly, the intent was to be 3D; hence the name. But they didn't have the technology at the time to make real 3D gameplay. The intent was still there, though. So, technically it is 2.5D, but that was never how the game was supposed to turn out.

Quote:

But Blood and Shadow Warrior do

Then bring those games up. And, btw, they aren't 2.5D. The Build Engine is a fully 3D system, like Doom.

Plucky
Member #1,346
May 2001
avatar

Seems like a lot of discussion in want of definitions. Here's what I gathered:
There's a difference in the graphical representation of the game world: 2D vs 3D.

There's a difference in how the game world space is stored as data: "2D" vs "3D". Perhaps this is where the confusion over the term 2.5D comes from. A world could be stored substantially in 2D only, while the graphics are 3D.

And then there's a difference in degrees of freedom: 4 DOF, 5 DOF, 6 DOF.

As someone said earlier, a voxel is simply a digitization of a 3D solid. For voxel representation of terrain, only the "topside" is stored, and there is no "bottom" because the earth is orders of magnitude larger than the game world. The "wavesurfing" technique takes advantage of the lack of a "bottom" to reduce computation.

A voxel engine is not too different from a ray-casting engine. Both trace rays. The former looks for a digital "cube" and the latter traditionally looks for a "surface". And there are video cards out there designed solely for ray-casting. My opinion is that eventually we will get to real-time ray-casting, and polygon representation of 3D bodies would give way to voxel representations.

For games, polygons are better until technology catches back up to voxels. And voxels are catching up, albeit slowly. I've been working here and there on an Allegro terrain voxel engine. In a 440x330 window on a 1.7 GHz machine, I get ~30 fps with the following features:
32-bit color, blended (seamless) terrain textures, mipmapped textures, fog, a dithered scrolling sky map, +/-45 deg pitch capability, precalculated terrain lighting, terrain mouse picking, and simple 2D sprite world locating.
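(The core of such a terrain engine is short. Below is a hedged sketch of the per-column heightmap march — the "wavesurfing" idea from above. The map arrays, the projection scale, and the camera parameters are invented for illustration; this is not Plucky's actual code.)

```c
/* Sketch of a heightmap voxel terrain column march ("wavesurfing"):
   walk one screen column's ray front-to-back, drawing only the pixels
   that rise above everything drawn so far. */
#include <allegro.h>
#include <math.h>

#define MAP_W 1024
#define MAP_H 1024

extern unsigned char heightmap[MAP_W * MAP_H];  /* terrain height samples */
extern int           colormap [MAP_W * MAP_H];  /* precalculated lit colors */

void render_column(BITMAP *bmp, int col, float cam_x, float cam_y,
                   float cam_h, float angle, float horizon, float max_dist)
{
    float dx = cosf(angle), dy = sinf(angle);
    int y_min = bmp->h;                          /* the "wave front" for this column */

    for (float dist = 1.0f; dist < max_dist; dist += 1.0f) {
        int mx = ((int)(cam_x + dx * dist)) & (MAP_W - 1);   /* wrap the map */
        int my = ((int)(cam_y + dy * dist)) & (MAP_H - 1);
        float h = heightmap[my * MAP_W + mx];

        /* perspective projection: nearer and taller samples rise on screen */
        int sy = (int)(horizon + (cam_h - h) / dist * 240.0f);
        if (sy < 0) sy = 0;

        if (sy < y_min) {                        /* draw only what pokes above the wave */
            vline(bmp, col, sy, y_min - 1, colormap[my * MAP_W + mx]);
            y_min = sy;
        }
        if (y_min == 0) break;                   /* column full; there is no "bottom" */
    }
}
```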

Korval
Member #1,538
September 2001
avatar

Quote:

And there are video cards out there designed solely for ray-casting.

Let us now make a distinction between ray casting and ray tracing.

Let us say that ray casting is the technique used by Doom and so forth to render their worlds. Let us say that ray tracing is the rendering technique that traces rays from a camera in 3D space into a scene to "sample" the scene.

Given this, ray casting hardware does not exist, for the obvious reason that it doesn't need to. Ray casting was fast enough on 486 machines, let alone Pentium-class ones.

Now, ray tracing hardware does exist, but not in any real, commercially viable form. And most of it doesn't deal in voxels.

Quote:

My opinion is that eventually we will get to real-time ray-casting, and polygon representation of 3D bodies would give way to voxel representations.

Then your opinion is wrong. Once we get real-time ray tracing, polygons will be abandoned for spline surfaces or geometric CSG primitives. You know, things that can actually be round, not just successively approximated by squares. Heightmaps will be their own heightmap primitive, ray traced directly. Also, these things take up far less room than voxels.

The best (read: only) real use for voxels in ray tracing is for fog-banks. Something where each voxel element describes the density and color of fog within it.

Quote:

and polygon representation of 3D bodies would give way to voxel representations.

That's never going to happen. That's like saying that Photoshop is clearly a better product than Illustrator (Photoshop deals in raster images, Illustrator in vector ones). Now, Photoshop is generally the more useful one because people use raster images all the time. But the fact is, if you want to produce great images, regardless of resolution, you use Illustrator, not Photoshop.

Voxels are a bad idea for solid geometry. They aren't used in high-end CG production, nor are they going to be. Lighting with voxels is quite poor as well, since cubes don't have curve-approximating normals or faces.

Quote:

+/-45 deg pitch capability

The fact that the rendering technique restricts the pitch alone shows that it is nothing more than a hack.

Quote:

precalculated terrain lighting

We've had precalculated lighting on terrain that ran at 30 fps on chips with clock speeds two orders of magnitude slower. Doesn't impress me.

Chris Katko
Member #1,881
January 2002
avatar

Quote:

The fact that the rendering technique restricts the pitch alone shows that it is nothing more than a hack.

Wrong. If you restrict it to +/-45 degrees of movement, you can add some extra optimizations. Using your same logic, our textures are hacks because they need to be powers of two.

Outcast used voxels for the terrain, and it looked pretty decent (especially considering the alternatives at the time).

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

Plucky
Member #1,346
May 2001
avatar

Sorry I meant ray-tracing. I'm aware of the technical difference.

Well, with voxel representation I was thinking of a cross between a "tiny cube" and a complete mathematical surface, e.g. a point in space that has the following kinds of information: color, ambient light, normal vector, etc. Granted, one can argue that having a normal vector is like a very small triangle plane, and so it is. But when the triangle is sufficiently small, it will look like a small voxel. For a game, if the number of polygons rendered = the number of pixels on the screen, it starts to look like "backward" ray-tracing. Add millions of polygons to cull for such high resolution scenes, and ray tracing could become more efficient.

Quote:

The fact that the rendering technique restricts the pitch alone shows that it is nothing more than a hack.


I could implement full pitch if I spent more time on it. Terrain voxel engines assume up and down orientations for speed. When the pitch exceeds 45 degrees, for a subset of the rays you must switch the up/down assumption. I just haven't coded it. Perhaps you could try coding a voxel engine to see what I mean.

Quote:

We've had precalculated lighting on terrain that ran at 30 fps on chips with clock speeds two orders of magnitude slower. Doesn't impress me.

The point was to show that one could add quite a number of features to a voxel engine at decent speed with current processors. Anyway, for a large map, I found it faster to precalculate each section of the map all in one go. It also looked better, because it was easy to interpolate the light map rather than perform multiple real-time lighting calculations for each pixel.

Korval
Member #1,538
September 2001
avatar

Quote:

Add millions of polygons to cull for such high resolution scenes, and ray tracing could become more efficient.

In what way? Scan converters have certain intrinsic advantages. Chief among them is that they operate purely in immediate mode. That is, they render only the triangles they are asked to, in the order specified. They don't have to build up, for each frame, a complete render list with spatial partitioning for optimized rendering. This render list takes up a lot of room.

Next, ray tracing samples this list at random locations; scan conversion is much more cache-friendly. Scan conversion is a linear process; ray tracing is a recursive one. You can hide latency in linear processes by making deep pipelines, like modern GPUs. The only way to really accelerate ray tracing is to throw more processors at it.

The other thing is that ray tracing != voxels. Should ray tracing become faster than scan conversion, one still needs to justify voxels as opposed to spline patches (which never pixelate geometrically, since they are sampled rather than raster) or other model representations that ray tracing allows.

Plucky
Member #1,346
May 2001
avatar

Well you've separated the issues into RT vs rasterization and voxels vs spline patches.

For RT I see that the advantages include: logarithmic rendering cost vs number of triangles with the use of hierarchical structures, straightforward visibility and occlusion culling, relative ease of parallelism, and the use of arbitrary rays for true shadows, reflections and global illumination.

I agree that there's a memory coherency problem. However some of this can be currently optimized by tracing groups of rays (e.g. using SSE for 4 rays), etc. Perhaps memory on video cards can be hardwired to help this problem in the future.

A set of questions might be: (1) Does ray tracing offer superior graphics quality? (2) If so, can ray tracing become fast enough for acceptable gameplay?

A better question is: will rasterization ever match ray tracing's potential graphics quality? Perhaps a hybrid offers the best solution, though ray tracing seems to mimic physics better.

I just found an interesting short opinion paper on this subject. It describes a few opinions on the subject by industry experts: http://online.cs.nps.navy.mil/DistanceEducation/online.siggraph.org/2002/Panels/01_WhenWillRayTracingReplaceRasterization/cdrom.pdf

I never said (nor implied) ray tracing == voxels. Spline patches aren't the ultimate answer either. (e.g. getting them to fit seamlessly). Perhaps the industry would evolve to a hybrid solution: patches or polygons to describe larger "mathematically friendly" surfaces and voxels to model more intricate details. And perhaps voxels would evolve from just cubes into other primitive volumes like cones, spheroids, etc.

Korval
Member #1,538
September 2001
avatar

Quote:

logarithmic rendering cost vs number of triangles with the use of hierarchical structures

Which doesn't begin to offset the cost of doing ray tracing.

Quote:

straightforward visibility and occlusion culling

And this matters because... why?

Quote:

relative ease of parallelism

So?

Quote:

and the use of arbitrary rays for true shadows, reflections and global illumination

Which only serves to slow the scene down by requiring the casting of more rays.

Scan converters can fake reflections with environment maps, shadows with shadow maps or volumes, and global illumination with... wait. Ray tracing only gives you global specular illumination. It does nothing for diffuse global illumination. Scan conversion can get specular global illumination by using an environment map with a proper shader. You'd need radiosity, or data from a radiosity render of the scene, to get global diffuse illumination. There are various tricks you can play to approximate it (a diffuse environment map), but you can play those tricks in either ray tracing or scan conversion.

You can, technically, get global diffuse from ray tracing. But you'd have to cast a ray in every direction from an intersected surface. Hardly economical.

So, what about the deficiencies of ray tracing? Like:

1: Antialiasing. There are any number of techniques available to a scan converter for various kinds of antialiasing. From true anisotropic filtering to edge antialiasing, there are many ways for a scan converter to antialias. However, because ray tracing samples the analog representation rather than scan-converting it, it just doesn't have the information necessary to perform antialiasing. The only recourse, therefore, is to cast more rays. And un-antialiased ray-traced renders look pretty bad.

2: Basic hardware optimization. Scan conversion is an iterative, linear process. There can be early outs, but the only question is when something exits the pipe, not how long it loops over steps 1-5. However, ray tracing is a recursive process. A ray must be tested against the scene continuously until a hit is determined. It has to run this loop for an undetermined time.

3: Deferred rendering. Having to store (and hierarchically optimize) each frame in order to render it is a daunting task at 30+ fps. And that has to be done after transformation. Storing this data and building a ray-traceable representation from it isn't cheap, in memory or in performance.

4: Primitive hardware optimization. A ray tracer that can't trace arbitrary surfaces isn't a ray tracer. One of the primary purposes of using a ray tracer is that you can ray trace any surface you can define a ray-surface intersection algorithm for. In order to optimize this, hardware developers would have to develop a primitive-specification language that allows the programmer, much like modern shader languages, to create primitive intersection/interpolation/sizing routines. Scan conversion doesn't need this, since it only operates on triangles.

5: Advanced hardware optimization. So, precisely which spatial subdivision scheme would you suggest the hardware employ? You can't use all of them; you have to pick one. You could go for octrees, but they don't handle uniform arrangements of objects too well. You could go for a grid-bag, but sparse scenes slow down ray tracing, and they take up quite a bit of memory. Maybe you could go for full-on BSP, but it has similar problems to octrees, and it really takes time to build. There is no correct answer that fits all. Obviously, scan converters don't have this issue.

6: Multipass. Scan converters always have the opportunity to multipass geometry. This is useful if the required rendering effects exceed the capabilities of the hardware. For example, on my Radeon 9500, if you wanted to use 20 independent texture coordinates, you'd need 3 passes. A hardware ray tracer would have similar hardware limits, but multipass is not an option; ray tracers just don't lend themselves to it.

BTW, to be fair, you forgot to mention that ray tracing provides order-independent transparency.

Scan conversion, as a process, lends itself better to hardware acceleration than ray tracing. Which explains why many hardware ray tracers never really get out of the research room.

Quote:

will rasterization ever match ray tracing's potential graphics quality?

Gollum, not to mention most of Pixar's movies, was not ray traced, so I'd say that the answer is yes. If you put enough hacks into it, it can look really good. And still be faster than ray tracing.

Quote:

using SSE for 4 rays

That makes absolutely no sense. How can you use SSE for 4 rays, when you're using SSE for things like doing a single dot product with a ray direction and some vector? SSE can only do 1 vector operation, not 4. It does 4 scalar ops, but ray tracing requires vector ops.

Quote:

Spline patches aren't the ultimate answer either. (e.g. getting them to fit seamlessly).

Ask a professional modeller what they'd rather work in: triangles, spline patches, or voxels. Many of them probably don't even know what a voxel is. There's a reason for this.

Quote:

And perhaps voxels would evolve from just cubes into other primitive volumes like cones, spheroids, etc.

And what good is that? You can't create a flat surface (just a bumpy one), and you can't even guarantee that there are no holes between objects. At least cubes can fit together perfectly.

You seem to have this idea that voxels are a superior method for expressing 3D models. They are not. Vector representations of anything are always better than raster representations.

And how would you go about creating a skinned character? With voxels, you'd have to do sprite animation, but with meshes or spline patches, you can just do regular bone animation and weight the mesh/patches to the bones. Having bone animation is a very good thing; you get animation blending, IK, and all manner of other good animation stuff.

Richard Phipps
Member #1,632
November 2001
avatar

Korval, do you think that hardware-accelerated curved (spline or otherwise) polygons will be the next big thing for the PC and consoles? To the extent that every polygon could be curved and drawn at the same speed as a flat polygon?

He's right about voxels, by the way; they are not the future and are limited in application.

Plucky
Member #1,346
May 2001
avatar

First, I'm curious how, when I try to have at least some discussion (with proposals of different ideas, perhaps hybrid systems, and so forth), I get seemingly inflexible, dogmatic responses.

If ray-tracing was so bad and scan conversion so superior, how come Renderman and Maya both have ray tracing modules to enhance certain effects, which I'm certain effects like Gollum used to some degree? Could it be that the "hacks" (a term which you now proudly use... yet earlier you used the same term as derogatory) are insufficient in many circumstances?

I proposed "voxels" do not have to be cubes. Sure, in cartesian space, a cube makes the most sense as 3D pixels... but other primitives work if you're allowed to overlap. In this sense you add more information to a volume "element". I guess I'm talking about geometric primitives rather than voxels per se.

SSE. Have you thought of multiple SSE registers, each containing only one component from each of the 4 vectors?

Parallelism. What if parallel computing becomes the standard?

Speed and acceleration. Frankly it doesn't matter that much if ray tracing is slower than scan conversion. It's true today. And there's no reason why Moore's law won't apply to both in the future. It doesn't matter if you need more sampling rays as long as the number is not out of control. And as you suggested, if scan conversion is allowed its hacks, so is RT. The question is not whether ray tracing a complex scene will ever get to real time. Of course it will. Could it look better? That's the real question. Apparently the jury is out.

Pretty sad if a professional modeller doesn't know what a voxel is. I guess it's fortunate that none of us are one.

I've thought about skins and complex surfaces... one reason why I first proposed a hybrid. Perhaps you missed it in the rush to deride me. Another thought is the possibility of having a mesh of voxels. Imagine the texture skin. What if each texel was a voxel? What if you can animate each voxel/texel like a node in a mesh? With specific physics and so forth? In other words, apply finite element physics modelling to voxels. Hmmm, this is very intriguing, because you can potentially model animations much more realistically rather than "hacking".

Yes, I know, adding a finite element physics model would slow things down ridiculously (at the moment). But just as it seemed fantastical 20 years ago that we'd be able to render millions and millions of triangles in real time...

Korval
Member #1,538
September 2001
avatar

Quote:

First, I'm curious how, when I try to have at least some discussion (with proposals of different ideas, perhaps hybrid systems, and so forth), I get seemingly inflexible, dogmatic responses.

Because your "different ideas" don't correlate to anything in the real world?

And my responses are not dogmatic. You will note that I fairly pointed out a (non-trivial) ray tracing advantage that you did not. Did you ever do anything similar for scan conversion?

Quote:

If ray-tracing was so bad and scan conversion so superior, how come Renderman and Maya both have ray tracing modules to enhance certain effects

Because there are some things that ray tracing does very well. Certain special cases where ray tracing comes in handy. Specular comes to mind.

Quote:

which I'm certain effects like Gollum used to some degree?

Sheer speculation, at best. I bet you also think that Gollum was modeled with voxels too.

Quote:

Could it be that the "hacks" (a term which you now proudly use... yet earlier you used the same term as derogatory) are insufficient in many circumstances?

They are hacks; what do you expect? An environment map has limitations; it assumes that the environment is at infinity. That's the basic assumption of environment mapping. If you violate that assumption, you don't get good results. That's why it is a hack rather than a solution.

However much of a hack it is, it is a useful hack. It is always important to note that it is a hack when using it, so that it is not used inappropriately. But that doesn't stop it from being useful in 90+% of cases.

And I'm not proud of resorting to hacks upon hacks for high-performance 3D. I would like to use ray tracing. However, it isn't going to happen, so there's no point pining for it.

Quote:

SSE. Have you thought of multiple SSE registers, each containing only one component from each of the 4 vectors?

Trust me; SSE doesn't work that way. You're not going to speed up 3D operations by using SSE in a different way.

SSE gives you vector operations; that's all. It doesn't give you matrix operations; you have to build them out of scalar operations.

Quote:

Parallelism. What if parallel computing becomes the standard?

The standard what?

My computer already has 3 processors. A CPU, a GPU, and an audio DSP. That's pretty parallel, if you ask me ;)

The kind of parallelism that it takes to make ray tracing work would require a large array of processors. Like 64+. We're not going to see that for a while, because programming massively multithreaded apps is both a pain and very difficult to maintain. For the relatively near future, programs are going to stay relatively single threaded. Once we get better programming languages, then we can start to see largely multithreaded applications being developed.

Quote:

Frankly it doesn't matter that much if ray tracing is slower than scan conversion. It's true today. And there's no reason why Moore's law won't apply to both in the future.

On the assumption that it applies to both equally, if ray tracing is slower than scan conversion, this means that it will always be slower. Which means that performance gains in scan conversion can easily be put into improved visual quality. Maybe you're willing to use shadow mapping or shadow volumes. Maybe you're willing to try HDR rendering. Maybe you add dynamic lighting to everything. Maybe you use BRDF functions to improve the quality of illumination on a surface. Maybe you incorporate a Fresnel term into your specular.

Ray tracing can't do these things if it's barely able to keep up. Granted, ray tracing gets shadowing for free, but it doesn't get HDR for free, nor BRDFs or Fresnel specular computations. These cost each method the same amount, but scan conversion can afford it.

BTW, scan-converting GPUs have been exceeding Moore's Law.

Quote:

The question is not whether ray tracing a complex scene will ever get to real time. Of course it will. Could it look better? That's the real question. Apparently the jury is out.

By the time ray tracing reaches real-time (and this would only be basic ray tracing. A few lights and no non-shadow recursion), scan conversion will still be visually leaps and bounds ahead of it. No matter what, ray tracing is the slower-performing rendering solution. It takes more time to produce a ray-traced image than to produce the same one with a scan converter, for most visual databases that real-time apps are interested in.

Quote:

Pretty sad if a professional modeller doesn't know what a voxel is.

Isn't that kinda like saying that it's pretty sad if a programmer doesn't know how to program in FORTRAN? Both FORTRAN and voxels are part of their respective fields, but they are esoteric parts at best. Outliers that only a few specialists know, and only those specialists need to know them.

Quote:

I guess it's fortunate that none of us are one.

I work with several. Some have expressed interest in going from triangles to spline patches, but none have done so in terms of voxels.

Quote:

I've thought about skins and complex surfaces... one reason why I first proposed a hybrid.

But a hybrid of a bad idea and a good one is still a bad idea. It may not be as bad as the original one, but it is certainly less good than the good idea.

Quote:

What if you can animate each voxel/texel like a node in a mesh? With specific physics and so forth? In other words, apply finite element physics modelling to voxels. Hmmm, this is very intriguing, because you can potentially model animations much more realistically rather than "hacking".

And take forever to do so.

Quote:

But just as it seemed fantastical 20 years ago that we'd be able to render millions and millions of triangles in real time...

There was never any question that we would get there, eventually. It was a question of when.

Contrary to popular belief, computing speed cannot increase forever. Eventually, you reach the speed of light, and you can't do anything more. You can fake more by going "wide" and parallelizing it, or going "deep" by pipelining. But you can't actually do anything more in a particular timeframe.

To even consider something like "finite element physics modelling" as a reasonable solution to the problem of animation is just ludicrous. It takes up so much performance and memory that it is never a reasonable solution to the problem. It just isn't worth the effort, when a couple of quick hacks that can actually be done today, and not in 50+ years, can get you 99.9% of the way there.

A "hack" becomes a practical method when it covers the majority of all important, and even outlier, cases. As long as the alternatives all place undo burden on the system, the "hack" prevails.

Once again, modern CG graphics doesn't need this; animators can just use bone animation and get exceptional results.

Plucky
Member #1,346
May 2001
avatar

Quote:

And I'm not proud of resorting to hacks upon hacks for high-performance 3D. I would like to use ray tracing. However, it isn't going to happen, so there's no point pining for it.

This is what baffles me. Of course quality real-time ray tracing won't happen if no one ever pines for it. Fortunately many do. (More than just idiots like me.) You appear to know much about this subject, and you give up. Others who know at least as much as you do appear not to give up.

Quote:

On the assumption that it (Moore) applies to both equally, if ray tracing is slower than scan conversion, this means that it will always be slower.

This logic doesn't totally follow. If RT has logarithmic rendering cost wrt # of triangles, potentially RT can catch up.

Quote:

By the time ray tracing reaches real-time (and this would only be basic ray tracing. A few lights and no non-shadow recursion), scan conversion will still be visually leaps and bounds ahead of it.

I'm not talking about "basic" RT; I mean RT of good quality, e.g. Star Wars or LOTR. Seeing demos of real-time RT, it already appears to meet your basic criteria.

Quote:

Isn't that kinda like saying that it's pretty sad if a programmer doesn't know how to program in FORTRAN?

No, it's kinda like saying it's sad to see an experienced C programmer who never heard of Fortran.

Quote:

Sheer speculation, at best.

A quick google: http://cgw.pennnet.com/Articles/Article_Display.cfm?Section=Articles&Subsection=Display&ARTICLE_ID=196304 "'We stuck with what we were doing, although Ken added a little raytracing to increase the level of detail in the ambient occlusion,' says Greg Butler, sequence supervisor for Gollum."

Quote:

I bet you also think that Gollum was modeled with voxels too.

The personal insults keep coming. Classy.

Quote:

Trust me; SSE doesn't work that way. You're not going to speed up 3D operations by using SSE in a different way.

I'm surprised that you seem unaware of the difference between structures of arrays and arrays of structures, and how they apply to SIMD.
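(To make the structure-of-arrays idea concrete: each SSE register holds one component of four different rays, so a single _mm_mul_ps/_mm_add_ps sequence advances four ray-plane tests at once. The RayPacket layout and the plane test below are illustrative, not from any shipping tracer.)

```c
/* SoA ray packet: one register per component, four rays per register. */
#include <xmmintrin.h>

typedef struct {
    __m128 ox, oy, oz;   /* origins:    x of all 4 rays, then y, then z */
    __m128 dx, dy, dz;   /* directions, same SoA layout */
} RayPacket;

/* t = -(n.o + d) / (n.dir), computed for 4 rays in parallel */
static __m128 intersect_plane4(const RayPacket *r,
                               float nx, float ny, float nz, float d)
{
    __m128 Nx = _mm_set1_ps(nx), Ny = _mm_set1_ps(ny), Nz = _mm_set1_ps(nz);

    __m128 n_dot_o = _mm_add_ps(_mm_mul_ps(Nx, r->ox),
                     _mm_add_ps(_mm_mul_ps(Ny, r->oy),
                                _mm_mul_ps(Nz, r->oz)));
    __m128 n_dot_d = _mm_add_ps(_mm_mul_ps(Nx, r->dx),
                     _mm_add_ps(_mm_mul_ps(Ny, r->dy),
                                _mm_mul_ps(Nz, r->dz)));

    __m128 num = _mm_sub_ps(_mm_setzero_ps(),
                            _mm_add_ps(n_dot_o, _mm_set1_ps(d)));
    return _mm_div_ps(num, n_dot_d);   /* four hit distances at once */
}
```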

Quote:

The kind of parallelism that it takes to make ray tracing work would require a large array of processors. Like 64+. We're not going to see that for a while, because programming massively multithreaded apps is both a pain and very difficult to maintain. For the relatively near future, programs are going to stay relatively single threaded. Once we get better programming languages, then we can start to see largely multithreaded applications being developed.


RT doesn't need parallelism. It just benefits greatly from it. I see parallelism in the future as standard on "desktops". Apparently you do not.

Quote:

Contrary to popular belief, computing speed cannot increase forever.

No one here thinks this. Will computing speed be able to increase a few more orders of magnitude? I don't see why not. (And it doesn't have to be silicon/semiconductor based, either.)

Quote:

Eventually, you reach the speed of light, and you can't do anything more. You can fake more by going "wide" and parallelizing it, or going "deep" by pipelining. But you can't actually do anything more in a particular timeframe.

It's not "faking". It is more per unit time. It's like saying one factory with 2 identical production lines produce no more per unit time than another factory with only one identical line. Perhaps you're trying to say, "More per unit time and per unit space"?

Quote:

To even consider something like "finite element physics modelling" as a reasonable solution to the problem of animation is just ludicrous.

Just as kinematic FEA on any computer 50 years ago was considered ludicrous. I never said real time was possible now; I said it could be possible in the future.

Quote:

It just isn't worth the effort, when a couple of quick hacks that can actually be done today can get you 99.9% of the way there.... animators can just use bone animation and get exceptional results.

Bone/muscle/skin animation is of course physics modeling. For example, bone and simple muscle animation is simply rigid-body kinematics. I do see nice effects when an animator hand-tweaks a bulging muscle in a single frame, and it seems like the animator is using some human judgment as to how such a bulge should look. But I haven't seen muscle and skin effects in real time... perhaps real-time effects look more real with real physics behind them. And is skin modeled with physics (e.g. stretched or furrowed) or determined by human design? I can imagine that to model skin accurately (more accurately than hacking), a mesh works quite nicely. And a mesh relates well to FEA.
Imagine a vehicle crumpled up in near infinite ways. Imagine material being ablated away by some laser. Imagine being able to examine a gash wound up close. In real time, you need rules that provide sufficient realism. For the most part, physical rules work well. Sure, physical "hacks" work, but I suspect you reach a point where their realism is insufficient.

In a way I grow tired of this discussion, because it's less of a discussion and more explicit naysaying. I take your thoughts seriously; you do not return the respect. Obviously you feel that I'm a complete idiot for having ideas that RT is possible in real time. And that voxels have a place in the future. And that more accurate real-time physics modeling is possible in the future. I'll never convince you otherwise... still, I was hoping for some exchange of ideas. Apparently not. :(

Tobias Dammers
Member #2,604
August 2002
avatar

Quote:

Once we get better programming languages, then we can start to see largely multithreaded applications being developed.

I don't think the problem lies in the programming languages we have so far, but rather in the interface.
Would be nice, though, to have a C/C++ compiler that somehow extends the language to make thread-safe coding easier.

Then: Moore's law. Well, of course there are barriers that seem "absolute" right now, like the speed of light. I would like to add two more facts to this discussion:
1) Human brains are still way superior to computers in terms of "intelligence" or "power". Sure, we can't add up numbers as quickly, but the more complex and non-standard the task, the more we outrun computers.
2) Ultimately, the human brain is based on real-world physical and chemical processes, just like computers. This means that the human brain has the same physical limitations a computer has (unless you believe in a non-physical soul or spirit or something like that, but I prefer to go with science on this one).

These facts imply that eventually (I'm talking like maybe hundreds of years), it is possible to reach a level of computing power that is comparable to (or at least in the vicinity of) human intelligence, though I don't believe computers will ever be "more intelligent" than humans.

There was a time when everybody said that 1 GHz could never possibly be reached, because of the speed-of-light issue; as we speak, you can buy 3 GHz machines off the shelf, and Moore's law has proven pretty accurate. There will always be barriers, but eventually they will be broken. If we can't get more power into a single CPU, then we'll have to think of other options, like going more parallel (we're just getting started here...), using completely different architectures, or alternate ways to represent data - eventually, solutions will be found.

All of which is totally irrelevant to the topic, of course...

---
Me make music: Triofobie
---
"We need Tobias and his awesome trombone, too." - Johan Halmén

Korval
Member #1,538
September 2001
avatar

Quote:

This is what baffles me. Of course quality real-time ray tracing won't happen if no one ever pines for it. Fortunately many do. (More than just idiots like me.) You appear to know much about this subject, and you give up. Others who know at least as much as you do appear not to give up.

The fact that one technique, no matter how clever and nice, doesn't seem to be panning out performance wise doesn't mean that I've "given up". On ray tracing as a means to achieve real-time photorealism, probably. On achieving photorealism at all? Nope. It is merely an analysis of a particular rendering technique in comparison to others.

BTW, something else occurred to me. What if you don't want photorealism? What if you're doing something like a cartoon renderer? Ray tracing doesn't handle non-reality very well at all, simply because the basic rendering mechanism is so tuned to reality that unreality becomes that much harder. Outlining in ray tracing, for example, is far harder than in scan conversion (where there are numerous methods).

That others have not given up on ray tracing simply shows a willingness to stick to an idea that may well fail. It may well not pan out, and it probably won't. In the mean time, I'm going to be busy making graphics using a method that is proven to work.

My problem isn't with ray tracing; I think it is an excellent rendering system. My problem is with the absolute belief that ray tracing, real-time at that, is the future. There is no guarantee that it is, and the likelihood is that it isn't.

Quote:

This logic doesn't totally follow. If RT has logarithmic rendering cost wrt # of triangles, potentially RT can catch up.

Of course, there's more to it than that.

As I pointed out, ray tracing has setup costs that scan conversion doesn't. Building that logarithmic hierarchical data structure isn't cheap. Indeed, best case, the setup time itself is O(n) (one operation for each triangle added). So the total cost of ray tracing is O(n) + O(log(n)), which makes it O(n).
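(In symbols, with p the number of primary rays per frame held fixed — the notation is a gloss, not Korval's:)

```latex
T_{\text{frame}}(n) = \underbrace{c_{\text{build}}\, n}_{\text{structure setup}}
                    + \underbrace{c_{\text{trace}}\, p \log n}_{\text{ray casts}}
                    = O(n) \quad \text{for fixed } p.
```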

Quote:

The personal insults keep coming. Classy.

Well, you are the one who believes that a fundamentally flawed technique like voxels should have anything to do with advanced modelling. You basically said that the mathematical definition of a circle isn't as good or accurate as a rasterized digital image of one, or that there is some benefit to using the digital circle over the vector one if there is a choice. It's harder to take you seriously after that one.

Quote:

It's not "faking". It is more per unit time. It's like saying one factory with 2 identical production lines produce no more per unit time than another factory with only one identical line. Perhaps you're trying to say, "More per unit time and per unit space"?

It is faking it because you aren't actually making anything faster. If you go wide and multiprocess, you're losing efficiency. Two processors aren't guaranteed to do the same work as one processor at twice the clock speed. If you go deep, a branch mispredict murders your performance, and you lose efficiency that way. Sooner or later, you will reach a point of diminishing returns. At some point, you will build the system that just doesn't get faster at performing a single task. You can make it more responsive for multitasking/multiprocessing. But you can't make a single application run any faster.

Yes, that's a long way from now, and likely will require at least one fundamental, PC-breaking shift in technology (no more silicon, for one). But it will happen eventually.

Quote:

Imagine a vehicle crumpled up in near infinite ways. Imagine material being ablated away by some laser.

Neither of which requires voxels or massive physics simulations of very small things. A particle system and some basic macro-scale physics is good enough. Heck, in this instance, Verlet integration, the mother of all physics hacks, is probably good enough.

If I were inclined to take your route, for highly accurate modelling, I would do this in one of two ways. One way would be to dynamically break a mesh, such that pieces of it can actually fly off. I would use 3D textures to represent the interior surface. There may even be several layers of models. Alternatively, I'd use CSG primitives and CSG operations to break them. The mesh method is likely slower, but more accurate.

Quote:

And that voxels have a place in the future.

That one is, without question, false. You may not believe me, but raster images really aren't a good idea in 3D.

Quote:

And that more accurate real-time physics modeling is possible in the future.

More like hyper-accurate. Getting close in real-time, sure. Getting it perfect, or doing micro-detailed modelling of physics? No. I can find something else to do with those clock cycles.

Also, you haven't mentioned how you're going to handle the storage constraints of having to store these massive databases of information. You can just say, "well, in the future, we'll have more memory", but I can just as easily say that, "in the future, we'll have bigger databases." Both of these statements are true. We will want bigger textures. We will want more geometry. And so forth.

Bob
Free Market Evangelist
September 2000
avatar

Quote:

I see parallelism in the future as standard on "desktops"

That prediction has been tossed around for the last 40 years or so ;)

Quote:

There was a time when everybody said that 1 GHz could never possibly be reached, because of the speed-of-light issue; as we speak, you can buy 3 GHz machines off the shelf, and Moore's law has proven pretty accurate.

Moore's "law" refers to chip complexity (~ number of transistors), and has nothing to do with clock rate. That said, problems at 1 GHz are of a different nature: Intel had great difficulty getting there (Pentium III recall), and needed a wildly different architecture and process technology to break that barrier.

Another thing is that you cannot increase clock rate and decrease feature sizes indefinitely. Eventually, the electron tunneling effect will kick in, and that's when the real problems start - what if you didn't know for sure whether you had current flowing or not?

Quote:

SSE. Have you thought of multiple SSE registers, each containing only one component from each of the 4 vectors?

Assuming the rays don't diverge, it would be something appropriate to use, although a pain in the ass to write.

--
- Bob
[ -- All my signature links are 404 -- ]

Plucky
Member #1,346
May 2001
avatar

Well at least I got one good thing out of all of this: My opinions have gotten more specific... it started out a bit vague.

What I want is real-time realism, both physical and optical (graphical). We will always strive to get realism at least to the point where we can't tell the difference. For animation, accurate physics would eventually be needed; if a hack were so good that we couldn't tell the difference in real time between the hack and the real thing, then either the hack could replace that particular physics, or the hack is already using accurate physics. For graphics, I follow a similar line of logic: optical theory models what we see pretty well. This is not to say ray-tracing alone is the ultimate. Ray tracing follows specific physical optical models. There are other models for light diffusion and so forth.

Thus I conclude that ray-tracing, because of its analogy to physics, would be part of any graphics rendering system in the future.

I make a similar conclusion with voxels. The world is 3-dimensional (duh). Yet graphically we model the world as a bunch of surfaces; we model a 3D world with 2D objects. This limits the physical modeling that can be done to achieve better realism. There shouldn't be an artificial barrier between graphics modeling and physics modeling. One should not limit the other.

Another thought is if we have a surface element that is never larger than a screen pixel of say 40 microns square (~600 dpi). At this resolution, the difference between a surface element and a voxel is blurred... except that one could physically model a voxel better.

Quote:

What if you don't want photorealism?

So? Don't use photorealism techniques. Use sprites for a 2D side scroller, for example.

Quote:

As I pointed out, ray tracing has setup costs that scan conversion doesn't. Building that logarithmic hierarchical data structure isn't cheap. Indeed, best case, the setup time itself is O(n) (one operation for each triangle added). So the total cost of ray tracing is O(n) + O(log(n)), which makes it O(n).

Would we need to set up the whole data structure anew for each and every frame? I think not. (e.g. Hierarchical bounding volumes for each object need not change.) Are there techniques to add an object without having to redo the whole structure? I think so.
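(A hedged sketch of that refitting idea: when objects move, recompute each node's bounding box bottom-up from its children instead of rebuilding the tree. O(n), but with a tiny constant per node; the node layout is invented for illustration, not from any particular engine.)

```c
/* Axis-aligned bounding box and a minimal BVH node. */
typedef struct AABB { float min[3], max[3]; } AABB;

typedef struct BVHNode {
    AABB            box;
    struct BVHNode *left, *right;   /* both NULL for a leaf */
} BVHNode;

/* Leaves are assumed to have had their boxes updated from their
   objects already; interior boxes become the union of their children. */
void bvh_refit(BVHNode *n)
{
    if (!n->left)                   /* leaf: box is already current */
        return;
    bvh_refit(n->left);
    bvh_refit(n->right);
    for (int i = 0; i < 3; i++) {   /* parent box = union of child boxes */
        n->box.min[i] = n->left->box.min[i] < n->right->box.min[i]
                      ? n->left->box.min[i] : n->right->box.min[i];
        n->box.max[i] = n->left->box.max[i] > n->right->box.max[i]
                      ? n->left->box.max[i] : n->right->box.max[i];
    }
}
```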

Quote:

You basically said that the mathematical definition of a circle isn't as good or accurate as a rasterized digital image of one, or that there is some benefit to using the digital circle over the vector one if there is a choice.

The world is not filled with perfect circles. Or other convenient mathematical shapes. Think bolder.

Why don't we describe sound waves as a string of mathematical arcs or surfaces? Why do we digitize them discretely instead? Complexity?

Quote:

It is faking it because you aren't actually making anything faster. If you go wide and multiprocess, you're losing efficiency. Two processors aren't guaranteed to do the same work as one processor at twice the clock speed. If you go deep, a branch mispredict murders your performance, and you lose efficiency that way. Sooner or later, you will reach a point of diminishing returns. At some point, you will build the system that just doesn't get faster at performing a single task. You can make it more responsive for multitasking/multiprocessing. But you can't make a single application run any faster.

Think of it this way. We have two black boxes. If you're omniscient, you would know that one has 512 parallel processors, and the other has 65536 parallel processors. But you're not all-knowing... all you know is that one black box can compute faster than the other. Efficiency is immaterial. That each processor has the same clock speed is immaterial. Diminishing returns are immaterial. Task output per unit time of each black box is the only real metric for speed. I see no reason for diminishing returns to reach zero before we can physically and graphically model a system realistically in real time.

Quote:

One way would be to dynamically break a mesh, such that pieces of it can actually fly off. I would use 3D textures to represent the interior surface. There may even be several layers of models.

Surfaces to model 3D is, well, superficial. A 3D gash looks more realistic up close than a simple 2D texture representation of a gash. Several layers start to imply 3D... enough of them and you're talking about a volumetric model.

Quote:

Neither of which requires voxels or massive physics simulations of very small things. A particle system and some basic macro-scale physics is good enough.

I disagree. If I can tell the difference (both graphically and physically), then it's not good enough for modeling realism. As I said earlier about voxels, they represent the real world better. Surfaces are nice for graphics, but limited. A volume includes surfaces.

Quote:

Getting it perfect, or doing micro-detailed modelling of physics?

It just has to be detailed enough for it to be indistinguishable from reality. We're far from this point.

Quote:

That prediction (parallelism) has been tossed around for the last 40 years or so ;)

Sort of like predicting every few years that controlled fusion will happen in 10 years. :) Yet not many people doubt that we will eventually have fusion reactors.

Korval
Member #1,538
September 2001
avatar

Quote:

Would be nice, though, to have a C/C++ compiler that somehow extends the language to make thread-safe coding easier.

That would only help a little.

To be able to take advantage of massively parallel systems, you need a compiler and a language that are inherently multithreaded. One where mutexes are transparent and race conditions can never happen. Where the compiler can analyze the code and determine which parts should be run on which thread. Stuff like that. C/C++ just aren't well designed for hardcore multithreading.

Quote:

Another thought is if we have a surface element that is never larger than a screen pixel of say 40 microns square (~600 dpi). At this resolution, the difference between a surface element and a voxel is blurred... except that one could physically model a voxel better.

And take up far more memory in the process.

A cube 100 yards on a side would take up no less than 43GB of room, and that's if each texel was only one byte. Of course, you're going to want normals and so forth, so you need more room than that. Maybe 12 bytes. So, you're looking at 531GB.

GTA stands in as a pretty huge game. True Crime: Streets of LA finds a way to model a pretty hefty chunk of LA, and is even bigger. We're talking miles here. One cubic mile would be 231 Tera-Texels, or (with the 12-byte voxel) 2.772 Peta-Bytes. That's 6 orders of magnitude more than what my machine has in RAM.

Note that the actual game itself is able to fit on a 1.5GB disk. No resource is so limitless that one can say, "Hey, let's spend 8 orders of magnitude more than we really have to."

Also, let's not forget that somebody has to take the time to build this massive voxel world. Voxels aren't exactly the easiest thing to model in. It is much easier to use meshes and spline patches, or even CSG primitives.

Games are already pressed to release in 2 years with a set of models and textures using a relatively friendly modelling scheme. This will only get harder and take longer as models become more detailed. Adding the layer of voxel complexity isn't going to help.

[edit] I did my computations wrong. It's actually worse. I forgot about the 600-dpi part, and only counted it as 1-dpi.

So the 100-yard cube is really 8.7 Exa-Texels in size. And the cubic mile, of course, 54.7 * 10^21 texels. You're approaching Avogadro's number here, so it's clearly outside the realm of possibility.
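(Spelling out that arithmetic — 100 yd = 3,600 in at 600 voxels per inch, and one mile = 63,360 in; the binary-prefix conversion is an assumption:)

```latex
(3600 \times 600)^3 = (2.16 \times 10^{6})^3 \approx 1.01 \times 10^{19}\ \text{texels}
  \approx 8.7\ \text{exa-texels} \quad (1.01 \times 10^{19} / 2^{60} \approx 8.7),
\qquad
(63360 \times 600)^3 \approx 5.5 \times 10^{22}\ \text{texels per cubic mile}.
```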

Voxels just aren't practical.

Quote:

So? Don't use photorealism techniques. Use sprites for a 2D side scroller, for example.

You seemed to have missed what I was saying, so I'll try again.

What if you want to render something non-photorealistically? You want 3D, and you want animation, but you want each frame to look like anime or a comic book page? Or even a pencil sketch?

This is not a 2D side scroller; we're talking Zelda: Wind Waker-level stuff here. And ray tracing clearly can't get the job done. Game developers want the freedom to do whatever they want, realistic or not, so they can't use a technique that limits that freedom.

Quote:

Would we need to set up the whole data structure anew for each and every frame? I think not. (e.g. Hierarchical bounding volumes for each object need not change.) Are there techniques to add an object without having to redo the whole structure? I think so.

So, you want the hardware (we are still talking about dedicated ray tracing hardware, here) to actually own and handle the meshes and primitives themselves? What happens when I'm instancing a mesh (rendering a mesh multiple times with different transforms in different places)? And you keep avoiding the memory costs of storing all of this data.

Quote:

The world is not filled with perfect circles.

If a modelling technique can't even get a pog right, what good is it?

Quote:

Why don't we describe sound waves as a string of mathematical arcs or surfaces? Why do we digitize them discretely instead? Complexity?

Because more of the human brain is devoted to sight than audio. Our ears have very good antialiasing qualities to them. Our eyes, however, pick up aliasing instantly.

Also, because sound waves change very rapidly, and it is the rapidity of the change that is vital in reproducing the sound. Any modelling technique that approximated them would quickly be larger in data than just storing the PCM data.

Also note that the analogy itself is invalid. Sound recording is analogous to image recording. That is, taking a picture and storing it on a computer (which is also done as a raster image). The equivalent of graphics rendering in audio would be something like voice synthesis or .mod music synthesis. Which is quite analogous to using real images (that may be touched up) as texturemaps in graphics; something that is done not too infrequently.

Quote:

Think of it this way. We have two black boxes. If you're omniscient, you would know that one has 512 parallel processors, and the other has 65536 parallel processors. But you're not all-knowing... all you know is that one black box can compute faster than the other. Efficiency is immaterial.

Efficiency is very material; indeed, it is vital. If box 2 with 2 orders of magnitude more complexity (not to mention cost) only outperforms box 1 by 5%, I'm sticking with box 1. Indeed, I'd probably pick up box 0.5 or 0.25.

Quote:

I see no reason for diminishing returns to reach zero before we can physically and graphically model a system realistically in real time.

Is there a basis for that statement? Do you know how many processors it takes before you reach the point of diminishing returns?

Quote:

Surfaces to model 3D is, well, superficial.

Let's get a little Descartian. If you can't tell that it's just a surface... does it matter if it is just a surface? If it behaves as expected, does it matter that it isn't using the actual physics to get there?

Thus, if I break a surface dynamically into pieces that are themselves surfaces, can you really tell whether or not they are just surfaces? You don't need to do the "right" thing in order to get the right results. As long as no one can tell you're using magic, magic is just fine.

Quote:

A 3D gash looks more realistic up close than a simple 2D texture representation of a gash.

Once again, you didn't read what I wrote, so I'll try it again.

One could dynamically break apart a mesh, such that pieces of it could actually fly off. One could use a 3D texture to represent the interior surface. There could even be several layers of models.

I even suggested using CSG primitives, which are solid constructs.

Bob
Free Market Evangelist
September 2000
avatar

Quote:

Of course, you're going to want normals and so forth

Well, all you really need is color - everything else can be computed from the object itself - after all, if you have voxels at such a fine precision, then for all intents and purposes, you have the surface itself.

--
- Bob
[ -- All my signature links are 404 -- ]

Plucky
Member #1,346
May 2001
avatar

I changed my mind about this discussion. Korval's commentary has been helpful in that I've grown more confident that my ideas are going in the right direction.

Quote:

So the 100-yard cube is really 8.7 Exa-Texels in size. And the cubic mile, of course, 54.7 * 10^21 texels. You're approaching Avogadro's number here, so it's clearly outside the realm of possibility.

You should be careful with the term "realm of possibility." Do you mean "possible with the technology we have" or "physically possible"? Theoretically, one could store 10^66 bits of information in a cubic centimeter. The universe is estimated to contain 10^100 bits of information. In theory, one could store the entire universe in a sphere a tenth of a light year in diameter. Maybe holographic storage will provide the density and speed to make such ideas practical. Maybe it's the next thing after that.

I'm reminded of those who said, before the invention of the integrated circuit, that to perform such-and-such a calculation would require a building the size of ____ (e.g. the Pentagon) and take ____ (e.g. 10) years.

Your convictions have a tone that there is a right and a wrong. There isn't necessarily a right and wrong. Who knows what technology revolution will happen? What computing breakthrough is in store for us? You may give great arguments for why such-and-such is impractical. But often it's only impractical with the technology or paradigm already in hand.

My conviction that ray tracing and voxels have a place in the future comes from believing that the artificial barrier between physics modeling and gfx modeling will eventually be eliminated in modeling a real-world scene that we can see in real time. Why are these two models separate? Limitations of current technology? Graphics will use the most realistic models that we have: physics. And RT uses physical principles, so it seems to follow that RT in some form will be present as a rendering technique in the future. Similarly, voxels fit nicely with physical modeling methods.

Quote:

Efficiency is very material; indeed, it is vital. If box 2 with 2 orders of magnitude more complexity (not to mention cost) only outperforms box 1 by 5%, I'm sticking with box 1. Indeed, I'd probably pick up box 0.5 or 0.25.

You originally said that parallelism is "faking" a speed increase because "you aren't actually making anything faster". The point of the black-box example is that you can measure computational output per unit time. In taking this measurement, one does not need to know what is inside the box. If parallelism provides more computational output per unit time, great. Efficiency has nothing to do with which black box produces faster results. A jet fighter guzzling gallons of fuel per second is still faster than a gas-electric hybrid car.

Quote:

If a modelling technique can't even get a pog right, what good is it?

If a circle/sphere cannot accurately model anything real, what good is it?

Quote:

You seemed to have missed what I was saying, so I'll try again... What if you want to render something non-photorealistically?

No, I got it. You missed what I'm saying. I never said RT is the only way to render graphics. I gave an example of a different rendering method that provides different results: in my example, if you're making 2D gfx, you don't want a 3D renderer. One can make the same argument among 3D renderers and kinds of output as well, which is what you did. I don't think we're disagreeing. I only wanted to respond to your implication that I thought RT was the only rendering method available.

Quote:

Also, because sound waves change very rapidly, and it is the rapidity of the change that is vital in reproducing the sound. Any modelling technique that approximated them would quickly be larger in data than just storing the PCM data.

Interesting. Using a mathematically defined surface [edit] or lines[/edit] (which includes a time dimension) is too complex? Requires more data than a digital-element representation? This was the point I was making. Imagine objects with so much detail that any mathematical model sufficient in its approximation requires data storage less efficient than using volumetric elements.

Quote:

Also note that the analogy itself is invalid. Sound recording is analogous to image recording. That is, taking a picture and storing it on a computer (which is also done as a raster image).

Let's continue your line of logic. Recording a 2D image is like recording a "3D" image. For a 2D image, the digital element is called a pixel. For a 3D image, the digital element is called a voxel.

Quote:

Do you know how many processors it takes before you reach the point of diminishing returns?


No, do you? Will there ever be such a point? Nobody (but you, apparently) knows precisely what the future of computing is.

Quote:

Let's get a little Descartian. If you can't tell that it's just a surface... does it matter if it is just a surface? If it behaves as expected, does it matter that it isn't using the actual physics to get there? ... One could dynamically break apart a mesh, such that pieces of it could actually fly off. One could use a 3D texture to represent the interior surface. There could even be several layers of models.

My point is that we can tell the difference. Sure, you can break up a surface into little parts and then rearrange them so they now show a detailed gash that reaches bone. But I argued earlier that if a "hack" represents this realistically enough, either the hack is a scientific alternative to our current physical theory, or it actually uses physical theory. And for some reason, not many of us are under the illusion that the world is composed solely of surfaces.

If you need several layers, then you're going down the slippery slope of needing many layers for sufficient realism. With enough layers, you might as well use volumetric models.

Quote:

I even suggested using CSG primitives, which are solid constructs.

I laughed at this, because I suggested this concept fairly early on. I used the term "geometric primitives" rather than CSG. Remember my blather about cones and spheroids? How one could fit them together or overlap them? Remember how you derided me for it and then ignored it later?

Bob said:

after all, if you have voxels at such a fine precision, then for all intents and purposes, you have the surface itself.

I more or less mentioned a similar concept! If you have a small enough texel, for all intents and purposes you have a voxel. I'm sure the converse (your statement) is true as well.

Bob
Free Market Evangelist
September 2000
avatar

Quote:

I more or less mentioned a similar concept! If you have a small enough texel, for all intents and purposes you have a voxel. I'm sure the converse (your statement) is true as well.

This isn't what I was referring to ;)
You don't need to store surface normals if you have a fine voxel image, because those can be computed from the image itself.

That doesn't imply anything about color or other information. It certainly doesn't imply that texels are voxels (although 3D texels are voxels).

--
- Bob
[ -- All my signature links are 404 -- ]

Plucky
Member #1,346
May 2001
avatar

Quote:

You don't need to store surface normals if you have a fine voxel image, because those can be computed from the image itself.

I had suggested it because Korval said that cubes (voxels) "don't have curve-approximating normals or faces". So you're saying that if the voxels were fine enough, one could derive the normal from adjacent voxel elements (sort of like a poor-man's calculus)?

I had assumed color was self-evident as a probable voxel attribute.

Bob
Free Market Evangelist
September 2000
avatar

Quote:

So you're saying that if the voxels were fine enough, one could derive the normal from adjacent voxel elements (sort of like a poor-man's calculus)?

Yes. This is similar to generating normal vectors from a height map or bump map.
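(A minimal sketch of that derivation: central differences over neighbouring samples give the gradient, and the normal of z = h(x, y) follows. The height() accessor and the unit grid spacing are assumptions for illustration.)

```c
/* Derive a surface normal from a height field by central differences,
   i.e. the "poor-man's calculus" mentioned above. */
#include <math.h>

extern float height(int x, int y);   /* hypothetical heightmap lookup */

void normal_at(int x, int y, float out[3])
{
    /* gradient of the height field via central differences */
    float dhdx = (height(x + 1, y) - height(x - 1, y)) * 0.5f;
    float dhdy = (height(x, y + 1) - height(x, y - 1)) * 0.5f;

    /* the surface z = h(x, y) has normal (-dh/dx, -dh/dy, 1), normalized */
    float len = sqrtf(dhdx * dhdx + dhdy * dhdy + 1.0f);
    out[0] = -dhdx / len;
    out[1] = -dhdy / len;
    out[2] =  1.0f / len;
}
```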

--
- Bob
[ -- All my signature links are 404 -- ]
