Considering what you can do with a single computer these days makes me wonder...
What could you do with a whole bunch of them if you distributed the work across a network?
Maybe a super computer-game? Something with more realistic graphics and physics?
Could, say, 20 1.5GHz CPUs pull something like that off?
I know this idea is naive, that it could be hard to distribute the work efficiently, and that network speed would be a limiting factor... I'm not even sure whether 20 CPUs could do better graphics than a modern GPU.
But I really don't understand much.
What do you think?
these things come to mind:
- get higher resolution with the same quality
- increase the number of triangles
- better physics
- less/no frame drops
but most importantly
- generate enough perspectives to display on a 3D screen
Meaning two perspectives?
No, not one of those crappy 3D screens. I'm talking about the real ones! The kind where you can view the image from multiple angles, and what you see changes as you move your head around.
Ahem. Those are really funky (read: crappy) looking. You get some of the neighboring images faded onto the one you're looking at, and not in a nice way. It's not easily fixable, either.
The question is whether there is any gain.
How fast would Half-Life 2 run in software mode?
I wouldn't be surprised if I needed 20 CPUs just to run it decently.
Well, they'll make it work nicely eventually. We just have to wait until prices in the HD market start falling.
If you had a good supercomputer, you could run an Oblivion-style game at top speed with twice the graphical quality, all the effects, textures that actually update (armor and weapons getting visibly damaged), and incredibly detailed movements instead of just scripted animations.
Interesting topic, but I think there are two different issues here: 1) network efficiency, and 2) are games suitable for parallelization? Rendering is parallelizable (just look at a GPU), and physics is parallelizable (just look at a PPU/GPU). What about AI, gameplay, sound, etc.?
It's pretty hard to find good info on parallel algorithms on the internet unless you feel like trawling the ACM/IEEE databases. If anyone has any good links, I'd love to see them.
What's the problem...
You can do this stuff on a single CPU.
They're relatively cheap anyways.
Something that comes to mind when thinking about graphically advanced games is arcade games... I'm not much into them now, but I suppose they're not what they used to be? Couldn't an arcade game built around a very powerful computer be feasible? Or maybe they already exist; I don't know, I haven't been to an arcade in ages...
Well, you could always have a CPU farm instead of networked computers. I'd love to see some really huge RTS where each CPU computes only one region of the map, or one group of units.
I went to 2 arcades in the past week, and both of them had games that looked like they were from the previous generation. I was really disappointed, and surprised. I was hoping to see some really impressive stuff.
Arcade games have used multiple CPUs going back to the 8-bit days. One obvious parallelization opportunity is a dedicated board for sound.
MS Flight Simulator can be scaled across multiple computers.
Graphics etc. can only be taken so far before latency and locking become an issue.
The really cool thing would be AI. Humans don't necessarily think every frame. It takes us a few seconds, and we're constantly adapting slowly. Perhaps you could have several really smart AIs. Use the most liberal algorithms. Have some fun.
Tacky, but imagine playing Day of Defeat (pre-Source, of course) against a whole team of AI players. It would make for some interesting gameplay without all the "stfu n00b".
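That "think every few seconds" idea is easy to sketch, by the way. Something like this (a minimal sketch; all the numbers and the plan encoding are made up):

```cpp
#include <cstdlib>
#include <vector>

struct Agent {
    float thinkTimer = 0.0f;  // seconds until the next "deep think"
    int   currentPlan = 0;    // whatever the agent decided last time
};

// Cheap work runs every frame; expensive planning only fires when an
// agent's own slow clock expires, a few seconds apart.
void updateAI(std::vector<Agent>& agents, float dt) {
    for (auto& a : agents) {
        a.thinkTimer -= dt;
        if (a.thinkTimer <= 0.0f) {
            a.currentPlan = std::rand() % 4;        // stand-in for real planning
            a.thinkTimer = 2.0f + std::rand() % 3;  // replan in 2-4 seconds
        }
        // Every frame: just steer/animate toward currentPlan (cheap).
    }
}

int main() {
    std::vector<Agent> agents(1000);
    for (int frame = 0; frame < 600; ++frame)
        updateAI(agents, 1.0f / 30.0f);  // 30 ticks per second
}
```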
Since games have to run a game tick (logic update, input update, physics, render) about every 33ms at worst, network latency and synchronizing all the machines could be a major problem. Every tick, all the CPUs would have to get ready to do their work, do it, and then combine the results on a single machine. I bet it could be done with enough prediction on each CPU, but the gains would be minimal. (There's a sketch of that tick budget after this post.)
I think your money is better spent building a single super-computer: a massively powerful dual-core machine, dual GPUs of the highest caliber, and enough cooling liquid to fill the Grand Canyon.
Back on topic, what could be done? A single-player game with AI populations numbering in the thousands all running at once, a massive fully simulated world with weather effects, insane physics running constantly in all parts of the world, graphics on par with Crysis across an array of 25 monitors, and realistically rendered audio effects that make you want to both poop your pants and suck that poop back in AT THE SAME TIME.
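To make that tick budget concrete, here's a minimal sketch of the kind of fixed-timestep loop I mean (the stage names and frame count are just placeholders, not anyone's actual engine):

```cpp
#include <chrono>
#include <thread>

// A minimal fixed-timestep loop: everything, including any work
// farmed out to other machines, has to land inside one ~33ms tick.
int main() {
    using clock = std::chrono::steady_clock;
    const auto tick = std::chrono::milliseconds(33);

    for (int frame = 0; frame < 300; ++frame) {  // ~10 seconds of game
        auto start = clock::now();

        // updateInput();    // local machine only
        // updateLogic();    // could be farmed out...
        // updatePhysics();  // ...but the results must be back
        // render();         // before this tick ends

        auto elapsed = clock::now() - start;
        if (elapsed < tick)
            std::this_thread::sleep_for(tick - elapsed);
        // If a remote CPU's results arrive late, the choice is:
        // stall here (frame drop) or render stale/predicted data.
    }
}
```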
This is almost the same thing: 12 Linux computers running Quake 3 across 24 monitors. Is that what you wanted?
You could always use a GPU. The GeForce 8800 GTX, for example, has 128 stream processors running at 1.35 GHz. Get a second one for the graphics.
cough Amdahl's Law cough
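For anyone who hasn't met it: Amdahl's Law says the speedup from n processors is capped by whatever fraction of the work stays serial: speedup(n) = 1 / ((1 - p) + p/n), where p is the parallelizable fraction. A quick back-of-the-envelope for the 20-CPU case (the 90% parallel figure is just an illustrative guess):

```cpp
#include <cstdio>

// Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n),
// where p is the parallelizable fraction of the work.
int main() {
    const double p = 0.90;  // assume 90% of a frame parallelizes
    for (int n : {1, 2, 4, 8, 20, 1000}) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%4d CPUs -> %.2fx\n", n, speedup);
    }
    // Even with infinite CPUs the cap is 1/(1-p) = 10x here.
}
```

So even a perfectly networked 20-CPU cluster tops out around 6.9x if a tenth of each frame is inherently serial.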
No, not one of those crappy 3D screens. I'm talking about the real ones! The kind where you can view the image from multiple angles, and what you see changes as you move your head around.
I saw a few different implementations of this at GDC. My favorite was one that was basically just UT2k7 with popped-out graphics when you wear special glasses. Beyond that, my next favorite was one that you wear on your head like a visor, like that guy from Star Trek.
2) are games suitable for parallelization? Rendering is parallelizable (just look at a GPU), and physics is parallelizable (just look at a PPU/GPU). What about AI, gameplay, sound, etc.?
I went to a lecture at GDC about optimizing for multicore clients. Let me dig up my notes.
"Games today are GPU-limited, so instead of focusing on graphical eye-candy, focus on computational eye-candy."
"Particles are a great candidate!"
"Ragdoll, debris..."
"PUT CHARACTER'S FEET ON THE GROUND"
(he was pissed because games still portray walking on stairs very oddly)
"Animate faces, animate cloth and hair"
"Dynamic tesselation of models to reduce the 'pop' effect from level-of-detail transitions"
And he gave us a link that I never checked out: http://gamasutra.com/features/20060531/gruen_01.shtml
I also went to a multi-threaded physics roundtable. The idea was that you basically solve each object in a separate thread and then bring the results back together before entering the next stage of physics processing.
Like this:
Physics
  / | \    collision detection
  \ | /
  --+--
  / | \    game logic
  \ | /
  --+--
  / | \    solver
  \ | /
  --+--
Edit: see http://en.wikipedia.org/wiki/Data_parallelism.
Then they talked about some PS3-specific stuff that I didn't understand...
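In code, each of those fork/join bars is basically this (a sketch of the general pattern, not how any particular engine does it):

```cpp
#include <algorithm>
#include <thread>
#include <vector>

struct Body { float x, y, vx, vy; };

// One fork/join stage: split the bodies across threads, run the same
// step on every slice in parallel, then join before the next stage.
template <typename Step>
void parallelStage(std::vector<Body>& bodies, Step step, int nThreads) {
    std::vector<std::thread> workers;
    const size_t chunk = (bodies.size() + nThreads - 1) / nThreads;
    for (int t = 0; t < nThreads; ++t) {
        size_t begin = t * chunk;
        size_t end = std::min(bodies.size(), begin + chunk);
        workers.emplace_back([&bodies, &step, begin, end] {
            for (size_t i = begin; i < end; ++i) step(bodies[i]);
        });
    }
    for (auto& w : workers) w.join();  // this is the "--+--" bar
}

int main() {
    std::vector<Body> bodies(10000);
    // "Collision detection" stage stand-in: integrate positions.
    parallelStage(bodies, [](Body& b) { b.x += b.vx; b.y += b.vy; }, 4);
    // ...then a "game logic" stage, then a "solver" stage, same shape.
}
```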
The really cool thing would be AI. Humans don't necessarily think every frame.
Certainly if you follow the architecture outlined in Mike McShaffry's book Game Coding Complete, this would be easy to do and work very well in parallel.
gnolam: I think that if you let each CPU manage its own piece of the map, it could work. I'm not saying it would be faster, but you could have bigger maps. Basically I mean something like what SETI@home does.
Yes, this would work well. The speed of A* etc. is a strong limiting factor on the number of units you can have in an RTS. The complexity of the general AI is also CPU-bound. This could easily be assigned to a second CPU, which only needs to update the main thread with the results of AI decisions.
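A sketch of what "assign it to a second CPU" looks like in practice: the game thread fires off a path request and keeps simulating, then picks up the result whenever it's done. findPath here is a hypothetical stand-in for a real A* implementation:

```cpp
#include <future>
#include <vector>

struct Point { int x, y; };

// Stand-in for a real A* search; assume it's the expensive part.
std::vector<Point> findPath(Point from, Point to) {
    return {from, to};  // pretend we actually searched
}

int main() {
    // Kick the search off on another core; the main thread keeps
    // running the game loop instead of stalling on A*.
    auto pending = std::async(std::launch::async, findPath,
                              Point{0, 0}, Point{9, 9});

    // ... simulate/render some frames here ...

    // The unit only changes course once the result is actually ready.
    std::vector<Point> path = pending.get();
    (void)path;
}
```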
20 1.5GHz CPUs is not that fast. Let's say you try doing something that runs entirely on SSE, like ray tracing, and assume those are P4 CPUs. This autumn you'll be able to get 3GHz Core 2 quad-cores, which have twice the SSE throughput of any other CPU. That means a single 3GHz quad-core can replace 16 of those 1.5GHz P4s (4 cores x 2x the clock x 2x the SSE throughput = 16x). Add in some performance from other core improvements and that single CPU would replace the whole cluster of 20.
Ray tracing on 8 such 3GHz Core 2 cores is rather fast: achieving >30 FPS at 1024x768 with 2x AA shouldn't be a problem for most scenes.
For other tasks that don't use SSE as much, just throw a second CPU into a two-socket server board and you have as much power as those 20 1.5GHz machines, if not more. Only FSB bandwidth could be a problem.
What can you do with it? Quite a few things, but most gamedevs will simply go for more eye candy, since eye candy is simple to scale; physics and AI are not. If I were going for the non-eye-candy stuff, the first thing would probably be more detailed physics. I'd like to have nicely deformable maps: in Crysis you can cut down trees, but I don't think you can dig yourself a cave. Next would be half-decent AI.
Of course, with that much computing power there should be enough RAM to go with it. I'm not sure how much, but certainly more than 2-4 GiB.
I just can't stop thinking: what if someone made a game for a 2GHz, 1024MB-RAM computer that used 1995-level graphics and sound but spent all the extra power on plot, AI, and so on? That would be an awesome game, if you ask me. All the realistic graphics and physics won't save you from a shitty plot (which 99% of modern games have).
I think I should have made myself clearer: I really don't care whether a super-computer or a server or whatever would do better than 20 1.5GHz CPUs.
I don't have the money to buy a super-computer and build a computer-game for it, but I do have a network of 20 computers with 1.5GHz CPUs at school, which are already there and are gonna cost me nothing to experiment with.
I'm wondering whether I could create a game considerably more advanced than what we have today and test it on those 20 computers.
My uncle has an internet cafe with good computers.
Imagine if I could pull this off! I could run the super-game on those computers and charge rich people to try it out.
Is this possible? Is it actually worth a try?
I have absolutely no experience with this stuff and I was hoping that someone could tell me whether my idea is realistic...
Well, you could run advanced agent-based AI on all of the computers but one (and the actual game on that one). It could be fun to have a STALKER/Oblivion/Gothic 3-esque game with every single NPC properly simulated 100% of the time (with needs, interacting with each other... living their virtual lives)...
Hopefully latency would not be a problem (not all that much data would have to be transmitted, and it's on a LAN).
But as for usefulness....
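To put rough numbers on "not all that much data": if each NPC decision fits in a few bytes, even thousands of NPCs are nothing on a LAN. All the sizes and rates below are my own assumptions:

```cpp
#include <cstdint>
#include <cstdio>

// A compact "what this NPC decided" record; the fields are made up.
struct NpcDecision {
    uint16_t npcId;
    uint8_t  action;    // e.g. 0=idle, 1=walk, 2=trade, 3=fight
    uint8_t  targetId;
};

int main() {
    const int npcCount = 5000;
    const int ticksPerSecond = 10;  // AI decisions, not render frames
    const size_t bytes = sizeof(NpcDecision) * npcCount * ticksPerSecond;
    std::printf("%zu bytes/s (~%.1f KB/s)\n", bytes, bytes / 1024.0);
    // ~195 KB/s: trivial for a 100 Mbit LAN, so latency, not
    // bandwidth, is the thing to worry about.
}
```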
I read half the replies before I got bored and wanted to add my own 2 cents (so this may have already been answered...).
I would say that your current machine (the one where the game is being played) would need to do basically 100% of the graphics math. The reasoning is that many newer machines can do a few gigaflops, which means the local machine can apply your player's "move left" command quickly enough that there doesn't appear to be [much] lag. You don't want to transfer that amount of data across a network; there would be too much lag.
However, the AI, or updating non-visible portions of the world, COULD be calculated elsewhere. I know that Google uses clustered machines for its searches and web crawling, and that large game servers often cluster their machines to take care of all the different players' positions and game rules.
I'm saying that having the server keep track of all the non-rendering portions of the game might be the way to go: with multiple computers crunching the numbers, the one the player is sitting at would take care of movement and graphics updates, and another machine could be loading the next "grid" of the current map and feeding that data directly to the player's machine (so no disk access needed, and all the levels could be precalculated, further reducing CPU time). There's a sketch of that prefetch idea at the end of this post.
EDIT:
Jonatan Hedborg, ugh! Imagine if The Sims ever got wind of something like that. Entire villages being created, with 100% accurate AI simulations interacting with each other...
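That "feed the next grid in the background" idea is the same async hand-off pattern as the pathfinding sketch above. A minimal version (loadChunk is a hypothetical stand-in for a disk read, a network fetch from the helper machine, or precomputation):

```cpp
#include <future>
#include <vector>

using Chunk = std::vector<unsigned char>;

// Stand-in for whatever actually produces a map grid: a disk read,
// a network fetch from the helper machine, or precomputation.
Chunk loadChunk(int gridX, int gridY) {
    return Chunk(1024u * 1024u * (gridX + gridY + 1));  // pretend this was slow
}

int main() {
    // While the player is in grid (0,0), start fetching (0,1) in the
    // background so it's already resident when they cross the seam.
    auto next = std::async(std::launch::async, loadChunk, 0, 1);

    // ... main thread keeps doing input, movement, rendering ...

    Chunk ready = next.get();  // ideally finished long before this
    (void)ready;
}
```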
Is this possible? Is it actually worth a try?
Yes to both if you have enough time and take it as a learning experience.
Though by the time you finish it, you'll have a lot more power in your PC.
Then again, you could simply expand the world and use up all available computing resources.