Isn't this the case with the traditional game loop anyway? Doesn't gameplay logic get executed at more frames per second on machines that can render the graphics at more frames per second?
"it depends" (tm)
This is holy war territory among game developers.
One school of thought (mostly PC programmers) believes your logic update should be parameterized on the time since the last update, so you can call it as fast as possible for any given machine.
- Can easily lock to any arbitrary refresh rate.
- Can efficiently drop updates if running below refresh rate.
- Makes game logic code more complicated.
- Unless you are a math genius, it makes game logic give different results depending on effectively random performance details. This may or may not be noticeable to the player, but even the smallest deviations are a pain when implementing things like replays and networking.
The other school of thought (mostly console programmers) believes you should just pick an update rate and stick to it.
- Keeps code nice and simple.
- Everything is 100% deterministic.
- If the rate you picked is different from the monitor refresh, animation won't be perfectly smooth. Of course this isn't a problem on consoles, where you know the TV runs at 60. You could always just set the PC monitor to that known rate, though.
Personally I prefer the fixed rate approach, because I'm a big fan of keeping things as simple as possible, but game devs will argue forever about this.
Even if you're going for a variable rate update, though, I still wouldn't ever run multiple updates in between render cycles. Just wait until the next refresh, then run a single update, passing it the appropriate time delta.
OK, I'm convinced now that multithreading gameplay and rendering isn't such a good idea on a single-core CPU. It has merit for two cores though. Perhaps gameplay+physics in one thread and rendering in another. When four+ core CPUs become common, then we can have gameplay + physics + rendering + one spare for the OS to play with.
This is one of the biggest issues commercial game devs are struggling with at the moment, as they figure out how to work with the 3-core Xbox 360. Splitting gameplay and rendering pretty consistently seems to give good results, although depending on the engine architecture the amount of pain involved ranges from a fair amount to an incredible amount :-) Splitting gameplay and physics generally seems to be less successful: a handful of people are managing to get OK results there, but whether it is feasible really depends on the game. Pathfinding is an obvious candidate for running in the background, perhaps even spread over multiple frames, and people are also getting good results moving CPU graphics effect computations (fluid simulation, cloth, hair, particles) onto the third core.
But isn't it also true that just because you have a dual-core system doesn't mean that your two threads will run on those two cores? E.g. the OS could decide to put both your threads on the same core.
That's where it gets fun. You could just trust the OS, and you might be lucky, but who knows. Setting an explicit thread affinity makes things a lot more robust, but then you have to know what kind of processor you are running on, and that's currently quite a pain (it's not easy to tell the difference between dual-core and some hyperthreading configurations, for instance). I have a feeling Vista is adding some new APIs to make this easier, though.
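For what it's worth, on Linux pinning a thread to a core looks something like this, using the GNU pthread_setaffinity_np extension (Windows has SetThreadAffinityMask for the same job). A sketch only, with error handling omitted; and note that which core numbers map to physical cores versus hyperthreads is exactly the detection problem mentioned above:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <pthread.h>

/* Restrict a thread to a single core so the OS can't bounce it
   around. Returns 0 on success, an errno value on failure. */
int pin_to_core(pthread_t thread, int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(thread, sizeof(cpu_set_t), &set);
}
```

Typical use would be something like `pin_to_core(pthread_self(), 0);` for the gameplay thread and core 1 for the renderer - assuming you've already worked out which cores are real.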
This thread somehow confuses me... so one should run the logic at the refresh rate of the monitor to ensure smooth graphics? I usually lock my logic to 50 Hz, measure the time needed for drawing, and wait out the rest of the frame.
That's what I tend to do too, although I usually go for 60. Not perfect if the monitor happens to be set to something other than your chosen rate, but hey. Opinions differ as to whether the extra smoothness of adapting to different refresh rates is worth the extra complexity and unpredictability it introduces into your code.
There is one more interesting thing about Q3A.
It has SMP support under Linux, and probably under Windows too. When I run it without SMP support I get ~350 FPS at max quality. When I run the SMP version it drops down to "only" ~300 FPS. I wonder why that is. Might it be that with slower SMP CPUs it makes sense to multithread it, and my super-fast dual-core P4 just spends most of its time switching between threads?
Interesting fact, but hard to say why without a detailed investigation.
It's very possible that the threading overhead is just bigger than the gains they are getting from the parallelism, but there are other possible explanations. For instance, maybe on your machine the CPU is so fast that the game ends up totally bottlenecked by GPU performance, and the use of multithreading might be forcing the driver to disable some potentially unsafe GPU-level optimisations?
No idea really, but it does confirm that even on a dual core machine, it can be surprisingly hard to actually gain that much of a benefit from threading. It's certainly nowhere near the 2x speed that you might naively expect.
IIRC in some version of Q3A you could run faster and jump higher at some specific FPS. I think it was around 80-something.
That sucks. But it is hard to avoid such quirks in variable time update games.