Allegro.cc - Online Community


This thread is locked; no one can reply to it.
A5 - Proper Allegro event structure
Chris Katko
Member #1,881
January 2002

What's the proper structure so that rendering occurs as often as possible?

I've found that using a timer to fire off update_graphics() at GAME_RATE_HZ is terrible because if the computer CANNOT keep up with that rate, the event queue becomes backlogged with events. The queue is filling up faster than the computer can deal with them. So the game becomes more and more "lagged." 3 seconds, 5 seconds, 15 seconds, and continuing.

Instead of using al_wait_for_event(), is there an alternate function to run the loop:

while (true)
{
    if (event_is_queued()) { switch (events) { /* ... */ } }

    // otherwise

    draw();
}

I mean, I guess it makes sense to only update the game when data actually changes. (Is there some sort of Allegro 5 custom event allowed?) But like I said above, if the drawing takes too long and another event is required, the queue backlog ends up increasing toward infinity.

Logic updates should ALWAYS be dealt with and drawing should use up whatever slack is left.

Also, what happens if logic updates are too big to even occur on time? A game can't explode just because it lags below LOGIC_RATE_HZ for 50ms.

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs

Polybios
Member #12,293
October 2010

The usual (basic) way is to only draw if all events have been processed.

Elias
Member #358
May 2000

The official loop I've seen is usually like this:

while (true) {
    al_wait_for_event(queue, &event);
    if (event.type == ALLEGRO_EVENT_TIMER) {
        logic_tick();
        redraw = true;
    }
    if (al_is_event_queue_empty(queue) && redraw) {
        draw();
        redraw = false;
    }
}

This is what I use:

redraw = true;
while (true) {
    if (redraw) {
        draw();
        redraw = false;
    }
    while (true) {
        al_wait_for_event(queue, &event);
        if (event.type == ALLEGRO_EVENT_TIMER) {
            logic_tick();
            redraw = true;
        }
        if (al_is_event_queue_empty(queue))
            break;
    }
}

The only difference is that mine does one initial redraw in case the first logic_tick takes a long time (e.g. loading pictures for a minute).

Which is your other point: yes, logic ticks can take a long time, and this is not handled in a good way by the above. Assume the first logic_tick takes one second and your timer is set to 60 Hz: the event queue will store everything from that first second, including 60 timer events, then handle all of it at once before redrawing again. If the first tick takes a minute, the game will still work and all - but you actually miss the entire first minute of gameplay.

In my case where the long tick was caused by loading data I fixed it by timing loading - after 1/60th of a second I end the tick and continue loading in the next tick. And I render a "Loading - Please wait" text in the draw() function and no game logic is taking place yet in those ticks.
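A minimal sketch of that time-budgeted loading tick, in plain C. The names here (fake_clock, load_one_asset, total_assets) are stand-ins I made up for illustration; a real Allegro game would call al_get_time() and its own asset loader instead.

```c
#include <assert.h>
#include <stdbool.h>

/* Budget per tick: 1/60 of a second, as described in the post. */
#define TICK_BUDGET_SECONDS (1.0 / 60.0)

static int assets_loaded = 0;
static const int total_assets = 100;

/* Fake clock: pretend each asset takes 5 ms to load. A real game
   would call al_get_time() and its actual loader here. */
static double fake_clock = 0.0;
static double now(void) { return fake_clock; }
static void load_one_asset(void) { fake_clock += 0.005; assets_loaded++; }

/* Do as much loading as fits in the budget, then yield so the event
   loop can run a draw() that shows "Loading - Please wait".
   Returns true while there is still loading left to do. */
bool loading_tick(void)
{
    double start = now();
    while (assets_loaded < total_assets &&
           now() - start < TICK_BUDGET_SECONDS)
        load_one_asset();
    return assets_loaded < total_assets;
}
```

Each call does a bounded slice of work, so timer events never pile up behind a single monster tick.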

--
"Either help out or stop whining" - Evert

Mark Oates
Member #1,146
March 2001

Chris Katko said:

I've found that using a timer to fire off update_graphics() at GAME_RATE_HZ is terrible because if the computer CANNOT keep up with that rate, the event queue becomes backlogged with events. The queue is filling up faster than the computer can deal with them. So the game becomes more and more "lagged." 3 seconds, 5 seconds, 15 seconds, and continuing.

Ok, yes, this is a known phenomenon. The way I've read to fix this is: after an ALLEGRO_EVENT_TIMER, do an al_peek_next_event(). If that next event is also a timer event, then al_drop_next_event(). That will prevent your queue from backing up and attempting to draw over and over.

This is what it looks like in AllegroFlare: https://github.com/MarkOates/allegro_flare/blob/master/src/framework.cpp#L184-L193
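The idea can be sketched without Allegro by using a toy array queue in place of ALLEGRO_EVENT_QUEUE; peek()/drop() below stand in for al_peek_next_event()/al_drop_next_event(). Note this sketch drops every immediately-following timer event, not just one as Mark describes, to show the backlog collapsing.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy event types standing in for ALLEGRO_EVENT_TIMER etc. */
enum { EV_TIMER = 1, EV_KEY = 2 };

typedef struct { int events[64]; int head, tail; } Queue;

/* Stand-in for al_peek_next_event(): look without removing. */
static bool peek(const Queue *q, int *type) {
    if (q->head == q->tail) return false;
    *type = q->events[q->head];
    return true;
}
/* Stand-in for al_drop_next_event(). */
static void drop(Queue *q) { q->head++; }

/* After handling one timer event, drop any timer events that piled up
   right behind it, so a backlog collapses into one logic+draw pass.
   Returns how many backlogged timer events were discarded. */
int coalesce_timer_backlog(Queue *q)
{
    int dropped = 0, type;
    while (peek(q, &type) && type == EV_TIMER) {
        drop(q);
        dropped++;
    }
    return dropped;
}
```

Whether dropping ticks like this is acceptable is exactly what the rest of the thread argues about.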

Ariesnl
Member #2,902
November 2002

You could also calculate a lag time between frames and move things accordingly. This gives smoother movement. You'll still need timed triggers for certain things that are more quantized, though.
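A sketch of that idea, assuming dt is the measured time between frames (in a real Allegro game you would difference two al_get_time() readings):

```c
#include <assert.h>

/* Move at a fixed speed regardless of how long the last frame took:
   position advances by speed * dt, so uneven frame times still add
   up to the same distance over the same wall-clock time. */
typedef struct { double x; double speed; } Mover;

void update_mover(Mover *m, double dt_seconds)
{
    m->x += m->speed * dt_seconds;
}
```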

- Wisdom is the art of using knowledge
- String theory: There's music in everything

Elias
Member #358
May 2000

Uh, why would you ever drop timer events? Won't this completely break just about any game?

--
"Either help out or stop whining" - Evert

Mark Oates
Member #1,146
March 2001

I drop just the primary timer event.

That's peculiar, I can't seem to find any of the discussions where people recommended this approach. :-/ I distinctly remember a thread with X-G in it.

Elias
Member #358
May 2000

I mean, if in my old-style platformer I move 60 pixels / second, and you end up dropping a timer event once a second - you'll have slowed him down to 59 pixels / second. Ok, I guess that's not really breaking it unless it was multiplayer - but I also don't see any gain. And having a second timer event get queued up during vsync sounds like something that's likely to happen to me.

--
"Either help out or stop whining" - Evert

RPG Hacker
Member #12,492
January 2011

Elias said:

I mean, if in my old-style platformer I move 60 pixels / second, and you end up dropping a timer event once a second - you'll have slowed him down to 59 pixels / second.

That's why you normally use delta time for any movement in your games. When using delta time, a single missed frame won't affect how your game runs too much. Basically, in your example, the sprite would move 1 pixel for 58 frames and 2 pixels for 1 frame, still totaling the same 60 pixels per second. Hardcoding things in your game to the frame rate, especially sprite movement, is really a bad idea for 90% of games. The only situation where you should ever do that is if you wanted to make absolutely sure the game runs the same on every single platform and therefore had to avoid any rounding errors. However, with PC systems having very different hardware and specs, getting a game to run the same on every system is pretty much impossible, anyways. Only in games that are built around time rankings, and that therefore absolutely need frame-perfect timing for fairness reasons, would I ever consider hard-locking my game speed to the frame rate.
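The "1 pixel most frames, 2 pixels on a doubled frame" distribution falls out naturally if you keep a fractional position accumulator. A sketch of that, using 64 px/s and power-of-two frame times instead of the post's 60 px/s so the floating-point arithmetic is exact:

```c
#include <assert.h>

/* Integer pixel movement driven by delta time: accumulate the exact
   fractional distance, emit whole pixels, and carry the remainder. */
typedef struct { double frac; int x; } PixelMover;

/* Returns the number of whole pixels moved this frame. */
int step_pixels(PixelMover *m, double speed, double dt)
{
    m->frac += speed * dt;        /* exact distance owed */
    int whole = (int)m->frac;     /* pixels to emit this frame */
    m->frac -= whole;             /* keep the sub-pixel remainder */
    m->x += whole;
    return whole;
}
```

A normal frame (dt = 1/64 s) yields 1 px; a doubled frame (dt = 2/64 s) yields 2 px, and the total over the second is unchanged.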

Also, the reason you should drop certain events, as described by Mark, from my experience, is this: sometimes the game could get a frame or two behind because the system could currently be busy or the game could currently be rendering a complex scene. However, there is a high chance that the system would eventually stop being busy or that the game would get to a less complex scene. When that happened, it would have already accumulated a good number of frames. Since it didn't have to wait for any events to occur, it would just start processing each event right away. So instead of, for example, waiting your typical 16.67 ms to process the next frame, it would process each pending frame as soon as possible. This would completely mess up the game's timing and would make it get blazingly fast for a few frames. For example: if your game normally ran at 60 FPS, but dropped down to 10 FPS momentarily, it would try to process the missing 50 frames as fast as possible afterwards, causing your game to run sped-up for about a second or so.

And yeah, I have already experienced this in games and it's not only noticeable, but a real game breaker. This should be avoided at any cost. Not using delta time would just make the problem even worse, since, for example, you would get 50 frames' worth of game updates in a much shorter time span than usual. With delta time, when done right, or when V-Sync is enabled, you'd probably not notice anything. However, your game would become quite unpredictable and this is always bad style, whatever way you look at it. So yeah, clearing your queue regularly is definitely recommended.

Mark Oates
Member #1,146
March 2001

My experience with trying deltatime hasn't been so good. :-/

I implemented it back in the days of OpenLayer and found that when frames became unpredictable, and trying to offset all the velocities with deltatime, it would cause erratic behavior in the sprites.

For example, in our 2D platform game let's say you apply gravity every frame, which should result in a curved trajectory when the player is moving horizontally after falling off a cliff. With deltatime, at a framerate of 10fps, you could end up having your sprite fly off horizontally when multiplying the initial velocity by a multiplier of 6. Something that further amplified that problem was also doing collision detection with the time-step method, which divided things into less predictable, finer-grained slices of time.
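The step-size dependence behind this is easy to demonstrate: naive per-frame integration of gravity (v += g*dt; y += v*dt) covers a different distance over the same simulated second depending on how big dt is. The numbers below use g = 10 and power-of-two steps purely so the arithmetic is exact; the exact continuous answer would be g*t^2/2 = 5.0.

```c
#include <assert.h>

/* Integrate free fall from rest for (steps * dt) seconds using the
   naive per-frame scheme. The result depends on the step size: the
   coarser the step, the more the fall overshoots the true g*t^2/2. */
double fall_distance(double g, double dt, int steps)
{
    double v = 0.0, y = 0.0;
    for (int i = 0; i < steps; i++) {
        v += g * dt;   /* gravity applied once per frame */
        y += v * dt;   /* position moved by the new velocity */
    }
    return y;
}
```

One second of falling gives 5.625 with 8 big steps but 5.078125 with 64 small ones, so a player's jump arc literally changes shape with the framerate.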

I guess one upside of not using deltatime (if you could call it an upside) is that velocity curves will remain predictable even though you get slowdown.

I really like using deltatime globally because it's great for doing cool effects like slow motion and stuff like that.

Elias
Member #358
May 2000

That's why you normally use delta time for any movement in your games.

No, delta time is always bad - since most physics algorithms simply don't work with it. And in the example of missing a lot of frames, delta time just makes things much, much worse - you would get something like a one-second or one-minute delta, but no input for that time. And so there would be no good way to deal with it, except maybe just ignoring the entire delta.

Quote:

If your game normally ran at 60 FPS, but dropped down to 10 FPS momentarily, it would try to process the missing 50 frames

That's a bit confusing, because "dropped down to 10 FPS" is normally understood as the rendering dropping down to 10 FPS and not the game events. It's what actually happens in a lot of games and causes no issues at all with the standard timer event way (except it won't be as smooth as 60 FPS of course, but no way around that). Even if you have wildly varying frame rates your game ticks still run at a (somewhat unsteady) 60 ticks a second, no matter what.

The issue I was more thinking of is if you can't keep up with those 60 ticks a second (like when loading a new level, or the app having been paused externally) - and then I feel simply pausing the game and displaying a message to the user is the best idea.

--
"Either help out or stop whining" - Evert

Chris Katko
Member #1,881
January 2002

Elias said:

No, delta time is always bad - since most physics algorithms simply don't work with it.

I was actually going to mention that!

I don't know if delta-time works pragmatically. But physics-wise? You're linearly approximating a derivative/integral (I forget which--it's late), except you're allowing the linear "steps" to be variable.

I had that outright EXPLODE in my program when I was building a 2-D Kerbal Space Program with realistic gravity (as opposed to KSP, which hardcodes orbits and doesn't support Lagrange points). The faster I ran the program, the harder the gravity was "felt" and planets would collide into the sun! I kept having to wildly change the "constants" of mass and initial acceleration to keep everything orbiting correctly.

[edit]

This graph might help (still not perfect):

[image: derivative1.png]

The longer the time delta, the longer the bars.

Moreover, when you have a system that depends on other variables (ESPECIALLY when dividing, and dividing by a difference!), error propagates very fast. The error could oscillate back and forth between, say, two planets affecting each other. Or the error can grow unbounded toward infinity.

It MAY be the best thing for a video game on a practical level--I'm not sure. But it's definitely got a flaw!

One last example: If delta_time is too long, and you're not careful to check, an object may SKIP PAST the wall it was supposed to collide with.

[image: CollisionDetection SkippingFixed.gif]

[edit 2]

This article looks very promising!

http://gafferongames.com/game-physics/fix-your-timestep/

Quote:

Now lets take it one step further. What if you want exact reproducibility from one run to the next given the same inputs? This comes in handy when trying to network your physics simulation using deterministic lockstep, but it’s also generally a nice thing to know that your simulation behaves exactly the same from one run to the next without any potential for different behavior depending on the render framerate.

But you ask why is it necessary to have fully fixed delta time to do this? Surely the semi-fixed delta time with the small remainder step is "good enough"? And yes, you are right. It is good enough in most cases but it is not exactly the same. It takes only a basic understanding of floating point numbers to realize that (v * dt) + (v * dt) is not necessarily equal to v * 2 * dt due to the limited precision of floating point arithmetic, so it follows that in order to get exactly the same result (and I mean exact down to the floating point bits) it is necessary to use a fixed delta time value.

So what we want is the best of both worlds: a fixed delta time value for the simulation plus the ability to render at different framerates. These two things seem completely at odds, and they are – unless we can find a way to decouple the simulation and rendering framerates.
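The decoupling the article describes is the accumulator pattern: rendering consumes however much real time it takes, and the simulation drains that time in fixed-size slices. A minimal sketch (logic_tick() is left as a comment; FIXED_DT is a power of two here only so the test arithmetic is exact):

```c
#include <assert.h>

#define FIXED_DT (1.0 / 64.0)  /* fixed simulation step */

typedef struct {
    double accumulator;  /* unconsumed real time */
    int ticks;           /* fixed-size logic steps performed */
} Sim;

/* Feed in however much real time the last rendered frame took; run
   as many whole fixed steps as fit and carry the remainder forward,
   so simulation results never depend on the render framerate. */
void advance(Sim *s, double frame_time)
{
    s->accumulator += frame_time;
    while (s->accumulator >= FIXED_DT) {
        s->accumulator -= FIXED_DT;
        s->ticks++;  /* logic_tick() would go here */
    }
}
```

The leftover fraction in the accumulator is what the article later uses to interpolate the rendered state between the last two simulation steps.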

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs

RPG Hacker
Member #12,492
January 2011

Sure, I'm not saying that delta time doesn't have its caveats and that you don't have to be careful, but basically there are solutions or workarounds for all of those problems, and delta time is the only method I know to ensure that a game still runs at the same speed and still appears smooth even when the FPS drops a little (without delta time, you'd immediately notice the stuttering). With a fixed time step, you HAVE to play your game at maximum FPS just so you can play it at all. Just having the performance drop by a few FPS could already make your game unplayable, or at least very, very slow. And as someone who grew up in a PAL region, I can say that, yes, this does matter. I always hated playing old NES games, that were meant for 60 FPS, on a PAL system in only 50 FPS. This pretty much killed every game for me right away that didn't get speed fixes for the PAL version. Kid Icarus, for example. That game was so slow and awful in Europe. With delta time, however, you CAN play your game at 60 FPS, but you can just as well play your game at 30 FPS without noticing a major difference. Your game would have to perform way worse than 30 FPS before it ever broke the physics completely, but at that point, it would probably be running at such a low FPS that it'd be unplayable, anyways (like, what is the point of playing a game at 5 or 10 FPS?).

Elias said:

No, delta time is always bad

Well, that statement is outright wrong. I do admit that delta time can be bad, especially when not used correctly, but stating that it's ALWAYS bad is definitely wrong. I've worked on three different triple-A titles from different developers so far and all of them used delta time, for good reason. Granted, physics can become a problem when not handled correctly, but there are easy ways to work around that, and for none of these games was it a problem (well, two of these games just used Havok, anyways, but the third one used a custom physics implementation and had no problems with the delta time whatsoever). Delta time isn't even that hard to get right, honestly.

Quote:

And in the example of missing a lot of frames, delta time just makes things much, much worse - you would get something like a one-second or one-minute delta, but no input for that time. And so there would be no good way to deal with it, except maybe just ignoring the entire delta.

No, actually there is a quite easy solution to this and it's also one of the workarounds I just mentioned. Just don't let your delta time get to a second or a minute. There is nothing to be gained from that, anyways. Just clamp your delta time to three frames maximum or something like that. For example: if you're aiming for a 60 FPS game, that would be 16.67 ms a frame, so clamp your delta time to a maximum of 16.67 ms * 3 ≈ 50 ms. Sure, when doing that, the game isn't purely using delta time anymore, but honestly, if your game ever gets to a point where it constantly hits that 50 ms mark, it is already running at < 20 FPS, so it's probably unplayable on that particular system, anyways. Of course the problem stated by Chris can be fixed just as easily by locking the refresh rate of your game to 60 FPS (there isn't really that much to be gained from way higher framerates, anyways).
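The clamp itself is a one-liner; a sketch with the 50 ms cap from the post:

```c
#include <assert.h>

/* Cap the measured frame time so a long stall (level load, app paused,
   debugger breakpoint) never produces one huge simulation step.
   0.050 s = three 60 Hz frames, as suggested above. */
#define MAX_DT 0.050

double clamped_dt(double raw_dt)
{
    return raw_dt > MAX_DT ? MAX_DT : raw_dt;
}
```

After a one-minute stall the game simply loses that minute instead of simulating it, which is usually exactly what you want for a single-player game.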

Quote:

That's a bit confusing, because "dropped down to 10 FPS" is normally understood as the rendering dropping down to 10 FPS and not the game events. It's what actually happens in a lot of games and causes no issues at all with the standard timer event way (except it won't be as smooth as 60 FPS of course, but no way around that). Even if you have wildly varying frame rates your game ticks still run at a (somewhat unsteady) 60 ticks a second, no matter what.

First of all, this only causes no issues WHEN you're using delta time. Either that, or when you're completely separating update logic and rendering logic by running them on separate threads. Yes, IF you were doing that, then, for example, your update thread could (hopefully) easily run at 60 FPS all the time and only your rendering thread would drop down to 10 FPS, but not affect your game speed at all. However, this method would introduce a whole new layer of complexity and no game I know of does this. I once tried implementing this myself, after seeing it recommended on here, but I soon came to the conclusion that it's way too complex and too impractical to use and that the benefits don't outweigh the problems. This method seems easy enough to implement at first, but sooner or later you'll come across some problems that are harder to solve. While I do use multi-threaded rendering in my current game engine (using Direct3D 12), I don't ever decouple my rendering from my updates. Instead, I just make the update thread process the next frame while the rendering thread is currently rendering the previous frame. Both threads are essentially still working at the same intervals, though. This is way easier to handle and still gives you a major speed boost.

Secondly, what you're describing in your example is how it SHOULD work in a game. And when using just timers and events, yes, that's probably also how it DOES work. However, you're overlooking a simple fact. We're talking about Allegro here, and Allegro doesn't use single events for timers, it uses event queues (which is, after all, what this whole thread is about). Basically, in Allegro, your timer keeps filling up your event queue at the same pace, no matter how fast your game is running. So if your game, for example, hits a bottleneck and takes 166.7 ms to render a single frame, there will already be 10 more pending timer events in the queue. Now if you don't clear the queue, what happens is that the game keeps processing frames as long as there are still pending events in the queue. It is supposed to wait around 16.67 ms before processing new frames, but since there are already 10 pending events in the event queue, it won't wait those 16.67 ms, but instead will process each frame as fast as possible for at least the next 10 frames (if no more events are added to the queue in the meantime). This will make your game speed up for a moment, until the queue is finally cleared and the game can perform at normal speed again.

One exception to this rule, as mentioned earlier, is using V-Sync. V-Sync locks your rendering to the refresh rate of your monitor, so it FORCES the game to wait those 16.67 ms every time. In that case, even pending events in the event queue won't matter, because V-Sync itself makes the game wait before processing each new event from the queue. Since, from what I know, V-Sync is enabled by default in Allegro and since some graphics cards only support V-Sync to begin with, I can imagine that you won't notice any problem when playing your game and using this method. However, play the game on a system with V-Sync disabled, wait for a bottleneck and you should certainly experience the problem I'm talking about.

Quote:

The issue I was more thinking of is if you can't keep up with those 60 ticks a second (like when loading a new level, or the app having been paused externally) - and then I feel simply pausing the game and displaying a message to the user is the best idea.

Sure, auto-pausing the game when your app becomes inactive is a good idea, anyways. However, as just mentioned, clamping your time step to a certain value will also completely prevent this problem from happening.

Elias
Member #358
May 2000

So if your game, for example, hits a bottleneck and takes 166.7 ms to render a single frame, there will already be 10 more pending timer events in the queue. Now if you don't clear the queue, what happens is that the game keeps processing frames as long as there are still pending events in the queue. It is supposed to wait around 16.67 ms before processing new frames, but since there are already 10 pending events in the event queue, it won't wait those 16.67 ms, but instead will process each frame as fast as possible for at least the next 10 frames (if no more events are added to the queue in the meantime). This will make your game speed up for a moment, until the queue is finally cleared and the game can perform at normal speed again.

Which is exactly what you want. The user input is coming at the same, unmodified speed, no matter what. So the game always has to run at the same speed as well. Otherwise you have to actually pause it - not randomly just drop some time. It does not really speed up for a moment either - it's making up for time lost to make sure the overall game is running at the same speed at all times. Anything else will lead to more or less noticeable problems.

With multiplayer it actually will be quite hard to use variable frame length and/or dropped time without causing desynchronization, so I just don't see the point.

--
"Either help out or stop whining" - Evert

-koro-
Member #16,207
February 2016

I never understood the "delta time is always bad" viewpoint.

Elias said:

No, delta time is always bad - since most physics algorithms simply don't work with it.

I'm not sure what you mean by that last part. Linearizing at a variable step is a "physics algorithm". If by "won't work" you mean the results are not going to be realistic, well this is true of most physics algorithms. If you're doing it numerically, it's not going to be realistic in the long term unless the system you're simulating is extremely simple. But in the short term the result should be reasonable, so it looks and feels right. And moreover a variable delta increases the precision, so the results should be more accurate using variable delta than using a fixed one.

Chris Katko said:

I don't know if delta-time works pragmatically. But physics-wise? You're linearly approximating a derivative/integral (I forget which--it's late), except you're allowing the linear "steps" to be variable.

This is true, you're linearizing. And yes, the steps are variable. How is this worse than a non-variable step? If you get a big delay from some part of the code that forces you to skip many steps, you have to deal with this in both cases. The advantage of variable steps is that you can improve the quality of the linearization on the fly.

Some will say that linearizing is bad, you should integrate properly. That's unfeasible for even the simplest mechanics. Unless you're doing very very simple things (like simulating gravity only), you will not be able to integrate anything explicitly. Sure you can use better numeric integration methods, but all of these methods depend on having sufficiently small steps, just like the linearization. The issues that appear with this method will appear with any such method.

Of course there are clear downsides of variable delta, and I think the single biggest one is unpredictability. If you have a fixed delta (and assuming you don't skip any step) the mechanics are perfectly predictable. If you have variable delta, you run on two different computers (or even run twice on the same computer) and the result could be very different. This sucks if you are doing something multiplayer (but not that much; you can have a client predict movement using variable step and periodically make corrections using the data from the host. If you do this with a reasonable frequency it shouldn't be noticeable, and I think this is what most FPSes do; I could be wrong).
Anyway, if predictability is not so important, variable delta is a decent option IMO.

Chris Katko said:

One last example: If delta_time is too long, and you're not careful to check, an object may SKIP PAST the wall it was supposed to collide with.

As other people mentioned, you should cap the delta_time so it doesn't get too big. The idea is being more precise than with a fixed step, not less, so for instance instead of using a step of 1.0/60 sec you use a variable step of at most 1.0/60 sec. And the wall skipping thing could happen even for a fixed step. But one advantage of linearizing things between steps is that you can integrate exactly the linear part, which could be useful to find "exact" collisions (you know you're moving on a line, you have a wall, just find explicitly the intersection time of that linear movement with the wall). Of course you have to deal with bounding boxes and stuff like that but it is still doable.
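The "integrate the linear part exactly" idea can be sketched for the simplest case, a vertical wall: within one step the motion is a line, so the hit time solves x0 + vx * t = wall_x, and tunneling is caught no matter how large dt is.

```c
#include <assert.h>

/* Exact time at which a linearly moving point reaches the vertical
   wall at x = wall_x during a step of length dt.
   Returns t in [0, dt] on a hit, or -1.0 if the wall is not reached
   this step - even a huge dt cannot "skip past" the wall. */
double wall_hit_time(double x0, double vx, double wall_x, double dt)
{
    if (vx == 0.0) return -1.0;          /* not moving toward anything */
    double t = (wall_x - x0) / vx;       /* solve x0 + vx*t = wall_x */
    return (t >= 0.0 && t <= dt) ? t : -1.0;
}
```

A full swept test would do this per axis against the wall's bounding box, but the principle is the same: solve for the crossing time instead of sampling positions.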

RPG Hacker
Member #12,492
January 2011

Elias said:

Which is exactly what you want. The user input is coming at the same, unmodified speed, no matter what.

This isn't true, either. Depending on how you implement your input polling, either you will have input from the time when the timer events were actually written to the queue, or you will have input from the time when you're processing the events in the queue. In the first case, what will happen is that the user input will be processed way later than when it was actually received. This will only lead to confusion for the player, since certain input events could be processed many frames after they were received by the game. This would just cause a noticeable delay, which never feels good and should be avoided. As a player, I want my input to be handled, at latest, one frame after giving it, not after that.

The latter case actually makes more sense, since input events would be processed, at latest, one frame after receiving them. However, I still don't see how this would benefit from a fixed time step at all. Input doesn't matter to a player if he can't see the results of that input. So if, for example, you were to process ten frames in the time of one regular frame and only present the final image to the player, it would be the exact same result as if you had just used delta time and processed the last input in the next frame, the only difference being that the delta time method actually processes a single frame only and therefore saves a lot of CPU time. However, from my experience, this isn't even usually done, which brings me to your next point.

Quote:

It does not really speed up for a moment either - it's making up for time lost to make sure the overall game is running at the same speed at all times.

This is only true when you drop rendering of any frames while your queue isn't empty, which, judging by your code posted above, you're actually doing. However, as stated above, I've seen enough examples of games where people actually render all delayed frames, and THIS is what causes visible speed-up of the game for a few seconds (sorry for the confusion, I guess I should have made clearer that I wasn't referring specifically to your code all this time). Aside from that, as I just mentioned: when using fixed time steps, you're really processing ten frames in the time you'd usually dedicate to just a single frame. When using delta time, however, you're always just processing a single frame. I don't think I have to point out that the latter method requires way less performance.

Let's look at weak systems for a moment. Not mediocre systems, not good systems, but weak systems. On strong systems, yes, it's very likely that the system will be able to catch up with the delayed frames and get in sync again. But what about weaker systems? It's unlikely that they will ever be able to catch up to all of the delayed frames; the queue will keep filling and filling and the game will run visibly slowed down because it isn't able to clear the queue (assuming you'd actually render each frame instead of dropping). In fact, with your code, the game would appear to be completely frozen, since your "is queue empty" condition would NEVER be true. So on weak systems, your game would definitely be unplayable, or at best, run slowed down when not dropping any frames. And what happens when we use delta time? Since with delta time, the game only has to process as many frames as it can, it COULD actually still run somewhat decently even on weaker systems. Sure, with reduced framerate (20 FPS or less, if you're unlucky), but the game wouldn't freeze or slow down. At that framerate, every decent game should still be able to not break your physics. Below that, I'd consider a game unplayable, anyways, and wouldn't bother about breaking physics.

So really, the only benefit there is to a fixed time step is that it's the most accurate. This comes with many, many caveats, though. On fast systems, you won't gain any benefit over delta time aside from accuracy. At best, both implementations will behave the same. At worst, fixed time steps will actually introduce short, uncontrollable speed-ups into your game (only when not dropping any render frames). On weak systems, delta time is superior to fixed time steps in every situation. With fixed time steps, your game will either freeze on weak systems (when dropping render frames) or slow down immensely (when not dropping render frames). With delta time, games could actually be playable on weak systems, just with reduced framerate.

And really, for most games, especially single-player games, the gain in accuracy from fixed time steps is negligible. The only environments in which I would ever consider using fixed time steps for single-player games are consoles, since they're way more predictable and less prone to "fluctuations". Here, a game could actually benefit from fixed time steps; for PC games, that's rarely the case, though.

Quote:

With multiplayer it actually will be quite hard to use variable frame length and/or dropped time without causing desynchronization, so I just don't see the point.

I disagree again. First of all, I never had this problem when working on network games (I once did some competitive racing battle game using network multiplayer). Secondly, relying on the network actually staying in sync for a network multiplayer implementation is bad practice to begin with. If your network got busy for just a few seconds, your multiplayer would get out-of-sync and become unplayable. I had this problem so, so many times when doing network multiplayer sessions in ZSNES. Basically, our games would just randomly get out-of-sync and we would each see something different happening on our screens. Whenever that happened, the game would not get back in sync again. Yeah, it's never a good idea to rely on your network to stay in sync. Especially once we're talking about internet multiplayer. When you play with a person on the other end of the globe, it's pretty much guaranteed that you'll get a ping of MORE than 16.67 ms, so playing a 60 FPS game in sync over the internet is pretty much impossible right away. This is why no professional game ever does that; none of them rely on the stability and synchrony of your internet connection for their multiplayer to work. Instead, they use certain tricks to make it appear as though they were running synchronously, even though they're really not.

Let's take Call of Duty, for example. If the games were trying to run in sync, it'd be virtually impossible to even shoot anyone. For example: on your side, you'd visibly shoot your opponent yet he would still survive, because on his side, he's already somewhere completely different. What the games do instead is use predictions. Basically, they try to predict where a player currently is or where a player is heading. Sure, this sometimes leads to deaths that feel unfair, because a player dies even though, on his side of the game, the shot clearly missed (had this happen so, so many times in Splatoon). On the other hand, whenever you visibly shoot your opponent, you can be sure that you actually hit him, even when, on his side, he is already somewhere completely different. In cases like these, you simply have to sacrifice some fairness in order to make your game playable at all. And this is just one example. Running multiplayer games in sync via network rarely works and isn't even a good idea to begin with. It just means that your game will run really, really unstable and will be disturbed by even the smallest peak in network activity.

Elias
Member #358
May 2000

-koro- said:

And moreover a variable delta increases the precision, so the results should be more accurate using variable delta than using a fixed one.

Well, read the article Chris posted, for example. "Accurate" can mean different things, but for a game in my experience I'm interested in always having the same accuracy. I don't want the person with a fast CPU to have a different game than the person with the slow CPU. I want them to be exactly the same - except on the faster CPU things should look better, because I can render each timestep (or at least interpolate to each) - however if the same input was provided to each of my first 600 logic game ticks, the game will be at exactly the same state after 10 seconds. I don't want one player to have raced twice as far just because they had a faster (or slower) CPU.

So really, the only benefit there is to a fixed time step is that it's the most accurate.
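
The determinism Elias describes can be sketched in a few lines of C: advance a toy state with a fixed dt and the same input sequence, and two runs end up bit-identical. The names (State, logic_tick, run_game) are illustrative, not Allegro API:

```c
#include <string.h>

/* Toy sketch of fixed-step determinism: same inputs plus the same
   fixed dt produce an identical state, run after run. */
typedef struct { double x, vx; } State;

/* One logic tick at a fixed dt, e.g. 1.0 / 60.0. */
static void logic_tick(State *s, double input_accel, double dt)
{
    s->vx += input_accel * dt;
    s->x  += s->vx * dt;
}

/* Run n ticks from a zeroed state with a canned input sequence. */
static State run_game(int n, double dt)
{
    State s = { 0.0, 0.0 };
    for (int i = 0; i < n; i++)
        logic_tick(&s, (i % 2) ? 1.0 : -0.5, dt);
    return s;
}
```

After 600 ticks (10 seconds at 60 Hz) both runs are in exactly the same state, which is what makes replays and lockstep-style multiplayer feasible with a fixed step.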

I worked with discrete physics simulation at my last job, so I may be biased about how important exact accuracy (or, I should say, robustness) is for a smallish game - but for a big and complex system it is paramount.

Quote:

With delta time, games could actually be playable on weak systems, just with reduced framerate.

Except, the bottleneck for games usually is render framerate, and not logic framerate.

--
"Either help out or stop whining" - Evert

Mark Oates
Member #1,146
March 2001
avatar

Somebody should make a table showing the advantages and disadvantages of each technique.

RPG Hacker
Member #12,492
January 2011
avatar

Elias said:

Except, the bottleneck for games usually is render framerate, and not logic framerate.

We're specifically talking about weak systems here, though. On weak systems, both can easily become bottlenecks. What you said only applies to decent systems, where, yes, the rendering is usually the bottleneck. It also highly depends on the type of game.

Mark Oates said:

Somebody should make a table showing the advantages and disadvantages of each technique.

Summing up everything in this thread, it's something like this (feel free to add any points I forgot):

Fixed time step

Advantages

  • Always accurate.

  • Always predictable, when each frame gets the same input.

  • Slightly easier to program.

Disadvantages

  • Whenever the game logic falls behind, the game has to process more and more delayed ticks in the same time span, essentially wasting a lot of CPU resources on frames that a variable time step could simply skip. On good systems this shouldn't be too problematic, but it will, at the very least, lead to irritating frame skips that get longer the more frames the game has to catch up on. Depending on how input and rendering are handled, worse side effects can happen. On weak systems, depending on how rendering is handled, the game either slows down immensely or appears to be frozen. In both cases, it would likely be unplayable or at the very least feel very unsatisfying (reference: PAL games in the 90s and early 2000s).

  • Even just small drops in FPS would dramatically and noticeably hurt the visual appeal of the game.
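
The runaway catch-up in the first disadvantage is commonly handled by capping how many delayed logic ticks get processed per frame and discarding the rest, so the game briefly slows down instead of freezing. A minimal sketch (catch_up, MAX_CATCHUP_TICKS and the callback are hypothetical names, not Allegro functions):

```c
#define LOGIC_DT (1.0 / 60.0)
#define MAX_CATCHUP_TICKS 5

/* Process at most MAX_CATCHUP_TICKS delayed logic ticks, then drop
   the backlog; returns how many ticks actually ran. */
static int catch_up(double *accumulator, double frame_time,
                    void (*do_logic_tick)(void))
{
    int ticks = 0;
    *accumulator += frame_time;
    while (*accumulator >= LOGIC_DT && ticks < MAX_CATCHUP_TICKS) {
        do_logic_tick();
        *accumulator -= LOGIC_DT;
        ticks++;
    }
    /* Hit the cap? Discard the backlog: the game slows down briefly
       instead of replaying seconds' worth of delayed frames. */
    if (ticks == MAX_CATCHUP_TICKS)
        *accumulator = 0.0;
    return ticks;
}

/* Tiny stand-in logic tick for demonstration. */
static int tick_count = 0;
static void count_tick(void) { tick_count++; }
```

A 10-second lag spike then costs at most 5 ticks of catch-up, at the price of the simulation losing that wall-clock time.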

Variable time step

Advantages

  • Better scalability, as the game adapts to the system's capabilities.

  • Since the game only processes as many frames as it can, no performance is wasted.

  • Small to moderate FPS drops don't affect the game's playability too much, and the game always runs at approximately the same perceived speed, even when a lot of frames are dropped.

  • Small to moderate drops in FPS (down to about 30 FPS) don't reduce the visual appeal of the game at all, beyond making it look slightly less smooth.

Disadvantages

  • Less accurate, due to rounding errors.

  • Less predictable, due to rounding errors.

  • Slightly more complex to program.

  • When implemented badly, could blow up physics or have even worse side effects.

Mark Oates
Member #1,146
March 2001
avatar

You mentioned something, what are the "fixes" to the disadvantages of the variable timestep?

RPG Hacker
Member #12,492
January 2011
avatar

One of them was already mentioned earlier: clamping your delta time to a certain minimum (e.g. 1/60) and maximum (e.g. 3/60) value. That way, while the game won't be able to keep up its game speed in all situations (in this example, anything below 20 FPS would slow the game down), it will also prevent any kind of physics explosion caused by extreme FPS drops or spikes.
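
A minimal sketch of that clamping in C, using the 1/60 and 3/60 example values from the post (clamp_dt is a made-up helper, not an Allegro function):

```c
#define MIN_DT (1.0 / 60.0)  /* never step less than one 60 Hz tick  */
#define MAX_DT (3.0 / 60.0)  /* never step more than three ticks     */

/* Clamp the measured frame time before feeding it to the physics,
   so an extreme spike (e.g. after a debugger pause) can't blow
   anything up. */
static double clamp_dt(double measured_dt)
{
    if (measured_dt < MIN_DT) return MIN_DT;
    if (measured_dt > MAX_DT) return MAX_DT;
    return measured_dt;
}
```

The clamped value, not the raw timer reading, is what gets multiplied into velocities and positions.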

Another thing I have seen some games do is decouple the physics from the game completely, making the physics run on a separate thread. While this is presumably done mostly for performance reasons, it also prevents your physics from being affected by the variable time step. Though to be fair: I suppose when you're doing this, there is really little point in using variable time steps to begin with, since the physics are what's most affected by the time step.

I guess if you were a REALLY skilled programmer, you could also code a system that uses variable time steps but avoids any floating point calculations and therefore rounding errors. I haven't tried this myself, but I imagine it COULD work with some careful trade-offs, exploiting certain behaviors and using some clever math. I can't give you more details, since, as I said, I haven't really tried it yet, but one possible approach would be to always round your time step to the closest multiple of 1/60 and then make use of that fact in your calculations. I can imagine that this would still look mostly smooth (even though it only uses an estimated delta time), and you could still skip calculations on a bunch of frames completely. But yeah, this is really just a rough idea in my head right now. I need to look further into it.

-koro-
Member #16,207
February 2016

Less accurate, due to rounding errors.
Less predictable, due to rounding errors.

I don't see how rounding errors make any difference. You'll have rounding errors whether you use a fixed time step or a variable one. I'm not sure what you mean by "accurate" but if you're thinking "more realistic" a variable step should increase, and not decrease, the accuracy (assuming you keep this step smaller than the fixed counterpart).

Chris Katko
Member #1,881
January 2002
avatar

RPG Hacker: With fixed time, you drop draw frames, not logic frames. As long as logic is relatively insignificant (MOST but not all games), it's very easy to keep them up to date.

To contrast: in a game such as Factorio, with literally tens of thousands (if not millions!) of calculations, each logic frame may be a huge chunk of the CPU budget.

Games with short-lived bursts of free physics can get away with variable updates. When a bullet can only move so fast, affects few things, and has such a limited lifespan, it's not going to matter much. But in a game like my planet simulator, variable step sizes change how the error accumulates in free-running physics systems.

That is, a bullet "changes state" so often (a discrete switch, as in a finite-state machine, where a character is either falling, jumping, standing, etc.) that the error basically stops propagating. It's a new equation every time the character hits a wall, falls, or changes direction.

But my planets are running forever. If they get closer to the sun... they KEEP getting closer... and closer... and closer. The error spirals out of control and the planets collide with the sun. There is no discrete state change to correct and stop the error from increasing out of control.

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs

RPG Hacker
Member #12,492
January 2011
avatar

I absolutely get your point, Chris, and do agree with it. In the case of games like you're describing (and games like Elias was describing earlier), accurate physics DO matter and they're one prime example for games that probably SHOULD use a fixed time step. This basically applies to every game that needs accurate physics to function. However, I assert that this type of game is most likely the minority of games. Most games really only use physics for simple movement and platforming, and for those games, delta time not only works perfectly, but usually also has the greater benefits. Well, when I say "most games", I really mean "most triple A games" and probably also "most indie games", not counting any mobile or casual games. Those are an entire species of their own. Most mobile games, for example, probably get away with just running in app mode and don't need any time step to begin with (thinking of quiz-based games, for example). I personally work in the triple A department, so variable time steps are what I see the most and have most experience with.

Mark Oates
Member #1,146
March 2001
avatar

you could also code a system that uses variable time steps, but avoids any floating point calculations and therefore rounding errors.

That's interesting. Your physics would be quantized movements only - like physics with integers.

You would have a range of allowed physics, say from 100 m/s down to 1 mm/s, and objects could move no slower than 1 mm/s regardless of frame rate. That would be tricky once you get to trig, but all the calculations would be in "known" space.

If you also set constraints on time, only 20fps to 80fps, then you could restrict that space down even further into a finite known "space". While the movement would be quantized, it's possible that you would add some extra filters... You could interpolate these quantum "notches" with floating point to make the visual a little more convincing, even though the underlying calculations are in a quantum space.

Wasn't that one of the purposes of using fixed point?
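
It was - Allegro 4 shipped a 16.16 `fixed` type with `fixmul`/`fixdiv` for exactly this kind of deterministic integer math. Here is a minimal sketch of the idea in a 24.8 format; the type name, scale, and helpers are illustrative, not Allegro's actual API:

```c
#include <stdint.h>

/* Illustrative 24.8 fixed-point math: integer positions and
   velocities give bit-identical physics on every machine, at the
   cost of a quantized minimum movement per step. */
typedef int32_t fixed_t;
#define FIX_ONE 256               /* 1.0 in 24.8 fixed point */

static fixed_t fix_mul(fixed_t a, fixed_t b)
{
    return (fixed_t)(((int64_t)a * b) >> 8); /* widen to avoid overflow */
}

/* pos += vel * dt, entirely in integers. */
static fixed_t step_position(fixed_t pos, fixed_t vel, fixed_t dt)
{
    return pos + fix_mul(vel, dt);
}
```

The smallest representable movement here is 1/256 of a unit per step - exactly the kind of "quantum notch" Mark describes, which rendering could then interpolate for smoothness.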
