Which is exactly what you want. The user input is coming at the same, unmodified speed, no matter what.
This isn't true, either. Depending on how you implement your input polling, you will either have the input from the time the timer events were actually written to the queue, or the input from the time you're processing the events in the queue. In the first case, user input will be processed way later than it was actually received: certain input events could be handled many frames after the game got them, which only leads to confusion for the player. That kind of noticeable delay never feels good and should be avoided. As a player, I want my input to be handled one frame after giving it at the latest, not later.
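To put numbers on the first case, here's a toy sketch (all names and values are invented, not taken from your code): ten timer ticks pile up one frame apart while the game stalls, and we compare input stored at enqueue time against input polled at processing time.

```python
from collections import deque

FRAME = 1.0 / 60.0  # length of one 60 FPS frame, in seconds

# Hypothetical scenario: ten timer ticks were queued one frame apart
# while the game was stalled. Each tick stores the input state that was
# sampled when the tick was written to the queue.
ticks = deque((i * FRAME, "input sampled at enqueue time") for i in range(10))
now = 10 * FRAME  # the moment we finally drain the queue

# Case 1: act on the input stored with each tick -- the oldest input is
# applied ten frames after the player actually gave it.
oldest_enqueued_at, _ = ticks[0]
worst_delay = now - oldest_enqueued_at

# Case 2: poll input fresh while draining the queue -- every tick then
# sees input that is at most one frame old, no matter how long the
# queue got.
print(round(worst_delay * 1000))  # -> 167 (milliseconds)
```

At 60 FPS, ten queued ticks already mean roughly 167 ms between giving an input and seeing it handled, which is exactly the kind of delay players notice.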
The latter case actually makes more sense, since input events would be processed one frame after receiving them at the latest. However, I still don't see how this benefits from a fixed time step at all. Input doesn't matter to a player if he can't see the results of that input. So if, for example, you were to process ten frames in the time of one regular frame and only present the final image to the player, the result would be exactly the same as if you had just used delta time and processed the latest input in the next frame; the only difference being that the delta time method processes just a single frame and therefore saves a lot of CPU time. From my experience, though, that isn't even what's usually done, which brings me to your next point.
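To make the "ten frames versus one" contrast concrete, here's a minimal sketch of the two loop styles (an assumed generic loop, not your code):

```python
def fixed_step_updates(frame_time, dt):
    """Count the fixed-size updates a fixed-time-step loop runs to
    cover a rendered frame that took `frame_time` units of real time."""
    updates = 0
    accumulator = frame_time
    while accumulator >= dt:
        updates += 1
        accumulator -= dt
    return updates

def delta_time_updates(frame_time):
    """A delta-time loop always runs exactly one update per rendered
    frame, simply passing the whole elapsed time as dt."""
    return 1

# A frame that took ten step lengths: ten simulation updates for the
# fixed time step, one for delta time.
print(fixed_step_updates(10.0, dt=1.0), delta_time_updates(10.0))  # -> 10 1
```

Both loops cover the same amount of simulated time; the fixed-step loop just pays for it with ten update calls where delta time pays with one.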
It does not really speed up for a moment either - it's making up for time lost to make sure the overall game is running at the same speed at all times.
This is only true when you drop rendering of any frames while your queue isn't empty, which, judging by the code you posted above, you're actually doing. However, as stated above, I've seen enough examples of games where people actually render all delayed frames, and THIS is what causes the visible speed-up of the game for a few seconds (sorry for the confusion, I guess I should have made clearer that I wasn't referring specifically to your code all this time). Aside from that, as I just mentioned: when using fixed time steps, you're really processing ten frames in the time you'd usually dedicate to just a single frame. When using delta time, however, you're always processing just a single frame. I don't think I have to point out that the latter method requires way less performance.

Let's look at weak systems for a moment. Not mediocre systems, not good systems, but weak systems. On strong systems, yes, it's very likely that the system will be able to catch up with the delayed frames and get in sync again. But what about weaker systems? It's unlikely that they will ever catch up to all of the delayed frames: the queue will keep filling and filling, and the game will run visibly slowed down because it isn't able to clear the queue (assuming you'd actually render each frame instead of dropping). In fact, with your code, the game would appear completely frozen, since your "is queue empty" condition would NEVER become true. So on weak systems, your game would definitely be unplayable, or at best run slowed down when not dropping any frames.

And what happens when we use delta time? Since with delta time the game only has to process as many frames as it can, it COULD actually still run somewhat decently even on weaker systems. Sure, with reduced framerate (20 FPS or less, if you're unlucky), but the game wouldn't freeze or slow down. And at that framerate, every decent game should still be able to not break your physics.
Below that, I'd consider a game unplayable anyway and wouldn't bother about breaking physics.
So really, the only benefit of a fixed time step is that it's the most accurate. This comes with many, many caveats, though. On fast systems, you won't gain anything over delta time aside from accuracy: at best, both implementations behave the same; at worst, fixed time steps actually introduce short, uncontrollable speed-ups into your game (only when not dropping any render frames). On weak systems, delta time is superior to fixed time steps in every situation. With fixed time steps, your game will either freeze on weak systems (when dropping render frames) or slow down immensely (when not dropping render frames). With delta time, games could actually remain playable on weak systems, just with reduced framerate.
And really, for most games, especially single-player games, the gain in accuracy from fixed time steps is negligible. The only environments in which I would ever consider using fixed time steps for single-player games are consoles, since they're way more predictable and less prone to "fluctuations". There, a game could actually benefit from fixed time steps. For PC games, though, that's rarely the case.
With multiplayer it actually will be quite hard to use variable frame length and/or dropped time without causing desynchronization, so I just don't see the point.
I disagree again. First of all, I never had this problem when working on network games (I once did a competitive racing battle game using network multiplayer). Secondly, relying on the network to actually stay in sync is bad practice for a multiplayer implementation to begin with. If your network got busy for just a few seconds, your multiplayer would go out of sync and become unplayable. I had this problem so, so many times when doing network multiplayer sessions in ZSNES: our games would just randomly go out of sync, and we would each see something different happening on our screens. Whenever that happened, the game would never get back in sync again. It's simply never a good idea to rely on your network staying in sync.

Especially once we're talking about internet multiplayer. When you play with a person on the other end of the globe, it's pretty much guaranteed that you'll get a ping of MORE than 16.67 ms, so playing a 60 FPS game in lockstep over the internet is pretty much impossible right away. This is why no professional game ever does that; none of them rely on the stability and synchrony of your internet connection for their multiplayer to work. Instead, they use certain tricks to make it appear as though they were running synchronously, even though they really aren't.

Take Call of Duty, for example. If the game tried to run in lockstep, it would be virtually impossible to even shoot anyone: on your side, you'd visibly shoot your opponent, yet he would still survive, because on his side, he's already somewhere completely different. What these games do instead is use prediction. Basically, they try to predict where a player currently is or where a player is heading. Sure, this sometimes leads to deaths that feel unfair, because a player dies even though, on his side of the game, the shot clearly missed (had this happen so, so many times in Splatoon).
On the other hand, whenever you visibly shoot your opponent, you can be sure that you actually hit him, even if, on his side, he is already somewhere completely different. In cases like these, you simply have to sacrifice some fairness in order to make your game playable at all. And this is just one example. Running multiplayer games in lockstep over the network rarely works and isn't even a good idea to begin with. It just means that your game will run really, really unstable and will be disturbed by even the smallest peak in network activity.
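The simplest form of that prediction trick is dead reckoning (purely illustrative here, not how any specific game implements it): instead of freezing a remote player until the next packet arrives, extrapolate from the last position and velocity the network delivered.

```python
def predict_position(last_pos, last_vel, seconds_since_update):
    """Dead reckoning: extrapolate where a remote player probably is
    now, based on the last position and velocity we received, so the
    game stays playable even when the connection is ahead or behind."""
    x, y = last_pos
    vx, vy = last_vel
    return (x + vx * seconds_since_update, y + vy * seconds_since_update)

# The last packet (100 ms ago, a realistic internet ping) said the
# player was at (10, 0) moving 5 units/s along x -- so we draw him at
# (10.5, 0) instead of waiting for the network to confirm anything.
print(predict_position((10.0, 0.0), (5.0, 0.0), 0.1))  # -> (10.5, 0.0)
```

The occasional "unfair" death falls out of exactly this: the prediction is sometimes wrong, and somebody's screen has to lose the argument.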