Limit of usage of the timers
CascoOscuro

From the title you might think I'm asking how many timers I can use at a given moment, but no, that's not it.

My question is this: I need a timer routine that is executed every millisecond. The routine is very simple; it only increments a variable (very typical). It's for animation speed stuff.

Well, I've implemented it and I haven't noticed any performance slowdown, but is there a risk of that happening?
Thinking about the code, the game logic routines should finish in about a microsecond in the worst case, I think. The drawing routine should take a few microseconds, so at first glance it doesn't look like there's a problem.

Am I wrong?

Thanks in advance.

miran

Your timer routine will most likely not be executed each millisecond. Instead, it will be executed ten times in a row once every 10 milliseconds...

tobing

Depends on the platform I would say. On Windows, you can improve accuracy of the timers by calling timeBeginPeriod(1); when the program is starting and timeEndPeriod(1); when the program is ending.

FMC
tobing said:

Depends on the platform I would say. On Windows, you can improve accuracy of the timers by calling timeBeginPeriod(1); when the program is starting and timeEndPeriod(1); when the program is ending.

This isn't the first time I've heard this, and it seems to work; if it's so simple, why isn't it added to Allegro?

CascoOscuro

The platform is Windows for now, but I want to port it to Linux (if FBlend lets me).
For this purpose I could run the timer routine at the end of every main loop iteration, since the frame rate is limited.
Thank you all.

tobing
Mithrandir said:

This isn't the first time I've heard this, and it seems to work; if it's so simple, why isn't it added to Allegro?

I have submitted that as a change request for the next version. Until now I don't know if they will add it or not...

gnolam
Quote:

My question is this: I need a timer routine that is executed every millisecond. The routine is very simple; it only increments a variable (very typical). It's for animation speed stuff.

Actually, I fail to see how you would need a millisecond-accurate timer for animation...

tobing

You might not need such a fast counter for doing animations. I'm using counters of that type for internal performance measurements of certain code sections (and for that I'm actually using a timer that counts 10000 per second).

Chris Katko
Quote:

This isn't the first time I've heard this, and it seems to work; if it's so simple, why isn't it added to Allegro?

Because as far as I've heard, it isn't reliable.

CascoOscuro
Quote:

Why must a trip to Europe be so expensive?

To slow down animation speed by applying modular arithmetic.

Evert
Quote:

Because as far as I've heard, it isn't reliable.

Can you elaborate?
I was planning on adding it, but I'd like to know what I'm adding if there are possible problems with it...

tobing
Quote:

Because as far as I've heard, it isn't reliable.

That's something I would also like to know. My experience doesn't support that...

Bob

You probably want to call timeGetDevCaps() first, to get the range of periods that are supported on the OS.

tobing

First I thought that would be a good idea. But then, why not simply call timeBeginPeriod(1)? If the capabilities are not sufficient (which is unlikely on modern PCs), the result will be whatever is closest to a timer precision of 1 ms. So there's no need to check capabilities. Of course, this is different if the application actually depends on the precision, but then it's the application's responsibility to check for that.

Ron Ofir
CascoOscuro's quote said:

Why must a trip to Europe be so expensive?

What does that have to do with anything??? :P And what program are you making that involves modular arithmetic?

CascoOscuro

Oh, sorry! I forgot the "copy" before the "paste" :-/. That's a quote from another post.

The quote was meant to be what gnolam said above.

Well... imagine that you want a running animation for an enemy, and the game has bullet time (for example), with a limited number of frames.

So the animation routine is such as:

void CCharacter::anim()
{
   if (!(Timer % SlowDowner))   /* advance only every SlowDowner-th tick */
      Frame++;
}

Of course it's more complex than this.

Kitty Cat
Quote:

First I thought that would be a good idea. Then, why not simply calling timeBeginPeriod(1); ?

It probably affects other apps too. I would imagine it'd make them less responsive if your program is "checked on" more often.

tobing
Quote:

I would imagine it'd make them less responsive if your program is "checked on" more often.

I don't think I understand. What do you mean by that?

BTW, my proposed change would only affect the rest() function, making it more precise. In other cases the application could call timeBeginPeriod(1) itself as needed or applicable. One could think about calling timeBeginPeriod(1) only if rest() is called with the value 1, or having rest(n) internally call timeBeginPeriod(n) only if n is nonzero. I would think that Evert is already trying this out and playing around with such things...

Evert
Quote:

I would think that Evert is already trying this out and playing around with such things...

Evert doesn't use Windows though, so he can't really do much experimentation...

tobing

Ok. So then I'll do some more investigations, maybe I'll send you some modification then.

Kitty Cat
Quote:

I don't think I understand. What do you mean by that?

When you call rest/Sleep normally, Windows puts the process to sleep and waits until the next time the scheduler would get back to your program before waking it back up. However, by setting timeBeginPeriod(1), it checks your program more often, which means other programs get interrupted more often. If the scheduler granularity could safely be finer than 10 ms, I think it would be... but it's not, for a reason.

tobing

I see. Well, my experiments so far haven't shown any difference with respect to this behaviour, whether I call timeBeginPeriod(1) or not. Only, if I call it, my own program needs less CPU in total, so there's actually more CPU available for other applications.

Kitty Cat
Quote:

Only, if I call it, then my own program needs less CPU in total

That contradicts what it does, though. To check on your app more often requires more CPU. Especially as it relates to rest, where I made the analogy about a person waking up every hour for 8 hours while in bed, instead of doing the 8 hours all in one shot.

tobing

I don't know; the rest() call is not what's using the CPU, it's the remainder of my game loop, but I don't know why that is so. The observation is that adding a call to timeBeginPeriod(1) within the implementation of rest(1) makes the complete loop smoother, and overall less CPU is used. I think it might be because the timers are checked more smoothly, so when timers are not accurate you might spend more time in some loop than required. So this might be a side effect of calling those functions...

Tobias Dammers
Quote:

Well... imagine that you want a running animation for an enemy, and the game has bullet time (for example), with a limited number of frames.

Still, you won't need timers faster than a few times the frame rate. 1ms is just insane. If you want to alter the game speed, don't alter the timer rate, but rather the timer delta. Example:
Say, in "normal" time, you have a timer running at 100Hz (10 ms). This is still roughly twice the frequency needed for smooth animation. With each tick, the timer is increased by 10. For each screen update, you update your logic according to the timer delta (either use variable-delta logic, or keep calling the logic update function until the timer delta is back to 0). To produce bullet time, say, with a factor of 10, you increase the timer by 1 instead of 10. The rest of the program remains unchanged. Your logic will now update at 1/10 the speed, without the need for insanely high timer rates.
The thing is, even if all you do in the timer function is increase a variable by 1, calling it 1000 times per second does induce unneeded overhead.

tobing

The overhead from these function calls is so small that I can't measure it.

I don't use these high precision timers for game speed, but to give some time back to the OS; more precisely, the time remaining until the next frame once all the work has been done for this frame. In that context it is very reasonable to ask for a precision of 1 ms.

Kitty Cat
Quote:

In that context it is very reasonable to ask for precision of 1ms.

Not really... not unless your game is running at more than 100 LPS (logic frames per second). A hundredth of a second of jitter isn't that big of a deal.

That said, you don't even have to rest at all for a CPU intensive game.

tobing

True, but only for a CPU intensive game. My game is expected to use only very little CPU, even at higher logical speed. I just like my game to play nice, and I'd like to save my laptop from using power when it's not really necessary.

Kitty Cat

Usually when a game isn't CPU intensive, it doesn't need that extra precision. The time jitter is reduced/removed by the speedy execution.

tobing

Theoretically, yes. That's what I thought before. Then I saw that my program was using much more CPU than expected, and that's where I started my investigation. My finding is that calling timeBeginPeriod(1) when using rest(1) essentially solved the problem.

Evert

How about using something equivalent to

timeBeginPeriod(n);
rest(n);
timeEndPeriod(n);

or perhaps something like timeBeginPeriod((n+1)/2)? In that way, you don't increase the granularity of the timer to more than it needs to be and keep the benefits that it offers... I think. Of course, the most common case would be rest(1), where this wouldn't actually change anything.

tobing

I don't know what other side effects timeBeginPeriod(n) would have. I would definitely not like it to influence other timers. I think calling timeBeginPeriod(1) only increases precision, so it does no harm. Calling it with other values might decrease precision in case the application has called timeBeginPeriod(1) before. I would not appreciate such behaviour...

Edit: Before calling timeBeginPeriod you have to use timeGetDevCaps to determine valid values. The following code is from MSDN:

#define TARGET_RESOLUTION 1         // 1-millisecond target resolution

TIMECAPS tc;
UINT     wTimerRes;

if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR) 
{
    // Error; application can't continue.
}

wTimerRes = min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
timeBeginPeriod(wTimerRes);

I haven't looked into how timers are created in allegro, but if it is using timeSetEvent it will be affected by timeBeginPeriod also. So it might also be a good idea to put a call of timeBeginPeriod(1) into allegro's init_timer() function?

Evert
Quote:

I don't know what other side effects the timeBeginPeriod(n) would have. I would definitely not like to have it influence other timers around.

It shouldn't affect other applications at any rate... I'm not sure about other threads though, but that's something that would have to be tested.

Quote:

So I think that calling timeBeginPeriod(1) only increases precision, so it doesn't harm. Calling it with other values might decrease precision in case the application has called timeBeginPeriod(1) before. I would not appreciate such a behaviour...

Remember that this is for rest() only though. If you call rest(10), does it really make sense to set the resolution to 1 if 10 (or 5) would probably do just as well for that case?

Quote:

but if it is using timeSetEvent it will be affected by timeBeginPeriod also. So it might also be a good idea to put a call of timeBeginPeriod(1) into allegro's init_timer() function?

Probably, yes. I don't know how this is done in Windows though... I can probably look at the code (but not test it) this evening.

tobing

I'll look into the code this weekend, and I'll test calling timeBeginPeriod(1) from within init_timer().

OK. So here's my findings:

When I call timeBeginPeriod(1) at the beginning of my program, or inside install_timer(), overall CPU usage increases (sometimes drastically). So some of the objections made here are valid: requesting higher accuracy does cost something.

So then I tried some other places to put this call, but the only place where it does good - and really decreases CPU usage - is inside the implementation of rest(), which is exactly what I requested some time ago.

Evert: Should I send you the code again, this time with some improved error handling? It is not much, only checking the return value of timeBeginPeriod() to decide if timeEndPeriod() has to be called or not.

Evert
Quote:

Evert: Should I send you the code again, this time with some improved error handling? It is not much, only checking the return value of timeBeginPeriod() to decide if timeEndPeriod() has to be called or not.

Please do!
Also, if you could check if Allegro uses timeSetEvent and would be affected by calling timeBeginPeriod()?

tobing

The function 'timeSetEvent' is not used in allegro. What is used is a call to 'SetEvent' using 'timer_stop_event = CreateEvent(...)' in wtimer.c:222. As far as my program has shown, the call to timeBeginPeriod(1); added in tim_win32_rest(...) does not affect that. Well, calling timeBeginPeriod(1); from my main program has some effects, so that might have undesirable influence here.

This is the updated code section in wtimer.c:

/* tim_win32_rest:
 *  Rests the specified amount of milliseconds.
 */
static void tim_win32_rest(unsigned int time, AL_METHOD(void, callback, (void)))
{
   unsigned int start;
   unsigned int ms = time;

   const MMRESULT rc = timeBeginPeriod(1);

   if (callback) {
      start = timeGetTime();
      while (timeGetTime() - start < ms)
         (*callback)();
   }
   else {
      Sleep(ms);
   }

   if (rc == TIMERR_NOERROR)
      timeEndPeriod(1);
}

Maybe I should also send this to the mailing list (AD)?

Evert
Quote:

Maybe I should also send this to the mailing list (AD)?

Please do. Or, you can use the tracker on Allegro's sourceforge page.

tobing

Done.

Elias

Why not just put timeBeginPeriod into allegro_init? Surely calling it only once at startup would cause less overhead than calling it every time rest() is called... or am I missing something?

tobing

I have tried that. The overall performance actually decreases, which surprised me of course. The explanation is probably that the call has some side effects, which cause more overhead in other functions. That's why I've left it at putting the calls into rest() only.

Elias

Ah, makes sense then. We should probably add a comment explaining that...

tobing

Finally I got a test project together, which contains my current game framework including the game loop. It can serve as a performance benchmark (how many balls use how much CPU time), but also as an example of a game framework and a game loop. Basically, the game loop is taken from Lennart Steinke's book 'Spieleprogrammierung' (bhv Verlag).

There's an MSVC 7.1 project file included, but no makefile (the last makefile I wrote was 8 years ago...), and you have to add the Allegro library to the linker settings (because I'm using my Allegro project file, which implies automatic linkage in Visual Studio). If someone has a working makefile, you might post it here, so I can include it in the archive eventually...

Thread #456146. Printed from Allegro.cc