This thread is locked; no one can reply to it.
time in milliseconds;
Hyena_
Member #8,852
July 2007
avatar

I need to find a way to measure time intervals in milliseconds or even microseconds. It would be very good if the solution worked on both Windows and Unix.

I am trying to build a tracking system for my functions so I can get some feedback about how much CPU time each of them uses. Are there any other ways to do this?

It is very important that the solution works on Unix, but it should also be able to run on Windows with the help of cygwin1.dll.

kazzmir
Member #1,786
December 2001
avatar

Quote:

I am trying to do some tracking system for my functions so I could get some feedback about how much CPU each of my functions uses. Any other ways to do it?

A standard way to profile your code is to add profiling information via the -pg flag to gcc. When your program exits, a file called gmon.out is created, which you can then analyze with gprof to see how much time each function in your program takes.

$ gcc -pg myprogram.c -c -o myprogram.o
$ gcc -pg myprogram.o -o myprogram
$ ./myprogram
$ ls gmon.out
gmon.out
$ gprof myprogram | less
...look at stuff...

Note that you need -pg both for the compilation step and for the final link of the program. Profiling adds a lot of overhead, though, so your program may slow down by 2-3x.

But if you just want your functions to give you timing information, you can use gettimeofday on Unix. I don't think it works on Windows, but I'm not sure about Cygwin.

#include <sys/time.h>

struct timeval start, end;
gettimeofday( &start, NULL );
/* ... the code being timed ... */
gettimeofday( &end, NULL );
/* cast tv_sec to 64 bits before multiplying so the sum can't overflow a 32-bit long */
unsigned long long micros = ((unsigned long long)end.tv_sec * 1000000 + end.tv_usec)
                          - ((unsigned long long)start.tv_sec * 1000000 + start.tv_usec);

Rodrigo Monteiro
Member #4,373
February 2004
avatar

If you want to do this with extreme precision and don't want to use a profiler, then I suggest that you look into the ReadTSC() function listed in the "Subroutine library" here:

http://www.agner.org/optimize/

I'm not sure how cross-platform it is, but it's written in assembly. It reads the CPU's cycle counter, so you can measure the number of clock cycles elapsed between two points, and it theoretically has very low overhead. Just don't use it to measure wall-clock time, as modern processors can dynamically change their clock speed, which would throw the results off.

[EDIT] Apparently I can't read. It does work across platforms, as long as it's x86-32 or x86-64.
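
For the curious, the heart of such a function is just the rdtsc instruction. A minimal sketch of the idea (just an illustration using GCC inline assembly, not Agner's actual ReadTSC):

#include <stdint.h>

/* Read the CPU time-stamp counter (x86/x86-64, GCC inline assembly). */
static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* usage: */
uint64_t t0 = read_tsc();
/* ... code being measured ... */
uint64_t cycles = read_tsc() - t0;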

_____________________________
a.k.a amz, a.k.a. ArchMage ZeratuL
[Aegisub] - [TINS05: Lord of the Stars] [SH05: Bunkermaster] [SH07: Fury of the Sky God] [TINS08: All Yar Base Arr Belong To Us (with codnik)] [SH09: Xtreme Programming: The Literal Game]

Trent Gamblin
Member #261
April 2000
avatar

On Windows there's timeGetTime() in winmm.lib which returns milliseconds, if gettimeofday doesn't pan out for you.
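
A minimal sketch of how that might look (assuming you link against winmm.lib; note that timeGetTime's default resolution is only a few milliseconds unless you raise it with timeBeginPeriod):

#include <windows.h>
#include <mmsystem.h>   /* link with winmm.lib */

DWORD start = timeGetTime();               /* milliseconds since system start */
/* ... code being timed ... */
DWORD elapsed_ms = timeGetTime() - start;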

Goalie Ca
Member #2,579
July 2002
avatar

Very few functions take more than a few microseconds to execute, so you need a high-performance timer. boost::timer can do that for you, but you should really stick to using a profiler.

-------------
Bah weep granah weep nini bong!

Hyena_
Member #8,852
July 2007
avatar

Thanks everyone, this information is very useful.

I already tried the Unix gettimeofday and it worked under Windows with Cygwin. The only thing I wonder now is whether it works correctly.

This is what I wrote for testing:

   char buf[MSL];
   struct timeval start, finish;
   int i;
   unsigned long long micros;

   gettimeofday(&start,NULL);

   for (i=0;i<10000000;i++);

   gettimeofday(&finish,NULL);
   micros = (finish.tv_sec*1000000+finish.tv_usec) - (start.tv_sec*1000000 + start.tv_usec );

   sprintf(buf,"Program takes an average of %lld microseconds.\n",micros);

In the output I got 15000 microseconds at first.
After running it a few more times I got 0 microseconds once.
Then it started returning 16000 microseconds.

The question is: why do I get 0 microseconds sometimes?
Is this output realistic? Does "for (i=0;i<10000000;i++);" really take about 15000 microseconds?

....................

As for the standard profiling approach, I'll probably stick to the "milliseconds" method because it seems a bit more convenient.

kazzmir
Member #1,786
December 2001
avatar

If you compile with -O2, that for loop should be optimized out. Also, use %llu for printing an unsigned long long.

Without -O2 I get
Program takes an average of 45299 microseconds.

With -O2 I get
Program takes an average of 0 microseconds.

Remember, 15,000 microseconds is 0.015 seconds, which isn't that long.
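
If you want the loop to survive -O2 while testing, one trick (just an illustration, not the only way) is to make the counter volatile so the compiler can't throw the loop away:

volatile int i;                    /* volatile forces the compiler to perform every increment */
for (i = 0; i < 10000000; i++)
    ;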

Goalie Ca
Member #2,579
July 2002
avatar

You absolutely must consider OS scheduling; it will drastically change your numbers. Plus, the optimizer can remove dead code.

edit: the granularity of the timers is also different. I believe the Windows high-performance counter (QueryPerformanceCounter) uses the CPU's counters, which are incremented every N clock cycles. The default time-of-day timer (supported by every Windows install) is the low-resolution one and is only accurate to milliseconds.
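
For reference, reading the Windows high-resolution counter looks roughly like this (a sketch using QueryPerformanceCounter, which is a different API from gettimeofday and not what Cygwin gives you):

#include <windows.h>

LARGE_INTEGER freq, t0, t1;
QueryPerformanceFrequency(&freq);   /* counter ticks per second */
QueryPerformanceCounter(&t0);
/* ... code being timed ... */
QueryPerformanceCounter(&t1);
double micros = (double)(t1.QuadPart - t0.QuadPart) * 1000000.0 / (double)freq.QuadPart;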

-------------
Bah weep granah weep nini bong!

X-G
Member #856
December 2000
avatar

Beware: RDTSC is buggy on AMD multicore CPUs and causes major timing glitches.

--
Since 2008-Jun-18, democracy in Sweden is dead. | 悪霊退散!悪霊退散!怨霊、物の怪、困った時は ドーマン!セーマン!ドーマン!セーマン! 直ぐに呼びましょう陰陽師レッツゴー!

GullRaDriel
Member #3,861
September 2003
avatar

X-G said:

Beware: RDTSC is buggy on AMD multicore CPUs and causes major timing glitches.

I am allowing myself to add some more information to what X-G said: you need a Windows driver and an update from amd.com to get it working nicely.

"Code is like shit - it only smells if it is not yours"
Allegro Wiki, full of examples and articles !!

Hyena_
Member #8,852
July 2007
avatar

I should mention that the program has to be in pure C, so I can't use libraries that require C++.

I'm still not sure whether my code works or not. Right now I am using that gettimeofday function.

I suspect it still isn't working. I always get a number ending in a lot of zeros as the microsecond count. It works for real seconds — for example, if I just wait 10 seconds and check, the program gets it right — but when it comes to functions, the microsecond values all end with a bunch of zeros. Plus, it doesn't give a sensible percentage when I measure a function's duration and then a sub-function's duration.

Function() {
   start
   Subfunction();
   end
   a = ...;   // the length of Subfunction
   ...
   ...
   ...
}

{
   start
   Function();
   end
   b = ...;   // the length of the greater function
   percent = (a*100)/b;   // and it usually gives 100% or even 200% or something :S
}

Edit:
OK, I found out that it works on Unix. It's just that Cygwin isn't able to make it work properly on Windows.

pro-mole
Member #9,607
March 2008
avatar

For pure C on Unix, use gettimeofday and the struct timeval type from sys/time.h. For more reference, I suggest http://rabbit.eng.miami.edu/info/functions/time.html#gtod

Now, for Win, I'll need more research...

--
Professional Mole
Universitary Blogger
Game Programmer
