time in milliseconds
Hyena_
Member #8,852
July 2007
I have to find a way to measure time intervals in milliseconds, or even microseconds. Ideally the solution would work on both Windows and Unix. I am trying to build a tracking system for my functions so I can get some feedback about how much CPU time each of my functions uses. Are there other ways to do it? It is very important that the solution works on Unix, but it should also run on Windows with the help of cygwin1.dll.
kazzmir
Member #1,786
December 2001
Quote: I am trying to do some tracking system for my functions so I could get some feedback about how much CPU each of my functions uses. Any other ways to do it?

A standard way to profile your code is to add profiling information via the -pg flag to gcc. At exit, your program writes a file called gmon.out, which you can then feed to gprof to analyze the time taken by each function:

```
$ gcc -pg myprogram.c -c -o myprogram.o
$ gcc -pg myprogram.o -o myprogram
$ ./myprogram
$ ls gmon.out
gmon.out
$ gprof myprogram | less
...look at stuff...
```

Note that you need -pg both when compiling and when linking. Profiling adds a lot of overhead, though, so your program may slow down by 2-3x.

But if you just want functions to give you timing information, you can use gettimeofday on Unix. I don't think it works natively on Windows, but I'm not sure about Cygwin.

```c
#include <sys/time.h>

struct timeval start, end;

gettimeofday(&start, NULL);
/* ... code under test ... */
gettimeofday(&end, NULL);

/* Cast before multiplying so the arithmetic is done in 64 bits. */
unsigned long long micros =
    ((unsigned long long)end.tv_sec * 1000000ULL + end.tv_usec) -
    ((unsigned long long)start.tv_sec * 1000000ULL + start.tv_usec);
```
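If you use that pattern in more than one place, it is easy to wrap in a helper. A minimal sketch (the function name now_usec is mine, not from this thread; error checking is omitted):

```c
#include <sys/time.h>

/* Microseconds since the Unix epoch as a 64-bit value; subtract two
   calls to get an interval. Sketch only: the return value of
   gettimeofday() is not checked. */
static unsigned long long now_usec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (unsigned long long)tv.tv_sec * 1000000ULL + tv.tv_usec;
}
```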
Rodrigo Monteiro
Member #4,373
February 2004
If you want to do this with extreme precision and don't want to use a profiler, then I suggest you look into the ReadTSC() function listed in the "Subroutine library" here: http://www.agner.org/optimize/ I'm not sure how cross-platform it is, but it's written in assembly. It reads the number of clock cycles elapsed between executions, and theoretically has very low overhead. Just don't use it to measure real time, as modern processors can dynamically change their clock speed, which would ruin everything.

[EDIT] Apparently I can't read. It does work across platforms, as long as the CPU is x86-32 or x86-64.
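If you'd rather not pull in the whole subroutine library, GCC and Clang expose the same instruction through the __rdtsc() intrinsic (MSVC has it in intrin.h). A rough sketch, with the same caveats about dynamic clock speed applying:

```c
#include <stdio.h>
#include <x86intrin.h>  /* __rdtsc() on GCC/Clang; MSVC uses <intrin.h> */

int main(void)
{
    unsigned long long before = __rdtsc();
    /* ... code under test ... */
    unsigned long long after = __rdtsc();

    /* This is a cycle count, not wall time: frequency scaling and
       core migration (see the warnings later in this thread) apply. */
    printf("%llu cycles\n", after - before);
    return 0;
}
```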
Trent Gamblin
Member #261
April 2000
On Windows there's timeGetTime() in winmm.lib, which returns milliseconds, if gettimeofday doesn't pan out for you.
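For illustration, a minimal sketch of that, assuming you link against winmm.lib; timeBeginPeriod(1) requests 1 ms timer resolution, since the default can be coarser:

```c
#include <windows.h>
#include <mmsystem.h>  /* timeGetTime(); link with winmm.lib */
#include <stdio.h>

int main(void)
{
    timeBeginPeriod(1);             /* ask for 1 ms resolution */

    DWORD start = timeGetTime();
    Sleep(50);                      /* stand-in for the code under test */
    DWORD elapsed = timeGetTime() - start;

    timeEndPeriod(1);               /* undo the resolution request */
    printf("elapsed: %lu ms\n", (unsigned long)elapsed);
    return 0;
}
```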
Goalie Ca
Member #2,579
July 2002
Very few functions take more than a few microseconds to execute, so you need a high-performance timer. boost::timer will do that for you, but you should really stick to using a profiler.
Hyena_
Member #8,852
July 2007
Thanks everyone, this information is very useful. I tried the Unix gettimeofday already and it worked under Windows with Cygwin. The only thing I wonder now is whether it works correctly. This is what I wrote for testing:

```c
char buf[MSL];
struct timeval start, finish;
int i;
unsigned long long micros;

gettimeofday(&start, NULL);
for (i = 0; i < 10000000; i++);
gettimeofday(&finish, NULL);
micros = (finish.tv_sec*1000000+finish.tv_usec) - (start.tv_sec*1000000 + start.tv_usec);
sprintf(buf, "Program takes an average of %lld microseconds.\n", micros);
```
And in the output I got 15000 microseconds at first. The question is: why do I get 0 microseconds sometimes?

About that standard profiling way: I'll probably stick to the "milliseconds" approach because it seems a bit more comfortable.
kazzmir
Member #1,786
December 2001
If you compile with -O2, that for loop should be optimized out. Anyway, use %llu for printing an unsigned long long.

Without -O2 I get

With -O2 I get

Remember, 15,000 microseconds is 0.015 seconds, which isn't that long.
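If you want a busy loop like that to survive -O2, one common trick is to route the work through a volatile variable, since stores to a volatile object count as observable behavior the compiler can't discard. A sketch of a drop-in replacement for the empty loop in the test above:

```c
/* The optimizer must keep this loop: every store to 'sink' is
   observable behavior under the C standard. */
volatile long sink = 0;
long i;
for (i = 0; i < 10000000; i++)
    sink = i;
```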
Goalie Ca
Member #2,579
July 2002
You absolutely must consider OS scheduling; it will drastically change your numbers. Plus, optimizations can remove dead code.

edit: the granularity of the timer is also different. I believe the Windows high-performance counter uses the CPU's cycle counters, which are incremented every N clock cycles. The default time-of-day call (supported by every Windows install) is the low-resolution timer and only gives milliseconds.
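On Windows, the high-resolution counter is reached through QueryPerformanceCounter(); a minimal sketch of how the two calls combine into microseconds:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   /* counter ticks per second */

    QueryPerformanceCounter(&start);
    Sleep(10);                          /* stand-in for the code under test */
    QueryPerformanceCounter(&end);

    double micros = (double)(end.QuadPart - start.QuadPart)
                    * 1000000.0 / (double)freq.QuadPart;
    printf("elapsed: %.1f microseconds\n", micros);
    return 0;
}
```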
X-G
Member #856
December 2000
Beware: RDTSC is buggy on AMD multicores and causes major timing glitches.
GullRaDriel
Member #3,861
September 2003
X-G said: Beware: RDTSC is buggy on AMD multicores and causes major timing glitches.

I am allowing myself to add some more information to what X-G said: you need a Windows driver and an update from amd.com to get it working nicely.

"Code is like shit - it only smells if it is not yours"
Hyena_
Member #8,852
July 2007
I have to mention that the program has to be in pure C, so I can't use libraries that expect C++. I'm still not sure whether my code works or not. Right now I am using that gettimeofday function, but I suspect it still isn't working correctly: I always get a number ending in a lot of zeros as the microseconds. It handles real seconds fine (for example, if I just wait 10 seconds and then check, the program gets it right), but when it comes to timing functions, the microsecond values all end with a bunch of zeros. Plus, it doesn't give the right percentage if I measure a function's length in time and then a sub-function's length.
Edit:
pro-mole
Member #9,607
March 2008
For pure C on Unix, use gettimeofday and the timeval struct from sys/time.h. For more reference, I suggest http://rabbit.eng.miami.edu/info/functions/time.html#gtod

Now, for Windows, I'll need to do more research...
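One way to cover both systems from the same pure-C source is a small #ifdef wrapper; a sketch, using gettimeofday on Unix and timeGetTime on Windows (the helper name ms_now is mine, not from this thread):

```c
#ifdef _WIN32
#include <windows.h>
#include <mmsystem.h>        /* timeGetTime(); link with winmm.lib */
#else
#include <sys/time.h>
#endif

/* Milliseconds from an arbitrary starting point; only the difference
   between two calls is meaningful. timeGetTime() wraps around after
   roughly 49.7 days, but unsigned subtraction keeps short intervals
   correct across the wrap. */
static unsigned long ms_now(void)
{
#ifdef _WIN32
    return (unsigned long)timeGetTime();
#else
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (unsigned long)tv.tv_sec * 1000UL
         + (unsigned long)(tv.tv_usec / 1000);
#endif
}
```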