Allegro.cc - Online Community


This thread is locked; no one can reply to it.
Is clock() really not giving the correct time?
Albin Engström
Member #8,110
December 2006

In the thread about multi-threading, GullRaDriel said that the clock() function doesn't return the actual time spent since the application started.

Let me get this clear:
1: Is this clock() function found in clock.h identical to the one in time.h?
2: Did he mean that clock() returns the number of vibrations the "oscillator" (whatever that crystal thing is actually called) has performed, and not "real time"?
3: Did he mean that the clock() function doesn't report a correct frequency when called?
If so, is he correct?

Sorry for making a thread about this, but I felt it was necessary. :P

Thank you.

Evert
Member #794
November 2000

Quote:

1: Is this clock() function found in clock.h identical to the one in time.h?

It's in the standard library; where you get the prototype from (it should be time.h) doesn't matter.

Quote:

2: Did he mean that clock() returns the number of vibrations the "oscillator" (whatever that crystal thing is actually called) has performed, and not "real time"?
3: Did he mean that the clock() function doesn't report a correct frequency when called?
If so, is he correct?

I think that's best answered by a quote from the man page; I leave it up to you to interpret that:

Quote:

The clock() function determines the amount of processor time used since
the invocation of the calling process, measured in CLOCKS_PER_SECs of a
second.

Arthur Kalliokoski
Second in Command
February 2005

In other words, if you started two windowed "clock displays" on your desktop at the same time, and they used clock(), they'd both run at half speed. That's assuming no other processes used any CPU time. If top(1) or Task Manager said that one of the clocks used 30% CPU time, and you'd started the "clock" at 1:00, then at a wall time of 1:10 your "clock" would say 1:03.

“Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded — here and there, now and then — are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty. This is known as "bad luck.”

― Robert A. Heinlein

Albin Engström
Member #8,110
December 2006

Evert said:

It's in the standard library; where you get the prototype from (it should be time.h) doesn't matter.

That's one thing that bothers me: I don't have the clock.h header. Is it part of Linux/Mac only?

Quote:

The clock() function determines the amount of processor time used since
the invocation of the calling process, measured in CLOCKS_PER_SECs of a
second.

Interesting words... however, I'd interpret them as saying that clock() returns the amount of time since the process started, which is what I initially thought.

Arthur Kalliokoski said:

In other words, if you started two windowed "clock displays" on your desktop at the same time, and they used clock(), they'd both run at half speed. That's assuming no other processes used any CPU time. If top(1) or Task Manager said that one of the clocks used 30% CPU time, and you'd started the "clock" at 1:00, then at a wall time of 1:10 your "clock" would say 1:03.

Have you tried doing that? Because when I do it with 5 "timers" they all run fine; I can even use sleep and they'll gladly jump 10000s of "clocks" to get to the correct time.

:-/

Elias
Member #358
May 2000

Sounds like either your clock() or sleep() implementation is broken, then. Usually sleep() should not consume CPU time.

[Edit:]
If I execute this:

#include <time.h>
#include <stdio.h>
#include <unistd.h>  /* for sleep() */

int main(void)
{
    clock_t x;
    int i;
    volatile int v = 0;

    x = clock();
    sleep(1);
    printf("1 second sleep: %f\n", (clock() - x) / (double)CLOCKS_PER_SEC);

    x = clock();
    for (i = 0; i < 1000000000; i++) v++;
    printf("1 billion adds: %f\n", (clock() - x) / (double)CLOCKS_PER_SEC);
    return 0;
}

I get:

1 second sleep: 0.000000
1 billion adds: 2.610000

--
"Either help out or stop whining" - Evert

Albin Engström
Member #8,110
December 2006

Sorry, I was using Sleep().

I don't even have sleep(); are you using Linux?

Anyway.

Here's the code:

#include <allegro.h>
#include <winalleg.h>
#include <time.h>

void init();
void deinit();

int main()
{
    init();

    BITMAP *buffer;
    buffer = create_bitmap(320, 240);
    //set_display_switch_mode(SWITCH_BACKGROUND);
    while (!key[KEY_ESC])
    {
        clear_to_color(buffer, makecol(255, 255, 255));
        textprintf_ex(buffer, font, 5, 5, makecol(255, 100, 200), -1, "%d", clock());
        if (key[KEY_TAB]) Sleep(1000);

        draw_sprite(screen, buffer, 0, 0);
    }

    deinit();
    return 0;
}
END_OF_MAIN()

void init()
{
    int depth, res;
    allegro_init();
    depth = desktop_color_depth();
    if (depth == 0) depth = 32;
    set_color_depth(depth);
    res = set_gfx_mode(GFX_AUTODETECT_WINDOWED, 320, 240, 0, 0);
    if (res != 0)
    {
        allegro_message(allegro_error);
        exit(-1);
    }

    install_timer();
    install_keyboard();
    install_mouse();
}

void deinit()
{
    clear_keybuf();
}

One thing I noticed about your code is that sleep(1) sleeps for one millisecond and not one second (if sleep acts like Sleep).

When I change it to Sleep(1000), the value becomes 1 and not 0.0.

tobing
Member #5,213
November 2004

Sleep is Windows; sleep is Linux/Unix. Sleep takes milliseconds, while sleep takes seconds. Remember, sleep (or Sleep) is not guaranteed to be exact; I think it's 'at least' the given duration. The sleep functions measure elapsed time.

On Linux/Unix, clock measures process time. On Windows, there's no difference between process time and elapsed time, so that's a big difference between Linux and Windows. Then, last but not least, timing on Windows is quite far from exact...

Albin Engström
Member #8,110
December 2006

I see, that's good to know for when I port my games to Linux. :P

But how do you get the elapsed time on Linux?

tobing
Member #5,213
November 2004

There are functions besides clock(), like _ftime (that's the Windows version of that kind of function); try looking at what you have in time.h and check the man pages or other help for those functions. I don't have Linux myself, so I can't tell you exactly what the right function names are.

Arthur Kalliokoski
Second in Command
February 2005

Quote:

But how do you get the elapsed time on linux?

http://linux.die.net/man/2/gettimeofday


Albin Engström
Member #8,110
December 2006

Thanks! :)

Evert
Member #794
November 2000

Quote:

That's one thing that bothers me: I don't have the clock.h header. Is it part of Linux/Mac only?

No idea, but it seems that the standard header you should be using is time.h, so I don't see why it would matter one way or the other whether you have a "clock.h" or not.

EDIT: I do seem to have a "clock.h", which judging from browsing through it very quickly, seems to be a kernel-interface header file.

Thomas Fjellstrom
Member #476
June 2000

Quote:

Interesting words... however, I'd interpret them as saying that clock() returns the amount of time since the process started, which is what I initially thought.

But a program is not running all the time. What clock returns is the actual CPU time used. Programs share the CPU with other processes, so they are not running every ms that it appears they are. clock will not let you keep track of the ACTUAL time the program has been running.

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

Albin Engström
Member #8,110
December 2006

Evert said:

No idea, but it seems that the standard header you should be using is time.h, so I don't see why it would matter one way or the other whether you have a "clock.h" or not.

I just like to know where I'm standing.

Thomas Fjellstrom said:

But a program is not running all the time. What clock returns is the actual CPU time used. Programs share the CPU with other processes, so they are not running every ms that it appears they are. clock will not let you keep track of the ACTUAL time the program has been running.

I think we've established that it's true on Linux but not on Windows.

And it certainly doesn't seem that way on Windows when I use it to time multiple applications (see above for code).

Thomas Fjellstrom
Member #476
June 2000

Quote:

I think we've established that it's true on Linux but not on Windows.

Use a different sleep then, one that doesn't actually just while(1). Besides, clock isn't particularly reliable, as you can tell.


Speedo
Member #9,783
May 2008

Quote:

Use a different sleep then, one that doesn't actually just while(1). Besides, clock isn't particularly reliable, as you can tell.

His Sleep function is fine. As was already stated, clock behaves differently on Windows and *nix.

Run-Time Library Reference
clock
Calculates the wall-clock time used by the calling process.
http://msdn.microsoft.com/en-us/library/4e2ess30(VS.80).aspx

GullRaDriel
Member #3,861
September 2003

Albin, want some timing-related topics?

QPC, GTOD, RDTSC is a topic I started a while ago for compiling high-performance timing resources from various people.

Wall Clock Time explanation.

I admit I was not precise enough when saying that clock() doesn't return the right amount of time.

In fact, it returns the "Wall Clock Time" under Windows, and the processor usage duration under Linux && Unix-likes; plus, under Linux, it does not include the cumulated children consumption (the CPU time used by child processes).

Example:

#include <stdio.h>
#include <time.h>

#ifdef WIN32
    #include <windows.h>
    #define cross_sleep( time ) Sleep( 1000 * time )
#else
    #include <unistd.h>
    #define cross_sleep( time ) sleep( time )
#endif

int main()
{
    clock_t ca, cb;
    double c;

    ca = clock();
    cross_sleep( 10 );
    cb = clock();

    c = (double)(cb - ca);
    c /= (double)CLOCKS_PER_SEC;

    printf( "%g seconds\n", c );
    return 0;
}

That code outputs "10 seconds" under Windows, and 0 under our "SunOS mercure 5.10 Generic_127127-11 sun4u sparc SUNW,Sun-Fire-280R".

"Code is like shit - it only smells if it is not yours"
Allegro Wiki, full of examples and articles !!

Thomas Fjellstrom
Member #476
June 2000

Quote:

His Sleep function is fine. As was already stated, clock behaves differently on Windows and *nix.

Sleep != sleep, and most likely Windows' clock function is broken. It wouldn't be the first one.

Quote:

RETURN VALUE
The value returned is the CPU time used so far as a clock_t; to get the number of seconds used, divide by CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t) -1.

CONFORMING TO
C89, C99, POSIX.1-2001. POSIX requires that CLOCKS_PER_SEC equals 1000000 independent of the actual resolution.

NOTES
The C standard allows for arbitrary values at the start of the program; subtract the value returned from a call to clock() at the start of the program to get maximum portability.

Note that the time can wrap around. On a 32-bit system where CLOCKS_PER_SEC equals 1000000 this function will return the same value approximately every 72 minutes.

On several other implementations, the value returned by clock() also includes the times of any children whose status has been collected via wait(2) (or another wait-type call). Linux does not include the times of waited-for children in the value returned by clock(). The times(2) function, which explicitly returns (separate) information about the caller and its children, may be preferable.

CPU time does not include time spent not running.


Albin Engström
Member #8,110
December 2006

GullRaDriel said:

Albin, want some timing-related topics?

QPC, GTOD, RDTSC is a topic I started a while ago for compiling high-performance timing resources from various people.

Wall Clock Time [en.wikipedia.org] explanation.

I admit I was not precise enough when saying that clock() doesn't return the right amount of time.

In fact, it returns the "Wall Clock Time" under Windows, and the processor usage duration under Linux && Unix-likes; plus, under Linux, it does not include the cumulated children consumption (the CPU time used by child processes).

Thanks for the links; they'll come in handy now that I have to update my own timing code.

"cumulated children consumption."?

Does this have something to do with multithreading?

Thanks :).

GullRaDriel
Member #3,861
September 2003

Albin said:

Does this have something to do with multithreading?

The documentation says so. You'll only get the main program's clock consumption; if you want the total, you should devise a way to clock each child/thread and send the value back to the main program (and sum them).

