GetTickCount() != clock() ???
LSd016
Member #3,561
May 2003

Should there be any difference between these two? When I switched to clock() for cross-platform compatibility, the FPS counter reported a normal level (87 FPS), but the game was actually running at only about 10 FPS.

#include <winalleg.h> // I comment this out to use clock() instead
#include <time.h>     // needed for clock()

#ifndef GetTickCount
inline long GetTickCount(void)
  {
  return clock();
  }
#endif

____________________________________________
[update soon]

Evert
Member #794
November 2000

Quote:

Should there be any differences between these two?

Yes.

MSDN said:

GetTickCount

The GetTickCount function retrieves the number of milliseconds that have elapsed since the system was started. It is limited to the resolution of the system timer.

clock()'s man page said:

SYNOPSIS
#include <time.h>

clock_t clock(void);

DESCRIPTION
The clock() function returns the amount of CPU time (in microseconds) used since the first call to clock() in the calling process. The time reported is the sum of the user and system times of the calling process and its terminated child processes for which it has executed the wait(2) function, the pclose(3C) function, or the system(3C) function.

LSd016
Member #3,561
May 2003

Microseconds? 1/1000000 of a second? The libc help that came with djgpp says "ticks", and in time.h I found this:

#define CLOCKS_PER_SEC ((clock_t)1000)

Wouldn't that mean they're milliseconds?

____________________________________________
[update soon]

Evert
Member #794
November 2000

It depends on the definition of clock_t. The manpage quoted above is for Solaris. The Linux manpage says:

Quote:

DESCRIPTION
The clock() function returns an approximation of processor time used by the program.

RETURN VALUE
The value returned is the CPU time used so far as a clock_t; to get the number of seconds used, divide by CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t)-1.

CONFORMING TO
ANSI C. POSIX requires that CLOCKS_PER_SEC equals 1000000 independent of the actual resolution.
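
In other words, portable code shouldn't assume a particular tick size; divide by CLOCKS_PER_SEC instead. A minimal sketch (the helper name is just an example):

#include <time.h>

/* Convert a clock() difference to seconds, whatever a clock_t tick is. */
double cpu_seconds(clock_t start, clock_t end)
{
    return (double)(end - start) / CLOCKS_PER_SEC;
}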

However, the most important difference (as quoted in my first post) is

Quote:

The GetTickCount function retrieves the number of milliseconds that have elapsed since the system was started

vs.

Quote:

The clock() function returns the amount of CPU time (in microseconds) used since the first call to clock() in the calling process.

It seems that they don't do (exactly) the same thing.
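
You can see the difference with a quick test. A rough sketch, assuming Allegro 4's rest() to idle: while the process sleeps, the wall clock keeps going but clock() barely advances.

#include <stdio.h>
#include <time.h>
#include <allegro.h>

int main(void)
{
    clock_t cpu_start, cpu_end;
    time_t wall_start, wall_end;

    allegro_init();
    install_timer();          /* rest() needs the timer installed */

    cpu_start = clock();
    wall_start = time(NULL);

    rest(2000);               /* idle for about 2 seconds */

    cpu_end = clock();
    wall_end = time(NULL);

    /* Wall time advances ~2 s; CPU time stays near 0 while sleeping. */
    printf("wall: ~%ld s, cpu: ~%.2f s\n",
           (long)(wall_end - wall_start),
           (double)(cpu_end - cpu_start) / CLOCKS_PER_SEC);
    return 0;
}
END_OF_MAIN()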

EDIT: Any particular reason you don't want to use Allegro's timer routines?

CGamesPlay
Member #2,559
July 2002

Evert: because they're for a totally different purpose? Really, the timer routines call for a different design than plain function calls. If you're synchronizing something, the timer routines may be the way to go, but for timing something, I'd advise against a callback-based approach.

--
Tomasu: Every time you read this: hugging!

Ryan Patterson - <http://cgamesplay.com/>

Evert
Member #794
November 2000

Quote:

If you're synchronizing something, the timer routines may be the way to go, but for timing something, I'd advise against a callback-based approach.

You can time things by having a counter running in the background and comparing its value at different times. It's pretty much the same as using clock(), though probably less accurate than a more low-level library function.
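
Roughly like this; a sketch of that pattern using Allegro 4's timer routines (the 100 ticks-per-second rate and the names are just examples):

#include <stdio.h>
#include <allegro.h>

volatile int ticks = 0;

void tick_counter(void)
{
    ticks++;
}
END_OF_FUNCTION(tick_counter)

int main(void)
{
    int start;

    allegro_init();
    install_timer();
    LOCK_VARIABLE(ticks);
    LOCK_FUNCTION(tick_counter);

    /* Increment `ticks` 100 times per second in the background. */
    install_int_ex(tick_counter, BPS_TO_TIMER(100));

    start = ticks;
    /* ... the code being timed ... */
    printf("elapsed: about %d ms\n", (ticks - start) * 10);

    return 0;
}
END_OF_MAIN()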
