Allegro.cc - Online Community

Allegro.cc Forums » Programming Questions » Emulating GetTickCount() functionality for an NES emulator under Windows.

This thread is locked; no one can reply to it.
Iron Helix
Member #3,686
July 2003

OK, I used to use Allegro years ago under DJGPP. Recently, I've come back to it.

I'm using Visual Studio 6.0 right now, with the static Allegro library.
I'm porting my old (previously) closed-source NES emulator over to Allegro. I ran into a problem, though:

GetTickCount() was used to do the timing (the timing/input/output routines were fairly hacked together, though the rest of the emulator was much better quality, which is part of why I'm porting it to Allegro).

GetTickCount() is declared in windows.h, and it is the only function from windows.h that my emulator needs. Windows.h can't be included, though, because it also defines a type named BITMAP, which as we all know is already used by Allegro.

Now, granted, it would be best to change over to Allegro's timers so as to make cross-platform compatibility/porting easier.

Basically, I'm wondering what the optimal way is to set up timers such that the functionality of GetTickCount() can be reliably simulated cross-platform.

GetTickCount() is only used to throttle the framerate, which has a target of either 50 or 60 fps depending on the system (PAL or NTSC). Thus, the function implemented as a timer callback should have good resolution (maybe 1/500 s at least), not take up many system resources, and still fire even when the emulator is doing heavy work. Will I need to do any multithreading?

Any ideas?

gillius
Member #119
April 2000

Use the standard frame limiting scheme in the Allegro FAQ, but set the timer to fire 50 or 60 times per second.

Using Allegro timers to simulate GetTickCount (for the purpose of measuring time) is very bad. The fastest you can set the Allegro timers is currently 10 ms under modern OSes. I don't know how fast or far they drift.

Gillius
Gillius's Programming -- https://gillius.org/

StevenVI
Member #562
July 2000

Alternatively, you can write your own timer class like I have done. Under Windows you can use QueryPerformanceCounter, and under Linux you can use gettimeofday. Very easy to implement :).

Some links:
QueryPerformanceCounter on MSDN
Under Linux, just type "man gettimeofday" in a console.

-Steve

__________________________________________________
Skoobalon Software
[ Lander! v2.5 ] [ Zonic the Hog v1.1 ] [ Raid 2 v1.0 ]

Matthew Leverton
Supreme Loser
January 1999

For future reference, you may want to see the "Windows Specific" section in the docs. You can do this:

#include <allegro.h>
#include <winalleg.h>

This will get windows.h automatically included for you, resolving the BITMAP (etc.) conflicts.

But obviously, that would not get you anything cross-platform... How fast do your timers really need to be? I believe Allegro's can be accurate in the 5-10 ms range. If you need better than that, you may just need to roll your own for each platform as Steve has suggested, using some simple #defines.

If you were to use timers, what you could do is this:

volatile int tick = 0;

void my_timer()
{
  tick++;
}
END_OF_FUNCTION(my_timer)

Then lock them with LOCK_VARIABLE(tick) and LOCK_FUNCTION(my_timer), and set up a timer with install_int_ex() to call that X times per second.

gillius
Member #119
April 2000

Allegro's timers have a resolution of 10ms in Windows and Linux and an accuracy of less than that. In pure DOS Allegro's timers I think are sub-millisecond...

But I don't think measuring time with Allegro timers was ever a good solution. Why have the overhead of a thread running like that when you can just query the time with microsecond precision in Linux and near-microsecond precision in Windows? rdtsc is a good approach too. I've heard QueryPerformanceCounter does not always report a time >= the last time it reported; it can jump back. But I think gettimeofday can jump backwards in time as well...

I haven't quite addressed that in GNE, but I do have a Timer class. I use Harry's method now of QueryPerformanceCounter + gettimeofday. If you want some code I can paste it, but it already exists on this forum.

If you can search for keywords in code on this forum, search for Timer::getCurrentTime or "class Time".

Gillius
Gillius's Programming -- https://gillius.org/

Thomas Harte
Member #33
April 2000

Quote:

rdtsc is a good approach too

No it isn't. It's a very bad idea: only people with root privileges in Linux and Administrator privileges in Windows NT/2000/XP will be able to use your program!

gillius
Member #119
April 2000

I didn't know that. It's just an assembly instruction; does it require "administrator" mode or whatever on the CPU to execute? I know there is some privilege functionality on CPUs now, but why would rdtsc be a security risk?

Gillius
Gillius's Programming -- https://gillius.org/

Plucky
Member #1,346
May 2001

Since the Pentium Pro, IIRC, rdtsc is no longer a ring-0-only instruction but a ring-3 one, i.e. all users can use it.

Mark Robson
Member #2,150
April 2002

Using GetTickCount() is a very bad idea: it's not very accurate, and it isn't updated every millisecond (the granularity is typically on the order of 10-16 ms).

As another poster points out, Windows' QueryPerformanceCounter or the Unixy gettimeofday is the way to go.

RDTSC would be OK, except that not all CPUs are clocked at a constant rate (think SpeedStep laptops), and not all CPUs support it (well, most Intel-compatible ones do, really).

gillius
Member #119
April 2000

I saw a library that encompassed the general solution. It used rdtsc when possible on single-processor machines, and fell back to QueryPerformanceCounter, providing corrections if the values jumped back (I think by reporting the old value?). That's where I first heard about the jumping back.

The single-processor restriction was because the author didn't know whether each CPU's rdtsc counter would match up.

The library uses QueryPerformanceCounter every few seconds (I think?) to discover the clock speed behind the rdtsc instruction... For the SpeedStep case I think the author said it periodically checks QueryPerformanceCounter to make sure it stays on track.

The library wasn't actually written because of these issues, or to be portable; it was written because the author claims that QueryPerformanceCounter is "slow", in that it takes a lot of time to call.

I don't know... it's fast enough for me, and even when printing values with cout I still see differences of only a few microseconds when I call it over and over. (Actually, on second thought: I timed it twice, recorded, then printed AFTER I timed... so it does take a few microseconds to call QPC.)

Gillius
Gillius's Programming -- https://gillius.org/
