Sleep
Billybob

How does sleep work, in its many variations?
The kind of sleep I'm talking about is any function that puts the calling thread to sleep for a specified period of time, during which it consumes little to no CPU.
sleep(1);
Platforms: Windows, Mac, Linux, more (maybe, don't care that much).

Any information on how these types of functions work internally would be helpful.

X-G

Er, it does exactly what you said. It tells the scheduler not to activate the thread for a certain number of milliseconds. how do i shot web?

Paladin

Doesn't it work in the same way as the

rest(1);

function?

kazzmir

I think he means he wants to know how the code works for sleep(). I looked through sleep() in glibc, glibc/sysdeps/unix/clock_nanosleep.c and glibc/sysdeps/unix/sysv/linux/sleep.c, but it looks like a system call is made at some point, so the code isn't very interesting and it's very ugly :p.
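For the curious, once you strip away the signal-handling details, that glibc code boils down to roughly this (a simplified sketch, not the actual source):

#include <errno.h>
#include <time.h>

// Simplified sketch: the real glibc sleep() also fiddles with SIGCHLD
// handling and argument checks before ending up in nanosleep().
unsigned int my_sleep(unsigned int seconds)
{
    timespec req, rem;
    req.tv_sec  = seconds;
    req.tv_nsec = 0;
    if (nanosleep(&req, &rem) == -1 && errno == EINTR)
        return rem.tv_sec;   // seconds left if a signal woke us early
    return 0;
}

nanosleep() itself is little more than a thin wrapper over the nanosleep system call, which is where the kernel's scheduler takes over.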

X-G

Naturally it's a system call. Scheduling is done by the OS kernel. If he wants to know more he has to study the scheduler code for whatever OS he's interested in.

Myrdos

Some sleep tidbits:

It's been a while since I looked at this, but I know sleep's resolution varies from OS to OS. I think it's something like 20 milliseconds under Windows, and as low as 10 milliseconds under Linux. That is to say, if you ask for a sleep of 2 milliseconds your thread may not be scheduled to run again until at least 10 milliseconds have passed.

Under Linux there's also the nanosleep function, which is what I tend to use.

Finally, if you call sleep with a time of 0, it yields the thread with no minimum delay before it can reactivate - see also pthread_yield.
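For instance, asking for a 2 ms sleep with nanosleep looks like this (a minimal sketch; as the man page excerpt below notes, the kernel tick still decides how soon you actually wake up):

#include <time.h>
#include <sched.h>

int main()
{
    timespec req;
    req.tv_sec  = 0;
    req.tv_nsec = 2 * 1000 * 1000;   // ask for 2 ms...
    nanosleep(&req, 0);              // ...but expect to wake on the next tick

    sched_yield();   // the portable spelling of sleep(0) / pthread_yield
    return 0;
}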

[EDIT]Here we go:

man nanosleep said:

The current implementation of nanosleep() is based on the normal kernel timer mechanism, which has a resolution of 1/HZ s (i.e, 10 ms on Linux/i386 and 1 ms on Linux/Alpha). Therefore, nanosleep() pauses always for at least the specified time, however it can take up to 10 ms longer than specified until the process becomes runnable again. For the same reason, the value returned in case of a delivered signal in *rem is usually rounded to the next larger multiple of 1/HZ s.
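If you want to see what your own kernel reports, clock_getres() will tell you (a small sketch; on kernels of that era it typically reports the 1/HZ tick, while newer kernels with high-resolution timers may report 1 ns, and older glibc needs -lrt to link):

#include <cstdio>
#include <time.h>

int main()
{
    timespec res;
    clock_getres(CLOCK_REALTIME, &res);   // granularity of the kernel's software clock
    std::printf("timer resolution: %ld s %ld ns\n",
                (long)res.tv_sec, res.tv_nsec);
    return 0;
}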

Thomas Fjellstrom
Quote:

(i.e, 10 ms on Linux/i386 and 1 ms on Linux/Alpha).

heh, my HZ is set to 1000, instead of the old default of 100.

Billybob

Darn. I was hoping it didn't dig right into the OS like that. Oh well.
I wanted to try and write a sleep function that used a more accurate timer, but I guess that's not possible. :'(

CGamesPlay

250 Hz here, which is 4 ms granularity. Tomasu has 1 ms...

The only issue I have with speed is that my CPU frequency scales down to 375 MHz and up to 3 GHz. I think UT 2004 calculates the clock speed when it is running at about 1275 MHz, so one can listen to the audio in game and hear it change pitch as the frequency scales. :P

X-G

Quote:

I guess that's not possible.

Unfortunately, that is true. Process scheduling is very tightly tied to the core of the operating system.

Myrdos

There are options available - I knew a guy who used Adeos with a Linux system to do very precise timing for mobile robotics control. I took a look at it myself, but it seems very complex. Or you could go with a Real Time OS, or something like the RealTime Application Interface for Linux. (Though I don't endorse it, as I haven't tried it. :P)

But none of these solutions can guarantee the timing - it's all best effort. Anything that uses a scheduler (like an OS) isn't hard real time, because if you burden the system, it can't keep up with the schedule, and your task doesn't happen when it's supposed to. Anything at the millisecond-level is very prone to error (think a couple hundred milliseconds here) depending on what you're running on other threads/processes. Though it can be quite accurate if your system is unburdened.

Example: Using a non-real time Debian Linux system, I was trying to control a robot's speed using PWM. (Turn the power on and off hundreds of times per second to set the overall speed.) It was fine, but whenever I compressed an image from the camera into JPEG, the PWMing thread wouldn't be scheduled for a few hundred millisecs. If the power was off, the robot jerked to a halt. If the power was on, WHAM! into the wall.
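Roughly what that control loop looks like, and why a missed wakeup hurts (a sketch with made-up helpers and timings, not the actual robot code):

#include <time.h>

// Hypothetical stand-ins for whatever actually drives the motor pin.
void motor_on()  { /* write to the real output here */ }
void motor_off() { /* ... */ }

static void msleep(long ms)
{
    timespec req;
    req.tv_sec  = ms / 1000;
    req.tv_nsec = (ms % 1000) * 1000000L;
    nanosleep(&req, 0);
}

int main()
{
    // ~250 Hz, 50% duty cycle: 2 ms on, 2 ms off. If the scheduler skips
    // this thread for a few hundred ms, the motor is stuck in whichever
    // state the last call left it: dead stop, or full power into the wall.
    for (;;) {
        motor_on();
        msleep(2);
        motor_off();
        msleep(2);
    }
}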

Billybob

I don't see how it's unstable. All the scheduler has to do is check the time, using a high-resolution timer, to see if the thread should wake up, and if so, wake it up. What's so difficult and unstable about that?

Richard Phipps

You are forgetting that other programs are running too, which may mean the scheduler misses your wakeup by a few milliseconds while those programs are doing their work.

A J

It instructs the scheduler to get out its pocket watch and wave it in front of the thread, saying "You're going into a deep sleep". Then sometimes, when it's in a deep hypnotic trance, the scheduler tells it to act like a chicken, and has a good laugh.

Myrdos
Quote:

What's so difficult and unstable about that?

Heheheh. Try it and see what happens!

Here are two programs: ptimer and burden. Ptimer tries to cout a message every 500 milliseconds. Burden endlessly busy-waits. If you run ptimer on its own, it performs as expected. If you run ptimer and then start burden, look at the elapsed time (a minimal sketch of both programs follows the output):

ptimer said:

C:\Documents and Settings\David McCallum\My Documents\tdemo>ptimer
Time elapsed: 1532815 Time: 0::0
Time elapsed: 500 Time: 1532::815
Time elapsed: 500 Time: 1533::315
Time elapsed: 500 Time: 1533::815
Time elapsed: 500 Time: 1534::315
Time elapsed: 500 Time: 1534::815
Time elapsed: 500 Time: 1535::315
Time elapsed: 500 Time: 1535::815
Time elapsed: 545 Time: 1536::315 //started burden here
Time elapsed: 558 Time: 1536::860
Time elapsed: 500 Time: 1537::418
Time elapsed: 563 Time: 1537::918
Time elapsed: 563 Time: 1538::481
Time elapsed: 562 Time: 1539::44
Time elapsed: 563 Time: 1539::606
Time elapsed: 578 Time: 1540::169
Time elapsed: 562 Time: 1540::747
Time elapsed: 563 Time: 1541::309
Time elapsed: 579 Time: 1541::872
Time elapsed: 562 Time: 1542::451
Time elapsed: 578 Time: 1543::13
Time elapsed: 562 Time: 1543::591
Time elapsed: 563 Time: 1544::153
Time elapsed: 563 Time: 1544::716
Time elapsed: 562 Time: 1545::279
Time elapsed: 563 Time: 1545::841
Time elapsed: 562 Time: 1546::404
Time elapsed: 563 Time: 1546::966
Time elapsed: 562 Time: 1547::529
Time elapsed: 563 Time: 1548::91 //turned off burden here
Time elapsed: 500 Time: 1548::654
Time elapsed: 500 Time: 1549::154
Time elapsed: 500 Time: 1549::654
Time elapsed: 500 Time: 1550::154
Time elapsed: 500 Time: 1550::654
Time elapsed: 500 Time: 1551::154
Time elapsed: 500 Time: 1551::654

I notice that if you put the window focus back to ptimer, it keeps a lot better time than if you focus on burden. Under linux, I sometimes ended up with elapsed times of more than two seconds using just one burden program!
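For reference, here is roughly what the two programs do (a minimal POSIX-flavoured sketch based on the description above, not the actual sources; the output format is approximated):

burden: spin forever, eating all the CPU the scheduler will give it.

int main()
{
    for (;;) {}   // endless busy-wait
}

ptimer: sleep 500 ms, then print how long the wakeup actually took.

#include <iostream>
#include <sys/time.h>
#include <unistd.h>

int main()
{
    long long prev = 0;
    for (;;) {
        usleep(500 * 1000);   // ask for 500 ms
        timeval tv;
        gettimeofday(&tv, 0);
        long long now = (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
        std::cout << "Time elapsed: " << now - prev
                  << " Time: " << tv.tv_sec << "::" << tv.tv_usec / 1000 << std::endl;
        prev = now;
    }
}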

CGamesPlay
Quote:

Under linux, I sometimes ended up with elapsed times of more than two seconds using just one burden program!

Woah now! Before people start making crazy assumptions about Linux multitasking and comparing it with pre-Mac OS X multitasking, let's ask what kernel version you have and what CONFIG_PREEMPT setting you're using:

$ gzcat /proc/config.gz |grep PREEMPT
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
CONFIG_PREEMPT_BKL=y

Myrdos
a.out said:

myrdos@travelmate:~/temp$ ./a.out
Time elapsed: 65535280 Time: 0::0
Time elapsed: 500 Time: 1152548294::280
Time elapsed: 500 Time: 1152548294::780
Time elapsed: 500 Time: 1152548295::280
Time elapsed: 500 Time: 1152548295::780
Time elapsed: 500 Time: 1152548296::280
Time elapsed: 500 Time: 1152548296::780
Time elapsed: 827 Time: 1152548297::280 //started burden here
Time elapsed: 564 Time: 1152548298::107
Time elapsed: 500 Time: 1152548298::671
Time elapsed: 532 Time: 1152548299::171
Time elapsed: 500 Time: 1152548299::703
Time elapsed: 528 Time: 1152548300::203
Time elapsed: 500 Time: 1152548300::731
Time elapsed: 528 Time: 1152548301::231
Time elapsed: 500 Time: 1152548301::759
Time elapsed: 528 Time: 1152548302::259
Time elapsed: 500 Time: 1152548302::787
Time elapsed: 524 Time: 1152548303::287
Time elapsed: 500 Time: 1152548303::811
Time elapsed: 524 Time: 1152548304::311 //turned off burden here
Time elapsed: 500 Time: 1152548304::835
Time elapsed: 500 Time: 1152548305::335
Time elapsed: 500 Time: 1152548305::835
Time elapsed: 500 Time: 1152548306::335
Time elapsed: 500 Time: 1152548306::835
Time elapsed: 500 Time: 1152548307::335
Time elapsed: 500 Time: 1152548307::835

The big spike in elapsed times happens when burden is first started. Afterwards, it stays closer to 500 milliseconds than it did under Windows.

$ gzcat /proc/config.gz |grep PREEMPT said:

bash: gzcat: command not found

$ cat /proc/config.gz |grep PREEMPT said:

cat: /proc/config.gz: No such file or directory

uname -a said:

Linux travelmate 2.6.15-25-386 #1 PREEMPT Wed Jun 14 11:25:49 UTC 2006 i686 GNU/Linux

CGamesPlay

The fact that PREEMPT is in your kernel version is odd. I suspect it is voluntary preemption, which means that while the program is being loaded into memory (which is a system call), no other tasks can execute.

Myrdos

Well, show me how it runs on your computer then:

http://junction.bafsoft.com/timerdemo.zip

g++ heavyburden.cpp -o burden -Wall
g++ precisetimer.cpp -Wall

[EDIT]Voluntary preemption!? That hasn't been used since forever! I sincerely doubt that's what's going on here.

CGamesPlay
cgames@ryan ~/test/tdemo $ ./a.out
Time elapsed: 65535174 Time: 0::0
Time elapsed: 500 Time: 1152549937::174
Time elapsed: 500 Time: 1152549937::674
Time elapsed: 500 Time: 1152549938::174
Time elapsed: 500 Time: 1152549938::674
Time elapsed: 500 Time: 1152549939::174
Time elapsed: 500 Time: 1152549939::674
Time elapsed: 500 Time: 1152549940::174
Time elapsed: 500 Time: 1152549940::674
Time elapsed: 500 Time: 1152549941::174
Time elapsed: 500 Time: 1152549941::674
Time elapsed: 500 Time: 1152549942::174
Time elapsed: 500 Time: 1152549942::674
Time elapsed: 500 Time: 1152549943::174

Okay, so the test wasn't actually fair because I have two processors. Let me run two burdens now:

cgames@ryan ~/test/tdemo $ ./a.out > ptimer.log & sleep 2; ./burden & ./burden & sleep 2; killall burden; sleep 2; killall a.out
[1] 13157
[2] 13159
[3] 13160
[2]- Terminated ./burden
[3]+ Terminated ./burden
cgames@ryan ~/test/tdemo $
[1]+ Terminated ./a.out >ptimer.log
cgames@ryan ~/test/tdemo $ cat ptimer.log
Time elapsed: 65535524 Time: 0::0
Time elapsed: 500 Time: 1152550242::524
Time elapsed: 500 Time: 1152550243::24
Time elapsed: 500 Time: 1152550243::524
Time elapsed: 500 Time: 1152550244::24
Time elapsed: 500 Time: 1152550244::524
Time elapsed: 500 Time: 1152550245::24
Time elapsed: 500 Time: 1152550245::524
Time elapsed: 500 Time: 1152550246::24
Time elapsed: 500 Time: 1152550246::524
Time elapsed: 500 Time: 1152550247::24
Time elapsed: 500 Time: 1152550247::524
Time elapsed: 500 Time: 1152550248::24
cgames@ryan ~/test/tdemo $ ./a.out > ptimer.log & sleep 2; ./burden & ./burden & sleep 2; killall burden; sleep 2; killall a.out
[1] 13171
[2] 13173
[3] 13174
[2]- Terminated ./burden
[3]+ Terminated ./burden
cgames@ryan ~/test/tdemo $
[1]+ Terminated ./a.out >ptimer.log
cgames@ryan ~/test/tdemo $ cat ptimer.log
Time elapsed: 65535225 Time: 0::0
Time elapsed: 500 Time: 1152550281::225
Time elapsed: 500 Time: 1152550281::725
Time elapsed: 500 Time: 1152550282::225
Time elapsed: 500 Time: 1152550282::725
Time elapsed: 817 Time: 1152550283::225
Time elapsed: 504 Time: 1152550284::42
Time elapsed: 500 Time: 1152550284::546
Time elapsed: 500 Time: 1152550285::46
Time elapsed: 500 Time: 1152550285::546
Time elapsed: 500 Time: 1152550286::46
Time elapsed: 500 Time: 1152550286::546

[append]

Quote:

Voluntary preemption!? That hasn't been used since forever!

What are you talking about? It's still in the kernel options...

Myrdos
Quote:

What are you talking about?

I mean that it doesn't work worth a darn, and no modern desktop OS is going to enable it by default. (I had no idea that the option to enable it even existed -- can you even run normal programs that don't manually relinquish the processor in such a system?) We are talking about the same thing, right? A scheduler that can't force a process to suspend execution and perform a context switch to let another process run? I heard very early Macs used such a system, but it bombed because every developer thought their application needed more CPU time than the others.

Quote:

Okay, so the test wasn't actually fair because I have two processors.

Sweet.

CGamesPlay
Quote:

We are talking about the same thing, right?

Heh, no. There are 3 preemption models in the kernel:

None: while a system call is executing, no other task will run until the system call finishes.
Voluntary: the kernel has points in its code where it voluntarily relinquishes the CPU. When this happens, other tasks can run.
Forced (I don't know what this one is actually called, but it is what I have enabled): the kernel can be preempted anywhere it is safe to do so.

An additional kernel option is "preempt the big kernel lock", which adds even more opportunity for preemption.

Billybob

Well, I guess I shouldn't talk; I've never designed a scheduler. It seems to me like this shouldn't be an issue, but then again, if they could fix it they would.

CGamesPlay

Seems to you that what wouldn't be an issue? What, you want every process to always sleep for exactly the time it asks for? Not only is that not possible, it would also require far more CPU just to keep track of the processes. But if you want accuracy down to 1 ms, just adjust your kernel config (on Linux; on Windows you can't do anything about it).

A J

You can adjust scheduling to 1 ms accuracy on Windows.

CGamesPlay

You going to provide any more information on the subject?

A J

Are you asking me too?

win32::timeBeginPeriod()
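For reference, the usual pattern looks something like this (a sketch, not A J's test code or Allegro's; link against winmm.lib, and the granularity you actually get back varies by machine):

#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod
#include <cstdio>

int main()
{
    // Ask the multimedia timer service for 1 ms scheduler granularity...
    if (timeBeginPeriod(1) != TIMERR_NOERROR)
        std::printf("1 ms resolution not available\n");

    DWORD before = timeGetTime();
    Sleep(1);   // should now wake after roughly 1-2 ms instead of 10-15 ms
    std::printf("slept for %lu ms\n",
                static_cast<unsigned long>(timeGetTime() - before));

    // ...and give it back when you're done: every Begin needs a matching End.
    timeEndPeriod(1);
    return 0;
}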

CGamesPlay

Very handy. Thanks for the info. Is there any way we might be able to integrate this into Allegro? Perhaps if someone calls rest(1), call timeBeginPeriod(1) the first time it happens?

A J

There are subtle problems that may occur whilst using this timeBeginPeriod(1) 'feature'. M$ mentions it occasionally in their "docs" but never really gets to the point, perhaps because it's a very advanced topic.

But from the limited data I have on it, I think the problems relate to possible drift, in that you don't always get exactly 1 ms, which is M$'s way of saying "you can ask for 1 ms, and we can tell you it's 1 ms, but we were too stoopid to actually code it properly, so you may get something around 1 ms". I think they also never mentioned it much in the older literature because on older/slower hardware, running at 1 ms resolution caused excessive overhead and acute starvation, so what programmers thought was helping actually made things worse; however, on the newer/faster GHz machines 1 ms should be fine. I have run 1 ms on an AMD 2500+ for over a year without problems, and that is a fairly old/slow machine by today's standards.

I also have test code that I use to prove it on each machine I use, and on the GHz machines it's pretty close to 1 ms.

Want/need code? Just ask (I have to dig it out, so I won't post it until needed).

Thread #586376. Printed from Allegro.cc