Hello,
Which is better (for efficiency, speed, etc): having two programs run side by side or one program with multiple threads doing the same job as the two applications? In either case the two programs/libraries will be allegro and a threaded network library. And I guess, are there any other implications, e.g. as two separate programs will allegro have to rest more in the main loop?
I'm trying to weigh up the pros/cons between a networked game that initialises itself both as the server and as a client (as a player), and using the same library but as a separate running application for the server and the client (e.g. like bzflag). Ignoring the details of determining the controlling user, etc.
As for the network library I haven't decided yet, but I'm more drawn to RakNet. But enet interests me, as does the library on bafs server - daynse or something - forgot its name.
Well, how the scheduler treats processes and threads depends on the platform; Windows and Linux behave differently here. But are you really concerned with which is more efficient? (Both methods will be efficient enough.)
What's the easiest and most convenient way for you? Personally I would separate the two as I find it more logical. A computer can act as a game server even when I don't want to play the game on that computer. But that's just me. Both ways work.
Another option you might want to consider is to have one application and two threads, but using cooperative multithreading instead of preemptive multithreading. This is sometimes called fibres. You get some of the advantages of threads (easy communication between threads, and you only need one copy of the game data in memory) without all the headaches of preemptive multithreading. Of course, sometimes you do need preemption or maybe you'd like your program to make use of multiple CPUs.
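If you've never played with fibres, here's roughly what the shape looks like -- a minimal sketch using the Win32 fiber API (on other platforms you'd reach for ucontext or a coroutine library instead); the network fiber and the commented-out calls are just placeholders, not anything from an actual library:

// Minimal sketch of cooperative scheduling with Win32 fibers.
// A fiber runs until it explicitly calls SwitchToFiber(), so the game and the
// network code never interrupt each other mid-update and can share data freely.
#include <windows.h>

LPVOID main_fiber;     // the fiber the main thread is converted into
LPVOID network_fiber;  // cooperatively scheduled network pump (placeholder)

void CALLBACK network_loop(LPVOID /*param*/)
{
    for (;;)
    {
        // poll_sockets();               // placeholder: read/write pending packets
        SwitchToFiber(main_fiber);       // hand control back to the game loop
    }
}

int main()
{
    main_fiber    = ConvertThreadToFiber(NULL);          // main thread becomes a fiber
    network_fiber = CreateFiber(0, network_loop, NULL);  // 0 = default stack size

    bool running = true;
    while (running)                      // the game loop
    {
        // update_game(); draw_frame();  // placeholders for the real work
        SwitchToFiber(network_fiber);    // let the network fiber run until it yields
        // running = !user_wants_quit(); // placeholder exit condition
    }

    DeleteFiber(network_fiber);
    return 0;
}

Because each fiber only gives up control at the points it chooses, you never need locks around the shared game state -- which is most of what makes preemptive threads a headache.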
tbh, I was more drawn to the separate server, as then all I have to code in a game is client connections. But will an Allegro game be OK with this? From experience, Allegro tends to use up 100% of the CPU most of the time.
If your game doesn't run at 100 cycles per second (like mine does) but at 50 or 60, you can put a rest(1) in the game loop and it won't take all the CPU.
Allegro tends to use up 100% of the CPU most of the time
It's not Allegro, it's the game programmer's code.
Edit: It's a matter of busy waiting.
Yeah, it's definitely the busy waiting -- make sure not to use busy waiting in your game code, and not in your game server's code either (it's really easy to do in a network app, it's just so damn convenient!). Also, most of the Allegro timer examples I see implement busy waiting. A simple way to fix it: when you've locked your game to, say, 60fps, the screen has been updated, and it's not yet time to call your game's logic methods, call:
rest(10);
http://www.allegro.cc/manual/api/timer-routines/rest
This assumes you have less than 100 lps/fps; if you have more, switch it down to 5, etc.
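In other words, something roughly like this (just a sketch: game_time would be a volatile counter incremented by an Allegro timer you install yourself with install_int_ex(), and quit/do_logic()/draw_screen() stand in for your own code):

// Rough sketch of a loop that rests instead of busy waiting.
while (!quit)
{
    while (game_time > 0)      // catch up on any pending logic ticks
    {
        do_logic();
        game_time--;
    }

    draw_screen();             // the screen is now up to date

    if (game_time <= 0)        // not yet time for the next logic tick...
        rest(10);              // ...so give the CPU away instead of spinning
}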
It's much easier, IMHO, to do it as two separate applications. If you're stuck with C++ (possibly for Linux, portability, or comfort) I'd make the game server a console app (many commercial games do this). Preferably I'd do the game server in C# or VB.NET, because you get a very convenient timer class, easy-to-build forms (much nicer than a console interface), and managed sockets/pinging/web services (making it really damn easy to keep track of who is running an internet server), etc. .NET sockets can even be put into an event-based mode, where your code doesn't have to check for new connections or new data -- it's told when it has them. "Bye bye" to server-side busy waiting! Then you can simply use a timer that ticks every so often to do server-side game logic, and you're mostly done...
I was planning on making the server a simple console app using RakNet, but then I thought I could always create it with Allegro in conjunction with the 'allegro console' library to add a few basic scrolling text areas to display server information and allow server commands, etc.
Well, if you keep busy waiting, your game will attempt to run as fast as the computer allows. People who are playing your game on a computer worse than your testing machine will be thankful.
From my experience with some professional Windows servers, it was really infuriating to watch a software server run a weekend update for 8 hours while the CPU (busy time) peaked at 23%. The middleware was acting multitask-friendly at all times, and there was no way for a programmer or administrator to tell it to "run as fast as you can, dummy".
Upgrading the hardware was no help: CPU usage would just drop further and the speed would be identical...
Well, if you keep busy waiting, your game will attempt to run as fast as the computer allows.
You only perform a rest if there is time left over -- if your game only does 50 or so lps/fps and the computer is capable of much more, you shouldn't just sit there in a busy loop. And on old computers, if the game has extra time, you should be resting too, as this frees time slices up for other threads (maybe they have AIM or Winamp running in the background). This will speed your game up in the end, since the OS is going to jerk your time slice away at some point anyway; it's just better to let it know when you're ready to have it jerked away.
A game server/game is different from a generic network application -- I'm not saying "give up your game loop cycles", I'm saying give up the extra cycles that would be spent in a wait loop doing nothing anyway, as that slows the overall computer down. If your game runs at full speed on my PC at 25% usage, it should only take 25%, and on someone else's machine where it needs 50%, it will only take 50% -- etc. This is easily accomplished.
Off-topic: if you find a generic application running too slow or too fast, you can promote/demote its priority manually via the Task Manager.
You could use a high-precision timer to determine how much time remains after all your logic (and display), before beginning a new cycle. That information would let your program decide whether it should yield once, wait (and for how long), or busy-wait.
On paper, that looks to me like a great solution for both low-end and high-end PCs.
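Something like this, maybe (only a sketch: get_time_ms() is a made-up millisecond-clock helper, the thresholds are guesses, and rest() is used for the coarse sleep):

// Inside the main loop: measure what's left of the frame budget, sleep the
// coarse part, yield if there's only a sliver, and spin out the final moment.
const long FRAME_MS = 1000 / 60;      // budget per frame at 60 updates per second

long frame_start = get_time_ms();
do_logic();
draw_screen();

long remaining = FRAME_MS - (get_time_ms() - frame_start);

if (remaining > 12)
    rest(remaining - 10);             // lots of slack: really sleep, keeping a safety margin
else if (remaining > 1)
    rest(0);                          // a sliver of slack: just yield the time slice

while (get_time_ms() - frame_start < FRAME_MS)
    ;                                 // busy-wait the last bit for precise timing

On a fast PC most of each frame is slept away (low CPU, less heat); on a slow PC the remaining time never goes positive and the code simply runs flat out.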
But the cpu-friendly "wait" can't work with just Allegro rest().
I cannot speak from experience as I've not tested recently, but the docs and posts I've seen in the last 6 months all point to the same issue: The Allegro timers are not accurate on Windows.
The 5 ms granularity means:
Coding rest(1) is totally identical to coding rest(5)
The way I understand it, depending on random events, a rest(5) will wait between almost-0 ms (if the call happened a few ticks before the internal 5-ms-aligned timer) and 9.99ms if the call happened a few ticks after.
Or maybe the minimum is actually 5.00ms, but still, I'm pretty sure you can get a 9.99ms wait when you ask for a 1ms delay. And I'm talking of the case when your game is the only task running!
If your screen has a vertical frequency of 85Hz, there's 11.76ms between each vertical retrace. If you put in a rest(1), you can only be sure to "catch" every retrace if your whole logic and update can be done in about 1.77ms. If your game runs slower than that (i.e. < ~565 FPS) you risk missing a retrace. Even if the risk is low, the test happens 85 times per second...
I don't think missing a retrace is important, but it's the speed inconsistency that bothers me.
Note: Please feel free to correct me if I'm wrong. I'd love to see a solution that would automatically:
A) run as fast as possible on slow PCs - 100% CPU but good result
B) run "cool" on powerful PCs, taking as little resources as actually needed.
C) save power (-> heat ->fan noise) on laptop computers
Use GetTicks / gettimeofday for a high precision timer.
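For example, a get_time_ms() helper (the name is made up, it's not a library call) could look like this -- note that plain GetTickCount only has roughly 10-16ms resolution on Windows, so QueryPerformanceCounter or timeGetTime() would be better there if you need real precision:

#ifdef _WIN32
  #include <windows.h>
  long get_time_ms(void)
  {
      return (long)GetTickCount();              // coarse but simple millisecond clock
  }
#else
  #include <sys/time.h>
  long get_time_ms(void)
  {
      struct timeval tv;
      gettimeofday(&tv, NULL);                  // microsecond clock on POSIX systems
      return tv.tv_sec * 1000L + tv.tv_usec / 1000L;
  }
#endif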
The trick here is the difference between a sleep and a yield. If you "yield" your thread every logic update, you're still using 100% CPU. Whenever the OS sees a yield, it sees if any other processes want to run. If no, or their load is very light, it reschedules your program.
If your program sleeps, it "yields", and tells the OS not to bother rescheduling it for x milliseconds. So you want to "yield" if your program needs as much CPU as it can get, and sleep otherwise. Sleeping for 0 milliseconds is a good way to yield.
The following code implements such a system, and maintains a constant number of logic updates per second while allowing the FPS to drop on slower machines. It never lets the FPS drop below 1, so that very old machines can at least get a little feedback. The basic idea is this: If we're redrawing graphics and haven't updated logic, sleep. If we're redrawing graphics and the logic has changed, yield.
Globally:
// Number of logic updates per second
#define CYCLES_PER_SEC 50

volatile int _logicTime = 0;           // incremented CYCLES_PER_SEC times each second
volatile bool _engineWarning = false;  // display a warning if the computer is too slow to run this game

// These variables let us calculate FPS:
int graphicsCounter;  // keeps track of how many times we drew to the screen
int logicCounter;     // keeps track of how many times we updated the logic
int currentFPS;       // the current frames per second

void ticker(void)
{
    if (_logicTime > CYCLES_PER_SEC)
    {
        _engineWarning = true;
        _logicTime = 0;
    }//if logicTime
    else
        _logicTime++;
}//ticker
END_OF_FUNCTION(ticker);
Initialization:
LOCK_VARIABLE(_logicTime);
LOCK_VARIABLE(_engineWarning);
LOCK_FUNCTION(ticker);
if (install_int_ex(ticker, BPS_TO_TIMER(CYCLES_PER_SEC)) < 0)
    cout << "Couldn't start ticker function" << endl;
Game Engine:
int logicLimiter = 0;       // if we aren't drawing anything to the screen and still can't
                            // keep up, draw graphics at least once per second anyways
bool logicChanged = false;  // true if we updated the logic and should redraw the graphics

while (!stopClient)
{
    logicLimiter = 0;
    logicChanged = false;

    while ((_logicTime > 0) && (logicLimiter < CYCLES_PER_SEC))
    {
        doLogic();

        logicChanged = true;
        logicCounter++;
        _logicTime--;

        logicLimiter++;
        rest(0);
    }//while logicTime

    if (logicChanged)
    {
        drawGraphics();
        rest(0);
        graphicsCounter++;
    }//if logicChanged
    else
        rest(20);
}//while !stopClient
In the logic part of your code, to calculate FPS:
if (logicCounter >= CYCLES_PER_SEC)
{
    currentFPS = graphicsCounter;
    _gfxTime = 0;
    graphicsCounter = 0;
    logicCounter = 0;
}//if logicCounter
In the logic part of your code, display a warning if logic updates were skipped:
if (_engineWarning)
{
    _engineWarning = false;
    cout << "WARNING: logic updates were skipped. Your computer is probably too slow to run this game." << endl;
}//if _engineWarning
This is for a 2D game, if you were doing a 3D game you might want to update your graphics even if there was no logic update. (But you would still sleep.)
rest(20) will have a duration of between 20 and 25ms, even if nothing is running in the background.
When your game's thread gets back "on top", the ticker will have run 1 or 2 times in a row, since 20ms or more have passed.
Control then switches back to the "main thread" and your regulation mechanism runs the logic 2 times in a row to catch up.
--> The speed is irregular. You aim for 50 updates per second, and you get them, except instead of:
1 (wait 20ms) 2 (wait 20ms) 3 (wait 20ms) 4 (wait 20ms)
you often get:
1 2 (wait 40ms) 3 4 (wait 40ms)
Or I'm just completely paranoid...:-/
Audric: I have found rest to be very accurate on an unburdened system. See here for more, and some stats from Windows and Linux.
The "inaccuracy" of sleeping on an unburdened system has nothing to do with Allegro's timers. It's only that the OS won't reschedule a sleeping program to run until some minimum time has elapsed. If I sleep for 20 ms, I tend to get 20 ms (if OS is unburdened). If I sleep for 1 ms, I tend to get at least 10-20 ms, depending on the OS.
But it doesn't matter in the slightest. The beauty of a 'proper' game engine, where logic and graphics are separated, is that you can have quite large variations in the time between two consecutive logic updates. No one will notice as long as you have the correct number of updates each second.
The kind of precise timing you're longing for doesn't exist in desktop OSes. If you think you can get millisecond-accurate scheduling, guess again.
Thanks DMC, there have been so many threads about this, I'm still confused. I'll convert my work-in-progress to your method for testing. (I was simply using vsync + draw + 1 to N logic updates to catch up on a tick timer)
Neil, sorry for the derailing :/ So, back on the topic of server and client apps...
By the way, the server can be a blind console, so it wouldn't require Allegro.
By the way, the server can be a blind console, so it wouldn't require Allegro.
I would make it that way if I were doing it in C, since there's no reason for it to even need Allegro; it also becomes extremely portable (not that Allegro isn't portable -- but you could compile it on anything that has sockets and C, even a system Allegro hasn't been ported to... etc.)
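For what it's worth, the skeleton of such a blind console server is tiny. Here's a rough sketch with plain BSD sockets and no Allegro at all -- the port number and the handle_client_data() hook are made up, error handling is skipped, and it serves one connection at a time just to keep the plumbing visible:

#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main()
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(4000);            // made-up port

    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, 8);
    printf("server listening on port 4000\n");

    for (;;)
    {
        int client = accept(listener, NULL, NULL);  // blocks until someone connects
        char buf[512];
        int  n;
        while ((n = recv(client, buf, sizeof(buf), 0)) > 0)
        {
            // handle_client_data(buf, n);          // made-up hook: parse and reply here
            send(client, buf, n, 0);                // echo back, just to show the plumbing
        }
        close(client);                              // client went away
    }
    return 0;
}

A real server would of course select() over the listener plus all of its connected clients instead of blocking on one connection at a time.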
The basic idea is this: If we're redrawing graphics and haven't updated logic, sleep. If we're redrawing graphics and the logic has changed, yield.
Sorry for keeping the train wreck going, but this doesn't make any sense to me. If the logic needs an update, why would we sleep or yield? Shouldn't we only yield as an alternative to busy waiting? If the application has things it needs to do, I don't understand why we should sleep or yield; unless our application is running too fast, we shouldn't. (That's the whole reason we have a busy-wait loop: to slow the program down -- we don't busy wait for no reason, only when the program is running too fast and has extra time...)
If you don't yield when you decide it's the right moment, the OS will take control when IT decides. It's better to yield yourself than risk getting caught in the middle of a blit to screen.
(edit: scrapped a justification on rest(), irrelevant)
Right, but you have to decide when the right moment is, and the right moment is never when you have work that needs to be done; it's when you have extra time (and you almost always have extra time).
So if the screen needs to render, or the logic needs to update, you should do that before considering waiting -- if the system is too slow, you never call wait/etc. and the OS will yank your thread away when it needs to, and you can't help it; adding wait/rest would only make your program suffer more. If your computer is fast enough and it's sitting in a busy-wait loop (pushing CPU usage to 100%), you should yield/rest then... no?
I don't understand.
I created a new thread for the CPU usage discussion here. (I think we've derailed this one enough)
In my MMORPG I use different exes: one is the client and one is the server. In the code I use a variable so the code knows which main.cpp file is interpreting it. For testing, I open the server and client on the same computer. Because the game is not threaded, some data is lost. This is because when the client or server is sending data, it cannot receive data at the same time.
piccolo: you fail.
I don't understand your comment. The system I explained works quite well.
For testing, I open the server and client on the same computer. Because the game is not threaded, some data is lost. This is because when the client or server is sending data, it cannot receive data at the same time.
[quote Me]
piccolo: you fail.
[/quote]
It's not because they are not threaded. Your OS is multitasking. You have not set the right switch mode (RTM).
I can give you my code (which is not the best, by far) where a server running in the background sends and receives data from multiple clients on the same computer. I will not, though, since you should go search for the set_display_switch function in the damn manual!
Hope it helps.
EDIT: some various edits.
Oh, that's what you meant. My typing was misleading.
Because the game is not threaded, some data is lost. This is because when the client or server is sending data, it cannot receive data at the same time.
That should have had its own line with a space.
The lack of threading is why data is lost. That is what I meant to say.
EDIT:
This can also be fixed by adding a traffic light system into the network protocol.
I think if you do this it will be slower than threading for a highly active network.
Funny. I didn't have any problems with data loss when I wrote a networked game (or source only for non-Windows users) that didn't use threads.
Are you using UDP or TCP?
I'm using TCP.
The main data that gets lost is move requests. When one client sends a move request, the rest are lost.
That shouldn't happen. TCP doesn't do that. My game uses TCP as well, and I have had zero problems with data loss. You're obviously doing something wrong, or misinterpreting some results somewhere. You should start a thread (EDIT: no, not with pthreads) for this problem and share some code.
piccolo, that's not because of the lack of multithreading; your program is broken.
I've written console application chat programs that were not multithreaded, and chatted with several users on a network, and/or several fake users on the same machine as the server -- it all worked fine, no multithreading, no data loss, TCP of course.
Edit: the way TCP is implemented, it has its own buffer, so your application does not need to be multithreaded -- if you wait 5 seconds to read the data, it should still be there. /edit
UDP on the other hand, you can expect packet loss, especially if you're saturating the network with UDP packets...
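To show what the single-threaded approach can look like, here's a rough sketch of polling the network once per pass through the game loop, using select() with a zero timeout so it returns immediately (client_socket and handle_packet() are placeholders; Winsock's select() works essentially the same way):

#include <sys/select.h>
#include <sys/socket.h>

void poll_network(int client_socket)
{
    fd_set readable;
    FD_ZERO(&readable);
    FD_SET(client_socket, &readable);

    timeval timeout = { 0, 0 };                    // don't wait at all

    if (select(client_socket + 1, &readable, NULL, NULL, &timeout) > 0 &&
        FD_ISSET(client_socket, &readable))
    {
        char buf[512];
        int n = recv(client_socket, buf, sizeof(buf), 0);
        if (n > 0)
        {
            // handle_packet(buf, n);               // placeholder: feed your game logic
        }
        else
        {
            // 0 or -1 means the connection closed or errored: disconnect here
        }
    }
}

// Called once per pass through the game loop, e.g.:
//   while (!quit) { poll_network(sock); do_logic(); draw_screen(); rest(10); }

The data that arrived while you were busy drawing is still sitting in the OS's TCP buffer when you get around to reading it.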
Hmmm, OK. I thought that because the game has a game loop and a network loop that are not multi-threaded and run one after the other, data would be lost, because while in the game loop you cannot receive stuff that is meant for the network loop.
That's why I said you would have to thread both loops so they run at the same time.
Or let the sender know when the server is in the network loop, using a stoplight system.
It sounds like a bad design to me...
Why can't you just poll the network for messages in your game loop?
All data is pooled in the network loop, then processed in the game loop.
Because when his app is in the background, it is not running.
The app just hangs until it gets focus again.
If only he had read me correctly and done the search I suggested... I finally second Michael Jensen about the design thing.
Try to add this in your initialization routine:
if (set_display_switch_mode(SWITCH_BACKGROUND) != 0)
{
    fprintf(stderr, "Warning: can not change switch mode to SWITCH_BACKGROUND\n");
    if (set_display_switch_mode(SWITCH_BACKAMNESIA) != 0)
    {
        fprintf(stderr, "Error: can not change switch mode to BACKAMNESIA\n");
        return FALSE;
    }
}
I'm telling you, I'm using that already.
#include "game.h"
//using namespace std;
//#include <iostream>

int initAll()
{
    allegro_init();
    install_keyboard();
    install_timer();
    install_mouse();

    computerId = 0;

    // install a MIDI sound driver
    if (install_sound(DIGI_AUTODETECT, MIDI_AUTODETECT, NULL) != 0)
    {
        return 1;
    }

    // ##### I put this in for the fullscreen F2 toggle
    set_color_depth(24);
    if (set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0) != 0)
    {
        return -1;
    }
    set_display_switch_mode(SWITCH_BACKGROUND);  // allows the game to run in the background
    /*
    BITMAP *buffer;

    buffer = create_bitmap(SCREEN_W, SCREEN_H);
    */
I'm telling you, I'm using that already.
You are just now telling me that.
You do not check its return value. How do you know you are really using a good switch mode?
Is it configured on both client & server?
SWITCH_BACKAMNESIA works fine in fullscreen, while SWITCH_BACKGROUND works in windowed mode.
If all the previous things are OK, it is your implementation that is buggy.
Yes, it's the same on both client and server. I think I'm going to make 3 more network core instructions:
#1. requesting send
#2. send ok
#3. send not ok
#4. (maybe) send complete
So, uh, you are going to duplicate what TCP is already doing, only worse?
Just make it work like you have it. TCP does not drop packets (it drops the connection if a certain packet takes too long to reach its destination).
You do not check its return value.
So?
JH is right, TCP will not drop your packets -- I've run a console server app and an Allegro app (of my own design) side by side on the same machine and not had any problems thus far, and I've even tested it on several machines and networks...
You start the console app, let it serve clients, start a client, and connect. If the connection becomes unstable, the TCP connection disconnects, the end.
It will never drop, lose, re-order, or scramble your packets; if it absolutely needs to, it just terminates your connection.
The OS is multithreaded, so your apps don't need to be, though having them not take 100% of the CPU in a wait loop, as discussed earlier, is nice. If the OS wants to do something, it does it without asking permission; it will yank the thread away, process the TCP/IP stack, and then give the thread back in its own sweet damn time.
There's something wrong with your program or your understanding of network programming.
Edit:
set_display_switch_mode has nothing to do with other programs running in the background; it only affects how your graphical application behaves with regard to focus -- I suppose it could make the program actually suspend while it's minimized if it's set wrong. If your dedicated server is graphical (and it shouldn't be), this might be why your program doesn't work (e.g. you're trying to run two graphical programs at once that are both vying for system/vram bitmaps and/or locks on them -- that's kind of a no-brainer), and there may be nothing you can do but design it differently. There's no reason a dedicated server should be graphical... if you want to view the game in progress, etc., you should create a graphical client that connects to the dedicated console server.
Edit: Also, on Windows I'm not sure you can run two separate applications at once that use the same implementation of the Allegro timers. But I might be mistaken.
Edit: And I'm pretty sure any calls to vsync() will block indefinitely while your app is minimized, etc., etc... this is a bad design, just don't do it -- use a console app, a service, or a Windows GUI app as your dedicated server.