Allegro.cc - Online Community

Allegro.cc Forums » Programming Questions » A threaded loading screen

Credits go to pkrcel, SiegeLord, and Thomas Fjellstrom for helping out!
This thread is locked; no one can reply to it.
A threaded loading screen
André Silva
Member #11,991
May 2010
avatar

Hey everybody. I want my game to have a loading screen: I want the game to show a simple animation, while, in another thread, the content is being loaded. Naturally, this is a lot better than the game freezing and only resuming when the contents are done loading.

It's trivial: create an Allegro thread that will load the content, while the main thread has an event queue on a timer that draws the animations. So far so good. There is one problem, however.
Let's consider that the thread that does the loading is a secondary thread, created with al_create_thread, whereas the animation logic+drawing is done on the main thread.

The current bitmap for the main thread is the display's backbuffer, naturally. I need it to be that way so that I may draw the animation. The loading thread loads a lot of different files, including images... which will not be tied to the display, because the display's backbuffer is only "current" for my main thread, not for my loading thread.
This means that my bitmaps will be (gulp) memory bitmaps.

How can I get around this? I tried having a mutex that each thread locks and unlocks: the main thread locks it to draw the animation to the display, and the loader thread locks it to load a bitmap. This visibly increased the loading time, and it doesn't even work -- my bitmaps are still memory bitmaps.

What should I do so that my main thread can draw, while my loading thread loads the appropriate video bitmaps? I'm sure there's a decent way to do this, as loading screens are a fundamental part of any game.
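
For reference, the skeleton I have in mind is roughly this (loader_func and the animation drawing are placeholders, and the "done" flag would need proper synchronization in real code):

#include <allegro5/allegro.h>

// Runs in the secondary thread created with al_create_thread.
static void* loader_func(ALLEGRO_THREAD* thr, void* arg)
{
    (void)thr;
    bool* done = (bool*)arg;
    // ... load all of the game content here ...
    *done = true;   // in real code, protect this with a mutex or an atomic
    return NULL;
}

// Runs on the main thread, which owns the display.
void run_loading_screen(void)
{
    bool done = false;
    ALLEGRO_THREAD* loader = al_create_thread(loader_func, &done);
    al_start_thread(loader);

    ALLEGRO_TIMER* timer = al_create_timer(1.0 / 30.0);
    ALLEGRO_EVENT_QUEUE* queue = al_create_event_queue();
    al_register_event_source(queue, al_get_timer_event_source(timer));
    al_start_timer(timer);

    while (!done) {
        ALLEGRO_EVENT ev;
        al_wait_for_event(queue, &ev);
        if (ev.type == ALLEGRO_EVENT_TIMER) {
            // draw_loading_animation();   // placeholder for the animation drawing
            al_flip_display();
        }
    }

    al_join_thread(loader, NULL);
    al_destroy_thread(loader);
    al_destroy_event_queue(queue);
    al_destroy_timer(timer);
}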

SiegeLord
Member #7,827
October 2006
avatar

André Silva said:

The current bitmap for the main thread is the display's backbuffer, naturally. I need it to be that way so that I may draw the animation. The loading thread loads a lot of different files, including images... which will not be tied to the display, because the display's backbuffer is only "current" for my main thread, not for my loading thread.
This means that my bitmaps will be (gulp) memory bitmaps.

That's how it is meant to work. You load memory bitmaps in a secondary thread, and then convert them (using al_convert_bitmap) on the main thread. Note that this is a 5.1.x-only feature; in 5.0.x you can use al_clone_bitmap instead for a slightly less efficient alternative. There's an ex_loading_thread example in the 5.1 distribution.
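
In rough outline, the two halves look something like this (error checking omitted, and it assumes the image addon was initialised):

#include <allegro5/allegro.h>
#include <allegro5/allegro_image.h>

// Loading thread: force memory bitmaps, since there is no display context here.
ALLEGRO_BITMAP* load_in_thread(const char* filename)
{
    al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
    return al_load_bitmap(filename);   // comes back as a memory bitmap
}

// Main thread (which owns the display), once the loader is done.
void make_video(ALLEGRO_BITMAP** bmp)
{
    al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP);
    // 5.0.x route: clone into a video bitmap and drop the memory copy.
    ALLEGRO_BITMAP* video = al_clone_bitmap(*bmp);
    al_destroy_bitmap(*bmp);
    *bmp = video;
    // On 5.1.x you could instead call al_convert_bitmap(*bmp) and keep the same pointer.
}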

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

André Silva
Member #11,991
May 2010
avatar

Thanks for the reply. I forgot to mention that I'm on 5.0.10, so converting the bitmaps after they're done loading will be my main way to do it... I'll give it a go and then share my experience. Thanks!

EDIT: I looked into it, and sure enough, that solution is the way to go. It works well for bitmaps that will be used after the loading screen... but not for bitmaps that need to be used during it!

Most of the loading process is the program loading images and other files. But a big part of it is also the creation of the level buffer images (my level's geometry is static and complex, so it's better to save it all in a few buffer images and then draw those). Those buffer bitmaps are created during the loading process (and as such, are memory bitmaps), and then, primitives and (video or memory) texture bitmaps are drawn on top of them to make the geometry.

The texture bitmaps I draw on top of them can either be memory bitmaps (if I only convert my bitmaps AFTER the loading process), or video bitmaps (if I convert the new bitmaps every frame tick on the loading screen animation, just like the 5.1.X example does). But either way, won't the process of creating the level buffer images be VERY slow, because they are memory bitmaps? What can I do to overcome it?

Thomas Fjellstrom
Member #476
June 2000
avatar

It may actually be quicker to build up those big bitmaps in memory, as you don't have to worry about uploading all of that pixel data to the GPU several times just to copy it around on the GPU.

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

pkrcel
Member #14,001
February 2012

I thought that memory-to-memory was WAY quicker than memory-to-GPU and back.

Aren't memory bitmaps slow to be actually drawn on screen but fast enuff to be blended in memory and only then converted?

It is unlikely that Google shares your distaste for capitalism. - Derezo
If one had the eternity of time, one would do things later. - Johan Halmén

André Silva
Member #11,991
May 2010
avatar

In all my years of Allegro 5, I always tried avoiding memory bitmaps like the plague. The manual doesn't do a good job explaining why and when you'd want to use memory bitmaps or video bitmaps, so I always went for the ones that didn't make my game lag. Obviously, there's more to this than I'm aware of. But I'll certainly profile a before-and-after (level buffers as video bitmaps, and then as memory bitmaps) later today.

EDIT: I tried different combinations and timed how long it takes to create the buffers:

  • video buffers, video textures: 0.28 seconds

  • memory buffers, video textures: 0.98 seconds

  • video buffers, memory textures: 3.26 seconds

  • memory buffers, memory textures: 0.68 seconds

This just leaves me more confused... I don't know how to trust video or memory bitmaps any more, and just thinking about how this will mess up my game's loading process makes me uneasy...

pkrcel
Member #14,001
February 2012

Well, I'm not very surprised by your results.

My (limited) understanding is that, in general, you have to avoid transferring back and forth between memory and the GPU, and there is an asymmetry in that (I guess Allegro also does its part in there).

The least expensive path in your case seems:

  • assets on disk ->

  • assets in memory ->

  • memory buffers ->

in a dedicated thread, and then

  • upload to GPU of said buffers

in the main (i.e. the thread owning the display) thread.

What has you so worried? I'm curious because this interests me also and I haven't yet banged my head on the wall.

If you load the loading screen resources (which I suppose are minimal) directly to the GPU in the main thread and then have the loader thread do its job in the background, you can periodically (timer?) query a shared resource to know the progress, and upon finishing you join the loader thread and progressively al_convert the buffers once they can be owned by the main thread.

I don't know how long it could take to al_convert that bunch of buffers though, but I guess it's fast (it's a batch upload).
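
Something like this is what I have in mind for the shared progress resource (just a sketch; the struct and names are invented):

#include <allegro5/allegro.h>

typedef struct {
    ALLEGRO_MUTEX* mutex;    // created with al_create_mutex()
    int loaded;              // items finished so far
    int total;               // total items to load
    bool finished;
} LOAD_PROGRESS;

// Loader thread: bump the counter after each item.
void report_progress(LOAD_PROGRESS* p)
{
    al_lock_mutex(p->mutex);
    p->loaded++;
    if (p->loaded >= p->total)
        p->finished = true;
    al_unlock_mutex(p->mutex);
}

// Main thread: peek at it on every timer tick to draw the progress bar.
float query_progress(LOAD_PROGRESS* p, bool* finished)
{
    al_lock_mutex(p->mutex);
    float fraction = (p->total > 0) ? (float)p->loaded / p->total : 0.0f;
    *finished = p->finished;
    al_unlock_mutex(p->mutex);
    return fraction;
}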

It is unlikely that Google shares your distaste for capitalism. - Derezo
If one had the eternity of time, one would do things later. - Johan Halmén

Thomas Fjellstrom
Member #476
June 2000
avatar

Things to watch out for:

  • Reading from textures into ram.

  • Writing into textures from ram.

Both of those can take more time than a person might otherwise expect (without knowing how GPUs work).

If you're drawing complex scenes to a texture, you'll want to use video bitmaps regardless. For simple 2D scenes which are just some bitmaps/sprites drawn to a surface, AND you want to do it in a separate thread, it's probably best just to keep it all in RAM. But as your test showed, video bitmaps were faster, so if you're doing it in your main drawing, then just use video bitmaps.

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

André Silva
Member #11,991
May 2010
avatar

pkrcel said:

The least expensive path in your case seems:

But according to my tests, if the geometry textures (assets) are memory bitmaps, and the level rendered geometry buffers are memory bitmaps too, I get suboptimal speeds. It's not a crushing difference compared to video bitmaps onto video bitmaps, but it's still not quite the least expensive path.

Quote:

What has you so worried? I'm curious because this interests me also and I haven't yet banged my head on the wall.

I'm more worried about knowing exactly what I should do to solve the memory/video bitmap shenanigans, as well as the massive increase in complexity compared to what I had idealized. For one thing, I'll have to change my bitmaps from being a pointer to an Allegro bitmap into being a pointer to a structure that in turn points to a bitmap, and also has a boolean that controls whether or not the bitmap needs to be converted. Then I'd need a global list that holds all of the loaded bitmaps, so that I may iterate through them and convert them once my loading process is done.

And that wouldn't even be so bad, if I was at least sure of what to do. I've got the idea of how to load normal images (load them on the thread as memory bitmaps, then convert at the end), but I'm at a complete loss for my level buffer images. Like I said above, drawing memory bitmaps on memory bitmaps is sub-optimal, especially considering what Thomas said, but drawing video bitmaps on video bitmaps is kind of impossible in a non-display-owning thread... And let's not even think about drawing memory bitmaps on video ones, or vice-versa!
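
To make it concrete, the harness I'm picturing is something like this (just a sketch, and the names are made up):

#include <allegro5/allegro.h>
#include <vector>

struct ManagedBitmap {
    ALLEGRO_BITMAP* bmp;
    bool needs_conversion;   // true while it's still a memory bitmap
};

// Global list of everything loaded so far, so the main thread can sweep it later.
static std::vector<ManagedBitmap*> loaded_bitmaps;

// Main thread, after the loading process is done (5.0.x style, via al_clone_bitmap).
void convert_all()
{
    al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP);
    for (ManagedBitmap* m : loaded_bitmaps) {
        if (!m->needs_conversion)
            continue;
        ALLEGRO_BITMAP* video = al_clone_bitmap(m->bmp);
        al_destroy_bitmap(m->bmp);
        m->bmp = video;
        m->needs_conversion = false;
    }
}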

Thomas Fjellstrom
Member #476
June 2000
avatar

If you have to build these level buffers dynamically, and in a separate thread, you're kind of stuck with mem-mem. Is there a reason you can't provide the level buffers pre-rendered?

Or heck, if you "can't" do it, maybe only the first time you load the game it has to render the full buffers, then just cache them on disk and load the pre-rendered ones on the next loads.

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

André Silva
Member #11,991
May 2010
avatar

Good idea, but it wouldn't work for me, because the levels are meant to be edited by players. I'll add my game to the depot once I have a more palpable demo, but for now, it'll just remain on GitHub.

Anyway, I was thinking... I could do it like so:

  • Have the main thread display the loading screen as usual.

  • Have the loading thread load the textures used in the level geometry.

  • Every frame of the loading screen, the main thread converts the textures (memory bitmaps) into video bitmaps (the example does this, and it's probably simple and quick).

  • Once the loader thread has finished the textures, it waits for the next tick on the main thread (can be easily achieved with the likes of ALLEGRO_COND).

  • This will make sure that every texture the level will ever need is a video bitmap.

  • After that, the loader thread calculates the level buffer image size, number, etc., and creates them (as memory bitmaps).

  • The loader thread once again waits for the main thread to convert the new bitmaps into video bitmaps.

  • Once that is done, the loader thread can now draw the textures (video bitmaps) onto the buffers (video bitmaps) without any problems or slowdowns.

This should work... unless the video-on-video drawing can only be accelerated if the thread that does the drawing is the one that "owns" the display.
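
The "waits for the main thread" part is what I'd do with ALLEGRO_COND, roughly like this (sketch only, no error handling):

#include <allegro5/allegro.h>

typedef struct {
    ALLEGRO_MUTEX* mutex;     // al_create_mutex()
    ALLEGRO_COND* cond;       // al_create_cond()
    bool conversion_done;     // set by the main thread once it has converted the new bitmaps
} HANDSHAKE;

// Loader thread: block until the main thread says the conversion happened.
void wait_for_conversion(HANDSHAKE* h)
{
    al_lock_mutex(h->mutex);
    while (!h->conversion_done)
        al_wait_cond(h->cond, h->mutex);
    h->conversion_done = false;    // reset for the next round
    al_unlock_mutex(h->mutex);
}

// Main thread: after converting, wake the loader up.
void signal_conversion(HANDSHAKE* h)
{
    al_lock_mutex(h->mutex);
    h->conversion_done = true;
    al_broadcast_cond(h->cond);
    al_unlock_mutex(h->mutex);
}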

pkrcel
Member #14,001
February 2012

André Silva said:

Once that is done, the loader thread can now draw the textures (video bitmaps) onto the buffers (video bitmaps) without any problems or slowdowns.

Not sure, but this might not be possible; I think threads that do not own the current context cannot handle ANY video bitmap.

And of course, I was referring to the least expensive path according to your constraints; video-to-video is the fastest route for all the right reasons ;D

It is unlikely that Google shares your distaste for capitalism. - Derezo
If one had the eternity of time, one would do things later. - Johan Halmén

André Silva
Member #11,991
May 2010
avatar

pkrcel said:

Not sure but this might be not possible, I think the threads that do not own the current context cannot handle ANY video bitmap.

I'm afraid you're right... I just tried a quick test, and the buffer images were not drawn to.

So I guess that's it. My only way really is to work on everything as a memory bitmap, and only at the very end convert to video.

Thank you all for your insight. I'll try actually implementing this either today or tomorrow, and when I'm done, I'll get back to this thread.

Thomas Fjellstrom
Member #476
June 2000
avatar

You can swap the current context to other threads. It's just that a context can only be attached to a single thread at a time. So it is possible to do some work in the loading thread with video bitmaps. But it could take longer than a frame on the main thread...
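
Roughly like this (you'd still need synchronization around it so only one thread holds the context at any time):

#include <allegro5/allegro.h>

// Main thread, before handing over:
void release_display(void)
{
    al_set_target_bitmap(NULL);   // detach the display's context from this thread
    // ...then signal the loader thread that the context is free...
}

// Loader thread, once signalled:
void do_video_work(ALLEGRO_DISPLAY* display)
{
    al_set_target_backbuffer(display);   // attach the context to this thread
    // ...draw video bitmaps onto video bitmaps here...
    al_set_target_bitmap(NULL);          // give it back
    // ...then signal the main thread to re-attach with al_set_target_backbuffer()...
}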

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

pkrcel
Member #14,001
February 2012

This brings up an interesting afterthought, so I might as well ask.

I once read (can't link, I hadn't bookmarked it) that this might primarily be an OpenGL limitation, while Direct3D handles multithreaded draw operations well...

Is that really the case?

It is unlikely that Google shares your distaste for capitalism. - Derezo
If one had the eternity of time, one would do things later. - Johan Halmén

André Silva
Member #11,991
May 2010
avatar

...I just realized that I use sub-bitmaps, because I use spritesheets. And when I convert the memory bitmaps into video bitmaps, the sub-bitmaps will start pointing at trash.

Okay, I've reached the conclusion that it's just not worth it. Until there is a sensible solution to this, I won't implement a loading screen. I'll just blit a static image onto the screen once, load everything on the main thread, and be on my way with the main game.

I just cannot afford to waste my time and patience on this: implementing a system that has to keep track of all of my bitmaps, refactoring the code so that image variables are actually pointers to a harness that in turn points to an image, plus the mess that is implementing a thread to begin with, sacrificing loading time because I can't do video-on-video drawing, and, on top of that, now finding some convoluted solution to my sub-bitmap problem? Fat chance.

I appreciate your help a lot. Although I haven't reached a real answer yet, you lot helped me understand how memory/video bitmaps work a bit better, as well as make my mind tick for ways on how to overcome this. Maybe one day I'll return to this and share my experience, but that certainly won't be now.

Edgar Reynaldo
Major Reynaldo
May 2007
avatar

'Thread' it yourself manually, i.e. parallel programming. For every other timer tick (or some appropriate ratio), update your loading screen. On the other ticks, load your bitmaps in the main thread a few at a time until you're done. This includes building your level images in a parallel way as well, i.e. processing part of it at a time. This involves lists of bitmaps to load and lists of geometry to draw. Then you do one piece at a time, alternating processing with your loading screen.

I guess the main thing to remember is that you only get video bitmaps in the thread with the current drawing context, which means if you use a thread to load them into memory bitmaps, you will still have to convert them on the main thread.
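
Something along these lines (the helpers are hypothetical; it's just the shape of the loop):

#include <allegro5/allegro.h>

// Hypothetical helpers: a queue of pending work and the loading screen drawing.
extern bool work_remaining(void);
extern void load_one_item(void);        // loads one bitmap / renders one chunk of level geometry
extern void draw_loading_screen(void);

void single_threaded_loading(ALLEGRO_EVENT_QUEUE* queue)
{
    bool draw_tick = false;
    while (work_remaining()) {
        ALLEGRO_EVENT ev;
        al_wait_for_event(queue, &ev);
        if (ev.type != ALLEGRO_EVENT_TIMER)
            continue;
        if (draw_tick) {
            draw_loading_screen();
            al_flip_display();
        } else {
            load_one_item();            // a few at a time, all on the main thread
        }
        draw_tick = !draw_tick;         // alternate loading ticks with drawing ticks
    }
}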

m c
Member #5,337
December 2004
avatar

You have: files on disk.
You want: sub-bitmap sprites backed by sprite sheets in VRAM.

Why not just

1) Load all the bitmap files from disk into memory bitmap objects in the background thread while updating the loading progress in the main thread; also add a pointer to each one to a list.
2) Then go through that list in the main thread and convert all of the memory bitmap objects to video bitmap objects, maybe in batches of 100 at a time with a loading screen update in between (see the sketch below).
3) Now you can make your sub-bitmaps for the actual sprite objects.
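
For step 2, I mean something roughly like this (5.0-style, using al_clone_bitmap; update_loading_screen is whatever redraws your progress display):

#include <allegro5/allegro.h>
#include <cstddef>
#include <vector>

extern void update_loading_screen(void);   // hypothetical: redraws the progress display

void convert_in_batches(std::vector<ALLEGRO_BITMAP*>& bitmaps)
{
    al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP);
    for (std::size_t i = 0; i < bitmaps.size(); i++) {
        ALLEGRO_BITMAP* video = al_clone_bitmap(bitmaps[i]);
        al_destroy_bitmap(bitmaps[i]);
        bitmaps[i] = video;
        if (i % 100 == 99)
            update_loading_screen();       // keep the screen responsive between batches
    }
}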

Keeping some in memory and some in video while the program is running sounds too complicated. Do you have more bitmaps than can fit in VRAM and you have to page them in and out based on a priority system? That is what old games had to do but that sounds pretty difficult to program.

Isn't it okay to have a loop that goes through the list, and every 100 or 1000 iterations you do your screen drawing / input handling so that the program doesn't completely freeze?

(\ /)
(O.o)
(> <)

Chris Katko
Member #1,881
January 2002
avatar

I... I don't understand why this is such a big issue. How exhaustive of a "loading screen" are we talking about? Video? A textured, lit 3-D scene? Or simply a bar filling up against a background with some text changing?

The scenario I'm envisioning shouldn't even need a separate thread as long as it gives sufficient time to the display while loading files. The only advantage of a second thread would be to reduce computation time by using a second core, but you're already disk limited.

If you're afraid of blocking events on large files, and running C++, std::future might be worth noting. But perhaps that re-opens the problems you've been discussing already.
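
Very roughly (load_big_file is a stand-in for whatever blocking load you have, and you'd draw/poll in the loop instead of the comments):

#include <future>
#include <chrono>

extern void load_big_file(const char* path);   // hypothetical blocking loader

void load_without_blocking_events(const char* path)
{
    std::future<void> job = std::async(std::launch::async, load_big_file, path);

    // Keep pumping events / drawing while the file loads in the background.
    while (job.wait_for(std::chrono::milliseconds(0)) != std::future_status::ready) {
        // handle_events();
        // draw_loading_screen();
    }

    job.get();   // propagates any exception thrown by the loader
}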

[edited]

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

pkrcel
Member #14,001
February 2012

Hey, following Chris, I might also have missed something: I thought the threaded loading would go BEYOND the loading screen.

I mean, a loading screen is a STOP on interaction and only informs the user that something's going on in the "background" (not really, it's mostly foreground) and that he CANNOT play right now.
So it doesn't necessarily need to be threaded, as others have noted.

NOW, if you have your player wandering around in gameplay AND you need to manage resources in background WITHOUT having your user stand still waiting....it's another matter entirely.

I GUESS you'd need to implement a resource managing system that loads and unloads from video RAM based on a priority system or a spatial algorithm or any other kind of custom heuristic really, and for this you're prolly (but not mandatorily) going to need proper threading.

If you're simply building up everything beforehand, the threaded loading screen is sort of an exercise with no real benefit (cause your user will not interact with the game in any case...), and a simple time-sharing alternative, like Edgar and m c sort of suggested, might be more than enuff.

It is unlikely that Google shares your distaste for capitalism. - Derezo
If one had the eternity of time, one would do things later. - Johan Halmén

André Silva
Member #11,991
May 2010
avatar

Oh, I didn't notice that my goal wasn't clear. Yes, I want a blocking loading process. Before the level loads, the player can do nothing, as the game is loading. I just brought up the whole threaded shenanigans because I envisioned an animated loading screen, so that the player knows the program is alive, but at the same time, I wanted it to have a minimally consistent framerate.
Also, the content I load is varied. Images can go from spritesheets to be used by user-created enemies, to textures used in user-created maps, to static HUD images, to variable particle images. Plus I also load sounds, different types of text files, and more.

But threading would just be overkill, and raise all the silly problems discussed in this topic.
A static screen works at the moment (especially because it currently takes around a second, so no worries), but if I expand upon this in the future, I'll likely go with a single-threaded solution, probably like Edgar or m c said. Although that would again raise the complexity a bit, in that I'd have to find a way to unify the loading process for any and all sorts of bitmaps I load, which are quite varied. Plus I'd have to find some way to control how much of the timer tick's time I have left, so I can judge what images to load before I have to render the screen.
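
For the time-budget part, I imagine something like this (the 5 ms figure and load_next_image are made up):

#include <allegro5/allegro.h>

extern bool images_remaining(void);
extern void load_next_image(void);   // hypothetical: loads one image of unknown size

// Called once per timer tick, before rendering the loading screen.
void load_for_a_while(void)
{
    const double budget = 0.005;             // reserve roughly 5 ms of the tick for loading
    double start = al_get_time();
    while (images_remaining() && al_get_time() - start < budget)
        load_next_image();
}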

It really looks like I jumped in on the "different thread" idea far too early. And for quite a while, I couldn't grasp the idea of having a loading process while drawing. And then I realized that the normal game logic does pretty much that, except you replace "loading" with "logic". Although, on the other hand, a game tick's logic procedure runs in pretty much constant time, whereas content loading might not (one texture can be 32x32, while the next can be 800x800).

I have to apologize; I completely failed to realize that, deep down, you were suggesting normal solutions to combat my over-complicated ideology! ::)

Gideon Weems
Member #3,925
October 2003

Static loading screens can be interesting, if the image is interesting and changes between runs.

Chris Katko
Member #1,881
January 2002
avatar

It's okay if the screen stutters a bit, as 99% of games do that anyway.

That could be as easy as:

if (!draw_event_fired) load_next_file();
if (draw_event_fired) draw(); // at worst, a single file went too long

If your game is showing a background and then fading to another background every, say, 15 seconds: what you could do is load/update as you intend, but during the perceptible change, block or reduce the amount of time given to loading files to ensure smooth playback.

if (!running_animation_fade)
{
    if (!draw_event_fired) load_next_file();
    if (draw_event_fired) draw(); // at worst, a single file went too long before the animation started (imperceptible)
}
else
{
    if (draw_event_fired) draw();
}

Keep in mind that any time dedicated to smooth graphics is going to delay the loading. However, if the loading screen is interesting enough, the perceived time should be lower. If the perceptible event is a long-lasting fade though (5-10 seconds), you'll definitely want to load files during it.

A step further could be to force large files to load in between fades (never during), so that the lag from them is hidden.

[edit] Come to think of it, if you draw first, then you'll likely be inside the VSYNC blanking period during the first bits of loading files. So the largest file, the one which overreaches, would use a little bit of that blanking period to hide the delay. But VSYNC also blocks (and sometimes isn't available), so what about the time eaten by waiting for VSYNC? Or is that not applicable to Allegro 5, since it's event-driven? My head hurts.

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin
