A4, SDL, A5
kazzmir

I have added an Allegro5 back-end to my game engine, Paintown, and here is my experience doing the port.

Originally Paintown was written using Allegro4. Eventually I wanted to have better support for OSX and the ability to resize windows so I ported the game to SDL (while keeping the Allegro4 back-end). Recently I have implemented most of the Allegro5 back-end well enough to play the game sans a few features.

Porting Allegro4 to SDL was not that hard. Mostly I had to assemble various extra SDL libraries that together give the same functionality as Allegro4. Allegro4 and SDL share mostly the same philosophy about 2d graphics: create memory bitmaps at will and blit them to the screen at some point. I already had a wrapper class around Allegro4 so none of my game logic had to change to support the SDL back-end.

Implementing Allegro5 was much harder. The single biggest issue I repeatedly ran into was the split between video and memory bitmaps. The main thread is the only one that should be creating video bitmaps (assuming a single display for the application) and is the only thread that should be blitting said bitmaps. All other threads should deal with memory bitmaps.
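One way to make that rule explicit is to force memory bitmaps in a loader thread before creating or loading anything. A sketch (the file path is a placeholder, and al_load_bitmap assumes the image addon is initialized):

```c
/* In a loader thread: force every bitmap created here to be a memory
   bitmap, so no GL/D3D work happens outside the main thread.
   The new-bitmap flags are thread-local state. */
al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
ALLEGRO_BITMAP * sprite = al_load_bitmap("data/sprite.png"); /* placeholder path */
if (!sprite){
    /* handle the load failure */
}
```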

The main thread should also only blit/draw video bitmaps to the backbuffer, rather than drawing on those video bitmaps and then blitting them as in Allegro4/SDL. I had a few places where I would create a bitmap, draw some rectangles and circles on it, and then blit it to the screen. Instead, it's better to create sub-bitmaps of the backbuffer and draw the rectangles and circles directly to the backbuffer. This resulted in fewer resources being used for all back-ends, because I didn't need to create an extra memory bitmap in Allegro4/SDL, nor did I need an FBO in Allegro5. I did run into the dreaded "so I add 0.5 to all coordinates?" issue, but I seem to have gotten past that.
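For example, drawing a filled rectangle into a region of the screen can go through a sub-bitmap of the backbuffer instead of a scratch bitmap. A sketch (`display` and the coordinates are assumptions, and al_draw_filled_rectangle comes from the primitives addon):

```c
/* Carve a region out of the backbuffer and draw into it directly,
   instead of drawing into a temporary bitmap and blitting that. */
ALLEGRO_BITMAP * back = al_get_backbuffer(display);
ALLEGRO_BITMAP * region = al_create_sub_bitmap(back, 10, 10, 100, 50);
al_set_target_bitmap(region);
al_draw_filled_rectangle(0, 0, 100, 50, al_map_rgb(255, 0, 0));
al_set_target_bitmap(back);
al_destroy_bitmap(region); /* frees only the sub-bitmap, not the backbuffer */
```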

Paintown is set up to show a loading screen while various resources are being loaded, and this is where I learned about the difference between memory and video bitmaps. After initially getting things sort of working, the game ran, but extremely slowly (~3 fps). The reason was that the loading threads were creating memory bitmaps by default, because no 'current_display' was set in the TLS structure. When a new thread is created, the TLS structure is initialized mostly with 0's for every field. I still use this setup, where loading threads create memory bitmaps, but now I convert them to video bitmaps before they are used in the main thread (more on this later).

I was experiencing many segfaults during development that were very hard to debug until I finally ran valgrind. It turned out that the problem was in the way I was using the ttf addon. The loading threads were calling al_get_text_width and al_get_font_height, but those functions actually try to draw the text and measure the size of the pixels that come out. Before the loading thread was created, I had already used the fonts in the main thread to draw the menu, so the ttf addon drew the glyphs using its cache of video bitmaps that were created in the main thread. When the loading thread tried to use these same fonts, a segfault occurred because the video bitmaps from the main thread could not be used in a thread where there was no display set up.

The solution to the ttf problem was to keep two versions of each font: one that uses video bitmaps and one that uses memory bitmaps. As long as the current target bitmap is NULL, the fonts will use memory bitmaps. My code looks basically like this:

  ALLEGRO_FONT * memory;
  ALLEGRO_FONT * video;

  int height(){
    ALLEGRO_BITMAP * save = al_get_target_bitmap();
    al_set_target_bitmap(NULL);
    int h = al_get_font_height(memory); // use memory bitmaps to get dimensions
    al_set_target_bitmap(save);
    return h;
  }

  void printf(...){
    al_draw_text(video, ...); // use video bitmaps to render
  }

I talked about this on IRC with Elias, who would like to find a solution within the ttf addon itself.

A somewhat related problem to the fonts (or at least one that manifested itself while the fonts were being used) was resetting the target bitmap. I do draw onto bitmaps occasionally, and the bitmap wrapper class supports this by calling al_set_target_bitmap(my_bitmap) before any draw operation. Of course, the bitmap class also destroys the ALLEGRO_BITMAP* in its destructor, which could leave a destroyed bitmap still set as the target bitmap. The solution was to set the target bitmap to NULL if the current target is the same as the bitmap about to be destroyed.

  if (al_get_target_bitmap() == my_bitmap){
    al_set_target_bitmap(NULL);
  }
  al_destroy_bitmap(my_bitmap);

Finally, I dealt with converting memory bitmaps to video bitmaps by checking whether a bitmap is a memory bitmap immediately before it is drawn to a video bitmap. If so, I clone it as a video bitmap and destroy the old memory bitmap.

  void Bitmap::draw(int x, int y, Bitmap where){
    al_set_target_bitmap(where.my_bitmap);
    if ((al_get_bitmap_flags(where.my_bitmap) & ALLEGRO_VIDEO_BITMAP) &&
        (al_get_bitmap_flags(my_bitmap) & ALLEGRO_MEMORY_BITMAP)){
        ALLEGRO_BITMAP * converted = al_clone_bitmap(my_bitmap); // will create a video bitmap
        al_destroy_bitmap(my_bitmap);
        my_bitmap = converted;
    }
    al_draw_bitmap(my_bitmap, x, y, 0);
  }

This works OK, except it breaks the sharing properties I was using in my game. Objects that share Bitmap objects share the underlying ALLEGRO_BITMAP* pointer, but when one object creates its own ALLEGRO_BITMAP* it becomes the sole owner, so the bitmap can end up duplicated among several objects. I could make my engine smarter about this, but at this point it would be a pain to fix.

Instead, Elias suggested a new flag for bitmaps that would automatically convert memory bitmaps to video bitmaps whenever the scenario above occurs. Something like ALLEGRO_UPLOAD_AS_SOON_AS_POSSIBLE. I took a very quick stab at hacking it into Allegro5, but it didn't work; Elias says he will try to implement it later in the week.

Blending confused the heck out of me for a while. Basically, I randomly changed the arguments to al_set_blender until it worked. I think I understand things now, but they are definitely not intuitive to someone coming from Allegro4/SDL.

To emulate draw_trans_sprite from Allegro4 in Allegro5, this is what I used (credit to Trent for the suggestion):

  int alpha = 128;
  al_set_blender(ALLEGRO_ADD, ALLEGRO_ALPHA, ALLEGRO_INVERSE_ALPHA);
  al_draw_tinted_bitmap(bitmap, al_map_rgba(255, 255, 255, alpha), x, y, 0);

which is equivalent to:

  set_trans_blender(0, 0, 0, 128);
  draw_trans_sprite(bitmap, ...);

I have yet to implement a handful of features, like the 8-bit support that MUGEN mode wants and window resizing, but things are looking up. Allegro5 is pretty cool once you get used to it, so thanks to everyone who continually works on it.

If you want to try Paintown in Allegro5 mode build it like so:

  $ svn co https://paintown.svn.sourceforge.net/svnroot/paintown/trunk
  $ cd trunk
  $ export ALLEGRO5=1
  $ make

The only dependency is scons. There is a cmake build, but it doesn't support Allegro5 yet (it will eventually). Also, I have only tested on Linux so far, but if it works on other OSes I would be glad to know!

Matthew Leverton
kazzmir said:

Instead Elias suggested a new flag for bitmaps that would automatically convert memory bitmaps to video if the same scenario existed as above. Something like ALLEGRO_UPLOAD_AS_SOON_AS_POSSIBLE.

This particular issue has been discussed before. While I personally wouldn't be opposed to such an opt-in feature, I still think it would be better to explicitly declare when the conversion takes place:

display = al_create_display(640, 480);
al_upload_memory_video_bitmaps(display); // conversion doesn't happen until here

The ALLEGRO_UPLOAD_AS_SOON_AS_POSSIBLE wouldn't really be necessary then.

Also, in your case, even an al_convert_to_video_bitmap(bmp) function would work. I think that would sometimes be useful.

Thomas Fjellstrom

I still think it would be better to explicitly declare when such conversion takes place.

But now you can't choose which ones get uploaded.

Matthew Leverton

But now you can't choose which ones get uploaded.

My suggestion follows the current behavior of video bitmaps silently being treated as memory bitmaps when there is no display. So only those memory bitmaps that are supposed to be video bitmaps would be uploaded.

If you don't want them uploaded, then why mark them as video bitmaps? It seems like people would always set ALLEGRO_UPLOAD_AS_SOON_AS_POSSIBLE with video bitmaps, so it seems extraneous to me.

To me, the important thing is to be able to say when and which display you want to use. But of course, the proposed flag could still be used along with an explicit function as I suggest. They aren't mutually exclusive. (But again, I don't think the flag is very useful.)

If you want to pick and choose with more precision than just globally uploading all bitmaps that are supposed to be video bitmaps, then an explicit function to convert a single bitmap could be used.

kazzmir

When I tried hacking this feature into A5 the ttf stuff had problems. Basically it was trying to auto-convert the memory ttf bitmaps to video bitmaps which is exactly what I don't want. But maybe I screwed something else up.

<edit> Ok, I think a function that sets a flag on a bitmap to convert it to a video bitmap as soon as possible would be better than setting the global bitmap flags.

Matthew Leverton
kazzmir said:

Basically it was trying to auto-convert the memory ttf bitmaps to video bitmaps which is exactly what I don't want.

With my suggestion you would just mark them as memory bitmaps when loading the TTF font, and the upload function would never try to convert them because they weren't originally loaded with the video bitmap flag set.

I assume that would work, but I don't know much about the TTF add-on.

Quote:

Ok I think a function that set a flag on a bitmap to convert to a video bitmap as soon as possible would be better than setting the global bitmap flags.

Is the flag proposed in your original post meant to be a display flag or a bitmap flag?

Anyway, I'm curious about the use cases. To me it seems like ALLEGRO_VIDEO_BITMAP is the flag. And all we need is a function to claim those "video" bitmaps that are currently stored as memory bitmaps.

That is, if you set ALLEGRO_MEMORY_BITMAP, then those bitmaps would never be uploaded by the explicit claim-all function. You would have to convert those ones yourself by a function that operated on a single bitmap.

Evert

Also, in your case, even an al_convert_to_video_bitmap(bmp) function would work. I think that would sometimes be useful.

I've thought that both that one and the reverse would be useful additions to have.

kazzmir said:

The main thread is the only one that should be creating video bitmaps (assuming a single display for the application) and is the only thread that should be blitting said bitmaps. All other threads should deal with memory bitmaps.

I'm not sure that's necessarily true; I think you can blit to the display from a background thread (at least you could at one point), but you have to set things up to do that. Even then, this is probably not a very good idea and should be avoided, as you say (blitting to other displays should be fine, though).

Anyway, nice post! I'm sure it'll be a great help to many people. :)

EDIT:

Quote:

Of course, the bitmap class also destroys the ALLEGRO_BITMAP* in its destructor, which could leave a destroyed bitmap still set as the target bitmap. The solution was to set the target bitmap to NULL if the current target is the same as the bitmap about to be destroyed.

Forgot to say, a cleaner approach here may be to store/restore the target bitmap state. May not be worth it, but I thought I'd mention it in case you didn't know about the save/restore state functions.
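For reference, the save/restore approach looks roughly like this (a sketch; `scratch` stands in for the bitmap being drawn onto):

```c
/* Remember whatever the target bitmap currently is, retarget, draw,
   then restore the previous target without naming it explicitly. */
ALLEGRO_STATE state;
al_store_state(&state, ALLEGRO_STATE_TARGET_BITMAP);
al_set_target_bitmap(scratch);
/* ... draw onto scratch ... */
al_restore_state(&state);
```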

Elias

I like it. al_upload_memory_video_bitmaps() should make loading bitmaps in another thread much easier. And from what I see it will also work for fonts (which currently can't be loaded in another thread at all since we have no al_clone_font() function).

(Doesn't really solve the issue we discussed last time... if you forget to create a video bitmap after al_create_display() you likely also forget to call al_upload_memory_video_bitmaps() :P)

Matthew Leverton
Elias said:

if you forget to create a video bitmap after al_create_display() you likely also forget to call al_upload_memory_video_bitmaps()

I agree, but at least the solution is simple. I don't consider this functionality a way to help the forgetful; it's meant to allow people who understand what they are doing to have a bit more control over how images are loaded.

Elias

Yes. I'll add the function if nobody else wants to.

Evert
Elias said:

Yes. I'll add the function if nobody else wants to.

I think you just volunteered. :)

kazzmir

My architecture works best with on-demand uploading so as long as I can do that with whatever functions get added I'm fine.

Another issue I found last night is that stretching bitmaps works differently in A4 and A5. In A4, if I stretch a bitmap whose starting coordinates are off the screen, the bitmap gets stretched first and clipped second. In A5 it's the reverse. So to emulate the A4 behavior, I added a translate transformation before the scaling transformation and always draw at 0, 0.

Here is what it should look like in A4/SDL (note the position of Akuma):
[attachment: a4.png]

Here is what happens in A5 with just scaling when x=-170.

[attachment: a5-bad.png]

Here is just the drawn bitmap with no transformation

  al_draw_bitmap(bitmap, x, y, 0);

[attachment: a5-no-scale.png]

And finally here is A5 with translation and scaling:

  ALLEGRO_TRANSFORM t;
  al_identity_transform(&t);
  al_translate_transform(&t, -x / 2, -y / 2); // note the /2
  al_scale_transform(&t, 2, 2);
  al_use_transform(&t);
  al_draw_bitmap(bitmap, 0, 0, 0); // draw at 0, 0

[attachment: a5-ok.png]

I have to draw at 0, 0 to avoid the bitmap being clipped too early, and also translate it to a position that takes the scaling into account.

<edit>
Ignore all of the above. Elias set me straight. Instead of messing around with transforms I can just use al_draw_scaled_bitmap.

al_draw_scaled_bitmap(getData().getBitmap(), 0, 0, getWidth(), getHeight(), x, y, new_width, new_height, 0);

Which replaces about 15 lines of transform nonsense and does the right thing.

Elias
Evert said:

I think you just volunteered.

Hm, more involved than I expected. For display bitmaps we don't actually allocate an ALLEGRO_BITMAP but instead this:

typedef struct ALLEGRO_BITMAP_OGL
{
   ALLEGRO_BITMAP bitmap; /* This must be the first member. */
   ...
} ALLEGRO_BITMAP_OGL;

or this:

typedef struct ALLEGRO_BITMAP_D3D
{
   ALLEGRO_BITMAP bitmap; /* This must be the first member. */
   ...
} ALLEGRO_BITMAP_D3D;

It's not possible to convert the type of a bitmap with that.

I see two solutions:

1. Whenever an ALLEGRO_BITMAP is allocated, allocate enough storage for an ALLEGRO_BITMAP_OGL/ALLEGRO_BITMAP_D3D.

2. Don't use that inheritance trick any longer. Instead ALLEGRO_BITMAP gets an additional field called "void *extra" which is allocated by drivers.

I somehow feel the latter is the cleaner solution, even though it requires re-writing all of the platform specific code.
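The second option can be sketched in plain C like this (a generic illustration only, not Allegro's actual internals; the struct and function names are made up):

```c
#include <stdlib.h>

/* A bitmap struct whose driver-specific data hangs off an opaque
   pointer instead of being baked in via struct inheritance. */
typedef struct Bitmap {
    int w, h, flags;
    void *extra; /* allocated and owned by the active driver */
} Bitmap;

typedef struct OglExtra {
    unsigned int texture; /* e.g. a GL texture handle */
} OglExtra;

/* Converting a memory bitmap to a video bitmap now just swaps out the
   extra data; the Bitmap allocation itself never changes size. */
void convert_to_video(Bitmap *bmp) {
    free(bmp->extra);
    bmp->extra = calloc(1, sizeof(OglExtra));
}
```

This is why the pointer is "cleaner" than over-allocation: the core never needs to know the size of any driver struct.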

Matthew Leverton

I was wondering if that would be a problem. I think it would be fine to allocate enough space for the largest of the structs (i.e., solution 1), even though it's ugly.

Long term, I think solution 2 would be better. (Even the file interface drivers were changed to use the void *extra style for similar limitations.)

What currently happens to video bitmaps when a display is destroyed?

Evert
Elias said:

1. Whenever an ALLEGRO_BITMAP is allocated, allocate enough storage for ALLEGRO_BITMAP_D3D/ALLEGRO_BITMAP_OPENGL.

Could we do that by actually making ALLEGRO_BITMAP a union of the current struct and the ALLEGRO_BITMAP_OGL/ALLEGRO_BITMAP_D3D structs? That's similar to how it's done with the event system.
Not very different, but a bit less messy than allocating enough space for the largest of those structs by hand...

Matthew Leverton
Evert said:

Could we do that by actually making ALLEGRO_BITMAP a union of the current struct and the ALLEGRO_BITMAP_OGL/ALLEGRO_BITMAP_D3D structs?

The downside is that it would involve the Allegro core knowing about each different bitmap struct.

Thomas Fjellstrom

The void pointer is likely the best route. Pain in the rear, but may be worth the effort.

Elias

What currently happens to video bitmaps when a display is destroyed?

Some very ugly code which works because ALLEGRO_BITMAP is smaller than the other two :P

I also don't understand why even sub-bitmaps use the platform specific structs right now.

So, just adding a function al_convert_bitmap(ALLEGRO_BITMAP *bitmap) which works like al_clone_bitmap but doesn't create a new bitmap will be quite a bit of work. But necessary I guess.
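If the function existed as proposed, usage might look like this (a hypothetical sketch; al_convert_bitmap is the proposed function, and the path is a placeholder):

```c
/* Load as a memory bitmap in a worker thread... */
al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
ALLEGRO_BITMAP *bmp = al_load_bitmap("data/level.png"); /* placeholder path */

/* ...then later, on the main thread with a display current, convert
   the same ALLEGRO_BITMAP in place, so shared pointers stay valid. */
al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP);
al_convert_bitmap(bmp); /* proposed: converts in place, no new pointer */
```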

AMCerasoli

Hi guys, sorry for interrupting your deep conversation, but I was studying one of Peter's examples (OGRE) and it says:

Quote:

Ogre 1.7 (and, optionally, earlier versions) uses the FreeImage library to
handle image loading. FreeImage bundles its own copies of common libraries
like libjpeg and libpng, which can conflict with the system copies of those
libraries that allegro_image uses. That means we can't use allegro_image
safely, nor any of the addons which depend on it.

One solution would be to write a codec for Ogre that avoids FreeImage, or
write allegro_image handlers using FreeImage. The latter would probably be
easier and useful for other reasons.

Now that you're talking about bitmaps and such: would implementing FreeImage be too difficult? Does this have something to do with what you're discussing?

Because I really think making Allegro compatible with OGRE would be a strong step, don't you think?

Elias

It already is compatible; that example works here. It just needs someone to make it run under Windows and OSX as well.

AMCerasoli

Oh, is Allegro already using FreeImage to handle image loading? Or do we need to use the codec thingy?

Thread #607477. Printed from Allegro.cc