embarrassingly simple question on 4.9
Neil Walker
Member #210
April 2000
avatar

Hello,
In the olden days when I created video bitmaps and it failed (e.g. it ran out of video memory) it just returned a failure and I carried on attempting to use System and then memory. When drawing I just told it to draw the bitmap regardless of where it was.

In the new allegro when everything is done via the display driver and memory bitmaps are a (I presume) means of keeping stuff compatible with magic pink, etc. what happens when video memory runs out, either by creation or loading, and where does it then store them?

Neil.
MAME Cabinet Blog / AXL LIBRARY (a games framework) / AXL Documentation and Tutorial

wii:0356-1384-6687-2022, kart:3308-4806-6002. XBOX:chucklepie

Matthew Leverton
Supreme Loser
January 1999
avatar

   /* Else it's a display bitmap */

   bitmap = current_display->vt->create_bitmap(current_display, w, h);
   if (!bitmap) {
      TRACE("al_create_bitmap: failed to create display bitmap\n");
      return NULL;
   }

It apparently returns NULL.

Neil Walker said:

memory bitmaps are a (I presume) means of keeping stuff compatible with magic pink

Nothing is compatible with magic pink unless you explicitly convert that color to alpha zero. (Allegro provides a function for that.)
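For reference, a minimal sketch of that conversion, assuming al_convert_mask_to_alpha is the function in question and that the image addon is initialised:

ALLEGRO_BITMAP *bmp = al_load_bitmap("sprite.bmp");
if (bmp) {
   /* Turn the classic magic-pink mask colour into alpha = 0. */
   al_convert_mask_to_alpha(bmp, al_map_rgb(255, 0, 255));
}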

Neil Walker
Member #210
April 2000
avatar

I guess what I'm getting at is that I can load 300MB of bitmaps into Allegro 4.2. My current hardware has 64MB of video RAM. What happens when I breach this tiny limit? Surely it can't just fail? Does this mean you now have to write a complete bitmap manager to handle moving bitmaps in and out of video memory?

Neil.

Erin Maus
Member #7,537
July 2006
avatar

You could do that or you could use memory bitmaps for non-important graphics:

al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
ALLEGRO_BITMAP *foo = al_create_bitmap(1024, 1024); // Now it's a memory bitmap.

Of course, there would be speed penalties for such.

---
ItsyRealm, a quirky 2D/3D RPG where you fight, skill, and explore in a medieval world with horrors unimaginable.
they / she

Neil Walker
Member #210
April 2000
avatar

Yes, I was just hoping Allegro would manage this for me, as the shoe is on the other foot as it were (4.2 defaults to memory bitmaps that fill an endless pot, whereas 4.9 defaults to video bitmaps with a rather limited pot).

Sorry about the over use of idioms/metaphors in that last paragraph. Sometimes a leopard finds it hard to change his spots and can't cut the mustard when it comes to proper English.

Neil.

Erin Maus
Member #7,537
July 2006
avatar

You could always write a wrapper around al_create_bitmap that tries to create a video bitmap, and if that fails, creates a memory bitmap. Shouldn't be too difficult.
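A minimal sketch of such a wrapper (the helper name is made up; it assumes your 4.9 build has the ALLEGRO_VIDEO_BITMAP flag, otherwise just clear the flags for the first attempt):

/* Try a video bitmap first, fall back to a memory bitmap if that fails. */
ALLEGRO_BITMAP *create_bitmap_anywhere(int w, int h)
{
   int old_flags = al_get_new_bitmap_flags();
   ALLEGRO_BITMAP *bmp;

   al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP);
   bmp = al_create_bitmap(w, h);
   if (!bmp) {
      al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
      bmp = al_create_bitmap(w, h);
   }

   al_set_new_bitmap_flags(old_flags); /* restore whatever was set before */
   return bmp;
}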


Neil Walker
Member #210
April 2000
avatar

No bitmap management, no datafile support, complex api... At this level of abstraction I may as well use SDL or get out my GoF design patterns book from the loft for some serious refactoring ;)

Neil.

Thomas Fjellstrom
Member #476
June 2000
avatar

Neil Walker said:

no datafile support

Yeah, no datafiles. But it supports Zip files, guess you can't win em all ::)

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

Matthew Leverton
Supreme Loser
January 1999
avatar

To be truthful, the API is not much more complex than A4's. And by the time A5 is done, the differences will probably be marginal.

However, setting it up (e.g., compiling, linking) is (and will remain) more complicated than A4.

Evert
Member #794
November 2000
avatar

Neil Walker said:

complex api

It's not, really. Just different.

SiegeLord
Member #7,827
October 2006
avatar

Neil Walker said:

In the olden days when I created video bitmaps and it failed (e.g. it ran out of video memory) it just returned a failure and I carried on attempting to use System and then memory. When drawing I just told it to draw the bitmap regardless of where it was.

You can still do that, and just like in the olden days you paid the price for using memory bitmaps versus video bitmaps. In A4.9 the difference in speeds is perhaps a little more profound (check the ex_membmp example on your clunker to see the extent of the difference), but it was always the case that it was a bad idea to blindly load video bitmaps like there is no tomorrow.

That being said, I can't imagine it being very hard to create something that 'caches' the memory bitmaps into video bitmaps, but that is likely something that should be done on a game-specific basis, because the implementation is likely to be very task dependent.
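Very roughly, something along these lines (struct and function names are made up, and deciding when to evict the video copy is the game-specific part):

/* Keep the memory bitmap as the permanent copy; create the video copy on
   demand and throw it away again if video memory gets tight. */
typedef struct {
   ALLEGRO_BITMAP *mem;    /* always present */
   ALLEGRO_BITMAP *video;  /* NULL until first used, or after eviction */
} CACHED_BITMAP;

ALLEGRO_BITMAP *cached_bitmap_get(CACHED_BITMAP *cb)
{
   if (!cb->video) {
      al_set_new_bitmap_flags(0);            /* default flags: video bitmap */
      cb->video = al_clone_bitmap(cb->mem);  /* may fail -> fall back below */
   }
   return cb->video ? cb->video : cb->mem;
}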

That, or wait for the possible non-HW accelerated driver to arrive.

EDIT: E.g. on my abysmally bad laptop, I get 9 fps vs 6 fps for video bitmaps vs memory bitmaps in ex_membmp. On low-end systems there won't be all that much difference if you use memory bitmaps all the time, which the limited video memory will force you into anyway.

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

X-G
Member #856
December 2000
avatar

It's not like 4.3 had "bitmap management" either. I for one thoroughly endorse 4.9/5.0. It's a breath of fresh air.

Now all we need is an a5_cg module. ;)

--
Since 2008-Jun-18, democracy in Sweden is dead. | 悪霊退散!悪霊退散!怨霊、物の怪、困った時は ドーマン!セーマン!ドーマン!セーマン! 直ぐに呼びましょう陰陽師レッツゴー!

Martin Kalbfuß
Member #9,131
October 2007
avatar

The best part about A5 is its event handling, and its API is much cleaner than A4's. But it is still too low-level for my taste. I know someone has to do it ;-).

http://remote-lisp.spdns.de -- my server side lisp interpreter
http://www.nongnu.org/gm2/ -- Modula-2 alias Pascal++

Neil Walker
Member #210
April 2000
avatar

I'm not complaining, I was just asking whether fancy bitmap management was taking place.

Out of interest, once we're all up to speed with A5 and shoving out best-sellers like there's no tomorrow, what level of graphics card should we expect people to have? E.g. if I write a program that requires 60MB of graphics and expect it all to fit into video memory, is it too unreasonable to abort if they don't have that much, to save writing a caching algorithm? Or if my graphics aren't a power of 2 and their card doesn't support that, etc.?

Neil.

X-G
Member #856
December 2000
avatar

There's one thing I have been wondering about: does A5 do any automatic texture atlasing?


SiegeLord
Member #7,827
October 2006
avatar

Neil Walker said:

If I write a program that requires 60MB of graphics and expect it all to fit into video memory, is it too unreasonable to abort if they don't have that much, to save writing a caching algorithm? Or if my graphics aren't a power of 2 and their card doesn't support that, etc.?

That's usually the case, or just let them play it with memory bitmaps handling the overflow and accept the low FPS. In terms of the non-power of 2 textures, A4.9 handles them automatically (by embedding them in the next highest power of 2 texture).

X-G said:

There's one thing I have been wondering about: does A5 do any automatic texture atlasing?

You can use sub-bitmaps to manually break up a big texture into little textures, but there isn't an automatic mechanism to stitch multiple bitmaps into a bigger one. That does sound like a good idea to me though (perhaps as a fancy new bitmap flag? or ALLEGRO_BITMAP *al_create_atlas(int width, int height, ALLEGRO_BITMAP **bmps)?).

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

X-G
Member #856
December 2000
avatar

Yes, quite. I was just wondering if I had to implement it myself or not. It feels like something that would fit quite well in A5.


Thomas Harte
Member #33
April 2000
avatar

Won't at least the OpenGL target get automatic texture swapping on account of OpenGL having automatic texture swapping? In which case maybe it would make sense to expose glPrioritizeTextures, especially as it's effectively just a hint that a driver can ignore?
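Something like this, assuming the OpenGL addon's al_get_opengl_texture to get at the texture handle (glPrioritizeTextures is only a hint and has been dropped from modern core profiles):

#include <allegro5/allegro.h>
#include <allegro5/allegro_opengl.h>

void prioritize_bitmap(ALLEGRO_BITMAP *bmp, float priority)
{
   GLuint tex = al_get_opengl_texture(bmp); /* 0 if not an OpenGL bitmap */
   GLclampf prio = priority;                /* 0.0 = evict first, 1.0 = keep resident */
   if (tex)
      glPrioritizeTextures(1, &tex, &prio);
}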

Neil Walker
Member #210
April 2000
avatar

Quote:

texture atlasing

Is that just a posh way of saying sprite sheet?

Thomas Harte said:

on account of OpenGL having automatic texture swapping

Right, so in opengl mode there is an endless pot of graphics memory :) I take it this is what you're talking about?

Quote:

Once you upload your texture to GL, where is it stored exactly?
The driver will most likely keep it in RAM. Keep in mind that the driver has its own memory manager.
When you want to render something with this texture, the driver will then upload the texture to VRAM (if you have a system with dedicated video memory). It will likely upload 100% of the texture with all the mipmaps.
If there isn't enough VRAM, the driver will delete another texture or another VBO. That is the driver's choice and there is nothing you can do about it.
The driver will always keep a copy in RAM, even when a copy is made in VRAM. RAM is considered a permanent storage. VRAM is considered volatile.

As for directX I found this:

Quote:

If you need more space than is available then that's no problem. The driver will move the data between system RAM, AGP memory and video RAM for you. In practice you never have to worry that you run out of video memory. Sure - once you need more video memory than available the performance will suffer, but that's life.

Which is kind of saying the same thing, though mentioning AGP makes me wonder given it's quite an old thing that nobody uses anymore...

Perhaps someone with A5 up and running could test this out by loading loads of textures in DX and OpenGL mode and see what happens :)

Neil.

SiegeLord
Member #7,827
October 2006
avatar

Neil Walker said:

Is that just a posh way of saying sprite sheet?

An automatically generated sprite sheet. Basically, the inverse operation of creating a sub-bitmap.
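I.e. the manual direction is just sub-bitmaps over a loaded sheet, something like this (the 32x32 grid is purely an example):

ALLEGRO_BITMAP *sheet = al_load_bitmap("sprites.png");
ALLEGRO_BITMAP *frames[4];
int i;

/* Sub-bitmaps share the sheet's texture, so this is a hand-rolled atlas. */
for (i = 0; i < 4; i++)
   frames[i] = al_create_sub_bitmap(sheet, i * 32, 0, 32, 32);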

Thomas Harte said:

Won't at least the OpenGL target get automatic texture swapping on account of OpenGL having automatic texture swapping? In which case maybe it would make sense to expose glPrioritizeTextures, especially as it's effectively just a hint that a driver can ignore?

Nice. Learn something new every day, heh.

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

Peter Wang
Member #23
April 2000

I have to say, many of Allegro 4.9's memory bitmap routines are really, really unoptimised. I don't think that will be fixed any time soon.

Elias
Member #358
May 2000

Neil Walker said:

Out of interest, once we're all up to speed with A5 and shoving out best-sellers like there's no tomorrow, what level of graphics card should we expect people to have? E.g. if I write a program that requires 60MB of graphics and expect it all to fit into video memory, is it too unreasonable to abort if they don't have that much, to save writing a caching algorithm? Or if my graphics aren't a power of 2 and their card doesn't support that, etc.?

A5 currently runs fine on the GP2X Wiz and iPhone, so I'd say pretty low. Basically anything which claims to run either OpenGL or Direct3D will work. Performance really doesn't depend on A5 itself much; it's a very thin layer over those two.

SiegeLord said:

You can use sub-bitmaps to manually break up a big texture into little textures, but there isn't an automatic mechanism to stitch multiple bitmaps into a bigger one. That does sound like a good idea to me though (perhaps as a fancy new bitmap flag? or ALLEGRO_BITMAP al_create_atlas(int width, height, ALLEGRO_BITMAP** bmps)?).

This sounds like a good idea. It also would simplify the font addon which already does this for the glyph cache - it could then just use the normal API for it.

Maybe the API could instead be:

al_enable_texture_atlas(int width, int height);

Then while it is enabled, calls to al_create_bitmap (without the MEMORY flag and not bigger than the atlas) would instead return a sub-bitmap of a video bitmap of the given size - allocating new video bitmaps when needed.

It would be the same as the ALLEGRO_BITMAP** version, except that the user wouldn't have to memory-manage an array of pointers... not sure how much of a difference that makes.
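Usage would then look something like this (purely hypothetical, none of it exists yet):

al_enable_texture_atlas(1024, 1024);
ALLEGRO_BITMAP *grass = al_create_bitmap(32, 32); /* sub-bitmap of a shared 1024x1024 video bitmap */
ALLEGRO_BITMAP *water = al_create_bitmap(32, 32); /* packed into the same atlas */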

Someone wants to write a patch for it? Maintaining the free space within the big textures is the main problem I see - is there any good algorithm for it? Basically it should be fast but not waste too much texture space...

Also, related to this, we might want to add a retained drawing mode. E.g. drawing a tilemap could then be done like this:

al_start_retained();
for (y = 0; y < map_h; y++)
   for (x = 0; x < map_w; x++)
      al_draw_bitmap(...);
al_render_retained();

Instead of issuing one glDrawArrays() command for each al_draw_bitmap, the whole tilemap could be drawn with a single glDrawArrays() in al_render_retained(). The problem is, it would only work properly if bitmaps with different textures don't overlap - but we could warn of that in the documentation.

I added this in my local git-svn branch some time ago and drawing a screen full of text ran at 2.5 times the FPS compared to without, which is nice. However, users can already optimize it themselves by using al_draw_prim() or by using OpenGL directly... so I'm not sure we need it.

--
"Either help out or stop whining" - Evert

Thomas Fjellstrom
Member #476
June 2000
avatar

Elias said:

Maintaining the free space within the big textures is the main problem I see - is there any good algorithm for it? Basically it should be fast but not waste too much texture space...

The only two things that come to mind are some simple(ish) quad-tree type allocation mechanism, or stealing whatever 4.x's DOS code has been doing for more than a decade.


Thomas Harte
Member #33
April 2000
avatar

Elias said:

Maintaining the free space within the big textures is the main problem I see - is there any good algorithm for it?

Surely it's the bin packing problem and therefore NP-hard? There's a whole bunch of heuristics, but I think most of them rely on you knowing the complete set of objects to be packed in advance... given that we can't do that without adding significant API and implementation overhead, maybe it'd be best just to explicitly adopt first fit and advocate that bitmaps to be atlassed are sorted by size prior to being passed to Allegro? Even C has qsort, so that isn't forcing too much logic into recommended practice.
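To illustrate what I mean by first fit with a pre-sort, a plain C sketch (names made up, and it assumes no single bitmap is wider than the atlas):

#include <stdlib.h>

typedef struct { int w, h, x, y; } PACK_RECT;

static int by_height_desc(const void *a, const void *b)
{
   return ((const PACK_RECT *)b)->h - ((const PACK_RECT *)a)->h;
}

/* Sort tallest-first, then place left to right on "shelves".
   Returns 0 on success, -1 if the atlas runs out of vertical space. */
int shelf_pack(PACK_RECT *r, int n, int atlas_w, int atlas_h)
{
   int i, x = 0, y = 0, shelf_h = 0;

   qsort(r, n, sizeof *r, by_height_desc);
   for (i = 0; i < n; i++) {
      if (x + r[i].w > atlas_w) { /* current shelf is full, start a new one */
         y += shelf_h;
         x = 0;
         shelf_h = 0;
      }
      if (y + r[i].h > atlas_h)
         return -1;
      r[i].x = x;
      r[i].y = y;
      x += r[i].w;
      if (r[i].h > shelf_h)
         shelf_h = r[i].h;
   }
   return 0;
}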

Elias
Member #358
May 2000

Hm, another problem is, how do we deal with GL_LINEAR filtering? My understanding is that GL_CLAMP_TO_EDGE only works at the texture boundary - so a bitmap which has a whole texture for itself would look different than a sub-bitmap (specifically, there would be a seam when the texture is supposed to tile).

Should we do something like have a gap between textures and duplicate a 1-pixel wide area all around each one? Or just pack them tightly and leave it to users to make their own texture atlas if they get color bleed from neighbor tiles?
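The duplicated-edge version would look roughly like this (corners omitted for brevity, function name made up):

void blit_with_gutter(ALLEGRO_BITMAP *atlas, ALLEGRO_BITMAP *bmp, int x, int y)
{
   int w = al_get_bitmap_width(bmp);
   int h = al_get_bitmap_height(bmp);

   al_set_target_bitmap(atlas);
   al_draw_bitmap(bmp, x, y, 0);
   /* Repeat the outer rows/columns so GL_LINEAR at the edges samples the
      bitmap's own pixels instead of its neighbour's. */
   al_draw_bitmap_region(bmp, 0, 0,     w, 1, x,     y - 1, 0); /* top    */
   al_draw_bitmap_region(bmp, 0, h - 1, w, 1, x,     y + h, 0); /* bottom */
   al_draw_bitmap_region(bmp, 0, 0,     1, h, x - 1, y,     0); /* left   */
   al_draw_bitmap_region(bmp, w - 1, 0, 1, h, x + w, y,     0); /* right  */
}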

