Allegro seems to be taking double RAM for video bitmaps.
kovarex

I might be doing something wrong, but this is how I think it is supposed to work:

A bitmap has its data saved in video memory and in RAM at the same time, so it can be restored when the device is lost and then found again.

But this is how it behaves for us:

this->atlas = al_create_bitmap(16384, 7439);
// The game allocates approximately the expected amount of data:
// 16384 * 7439 * 4 -> ~487 MB, but only in RAM.

ALLEGRO_BITMAP* bump = al_create_bitmap(1, 1);
// The game allocates 487 MB in VRAM, but ANOTHER ~500 MB in RAM.
// So the first bitmap is in RAM twice for some reason.

It almost looks like the RAM version of the bitmap is stored both by D3D and by Allegro, but I don't really understand the internals.

Is there a good reason for this? Are we doing something wrong? Or is it an Allegro bug? Is it something we could fix easily?

Thanks for any hints.

Edit: After some research, I found out that Allegro allocates memory for the bitmap in d3d_disp.cpp:

 bitmap->memory = (unsigned char *)al_malloc(bitmap->pitch * h);

but then it creates the video texture in D3D this way:

if (video_texture) {
   err = disp->device->CreateTexture(w, h, levels,
      D3DUSAGE_RENDERTARGET | autogenmipmap,
      (D3DFORMAT)_al_pixel_format_to_d3d(format), D3DPOOL_DEFAULT,
      video_texture, NULL);
   if (err != D3D_OK && err != D3DOK_NOAUTOGEN) {
      ALLEGRO_ERROR("d3d_create_textures: Unable to create video texture.\n");
      return false;
   }
}

D3DPOOL_DEFAULT means that D3D also keeps a copy of the bitmap in memory:
(https://msdn.microsoft.com/en-us/library/windows/desktop/bb172584(v=vs.85).aspx)

So we really have one copy of the bitmap used by Allegro and one by D3D, and I have no idea why that is.

Changing the flag in the call just made the function fail.

Should we (I) try to remove the RAM copy from the D3D side, or from Allegro?

Thomas Fjellstrom

D3D doesn't store bitmaps in RAM, so Allegro has to, so it can restore them when necessary. (If you have an integrated GPU, the GPU stores them in RAM anyway.) You can turn off Allegro's copy, but then you'll have to restore textures yourself.
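For reference, here is a minimal sketch of what "restore textures yourself" means, using the atlas from the first post as the example and assuming a standard Allegro 5 event loop; redraw_atlas_contents() is a hypothetical helper, and the DISPLAY_LOST/FOUND events are specific to the Direct3D backend:

   // Opt the bitmap out of Allegro's system-memory backup:
   al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP | ALLEGRO_NO_PRESERVE_TEXTURE);
   this->atlas = al_create_bitmap(16384, 7439);

   // ... later, in the event loop:
   if (event.type == ALLEGRO_EVENT_DISPLAY_LOST) {
      // The texture contents are gone until the device comes back.
   }
   else if (event.type == ALLEGRO_EVENT_DISPLAY_FOUND) {
      // The bitmap handle is still valid, but its contents are now
      // undefined, so repopulate it from the source data:
      al_set_target_bitmap(this->atlas);
      redraw_atlas_contents();  // hypothetical helper
   }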

kovarex

Well, apparently it (D3D) does store a copy in RAM.

D3DPOOL_DEFAULT means that D3D also keeps a copy of the bitmap in memory; at least that's what the documentation says, and it's what I measured:
(https://msdn.microsoft.com/en-us/library/windows/desktop/bb172584(v=vs.85).aspx)

Thomas Fjellstrom

I think it actually says DEFAULT puts it in the most appropriate memory, i.e. VRAM, which is normal. But when a display context is lost, all of that data is wiped clean and has to be restored, which is why Allegro keeps a memory copy.

It seems like D3DPOOL_MANAGED would obviate our need to keep a memory copy like that (it makes D3D keep one itself, as GL does). Maybe we should switch to it to make the D3D backend more GL-like, at least when ALLEGRO_NO_PRESERVE_TEXTURE is not set.
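For comparison, the MANAGED variant of the CreateTexture call quoted above might look like this (just a sketch; note that D3DUSAGE_RENDERTARGET cannot be combined with D3DPOOL_MANAGED, which is the complication that comes up below):

   // With D3DPOOL_MANAGED, D3D keeps its own system-memory backup and
   // restores the texture across device loss, but the texture can no
   // longer be used as a render target:
   err = disp->device->CreateTexture(w, h, levels,
      autogenmipmap,
      (D3DFORMAT)_al_pixel_format_to_d3d(format), D3DPOOL_MANAGED,
      video_texture, NULL);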

kovarex

Yes, I understand that the RAM copy needs to be stored somewhere, as long as ALLEGRO_NO_PRESERVE_TEXTURE is not set.

I understand that the RAM copy of the bitmap is used to restore the bitmap after the display is lost.

What I don't understand is why D3D keeps this RAM copy while Allegro is doing the same thing at the same time.

EDIT:

if (video_texture) {
   err = disp->device->CreateTexture(w, h, levels,
      D3DUSAGE_RENDERTARGET | autogenmipmap,
      (D3DFORMAT)_al_pixel_format_to_d3d(format), D3DPOOL_DEFAULT,
      video_texture, NULL);
   if (err != D3D_OK && err != D3DOK_NOAUTOGEN) {
      ALLEGRO_ERROR("d3d_create_textures: Unable to create video texture.\n");
      return false;
   }
}

if (system_texture) {
   err = disp->device->CreateTexture(w, h, 1,
      0, (D3DFORMAT)_al_pixel_format_to_d3d(format), D3DPOOL_SYSTEMMEM,
      system_texture, NULL);
   if (err != D3D_OK) {
      ALLEGRO_ERROR("d3d_create_textures: Unable to create system texture.\n");
      if (video_texture && (*video_texture)) {
         (*video_texture)->Release();
         *video_texture = NULL;
      }
      return false;
   }

Actually, I just found out that it creates both the video_texture and the system_texture, so the bitmap is created twice in D3D, and the system texture, created with the D3DPOOL_SYSTEMMEM flag, is a second copy of the bitmap in RAM (next to the one allocated directly by Allegro).

Thomas Fjellstrom

Hm, yes, the SYSTEMMEM pool will tell D3D to keep a copy in system memory, according to the docs. So it should be changed.

SiegeLord

So here's how the D3D backend works. If render-to-texture is not supported (this is probably never true these days), then each bitmap just gets a video texture created with D3DPOOL_MANAGED and nothing special happens.

When render-to-texture is supported (the common case), each bitmap gets a video texture (created with D3DPOOL_DEFAULT) and a system texture (D3DPOOL_SYSTEMMEM). This is done because you cannot draw into D3DPOOL_MANAGED textures and you cannot lock D3DPOOL_DEFAULT textures. So when you lock a bitmap, Allegro locks the system texture and then syncs the video texture with it using a D3D function. The same mechanism is used when restoring bitmaps after device loss.
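Roughly, the lock-and-sync path looks like this (a sketch of the mechanism, not the exact Allegro source; system_texture and video_texture are the two IDirect3DTexture9 objects created above):

   D3DLOCKED_RECT lr;
   // DEFAULT-pool textures can't be locked, so CPU access goes through
   // the SYSTEMMEM texture:
   system_texture->LockRect(0, &lr, NULL, 0);
   // ... read/write pixels through lr.pBits ...
   system_texture->UnlockRect(0);
   // Then copy the dirty regions up into the video texture:
   disp->device->UpdateTexture(system_texture, video_texture);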

In addition to the above, there's also an Allegro-managed memory buffer which keeps yet another copy of the bitmap. Now, I don't quite know why we keep this extra memory buffer, as I'd imagine the system texture preserves its contents when the device is lost (at least, the documentation seems to imply this). It might be possible to get rid of it without any issues.

Getting rid of system_texture is probably a lot harder, since without it, you can't lock the bitmap. Perhaps we could add a D3D-specific function to drop this texture after you load your bitmap, and then have Allegro re-create it if it needs it again.

posila

Having a video texture and a system texture makes sense to me. But I couldn't get my head around why that extra memory buffer is needed. After some tinkering with Allegro's internals, we were able to create the huge bitmap without the memory buffer.

The problem we came across was that after the device is lost and the system texture is synced to the video texture (using IDirect3DDevice9::UpdateTexture), it works only the first time. When the device is lost (or reset) a second time, the refreshed video texture stays empty even though UpdateTexture is called. We fixed it by calling IDirect3DTexture9::LockRect and UnlockRect on the system texture, without touching its memory, in the places where the extra memory buffer used to be copied into the system texture.
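In code, the workaround looks roughly like this (same assumptions as the sketch above):

   // No-op lock/unlock: locking alone re-marks the system texture as
   // dirty, so a second UpdateTexture after a device reset copies it again.
   D3DLOCKED_RECT lr;
   system_texture->LockRect(0, &lr, NULL, 0);
   system_texture->UnlockRect(0);
   disp->device->UpdateTexture(system_texture, video_texture);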

So now I think I understand what problem the extra memory buffer was supposed to fix. Still, having to lock/unlock the system texture every time before using it to refresh the video texture seems wrong to me. I haven't found the reason for this yet, and maybe it's an indication that something else is wrong.

SiegeLord

The documentation suggests you need to "dirty" the source texture. Could you try calling AddDirtyRect(NULL) on the system texture, in lieu of locking it, to see if that fixes the issue?
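That is, something like this (sketch):

   // Mark the whole texture as dirty instead of the no-op lock/unlock:
   system_texture->AddDirtyRect(NULL);
   disp->device->UpdateTexture(system_texture, video_texture);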

posila

Replacing LockRect/UnlockRect with AddDirtyRect works too. I misunderstood the UpdateTexture documentation and thought the entire texture would be updated if it didn't contain any dirty region. Thank you, SiegeLord and Thomas, for your help.

SiegeLord

Awesome! Thanks for trying it out; I'm going to incorporate it into Allegro at some point. Seems like a nice memory saving.
