Hi,
I'm desperately in need of some help before I lose the will to live
I'm using Allegro 5.1.9 within X (ie using OpenGL (GLES)).
I have the following code that has some weird behaviour.
I have attached the 3 image files saved.
My questions consist of:
Why is the rectangle not shown in 2.png (deferred drawing isn't enabled)?
Why, when I finally draw the image to screen, is the rectangle then 'written' but the rest of the image corrupts?
Any idea on where I should be looking? I'm running with the debug version and am quite happy to instrument the code; I'm really at a loss where to go from here and need some guidance.
Kev
Could be some kind of bug in Allegro, but 5.1.9 is a rather old version so might have been fixed since. Does it work in Allegro 5.1.11?
Also, does it work if you use normal OpenGL instead of GLES?
I've moved to 5.1.11 - same issues - I think they're specific to the platform I'm on..
I have found that if I do the following, the drawn bitmap doesn't get corrupted..
temp1 = al_load_bitmap("test24.png");

// ** this doesn't fix anything
//image = al_clone_bitmap(temp1);
// **

// ** but this does
image = al_create_bitmap(al_get_bitmap_width(temp1), al_get_bitmap_height(temp1));
al_set_target_bitmap(image);
al_draw_bitmap(temp1, 0, 0, 0);
// **

// 'image' is now accessible and drawn on screen correctly (ie no corruption)
But what I've been trying to do is use

al_convert_mask_to_alpha(image, al_map_rgb(255, 0, 255));

(old habits die hard!), and it doesn't work - ie I get MAGIC PINK left in the image..
So I tried:
al_draw_filled_rectangle(3, 3, 8, 8, al_map_rgb(255, 255, 255)); // white rectangle
col = al_get_pixel(image, 5, 5);
but it returns r,g,b,a values of 0.000,0.000,0.000,0.000 - I expected 1.0,1.0,1.0,?.?
al_get_bitmap_format(image) and al_get_bitmap_flags(image) return 0x11 and 0x4E0, which I believe means

format = ALLEGRO_PIXEL_FORMAT_ABGR_8888
flags  = ALLEGRO_VIDEO_BITMAP | ALLEGRO_MAG_LINEAR | ALLEGRO_MIN_LINEAR | ALLEGRO_INTERNAL_OPENGL
Any ideas on where I go from here?
Is there a way of getting to the pixel data specifically (ie as hex) as we did on Allegro 4 with 'line'..? That may give me some idea what's going on (possibly).
I'm suspicious that Allegro thinks the display is in a different RGB(A) format to what it actually is..
but then why does everything get displayed correctly?
Kev
Is there a way of getting to the pixel data specifically (ie as hex) as we did on Allegro 4 with 'line'..? That may give me some idea what's going on (possibly).
Take a look at al_lock_bitmap.
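Roughly like this, as a sketch (the function name and the coordinates here are just placeholders):

#include <allegro5/allegro.h>
#include <stdio.h>

/* Lock the bitmap and print one pixel's raw bytes in hex. */
void dump_pixel_hex(ALLEGRO_BITMAP *bmp, int x, int y)
{
   ALLEGRO_LOCKED_REGION *lr =
      al_lock_bitmap(bmp, ALLEGRO_PIXEL_FORMAT_ANY, ALLEGRO_LOCK_READONLY);
   if (!lr)
      return;

   /* pitch is in bytes (and may be negative); pixel_size is bytes per pixel */
   unsigned char *p = (unsigned char *)lr->data + y * lr->pitch + x * lr->pixel_size;
   for (int i = 0; i < lr->pixel_size; i++)
      printf("%02X ", p[i]);
   printf("(format %d)\n", lr->format);

   al_unlock_bitmap(bmp);
}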
On the same topic I've found this issue -
al_clear_to_color(al_map_rgba_f(1.0, 1.0, 1.0, 1.0));
col = al_get_pixel(logo, 5, 5);
printf("WHT: %f, %f, %f, %f\n", col.r, col.g, col.b, col.a);

al_clear_to_color(al_map_rgba_f(1.0, 0, 0, 1.0));
col = al_get_pixel(logo, 5, 5);
printf("RED: %f, %f, %f, %f\n", col.r, col.g, col.b, col.a);

al_clear_to_color(al_map_rgba_f(1.0, 1.0, 1.0, 1.0));
col = al_get_pixel(logo, 5, 5);
printf("WHT: %f, %f, %f, %f\n", col.r, col.g, col.b, col.a);

al_clear_to_color(al_map_rgba_f(0, 1.0, 0, 1.0));
col = al_get_pixel(logo, 5, 5);
printf("GRN: %f, %f, %f, %f\n", col.r, col.g, col.b, col.a);

al_clear_to_color(al_map_rgba_f(1.0, 1.0, 1.0, 1.0));
col = al_get_pixel(logo, 5, 5);
printf("WHT: %f, %f, %f, %f\n", col.r, col.g, col.b, col.a);

al_clear_to_color(al_map_rgba_f(0, 0, 0, 1.0));
col = al_get_pixel(logo, 5, 5);
printf("BLK: %f, %f, %f, %f\n", col.r, col.g, col.b, col.a);

al_clear_to_color(al_map_rgba_f(1.0, 1.0, 1.0, 1.0));
col = al_get_pixel(logo, 5, 5);
printf("WHT: %f, %f, %f, %f\n", col.r, col.g, col.b, col.a);

al_clear_to_color(al_map_rgba_f(0, 0, 1.0, 1.0));
col = al_get_pixel(logo, 5, 5);
printf("BLU: %f, %f, %f, %f\n", col.r, col.g, col.b, col.a);

al_clear_to_color(al_map_rgba_f(1.0, 1.0, 1.0, 1.0));
col = al_get_pixel(logo, 5, 5);
printf("WHT: %f, %f, %f, %f\n", col.r, col.g, col.b, col.a);
This outputs:
WHT: 0.870588, 0.082353, 0.188235, 0.462745
RED: 0.000000, 0.000000, 0.000000, 0.000000
WHT: 0.000000, 0.000000, 0.000000, 0.000000
GRN: 0.000000, 0.000000, 0.000000, 0.000000
WHT: 0.000000, 0.000000, 0.000000, 0.000000
BLK: 0.000000, 0.000000, 0.000000, 0.000000
WHT: 0.000000, 0.000000, 0.000000, 0.000000
BLU: 0.000000, 0.000000, 0.000000, 0.000000
WHT: 0.000000, 0.000000, 0.000000, 0.000000
Although what is displayed is correct..
Is there something wrong with READING from an ALLEGRO_BITMAP..
Or am I doing something fundamentally wrong..?
It may not be implemented for GLES at all, maybe not even possible... reading back from a texture is not something you'd usually do. You can use a memory bitmap, then it will always work (you shouldn't draw those of course as it would be extremely slow).
Hm, and just a thought, if you call al_flip_display before al_get_pixel, does that make any difference?
Thanks for the speedy reply Elias.
Just rebuilding Allegro at the moment - will have a try flipping the display in a moment in case that helps/flushes something.
With video bitmaps, are there two copies of the bitmap?
I did also consider, when I draw to the display, drawing to a temporary bitmap as well to see what values that has.
The reason I'm going down this line of investigation is that I'm trying to use al_convert_mask_to_alpha(), btw, and it doesn't work. Maybe for that I need to use a (system) memory bitmap and then copy that to a video bitmap once parsed.
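Something along these lines is what I have in mind (just a sketch; 'load_masked' is a made-up name, and the file and colour are reused from my earlier post):

#include <allegro5/allegro.h>
#include <allegro5/allegro_image.h>

/* Do the mask->alpha conversion on a memory bitmap, then clone it
   into a video bitmap for drawing. */
ALLEGRO_BITMAP *load_masked(const char *file)
{
   al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
   ALLEGRO_BITMAP *mem = al_load_bitmap(file);              // eg "test24.png"
   al_convert_mask_to_alpha(mem, al_map_rgb(255, 0, 255));  // magic pink

   al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP);
   ALLEGRO_BITMAP *vid = al_clone_bitmap(mem);              // copy up to a video bitmap
   al_destroy_bitmap(mem);
   return vid;
}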
EDIT:
Added, before I create the bitmap
al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP)
and it made no difference.
Also tried calling al_flip_display() and again no change.
EDIT2:
Adding ALLEGRO_KEEP_BITMAP_FORMAT to the flags (along with ALLEGRO_MEMORY_BITMAP) actually meant the values read back are correct (ie 1.0 where expected).. but the displayed images are corrupt..!
Is something going wrong with a conversion somewhere...?
Could be some kind of bug in Allegro, but 5.1.9 is a rather old version so might have been fixed since.
I find it funny that 2 steps on a minor, minor version number constitute "a rather old version."
Are you sure your format flags are right? Ie - is col.r really a float? I thought you were supposed to use al_unmap_rgba_f for floats.
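Eg something like this, just to rule it out (a sketch; 'col' being the colour you got back from al_get_pixel):

float r, g, b, a;
al_unmap_rgba_f(col, &r, &g, &b, &a);              /* read the components back as floats */
printf("unmapped: %f, %f, %f, %f\n", r, g, b, a);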
Added, before I create the bitmap
al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP)
and it made no difference.
A memory bitmap means it just is a buffer with RGBA values, so I don't see how this would be possible.
Adding ALLEGRO_KEEP_BITMAP_FORMAT to the flags (along with ALLEGRO_MEMORY_BITMAP) actually meant the values read back are correct (ie 1.0 where expected).. but the displayed images are corrupt..!
Is something going wrong with a conversion somewhere...?
This flag is not actually implemented, so it's impossible for it to make any difference whatsoever.
Make sure you are not mixing 5.1.9 and 5.1.11 libraries and headers; none of the 5.1.x versions are compatible with each other.
This is a fresh install of the O/S - just installed 5.1.11.
It appears memory bitmaps don't work at all..!
I had hoped I could call (where image is a system bitmap)
al_convert_mask_to_alpha(image, al_map_rgb(255,0,255))
and then copy to a video bitmap - no luck.
For example, running ex_rotate and switching to memory bitmaps displays nothing; with video bitmaps it's okay..
EDIT:
Memory bitmaps appear to work - ie if I fill one with magic pink and then convert to alpha, it works as expected (when interrogated with get_pixel).. However, copying that system memory image to a video bitmap (or display) doesn't actually do anything.
FWIW I am using 5.1.9 on iOS and memory bitmaps are working fine (I need them for some pre-game blitting to generate some images, and it's much faster with memory bitmaps to do pixel drawing on them).
Yeah, thanks for that - I have run this code on the RPi (imx6 is also ARM) and an Intel platform with no issues as well, so I don't suspect it's Allegro; I'm just hoping for some workarounds for the problems..
I'm now suspecting that Allegro hasn't detected the bit depth of the display correctly, so I'm investigating that at the moment.
Amongst other things it seems like the alpha channel isn't being utilised during a video bitmap draw..
I think there are 2 or 3 issues confusing the analysis - will stick at it for a while yet.
EDIT:
I have homed in on the fault more..
There appears to be a problem with using al_put_pixel() vs al_draw_pixel().
Changing the blender to ADD/ONE/ZERO and using al_draw_pixel() in al_convert_mask_to_alpha() makes it work.. sort of - I get some pink fringing (it is 255,0,255 and not a 'tone' of pink) but I'll look at that soon.. Maybe it is still an RGB format issue.
Looking through al_put/draw_pixel() shows that 'draw' uses OPENGL and 'put' uses direct memory access..
Question is..
Are there two copies of bitmaps when using OPENGL?
If so, how are they synchronised?
al_put_pixel works by first locking the bitmap and then using memory access to that locked memory.
What appears to be the problem is related to direct memory access to the bitmaps.
Using the OPENGL based 'functions', everything appears to work.
Using al_put_pixel() in al_convert_mask_to_alpha(), it sets the correct bits and I can read them back fine, but when I display the resultant image - nothing has changed..
Swap that to al_draw_pixel(), the image updates.
I'm not at all knowledgeable about how OPENGL works (pipelines, etc?)..
Could it be that the image (texture?) isn't being flowed down to the GPU?
I have had the same sort of issue just loading an image from a file - the resultant image was empty, but if I loaded it twice and used al_draw_bitmap() to overwrite the image, it's fine.
al_put_pixel works by first locking the bitmap and then using memory access to that locked memory.
That sounds insanely slow (which is in line with my experience doing simple pixel rain in a5). I don't remember... is there a way to defer calls and do them in one lock?
Also, is "blit to memory, draw, blit to VRAM" really the only way to do things? Is direct VRAM access completely gone in newer OpenGL/etc?
Actually unless the target bitmap is locked already, al_put_pixel() will individually lock/unlock each pixel.
So locking, batching al_put_pixel() calls and unlocking is preferred (as convert_mask_to_alpha() does).
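Ie something like this, roughly ('bmp' being whatever bitmap you're writing to):

/* One lock around a whole batch of al_put_pixel() calls. */
al_set_target_bitmap(bmp);
al_lock_bitmap(bmp, ALLEGRO_PIXEL_FORMAT_ANY, ALLEGRO_LOCK_READWRITE);
for (int y = 0; y < al_get_bitmap_height(bmp); y++) {
   for (int x = 0; x < al_get_bitmap_width(bmp); x++) {
      al_put_pixel(x, y, al_map_rgba(0, 0, 0, 0));   /* whatever value is needed */
   }
}
al_unlock_bitmap(bmp);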
So..
Are we saying the bitmap that al_put_pixel() directly accesses needs to be 'uploaded' to video memory (gpu?) after being written to..?
Should this be done automatically by the driver?
Is there a way to force this to happen?
Or am I completely off in my understanding (very likely!)
I would assume the solution is to use the primitives addon and draw very small triangles instead of pixels. That way the draws will be hardware accelerated.
Interesting idea!
I'll try that tomorrow..
That sounds insanely slow
It's not really meant to be used outside of memory bitmaps. al_draw_pixel is a little faster, but if you really want a ton of pixels, you use al_draw_prim.
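Something along these lines, roughly (a sketch; the vertex array is assumed to be filled in already):

#include <allegro5/allegro.h>
#include <allegro5/allegro_primitives.h>

/* Push n pixels to the GPU in a single call via the primitives addon
   ('verts' filled in with x, y and color beforehand). */
void draw_pixels(ALLEGRO_VERTEX *verts, int n)
{
   al_draw_prim(verts, NULL, NULL, 0, n, ALLEGRO_PRIM_POINT_LIST);
}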
direct VRAM access
There's a locking API that Allegro wraps. On systems with APUs, it's essentially direct DRAM access.
I feel I almost have a fix for this, but I seem to have hit a snag..
How can I use the al_draw_prim() function to actually write the 'alpha' value rather than use it?
Or is there another function (obviously not al_put_pixel()) that I could use to store all 4 (RGBA) values in the target image..?
I'm turning off the blender with ADD, ONE, ZERO, as I guess it just ignores the alpha parameter otherwise.
Maybe something to do with _al_ogl_update_render_state() .. ??
______________
Okay - code below works as an alternative to al_convert_mask_to_alpha()..
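It's along these lines (a rough sketch rather than the exact code; the function name is just a placeholder):

#include <allegro5/allegro.h>
#include <string.h>

/* Replacement for al_convert_mask_to_alpha(): al_draw_pixel() with an
   ADD/ONE/ZERO blender instead of al_put_pixel(). */
void convert_mask_to_alpha_draw(ALLEGRO_BITMAP *bitmap, ALLEGRO_COLOR mask_color)
{
   ALLEGRO_STATE state;
   al_store_state(&state, ALLEGRO_STATE_TARGET_BITMAP | ALLEGRO_STATE_BLENDER);

   al_set_target_bitmap(bitmap);
   /* Overwrite the destination completely, alpha included. */
   al_set_blender(ALLEGRO_ADD, ALLEGRO_ONE, ALLEGRO_ZERO);

   for (int y = 0; y < al_get_bitmap_height(bitmap); y++) {
      for (int x = 0; x < al_get_bitmap_width(bitmap); x++) {
         ALLEGRO_COLOR c = al_get_pixel(bitmap, x, y);
         if (memcmp(&c, &mask_color, sizeof(ALLEGRO_COLOR)) == 0)
            al_draw_pixel(x + 0.5, y + 0.5, al_map_rgba(0, 0, 0, 0));
      }
   }

   al_restore_state(&state);
}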
However, it's very slow.
In order to speed things up I tried locking/unlocking the bitmap - but it crashes.
Any ideas on how locking/unlocking works?
Does it copy from video to system ram when locked and then copy it back when it unlocks???
UPDATE:
Have sped this up by cloning the 'video' bitmap to a 'system' bitmap and locking that one with ALLEGRO_LOCK_READONLY.
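Ie roughly (a sketch; 'video_bmp' stands in for the bitmap I was locking before):

/* Clone the video bitmap into a memory bitmap and do the read-only
   lock on the copy instead. */
al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
ALLEGRO_BITMAP *copy = al_clone_bitmap(video_bmp);
al_lock_bitmap(copy, ALLEGRO_PIXEL_FORMAT_ANY, ALLEGRO_LOCK_READONLY);

/* ... al_get_pixel() reads come from 'copy'; the al_draw_pixel()
   writes still go to 'video_bmp' ... */

al_unlock_bitmap(copy);
al_destroy_bitmap(copy);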
Still don't understand where the issue is - probably an incompatibility between the driver and Allegro.
I have other bitmaps that are corrupted in other ways, guess I'll have to trawl my way through fixing those interfaces as well.
(Do I need to unlock a bitmap if I'm about to destroy it?).
Have sped this up by cloning the 'video' bitmap to a 'system' bitmap and locking that one with ALLEGRO_LOCK_READONLY.
That makes little sense, because locking a video bitmap will download the texture, and then any drawing on it will be done on the CPU in software like it would with a system bitmap. Cloning does essentially the same thing; it would download the data from the video bitmap to put in the system bitmap. Puzzling.
I'm glad someone else thinks it's weird
Yes, I thought locking/unlocking would do something like that.
All I can guess is that the clone_bitmap() function accesses the driver differently..
It is all quite baffling, I feel some basic functionality is broken somewhere and if I could home in on that, then everything would fall into place.
Problem is, there's not really any documentation I can find (apart from trudging through the source code) that explains how this all works.
If you come up with any ideas that may help track down the issue, I'm keen to try them(!)