I am trying to narrow down why my fps is so low. After messing around, I found that my fps drops in half when calling a function. I then went into that function and commented out everything and kept rerunning my program while putting back each line one at a time. Everything was running at 60 fps... until I added one drawing function:
al_draw_bitmap_region(player.graphic, 0, player.direction, 26, 26, player.x - 13, player.y - 13, 0);
This is just a 26x104 bitmap of my player sprite. This one drawing function cuts my fps from 60 to 30. However, the fps doesn't drop when I draw a primitive instead. Any thoughts?
Make sure you load all your graphics after you create your display otherwise they will be memory bitmaps. That's the major reason a bitmap would draw slowly in A5.
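For reference, the ordering that matters looks something like this (a minimal sketch; the display size and "player.png" filename are placeholders):

```c
#include <allegro5/allegro.h>
#include <allegro5/allegro_image.h>

int main(void)
{
    al_init();
    al_init_image_addon();

    /* Create the display FIRST... */
    ALLEGRO_DISPLAY *display = al_create_display(640, 480);

    /* ...THEN load bitmaps, so they become video (GPU) bitmaps.
       Loading before al_create_display() silently gives you memory
       bitmaps, which draw much slower. */
    ALLEGRO_BITMAP *player = al_load_bitmap("player.png");

    /* ... game loop ... */

    al_destroy_bitmap(player);
    al_destroy_display(display);
    return 0;
}
```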
I create my display before anything else. I pulled the player sprite graphic out of the player struct and declared it in my main function. As expected, that didn't change anything.
To be 100% sure it's not a memory bitmap, try checking al_get_bitmap_flags(player.graphic) & ALLEGRO_VIDEO_BITMAP and making sure the result is nonzero.
Calling al_get_bitmap_flags(playerSprite) & ALLEGRO_VIDEO_BITMAP returned 1. If that is how you check, it seems that it isn't a memory bitmap. To be sure, I called al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP) at the beginning of my program.
Are you vsyncing? The draw operation could just be the straw that breaks the camel's back.
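One quick way to rule that out is to suggest vsync off when creating the display and see whether the cap moves (a sketch; for the ALLEGRO_VSYNC option, 1 requests vsync on, 2 requests it off, and with ALLEGRO_SUGGEST the driver may still override you):

```c
#include <allegro5/allegro.h>

int main(void)
{
    al_init();

    /* Must be set before al_create_display() to take effect. */
    al_set_new_display_option(ALLEGRO_VSYNC, 2, ALLEGRO_SUGGEST);
    ALLEGRO_DISPLAY *display = al_create_display(640, 480);

    /* ... if fps is now uncapped, vsync was the limiter ... */

    al_destroy_display(display);
    return 0;
}
```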
Show code then, main at least.
Here is my declaration and my gameloop
When I call RedrawOverlay(), that is when the fps drops. RedrawOverlay() looks like this:
void RedrawOverlay(ALLEGRO_BITMAP *&overlay, ALLEGRO_BITMAP *&clouds, ALLEGRO_BITMAP *&playerSprite, Player &player, int cloud1x, int cloud2x, Mouse maincurser)
{
    al_set_target_bitmap(overlay);
    al_clear_to_color(al_map_rgba_f(0, 0, 0, 0));
    al_draw_bitmap_region(playerSprite, 0, player.direction, 26, 26, player.x - 13, player.y - 13, 0);
    //al_draw_filled_circle(player.x, player.y, 3, al_map_rgb_f(1, 0, 1));
    //al_draw_bitmap_region(clouds, , , WIDTH, HEIGHT, 0, 0, 0);
    //al_draw_bitmap(clouds, cloud2x, 0, 0);
}
The al_draw_bitmap_region() is what makes the frame drop.
EDIT: After more testing, I found that most of my drawing functions drop the frame rate significantly. However, drawing directly to the screen does not seem to affect it.
Let me guess, the overlay is a memory bitmap? Drawing to memory bitmaps is also relatively slow...
There's got to be something more going on here. Drawing a single sprite should never drop your fps from 60 to 30 if it is hardware accelerated.
Are you sure all your bitmaps are loaded after the display is created? Your overlay could be a memory bitmap, which would explain the slow drawing to it. There's got to be some other explanation.
I don't mind looking at the rest of your source code if you want to post it. Zip files are best.
Okay here is my project folder.
EDIT: This is updated! Look at this one instead.
32 x 125 = 4000
So your overlay texture is 4000 x 4000. That is likely too large a texture for your graphics card to allocate: most cards max out at 2048, or 4096 if you're lucky. This means overlay is probably being created as a memory bitmap. Any reason you need a 4000x4000 overlay?
Well I made an overlay for the hud, player, mobs, and other objects and use environment for things that would not change very often. I made it the same size as environment to avoid some math when drawing. I'll change it and let you know how it goes.
The general paradigm nowadays is just to blast everything through the video card on every frame. And you can usually get away with it too. But I understand your desire to minimize excess drawing, it's only natural.
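A sketch of what "blast everything every frame" looks like for a tile map, batched so it stays cheap (the tile size, map[][] array, camera variables, and the 16-column atlas layout are all made up for illustration; al_hold_bitmap_drawing() batches consecutive draws from the same texture):

```c
/* Redraw only the visible tiles each frame, straight to the backbuffer. */
al_hold_bitmap_drawing(true);   /* batch the draws into one GPU submission */
for (int ty = first_ty; ty <= last_ty; ty++) {
    for (int tx = first_tx; tx <= last_tx; tx++) {
        int id = map[ty][tx];
        al_draw_bitmap_region(tilesheet,
                              (id % 16) * TILE, (id / 16) * TILE, /* source cell in atlas */
                              TILE, TILE,
                              tx * TILE - cam_x, ty * TILE - cam_y, 0);
    }
}
al_hold_bitmap_drawing(false);
```

Clamping first_tx/last_tx to the camera view also avoids wasting time drawing tiles that are off screen.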
I eliminated the giant 4000x4000 bitmaps and recoded the drawing functions to draw the tiles one at a time and now it's running at 60 fps! Thanks!
So your overlay texture is 4000 x 4000. That is likely too large of a texture for your graphics card to allocate.
Though on modern systems (not crappy tablets) it shouldn't be, unless perhaps you're trying to update the entire thing by drawing individual tiles to it (including wasting time drawing off the screen boundaries).
That is, it's a throughput limit, not a hardware specification limitation.
Any card supporting OpenGL 4.1, for example, is required to support at least 16,384 x 16,384 as its maximum texture size. That works out to 16384^2 pixels x 4 bytes = 1 GB if uncompressed! There are some that list 2-D texture size at 64K x 64K! The future is amazing.