Drawing directly on the display?
nshade
Member #4,372
February 2004

So I'm porting an old DOS game that used memory buffers in an array as video pages. This is not a big deal, and I changed the code like this:

  for (x = 1; x <= num; x++) {
    bufmap[x] = al_create_bitmap(GameWidth, GameHeight);
    if (bufmap[x] == NULL) break;
  }

However, bufmap[0] is the actual physical video memory. I thought I would be sly and do this....

  DISPLAY = al_create_display(GameWidth, GameHeight);
  bufmap[0] = DISPLAY;

You see, the game actually copies graphics to and from the display as well for some of its effects. It also "draws" on the screen in a kind of "pen mode" that I need to figure out.

Every time I try to draw directly on the screen, the program crashes. Is there a way I can access the display directly? If not, what could I use as a workaround?

-----

As an idea, maybe I can make a thread that, every 30th of a second or so, does this....
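
(a rough sketch; fakescreen would be an ALLEGRO_BITMAP* that the rest of the game draws into, and all the names here are just placeholders)

  // In its own thread: every ~1/30 s, copy the virtual screen to the
  // backbuffer and flip it onto the display.
  while (running) {
    al_rest(1.0 / 30.0);
    al_set_target_backbuffer(DISPLAY);
    al_draw_bitmap(fakescreen, 0, 0, 0);
    al_flip_display();
  }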

then do everything on "fakescreen"
I don't know :/

//EDIT//
Wow, I just looked at how to do a thread and I am completely lost. I was hoping just to have a function that auto-executes outside my program loop. Maybe I'm looking at this too hard..

Edgar Reynaldo
Major Reynaldo
May 2007

You're making this way too hard.

So you need triple buffering? Three video pages? That's fine; just draw your current video page on every refresh.

No, you cannot assign an ALLEGRO_BITMAP* the value of an ALLEGRO_DISPLAY* to get access to the video memory. Why do you even think that would work?

In Allegro 5, there is simply no way to draw directly on the physical 'screen'. Whether the driver chose a buffered or a paged setup is its own business, and you have no access to it. You draw to the display's backbuffer (the default render target, also reachable via al_get_backbuffer()), and the only way to 'update' the screen is to call al_flip_display(). You also can't depend on the contents of that buffer: as soon as you flip the display, the contents of the backbuffer are undefined. They could be old pages, they could be the same buffer, they could be anything.

Allegro 5 expects you to completely draw the screen on every update. With a GPU at your disposal, this is easy.

You're welcome to use double buffering or page flipping or triple buffering as you please with Allegro; see al_set_target_bitmap.
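
Something like this is enough to fake video pages, just a sketch where pages[], current, and display are placeholder names for your own buffers and display:

  // Draw the current frame into an off-screen video bitmap ("page"),
  // then copy that page to the backbuffer and flip.
  al_set_target_bitmap(pages[current]);
  /* ... all of the game's drawing calls for this frame go here ... */

  al_set_target_backbuffer(display);
  al_draw_bitmap(pages[current], 0, 0, 0);
  al_flip_display();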

Get the drawing fixed first, then worry about the threading. Allegro 5 has some special caveats when using displays on threads other than main.

nshade
Member #4,372
February 2004

That's a drag. See, under DOS you had direct access to video memory, and the game sometimes used it as "memory" memory. This is the problem. There is no "refresh": old graphics cards just "drew" whatever was in video memory on every hardware refresh, without any input from the application at all. I'm actually trying to port a graphics library that was used by an old commercial game from the 90s and make an Allegro wrapper for it. This way I don't have to change much of the game when I port it.

The game arbitrarily draws on the buffers and the main screen whenever it wants. Its version of the pageflip command takes an argument and swaps the screen with whatever buffer you pass it (the buffer goes onto the screen and the screen goes into the buffer).

OK, this means I'll have to maintain a virtual screen on a bitmap, and every time it's "touched" I will have to copy that to the backbuffer and then flip that to the display. This way I can blit to and from the virtual display.
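
Roughly, a pageflip shim along these lines (just a sketch; virtual_screen would be whatever bitmap ends up standing in for bufmap[0]):

  // Old semantics: the buffer goes onto the screen and the screen goes
  // into the buffer, then the result is shown.
  void pageflip(int n) {
    ALLEGRO_BITMAP *tmp = virtual_screen;
    virtual_screen = bufmap[n];
    bufmap[n] = tmp;

    al_set_target_backbuffer(DISPLAY);
    al_draw_bitmap(virtual_screen, 0, 0, 0);
    al_flip_display();
  }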

The original game ran at 4.77 MHz. I think I can take the hit on the overhead (haha)

It's a little more than triple buffering; it's literally unprotected memory copies. It's not that it's too complex. In this game's mind, everything should be readable and writable everywhere... it's Allegro making it too complex. However, this game was written before protected memory, so the display code could use an update anyway :)

I was hoping to just shim in a one- or two-line Allegro equivalent for every command in the library. So much for being lazy :P

Chris Katko
Member #1,881
January 2002

nshade said:

OK, this means I'll have to maintain a virtual screen on a bitmap, and every time it's "touched" I will have to copy that to the backbuffer and then flip that to the display. This way I can blit to and from the virtual display.

Like you said, modern computers have no problem doing just that. (*) Also, it's not arbitrary. 99% of games wait for VSYNC. So tap into that and update. And for the few that don't, allow the "poor man's VSYNC" mode where you track every single update.

(*) Even in the late 90's, OpenGL games were rendering entire worlds onto textures (a "virtual screen") and then drawing that texture in the world. Drawing to a texture (from VRAM stored textures) is almost free.
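
For reference, asking Allegro 5 for vsync is one display option, plus a fixed-rate timer as the fallback; a minimal sketch (the driver is free to ignore the suggestion, and the names are placeholders):

  // Request vsync before creating the display; ALLEGRO_SUGGEST lets the
  // driver say no.
  al_set_new_display_option(ALLEGRO_VSYNC, 1, ALLEGRO_SUGGEST);
  ALLEGRO_DISPLAY *display = al_create_display(GameWidth, GameHeight);

  // "Poor man's VSYNC": redraw on a fixed-rate timer instead.
  ALLEGRO_TIMER *timer = al_create_timer(1.0 / 60.0);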

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

nshade
Member #4,372
February 2004

The game is very heavily GUI-based (it's like the Star Trek GUI, and not with the typical controls you would expect), and it's the controls themselves that sometimes directly update the video memory in their little clipped area. Also, a function will sometimes arbitrarily flip buffers onto the display and back off again. It also reads pixel values from the screen. It's strange. My "virtual" bitmap display is the best bet. Looking at the original code, the functions themselves actually test whether they are working on video memory or not.

There is a "master" loop (that I have not implemented yet) that runs on a BIOS 1Ch timer. Right now I'm getting the basic functions down, so I kinda rubber-ducky'd you guys to come up with a solution.

When the master loop is running, I'll revisit.
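
(For when I get there: the 1Ch tick fires at roughly 18.2 Hz, so an Allegro timer at that rate should be able to stand in for it. A rough sketch:)

  // The BIOS 1Ch tick runs at ~18.2065 Hz; an Allegro timer event source
  // at the same rate can drive the master loop instead.
  ALLEGRO_TIMER *master_timer = al_create_timer(1.0 / 18.2065);
  ALLEGRO_EVENT_QUEUE *queue = al_create_event_queue();
  al_register_event_source(queue, al_get_timer_event_source(master_timer));
  al_start_timer(master_timer);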

Edgar Reynaldo
Major Reynaldo
May 2007

It's about more than just reading and writing memory arbitrarily. The problem is that you have to upload from the CPU and download back from the GPU when you do things like that. This memory resides on the graphics card, not in the CPU or some special graphics register.
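
In Allegro 5 terms, that round trip is what bitmap locking is; a rough sketch (bmp being whatever video bitmap you're poking at):

  // Locking a video bitmap for read/write downloads its pixels from the
  // GPU; unlocking uploads them again. Both trips cross the bus.
  ALLEGRO_LOCKED_REGION *lr =
      al_lock_bitmap(bmp, ALLEGRO_PIXEL_FORMAT_ANY, ALLEGRO_LOCK_READWRITE);
  /* ... read/write pixels through lr->data and lr->pitch ... */
  al_unlock_bitmap(bmp);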

Chris Katko
Member #1,881
January 2002

GPU + CPU is going to be faster than any classic CPU-only program 99.9% of the time. So I wouldn't worry about speed until it's actually a problem. GPUs are insanely fast... even when run incorrectly. But I would focus on eliminating as many "incorrect" styles as possible.

- First, I'd simply port it over, line for line as a baseline. No hard fixes. Just one-liner conversions. Leave all the pixel operations.

Then start chopping away at "bad" things like:

- Avoid per-pixel operations. When possible, swap getpixel and putpixel with draw calls like lines, circles, or full bitmaps (draw the whole GUI box once and cache it in a bitmap instead of redrawing it from scratch). But failing that, even al_draw_pixel can be extremely fast. It's a bottleneck, but that neck may be more than big enough to fit all the pixels you want on any GPU made since the year 2000.

- Avoid memory bitmaps. You "can" do per-pixel oldschool operations on bitmaps by storing them in RAM and then when you're "done" you send the whole thing over to VRAM. That's what locking a bitmap does.

GPUs love drawing textures -> textures (VRAM to VRAM). They can alpha blend for almost free. They can draw lines, circles, and polygons for almost free. You want fewer, larger "macro" operations instead of many tiny pixel operations, because every pixel has to go across the bus and the bus is a HUGE bottleneck. Meanwhile (generalizing), "I want a red pixel at 5,5" is about as expensive on the bus as "I want this 4K texture drawn, scaled, and blended at 5,5." It's way better for the bus to say "Draw picture #13." than "Draw the following colors 1,12,23,4,25,35, ..., [1 MB later]"

So when possible, it's way better to store every possible variation of a "thing" finished and rendered into many textures (=tons of KB of VRAM) than to "build" that thing every frame by drawing the little pieces pixel-by-pixel.

If you post some example pictures of your program, we can probably give some more specific insight into "this can easily be cached this way" and "this GUI can easily be drawn fast this way".

Oh, and if you have many "memory buffers", those can become textures. So you draw onto those textures, and then draw the textures to the screen, just like before. It'll be slower than ideal, but it's still possible to keep the code working as it was designed that way. Then you can move toward optimizing it.
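
As a sketch of the caching idea above (the box_* names and display are placeholders, and the rectangle calls come from the primitives addon):

  // Render the GUI box once into a video bitmap...
  ALLEGRO_BITMAP *gui_box = al_create_bitmap(box_w, box_h);
  al_set_target_bitmap(gui_box);
  al_clear_to_color(al_map_rgb(0, 0, 0));
  al_draw_filled_rectangle(0, 0, box_w, box_h, al_map_rgb(40, 40, 80));
  al_draw_rectangle(0.5, 0.5, box_w - 0.5, box_h - 0.5,
                    al_map_rgb(200, 200, 255), 1.0);

  // ...then each frame it's one cheap VRAM-to-VRAM draw.
  al_set_target_backbuffer(display);
  al_draw_bitmap(gui_box, box_x, box_y, 0);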
