Allegro.cc - Online Community

Allegro.cc Forums » Off-Topic Ordeals » Some really obnoxious A5 questions

Credits go to Dustin Dettmer, Edgar Reynaldo, Elias, Evert, Johan Halmén, Kitty Cat, Matt Smith, Milan Mimica, SiegeLord, Thomas Fjellstrom, and Thomas Harte for helping out!
This thread is locked; no one can reply to it.
Some really obnoxious A5 questions
Thomas Fjellstrom
Member #476
June 2000
avatar

Quote:

It's unthinkable to disallow it.

Most games seem to destroy the old window/context and create a new one, due to how Windows treats fullscreen stuff specially.

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

Milan Mimica
Member #3,877
September 2003
avatar

Quote:

I don't get it. If A5 can create a fullscreen or windowed display, and can destroy a fullscreen or windowed display context, then why can't it do both at the same time?

Because it has to preserve bitmaps, which are gone when the display is destroyed.

Quote:

For example in the case of textures, it's a matter of going through all bitmaps and re-creating the texture of each in the new OpenGL context.

You make it sound like it's easy.
Would need to:
1. Convert all video bitmaps to memory bitmaps (easy, there is an API) EDIT: err, wrong, make a copy, not convert
2. Let the display be destroyed, but prevent it from deallocating its bitmaps, while still freeing the resources used by the bitmaps
3. Create a new display
4. Init video bitmaps for the new display
5. Upload the old contents from memory bitmaps

EDIT: there is another way, a bit hacky, which would let us convert the memory bitmap back to a video bitmap, if the memory bitmap was a video bitmap before... The main problem is that the memory bitmap struct is smaller than the video bitmap struct, but we don't have to truncate it (and we don't, actually), so it can expand again...
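The five steps above can be sketched in plain C. The Allegro API for this didn't exist at the time, so the "texture" here is just a stand-in heap buffer plus a display generation counter; the point is only the bookkeeping order, not any real Allegro call:

```c
/* Hypothetical sketch of the five-step recreation dance. All names are
 * made up; a heap buffer stands in for a real GL/D3D texture. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    int display_gen;          /* which display "owns" the texture        */
    unsigned char *texture;   /* video-side copy; NULL when display gone */
    unsigned char *memory;    /* step 1: the preserved memory copy       */
    size_t size;
} Bitmap;

static int current_display_gen = 0;

/* Step 1: copy (not convert) the video contents to a memory buffer. */
void bitmap_backup(Bitmap *b) {
    b->memory = malloc(b->size);
    memcpy(b->memory, b->texture, b->size);
}

/* Step 2: the display goes away; free the video resource but keep the
 * Bitmap struct alive, because user code still holds pointers to it. */
void bitmap_release_video(Bitmap *b) {
    free(b->texture);
    b->texture = NULL;
}

/* Step 3 happens elsewhere: current_display_gen++ for the new display. */

/* Steps 4 and 5: recreate the texture in the new display and upload
 * the old contents from the memory copy. */
void bitmap_restore(Bitmap *b) {
    b->texture = malloc(b->size);
    memcpy(b->texture, b->memory, b->size);
    b->display_gen = current_display_gen;
    free(b->memory);
    b->memory = NULL;
}
```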

Thomas Fjellstrom
Member #476
June 2000
avatar

Quote:

1. Convert all video bitmaps to memory bitmaps (easy, there is an API)

I was under the impression that Allegro keeps a memory cache of any "normal" allegro bitmap regardless of whether it has a texture associated with it.


Milan Mimica
Member #3,877
September 2003
avatar

It doesn't.

Thomas Fjellstrom
Member #476
June 2000
avatar

It should.


Edgar Reynaldo
Member #8,592
May 2007
avatar

Quote:

1. Convert all video bitmaps to memory bitmaps (easy, there is an API) EDIT: err, wrong, make a copy, not convert
2. Let the display be destroyed, but prevent it from deallocating its bitmaps, while still freeing the resources used by the bitmaps

Isn't it the responsibility of the user to make memory bitmap copies of anything they need to preserve? Regarding #2, why does the display need to be prevented from deallocating its bitmaps?

Milan Mimica
Member #3,877
September 2003
avatar

But it gives me a better idea. There is locking, which creates a memory cache. Maybe we can lock a bitmap ... <do everything needed> ... and unlock the cache to another texture.

edit:

Quote:

Isn't it the responsibility of the user to make memory bitmap copies of anything they need to preserve?

I wish.

Quote:

why does the display need to be prevented from deallocating its bitmaps?

Because users are holding pointers to them.

Thomas Fjellstrom
Member #476
June 2000
avatar

I was under the impression that by default all ALLEGRO_BITMAPs were actually set to be a merger of A4's memory and video bitmaps. ALLEGRO_BITMAPs are also supposed to have user-definable usage policies, like "read little, write a lot" or "write none, read a lot", to tell Allegro and the backend how to deal with the bitmap, and those would affect the backend's storage and use of said bitmap. And the user could, if they wanted, force any bitmap to be memory only, but that should be a minority case.
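The usage policies described above could be sketched as hint flags that the backend maps to a storage decision. These names and the mapping are hypothetical (no such API was settled), just to make the idea concrete:

```c
/* Hypothetical usage-hint flags; not part of any real Allegro API. */
typedef enum {
    HINT_READ_OFTEN  = 1 << 0,
    HINT_WRITE_OFTEN = 1 << 1,
    HINT_MEMORY_ONLY = 1 << 2   /* user forces a memory bitmap */
} BitmapHint;

typedef enum { STORE_MEMORY, STORE_VIDEO, STORE_BOTH } Storage;

/* One plausible mapping: CPU-heavy access keeps a memory copy alongside
 * the texture (the A4-style merged bitmap); otherwise the bitmap lives
 * in video memory alone. */
Storage choose_storage(int hints) {
    if (hints & HINT_MEMORY_ONLY)
        return STORE_MEMORY;
    if ((hints & HINT_READ_OFTEN) && (hints & HINT_WRITE_OFTEN))
        return STORE_BOTH;
    return STORE_VIDEO;
}
```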

This was the plan; why was it changed?


Thomas Harte
Member #33
April 2000
avatar

If what Thomas Fjellstrom suggests is not the case, then I would very much suggest that it will need to be if you want to support clean switching in and out of full-screen mode. Otherwise, what will you do if all the things that were video bitmaps before you switched no longer fit in video memory?

In any case, the user shouldn't have to care about stuff like video versus memory bitmaps; they should be driver-level issues and only exist in Allegro 3+ as a quick hack (and in my [EDIT: hindsight-equipped] mind the wrong one — an adaptation of compiled or RLE sprites should have been the means for uploading graphics to video memory) to enable some users to get some video acceleration if they try really hard. And most don't.

Kitty Cat
Member #2,815
October 2002
avatar

Quote:

I meant to say that after a decade and a bit of the 3d APIs competing to do everything they possibly can, I think that they are probably now at their last hurrah and that general purpose languages which, by the way, can also target GPUs will be the future.

Isn't that a bit like saying STL is obsolete because you have libc? The 3D APIs are designed to provide a mechanism for drawing a 3D scene. In the end, you're essentially just going to be drawing textured triangles no matter what you do (until ray-tracing becomes viable, but that's beside the point). Why would you write a "software" renderer to run on hardware, when these 3D APIs already have it done for you? Why would you want to rewrite z-buffer handling, stenciling, texturing, vertex/pixel buffers, etc? Not only that, but they also manage resources (eg. lost textures/surfaces) and memory for you.

Sure, you could basically rewrite a GL driver, or some proprietary 3D API, using CUDA/OpenCL/whatever, but that seems a bit like reinventing the wheel. A general purpose language is just that.. a general purpose coding language. And a 3D API is just that.. an API designed for handling 3D rendering. Why anyone thinks you can get rid of the latter because you have the former is lost on me..

--
"Do not meddle in the affairs of cats, for they are subtle and will pee on your computer." -- Bruce Graham

Thomas Harte
Member #33
April 2000
avatar

Quote:

Isn't that a bit like saying STL is obsolete because you have libc?

No, it's not. I've phrased myself badly at least twice now, so I can see how my argument comes across like that. In the hope of doing better I'm going to try to restate myself — I hope you'll forgive me if it sounds like I'm just persistently restating myself rather than engaging with what you're saying.

Since maybe the GeForce era, OpenGL and DirectX have become mainly abstractions. Their goal is to enable a programmer to efficiently use a GPU from a high-level language. In recent years GPUs have changed a lot. Both of the major APIs have had to be adapted to allow for the new capabilities of GPUs.

In my opinion, the capabilities of GPUs either have or are starting to become so far in advance of mere tools for 3d graphics that OpenGL and DirectX are not going to continue to be the de facto gateways to the GPU. The GPU will become another programmable resource that the OS manages as applications require it. One of its tasks will be 3d rendering and I agree with your point that for as long as people want to draw 3d scenes with polygons, nobody is going to suddenly deprecate the popular APIs for drawing 3d scenes with polygons.

The central topic is Allegro's future use of hardware and whether adding pixel shaders now will be maintainable in the future. My argument is that OpenGL and DirectX are unlikely to undergo any further revolutionary changes such that pixel shaders written now become hard-to-maintain legacy code within their respective DirectX and OpenGL drivers.

I am basing that on my belief that as the GPU slowly wends away from being purely for graphics, exposing its functionality means changes underneath the 3d APIs, not on top of them.

Though it is therefore possible and, according to history, probably even likely that one or both of OpenGL or DirectX will become a harmful abstraction that just obstructs efficient hardware usage at some point in the far flung future, I think that in Allegro terms, that would mean evicting complete drivers from the source, not just bits of drivers.

In other words, I'm using an unnecessarily unfocussed argument to argue that adding pixel shaders to the DirectX and OpenGL drivers is unlikely to make either more likely to become out of date and unmaintainable.

weapon_S
Member #7,859
October 2006
avatar

So are the developers of Allegro aiming to define that general shader API, as Allegro has been a hegemony in the past?
I don't care; this is all way over my head. I'm out :( Before this thread, I thought shaders were only for bumpmapping and the sorts.
So did Evert write those circle routines? I didn't quite catch that.
I would regret it if Allegro dropped support for older PCs without a proper GPU. Imagine a Pong game that needed a GPU... But as I said, I'm only a user.
The small point I was trying to make before is: if there is an al_do_circle that has sub-pixel accuracy, the proc function should receive 2 float arguments, n'est-ce pas?

Matt Smith
Member #783
November 2000

One thing shaders can do that the plain GL API can't always do is mimic the rendering output of another 3D API. I believe it would be possible to allow paletted textures, etc.

Kitty Cat
Member #2,815
October 2002
avatar

Quote:

In other words, I'm using an unnecessarily unfocussed argument to argue that adding pixel shaders to the DirectX and OpenGL drivers is unlikely to make either more likely to become out of date and unmaintainable.

I can agree with that, as long as the pixel shaders are used for "shading" pixels. I don't think I'd want to see Allegro exposing its own shader language, though.. maybe just as a pass-through to the current driver, or using Cg.

Quote:

I was under the impression that by default all ALLEGRO_BITMAPs were actually set to be a merger of A4's memory and video bitmaps.

I was under the impression that it meant it could be either video and/or memory, depending on the backend, format, size, direction of the keyboard, phase of the moon, etc. For OpenGL, there'd be no need to maintain a memory copy because it already does. AFAIK, D3D8+ does too. The hints would just be taken as that.. hints, as to whether it would be faster in RAM or VRAM (and such a position may change depending on /dev/urandom, as long as everything still works).

Quote:

There is locking, which creates a memory cache. Maybe we can lock a bitmap ... <do everything needed> ... and unlock the cache to another texture.

This will prevent the lock from being able to work as a pass-through for D3D's lock. In general, and definitely in D3D's case, you just lock an object to get a pointer, then unlock that object, and the changes to the data are used however they need to be. The unlock is on the object, not the data. You can't lock one object, then unlock another to have the changes affect that one instead.
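A minimal sketch of what "the unlock is on the object" means, in plain C. The names are illustrative and the scratch buffer stands in for the pointer a D3D-style lock hands out; the point is that commit is tied to the locked object, so there is no way to unlock the cached data "into" a different texture:

```c
/* Illustrative per-object lock semantics; not a real API. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    unsigned char pixels[16];
    unsigned char *scratch;   /* non-NULL while locked */
} Texture;

/* Locking hands out a pointer to a working copy of this texture. */
unsigned char *texture_lock(Texture *t) {
    t->scratch = malloc(sizeof t->pixels);
    memcpy(t->scratch, t->pixels, sizeof t->pixels);
    return t->scratch;
}

/* Commit goes back to the object that was locked; the API offers no way
 * to redirect the scratch copy into a different texture. */
void texture_unlock(Texture *t) {
    memcpy(t->pixels, t->scratch, sizeof t->pixels);
    free(t->scratch);
    t->scratch = NULL;
}
```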


Thomas Fjellstrom
Member #476
June 2000
avatar

Quote:

I was under the impression that it meant it could be either video and/or memory, depending on the backend, format, size, direction of the keyboard, phase of the moon, etc

I remember it being a little more well defined than that.

Quote:

For OpenGL, there'd be no need to maintain a memory copy because it already does.

So you don't have to make a copy of the texture data to change it in memory? As it is, it seems you have to call some GL or DX method to fetch the bitmap data (which may actually force a download more often than not), copy that data, make the change, and re-upload. Why not just keep a copy, and upload after it's changed? That way, if the GL or DX context somehow changes or goes away, the bitmap is still there and could easily be reattached to any context at any later time.


Kitty Cat
Member #2,815
October 2002
avatar

Quote:

So you don't have to make a copy of the texture data to change it in memory?

To change it in GL, you have to read it (which admittedly is not too efficient.. bind the texture (slow), get the texture data (possibly converting it, though I'd be surprised if drivers don't keep the system copy in the host format), then write the data). I'd hope GL would know whether or not it's been written to, so as to avoid a download.

But then, changing texture data doesn't need to be fast, just efficient. There are other ways to handle it.. like for dynamic textures (eg. video playback), a combination of a discard flag and scratch memory would yield good performance. Maybe creating a memory copy only if it's locked a lot. Rendering to a texture in GL should use FBOs/pbuffers/aux-buffer-with-copy, and such an action could mark the memory copy as "dirty" which would download ("clean" it) when locked.

This also only applies to GL. D3D's texture locking semantics should automatically handle this for itself, making an Allegro-handled memory copy there even more redundant.
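The dirty-marking idea described above can be sketched like this, with plain buffers standing in for VRAM and for a glGetTexImage-style download (everything here is illustrative, not a real Allegro or GL call):

```c
/* Sketch of lazy download: GPU-side draws mark the memory copy stale,
 * and the next lock "cleans" it by downloading. */
#include <string.h>

typedef struct {
    unsigned char vram[8];   /* stand-in for the texture in video memory */
    unsigned char cache[8];  /* the system-memory copy                   */
    int dirty;               /* cache is stale after a GPU-side draw     */
} CachedTexture;

/* A render-to-texture operation: touch VRAM, mark the cache dirty,
 * but do NOT download anything yet. */
void gpu_draw(CachedTexture *t, unsigned char value) {
    memset(t->vram, value, sizeof t->vram);
    t->dirty = 1;
}

/* Locking for read: download only when someone actually looks. */
unsigned char *texture_lock_read(CachedTexture *t) {
    if (t->dirty) {
        memcpy(t->cache, t->vram, sizeof t->vram);  /* the "download" */
        t->dirty = 0;
    }
    return t->cache;
}
```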


SiegeLord
Member #7,827
October 2006
avatar

Quote:

So are the developers of Allegro aiming to define that general shader API, as Allegro has been a hegemony in the past?
I don't care; this is all way over my head. I'm out :( Before this thread, I thought shaders were only for bumpmapping and the sorts.

I'm not doing it :P The only thing I would suggest having shaders for is the internal mechanism for drawing quadratics and cubics, which cannot be done well with straight-line approximations. Also, shaders are the only way to do proper anti-aliasing. I was not suggesting a generic public API for that. Nevertheless, there's OGRE, which has a generic shader API that somehow works for both DirectX and OpenGL, so it's possible in theory.

Quote:

So did Evert write those circle routines? I didn't quite catch that.

Which ones? I've no idea who wrote the one that is in A4 right now. I wrote the one in the wiki.

Quote:

I would regret it if Allegro dropped support for older PCs without a proper GPU. Imagine a Pong game that needed a GPU... But as I said, I'm only a user.

That is an unfortunate truth right now, but so far the argument was that if you want a software device, you choose A4. I think it's silly myself, but there are not enough devs for another backend, or at least I've been led to believe that.

Quote:

The small point I was trying to make before is: if there is an al_do_circle that has sub-pixel accuracy, the proc function should receive 2 float arguments, n'est-ce pas?

Eh? You could think of the callback for the do_circle as a fragment shader. Fragment shaders get the coordinates of the final fragment (the pixel), and those are necessarily integers, or at least that was my understanding. You would be getting into very deep doo-doo if you just passed the offsetted do_circle's output to the callback; it'd be gaps/overdraw galore.
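To make the integer-coordinate point concrete, here is a do_circle sketch using the midpoint circle algorithm, whose callback receives the final pixel coordinates as ints; al_do_circle's real signature wasn't settled, so this is only illustrative:

```c
/* Midpoint circle: the callback gets integer pixel coordinates, like a
 * fragment shader receiving its fragment position. Illustrative only. */
void do_circle(int cx, int cy, int r,
               void (*proc)(int x, int y, void *data), void *data) {
    int x = r, y = 0, err = 1 - r;
    while (x >= y) {
        /* eight-way symmetry: compute one octant, emit eight pixels */
        proc(cx + x, cy + y, data); proc(cx - x, cy + y, data);
        proc(cx + x, cy - y, data); proc(cx - x, cy - y, data);
        proc(cx + y, cy + x, data); proc(cx - y, cy + x, data);
        proc(cx + y, cy - x, data); proc(cx - y, cy - x, data);
        y++;
        if (err < 0) err += 2 * y + 1;
        else { x--; err += 2 * (y - x) + 1; }
    }
}

/* Tiny usage example: track the horizontal extent a circle touches. */
static int min_x, max_x;
static void track(int x, int y, void *data) {
    (void)y; (void)data;
    if (x < min_x) min_x = x;
    if (x > max_x) max_x = x;
}
```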

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

Evert
Member #794
November 2000
avatar

Quote:

So did Evert write those circle routines?

I didn't write any circle drawing routines that you're likely to find online anywhere. So no, I didn't.

Quote:

That is an unfortunate truth right now, but so far the argument was that if you want a software device, you choose A4. I think it's silly myself, but there are not enough devs for another backend, or at least I've been led to believe that.

Probably right - and besides, one doesn't exist now, but Windows and Linux ports didn't exist when Allegro 3 was released either. There's no reason such a software renderer can't be added once the API has stabilised and things work as they should using the other backends.

And yes, help in that department is welcome, I'm sure. :)

Matt Smith
Member #783
November 2000

CUDA might help here too on NV, as it can apparently share resources with either GL or DX. The GL textures might actually be owned by the CUDA context rather than the GL one.

Just a thought; no proof or code to back it up.

axilmar
Member #1,204
April 2001

allegroNG.h

Next Generation!!!!

Vanneto
Member #8,643
May 2007

Some of my suggestions of the filename:

#include <allegro_ng.h>    // axilmar's suggestion.
#include <allegro_ngx.h>   // With an X, X's mean future.
#include <allegroX.h>      // Without NG, X is just like NG.
#include <allegrou.h>      // This just sounds cool.

Quite frankly, any of these is better than the current allegro5.h. Why? Because it does not contain the 5.

There was a time when a high version number was considered very nice. Why? It meant that the program was advanced. It meant it was ape shit advanced. But as some of you old farts here will remember, these are not the old days.

A high version number means shittiness. Hard to believe, I know. But Dr. Vann Vannerhouer conducted research at the International Institute for Applied Statistics and Supervising (II-ASS).

He presented two software products to 10,000 test subjects. One was Norton Antivirus (version number 16) and the other was Firefox 2.1.5.

Let's clear one thing up: all the subjects were chosen because of their DFR. That's the Dumb Fuck Rating. Why? Well, if the subjects knew anything about computers, naturally they would first ask themselves why an AV program and a browser are being compared. Secondly, anyone with a sound brain prefers Firefox over Norton. I know, I know, not comparable. But hey, people with a DFR over 10.5 don't know that.

If anyone is asking themselves how the DFR is calculated, it's pretty simple actually, but I think Dr. Vannerhouer himself explained it quite well:

Dr. Vannerhouer said:

The DFR rating is calculated by dividing the Stupidity of the subject with his Ignorance. Then the value is multiplied by pi.

If you want to know more about the DFR please contact Dr. Vannerhouer: vann80@gmail.com. (Yes, he is 80 years old. :))

Back to the subject matter at hand. Allegro 5, this is what the topic is actually about, yes? Well, because I'm an on-topic kind of guy, let's continue.

All the subjects had no prior knowledge about the programs. They could only judge them by their version numbers. So, what happened?

9,745 people preferred Norton over Firefox because the higher version number seemed to imply importance/superiority. This was an unexpected result. Quite fascinating, actually. Why did this happen? DFs. Yes. They don't know that the version number doesn't matter.

We do.

What can we conclude? It doesn't matter what the name of the header is. The only people using Allegro 5 will be people that actually know something about technology. Who cares if there is a 5 in there or not?

Well, these are my 2 cents anyway.

In capitalist America bank robs you.

SiegeLord
Member #7,827
October 2006
avatar

I'm confused. You want Allegro 5 to become Allegro 5000?

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

Thomas Fjellstrom
Member #476
June 2000
avatar

Vanneto: Uh, the 5 in there isn't just for show ::) Nor do we actually care what the version number of Allegro is. You did know it took about 8 years to get to Allegro 5 after Allegro 4 was released? It's not like we've been version jumping ::)

I'm not sure why you ranted like that, but it was rather pointless. Allegro's version numbers actually mean something.


Evert
Member #794
November 2000
avatar

This is strictly my personal opinion.

Quote:

#include <allegro_ng.h> // axilmar's suggestion.

I hate it. What does "ng" really mean, anyway? (I know, it's meant to be Next Generation or something like that.) What do you do when you make a new major revision? nng? Calling the new version of foo foo_new or new_foo or something like that not only looks stupid, you've also painted yourself into a corner for when you have something even newer.

Quote:

#include <allegro_ngx.h> // With an X, X's mean future.

Worse. I hate it when people stick an X on something just to make it look cool (note: X11 is version 11 of the X Window system, which is the successor of the W window system, the X in Mac OS X is the version number).

Quote:

#include <allegroX.h> // Without NG, X is just like NG.

Even worse.

Quote:

#include <allegrou.h> // This just sounds cool.

That's just silly. Why not allegroo.h? Then the next version can be allegrooo.h.

I agree the 5 in there looks a bit silly, but it doesn't really matter one way or the other and shouldn't bother anyone.

That said, I do have my favourite letter replacement. You can probably guess (since I keep coming back to this) but that's allegrov.h (or allegroV.h if you want to make it clearer). But I'm not seriously suggesting that as an alternative name.

MiquelFire
Member #3,110
January 2003
avatar

Go with the way Linux libraries are versioned (well, I know they have it at least), in either the file name or the library version, and use allegro2.h, as this is the second version of Allegro's API. Then the next Allegro version that has to break backward compatibility can be allegro3.h.

I know the suggested way to handle libraries is like that somewhere...

---
Febreze (and other air fresheners actually) is just below perfumes/colognes, and that's just below dead skunks in terms of smells that offend my nose.
MiquelFire.red | +Me
Windows 8 is a toned, stylish, polished professional athlete. But it’s wearing clown makeup, and that creates a serious image problem. ~PCWorld Article
