Some really obnoxious A5 questions
weapon_S

OK, I know Allegro 5 is all the rage now, but I still want to make another thread about it.
allegro5/allegro5.h
Will it stay like this i.e. allegro5.h?
What's up with the "drawing primitives"? No do_circle() ?
Will Allegro now only work through system-dependent APIs and OpenGL (internally)?
Do I have to learn OpenGL syntax? (Now known as Allegro syntax ??? :P )
If I have to download extended functionality separately (like file formats), how well does it integrate into the core Allegro API? Or am I mistaken and are all add-ons only invoked by including their separate headers/libraries? (How modular)
Will I be able to install functionality later on? Or will that be just reinstalling Allegro?
What's going to happen to the giftware "license"? (Especially with the add-ons)?
I hope I'm not being too obnoxious...for people to answer. Please? :-*
I'm considering sticking with 4... forever.

Johan Halmén
Quote:

I'm considering sticking with 4... forever.

I might do that, too. I don't have the whole picture of Allegro 5 and what Allegro will be in the future. Will it dissolve into a bunch of add-on libraries and add-on data structures? A4 is still something I can see as a coherent library, despite some add-ons I use. BMP and TIFF are there, WAV and MIDI are there, the GUI is there.

Thomas Harte

I offer the following answers as someone who has been following Allegro 5; I believe them to be correct, but I'm hardly authoritative... and I have no idea about your final two questions (sorry!)...

Quote:

What's up with the "drawing primitives"? No do_circle() ?

I believe SiegeLord is working on geometric primitives, the plan being that they'll be there, they just aren't yet.

Quote:

Will Allegro now only work through system-dependent APIs and OpenGL (internally)?

I think I must misunderstand the question; with the exception of the DOS port, which is not currently maintained, Allegro has always worked only through system-dependent APIs. But I think it will farm all its graphics chores out to OpenGL or DirectX as appropriate wherever it can, though I understand that software versions of all routines are being prepared, so I guess it'll support pure framebuffer targets too.

Quote:

Do I have to learn OpenGL syntax? (Now known as Allegro syntax)

I believe you won't, as Allegro will function where OpenGL doesn't exist. Some of the layout of the Allegro API may quite obviously be designed around the way it wants to use video hardware internally, but you'll still be using "the Allegro API" (albeit not the Allegro 2.0 — 4.x API).

Milan Mimica
Quote:

Do I have to learn OpenGL syntax? (Now known as Allegro syntax)

No. I assume that by "syntax" you mean API. No. Allegro has its own graphics API. It just uses OpenGL internally sometimes. You don't have to care about that. Again, no.

Quote:

If I have to download extended functionality separately (like file formats),

... you don't. It's all in the same package.

Evert
Quote:

OK, I know Allegro 5 is all the rage now, but I still want to make another thread about it.

At the risk of sounding like a broken record, Allegro 5 is not done yet. You cannot and should not judge it on the basis of an incomplete WIP release. Do not call 4.9.5 Allegro 5. It is not Allegro 5 (yet).

Quote:

Will it stay like this i.e. allegro5.h?

Something like that anyway, to avoid conflicts with Allegro 4's allegro.h.
I agree that the 5 tacked onto it looks a bit goofy, especially if we ever want to have a 6 that's compatible with 5. Feel free to suggest an alternative.

Quote:

What's up with the "drawing primitives"? No do_circle() ?

yet. For do_circle(), the A4 function could actually be pretty much copied verbatim. Feel free to help out here.

Quote:

Will Allegro now only work through system-dependent APIs and OpenGL (internally)?

Allegro will use platform-dependent APIs internally, yes. I must admit I don't see the point you're trying to make, though; it's like that in A4 as well. How else do you expect to get DirectX on Windows and X11 on Linux?

Quote:

Do I have to learn OpenGL syntax?

If you intend to do 3D stuff that's probably a good idea. Other than that, no.

Quote:

If I have to download extended functionality separately (like file formats),

Who says you have to do that?
Loaders are all addons, meaning they are independent modules. Modularity is good. But you get the modules providing basic functionality along with the core library.
In A4 you have to jump through hoops (or at least install addons yourself) to get PNG and Ogg support. In A5 you get those out of the box. So in fact you have to download fewer addons.

Quote:

Or am I mistaken and are all add-ons only invoked by including their separate headers/libraries? (How modular)

Yes.

Quote:

Will I be able to install functionality later on?

I'm not sure what you mean. Do you mean "can I leave out the image loader and install it later"? Yes, you can, but I don't see why you would.

Quote:

Or will that be just reinstalling Allegro?

At the moment, you run cmake and make again to build the skipped addon, which will only compile said addon and not the rest of Allegro.
When A5 is out and we have binary packages up for download - who knows? Maybe you can pick and choose which components you want. I still don't see why you'd leave out one of the default addons.

Quote:

What's going to happen to the giftware "license"? (Especially with the add-ons)?

I don't think anything happens to the giftware license. As far as I know A5 and its bundled addons are all giftware. People providing their own addons can pick whatever license they want, just as they can with A4.

SiegeLord
Quote:

What's up with the "drawing primitives"? No do_circle() ?

Err, pretty much everything I was planning on making myself is outlined here: Linky. There is still no do_circle because that is beyond my meager mathematical skills to implement properly (although, there is this which, unfortunately, does not work too well). So yeah, help appreciated ;D.

weapon_S

[obligatory but sincere sucking up] You guys rock! :) You (almost) answered all my questions. ;D

Evert said:

Allegro 5 is not done yet.

Ah,... yes,... my apologies.

Thomas Harte said:

I understand that software versions of all routines are being prepared

That's what I meant to ask with my silly "platform-dependent APIs internally".

Milan Mimica said:

No. I assume that by "syntax" you mean API.

Big duhhh on my part :-[
So some 3D things will be directly via OpenGL. Hm...
As I realize now, I'm very much confused about the add-on thingy. So this is what I guess now:
Allegro 5 will have (slightly?) more "out of the box" functionality, but also incorporates a structure that allows for easy extensions. So will Allegro have all the libraries listed here "OTB" (Out of The Box???)? And 4D graphics? :P (I really think it should have 4D graphics for the aforementioned reasons). More seriously, speech synthesis? I heard somebody mention it... I think.

Evert said:

Feel free to suggest an alternative.
...
For do_circle(), the A4 function could actually be pretty much copied verbatim. Feel free to help out here.

allegroN.h? And allegroO.h for the old one?
(That's pretty much how I manage all my code :-/) But don't listen to me if you want anything maintainable. Sooooo, about the drawing primitives. I only see those vertex things now in the primitives. So will al_circle() be invoked through that ??? Vertex circle ? AL_VERTEX2D al_do_circle()??? Or is it another function? Should it incorporate sub-pixel accuracy as well? I'd like to help out, but currently I'm doing my "developing" on an old PC with w95. (Works pretty well actually: ConTEXT, MinGW, DJGPP, Project Dogwaffle. Yes, I'm upgrading "soon" ;D)
But more importantly installing a WIP version is a pain in the lower back.
Yes, I'm confused about the drawing primitives too :P
[edit]SiegeLord posted...before me
[edit]So I actually can help out...Sweet. Just know: I'm not reliable.
I thought a vertex was something 3D:-X It's a pixel.
May I suggest (and make;D perhaps:-[):

void al_do_circle(BITMAP *bmp, float x, float y, float radius, int d, void (*proc)(BITMAP *, float, float, int));
void al_do_circle(BITMAP *bmp, int x, int y, int radius, int d, void (*proc)(BITMAP *, int, int, int));
void al_do_fast_circle(BITMAP *bmp, float x, float y, float radius, int d, void (*proc)(BITMAP *, float, float, int));
void al_do_fast_circle(BITMAP *bmp, int x, int y, int radius, int d, void (*proc)(BITMAP *, int, int, int));

(Yes, nitpicking.)
And of course a circle drawing version too. (See SiegeLord's links.)
I don't get al_draw_primitive_2d. And where did SiegeLord make this page come from? I didn't see any links.
[edit]Damnit! Where's Weapon S 2.0 with mutex support?
[edit] realized I made some mistakes in my "suggestions"

SiegeLord
Quote:

al_circle() be invoked through that ??? Vertex circle ? AL_VERTEX2D al_do_circle()??? Or is it another function? Should it incorporate sub-pixel accuracy as well?

At this particular time, everything is a polygon or a polyline, so al_draw_circle would be a many-sided polygon. In theory, it is possible to create something that would be subpixel accurate, and I did link to one of my abortive attempts at that (although I challenge you to find the difference between a very many-sided polygon and a true circle), but I do not have the concentration/skill/time to do them at this time. Trust me, having a polygonal approximation is better than having nothing at all.

Thomas Harte

It would be relatively trivial to do a pixel-perfect on-GPU circle drawer using a pixel shader. So: is there currently any shadery stuff in Allegro 4.9 at all (for uploading them, managing them, ensuring they survive context changes or video memory flushes if necessary, depending on OS, etc, etc) and, furthermore, would a path that relied not just on a particular API but also a particular class of hardware being available integrate nicely or be acceptable?

SiegeLord
Quote:

It would be relatively trivial to do a pixel-perfect on-GPU circle drawer using a pixel shader. So: is there currently any shadery stuff in Allegro 4.9 at all (for uploading them, managing them, ensuring they survive context changes or video memory flushes if necessary, depending on OS, etc, etc) and, furthermore, would a path that relied not just on a particular API but also a particular class of hardware being available integrate nicely or be acceptable?

Here are my own thoughts, as I have not seen the Allegro devs proper discuss this before. Any and all operations that Allegro does seem to have a software backup, which makes writing in new graphical features a little difficult at times. I agree that with shaders, pretty much anything can be done. My own thoughts on this matter basically ended up at this conclusion: there will be a second primitives addon that will be shader based. It will allow for pixel-perfect primitives, as well as anti-aliasing. Somehow, it will have shaders for DX, OpenGL and software. For hardware without shaders, software shaders will be used, operating on memory bitmaps that are transferred back and forth between RAM and VRAM (might not be an issue with AMD Fusion and such).

weapon_S: What's the difference between fast/non-fast versions of those functions?

Quote:

And where did SiegeLord make this page come from? I didn't see any links.

From my mind. Nobody does these things anymore, and perhaps never did, so I just made it from scratch.

Quote:

I don't get al_draw_primitive_2d.

It'll probably make more sense once I actually make it, and then make a usage example.

Evert
Quote:

There is still no do_circle because that is beyond my meager mathematical skills to implement properly

For do_circle or do_circle-like functionality, you can just copy A4's function. It's not tied to the A4 API in any way except maybe the argument list of the callback function. However, whether A4's method for drawing a circle outline is the best way to do it in A5 is perhaps another question.
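
For reference, a do_circle-style walker in that spirit is short: this is the classic midpoint circle algorithm with a caller-supplied callback, not A4's actual code, and the plot_proc signature here is invented for the sketch.

```c
/* Callback type for the sketch; A4's real do_circle passes a BITMAP*
   instead of a generic data pointer. */
typedef void (*plot_proc)(void *data, int x, int y);

/* Example callback: just count the plotted points. */
static void count_point(void *data, int x, int y)
{
    (void)x; (void)y;
    ++*(int *)data;
}

/* Midpoint circle algorithm: walks one octant with integer arithmetic
   and calls 'proc' for the eight symmetric points of each step. */
static void do_circle_sketch(void *data, int cx, int cy, int radius,
                             plot_proc proc)
{
    int x = 0, y = radius;
    int d = 1 - radius;   /* decision variable: inside or outside? */
    while (x <= y) {
        proc(data, cx + x, cy + y); proc(data, cx - x, cy + y);
        proc(data, cx + x, cy - y); proc(data, cx - x, cy - y);
        proc(data, cx + y, cy + x); proc(data, cx - y, cy + x);
        proc(data, cx + y, cy - x); proc(data, cx - y, cy - x);
        if (d < 0)
            d += 2 * x + 3;
        else {
            d += 2 * (x - y) + 5;
            y--;
        }
        x++;
    }
}
```

As Evert notes, nothing here is tied to the A4 API except the shape of the callback, and the integer arguments are the part A5 would presumably want to generalise.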

Quote:

So this is what I guess now:
Allegro 5 will have (slightly?) more "out of the box" functionality, but also incorporates a structure that allows for easy extensions.

That's right.

Quote:

So will allegro have all the libraries listed here "OTB"(Out of The Box???) ?

It will have those you can take seriously. Of course it does not have speech synthesis, PHP, 4D graphics or /dev/null compression. It also doesn't have console ports at the moment (it's quite enough work getting it running on Windows, Linux and OS X at the moment).
I know I ran down that list in another post a couple of weeks ago, so use the forum search to look it up.

Quote:

So will al_circle() be invoked through that ??? Vertex circle ?

I hope not. Polygons really suck when you compare them to proper circles.
Doing the outline (using a do_circle() like function) is easy. The problem is doing a filled circle. The best way to do that, as far as I know, is along the lines of filling in the bounded square and then fill in the remaining arcs with increasingly small triangles, until your triangles become as thin as a single line of pixels. I don't know how fast or slow that will be, but I can imagine it taking up an inordinate amount of time filling in the region close to the boundary. Alternatively, a higher-order polygon can be used and the remaining sections filled in in the same way. I would need to do some more reading on the subject (using triangle fans is not the proper way to go, apparently, before someone brings that up).

Quote:

although I challenge you to find the difference between a very many sided polygon and a true circle

The difference is quite obvious in some of the examples I've seen, actually.

Quote:

Trust me, having polygonal approximation is better than having nothing at all.

That is a disputable point and whether you agree with it or not depends on what you're trying to do. In general terms, I think having something that does it only half right is worse than not having it at all, because it's of no use to me anyway if I want it done right.
So if you care about having a good approximation to a circle, then no, it's not better. If you want something that maybe looks vaguely like a circle (if you squint), then yes, it's better.

SiegeLord

Well, squinting means having something to compare to :P It's basically a pixel here or there once you have a sufficient number of line segments. That sufficient number is not that large either, up to 1000 segments will take care of most circle sizes, especially if I implement clipping of circles.

You can't just use Allegro's do_circle because it takes integer arguments. A5 seems to have a preference for floating-point arguments in its functions, so having one of its primitive drawers ignore the floating part would be bad.

Quote:

using triangle fans is not the proper way to go, apparently, before someone brings that up

People say that, but never provide a better method :P

Dustin Dettmer
Evert said:

Of course it does not have speech synthesis, PHP, 4D graphics or /dev/null compression. It also doesn't have console ports at the moment (it's quite enough work getting it running on Windows, Linux and OS X at the moment).

Damn, I was really pushing for PHP.

A more serious problem: Now that A5 is actually coming out, what version will be the magical one with Mind Control and PHP?

Evert
Quote:

You can't just use the allegro's do_circle because it takes integer arguments. A5 seems to have a preference towards floating point arguments for its functions, so having one of its primitive drawers ignoring the floating part would be bad.

I think you can generalise it to work with floats, but I must admit I haven't tried.

Quote:

That sufficient number is not that large either, up to 1000 segments will take care of most circle sizes

Yes, and if you come across one of those where it doesn't?
Well, you probably weren't going to draw a circle with a 3-pixel radius using 1000 line segments anyway.

Quote:

People say that, but never provide a better method :P

I just did.
I can't take the credit for thinking of it though (Bob posted it in a previous discussion on the topic, and he also explains why triangle fans are not the way to go).

SiegeLord
Quote:

I think you can generalise it to work with floats, but I must admit I haven't tried.

I'm sure you can. The thing I posted is the result of months of my trying to generalize it before I gave up.

Quote:

I just did.
I can't take the credit for thinking of it though (Bob posted it in a previous discussion on the topic, and he also explains why triangle fans are not the way to go).

Meh, until I see some benchmarks, I am not going to believe that. All of these methods have the same number of polygons anyway, so I hardly think there'll be any difference. In fact, drawing scanlines via GL_LINES manually was faster than polygons at a certain point anyway.

Milan Mimica
Quote:

So: is there currently any shadery stuff in Allegro 4.9 at all (for uploading them, managing them, ensuring they survive context changes or video memory flushes if necessary, depending on OS, etc, etc)

No, there isn't any of this. I thought that shaders are part of the OpenGL state, and state is built to stay.

Quote:

and, furthermore, would a path that relied not just on a particular API but also a particular class of hardware being available integrate nicely or be acceptable?

I don't know. Separate shader rendering paths for a particular hardware class, even a particular vendor, scare me. I imagine we could have two or three different shader programs that do the same thing. Sounds much worse than the ASM code in A4. But then, one is always afraid of things he doesn't know.

Looking to the future, I think it would be a better idea to start thinking of a next-generation OpenGL backend for A5, which would not even try to work on old hardware.

Matt Smith
Quote:

what version will be the magical one with Mind Control

If we wait too long, it won't be a joke anymore.

Quote:

I don't know. Separate shader rendering paths for a particular hardware class, even a particular vendor, scare me. I imagine we could have two or three different shader programs that do the same thing. Sounds much worse than the ASM code in A4. But then, one is always afraid of things he doesn't know.

The ASM code was, in its time, a great part of the performance of Allegro 3.x. Getting rid of it was just clearing the decks for this decade's neat trick, which is shaders. It's a complication we should accept, IMHO, because the benefits are potentially so large. I'm not offering to write any of this shader code, but I have the CUDA SDK & Driver installed ready to test :)

Thomas Harte
Quote:

No, there isn't any of this. I thought that shaders are part of the OpenGL state, and state is built to stay.

Right, but my understanding was that using OpenGL with the Windows API is a painful process that often necessitates destroying your current rendering context and creating a new one if you want to do much more than have a fixed-size window that stays on one monitor. In DirectDraw (at least when I used it), you couldn't even assume that anything you had in video memory would stay in place from one blit to the next. I guess that's all part of trying to tack something persistent onto a (pre-Vista) inherently nonpersistent windowing model.

Quote:

I don't know. Separate shader rendering paths for a particular hardware class, even a particular vendor, scare me.

If we start with GLSL then we get every OpenGL vendor with a reasonably recent card (last three or four years) and can do everything that I understand the Allegro API to be likely to require. I guess DirectX people can do whatever they need to for that driver.

As Matt implies, we're probably now at the stage in computer development where OS-level graphics-specific APIs are on the way out, to be replaced with much more general languages that, by the way, can also use their many-threaded output to plot colours to a pixel based display. But I don't think Allegro needs to care.

Edgar Reynaldo
Quote:

As Matt implies, we're probably now at the stage in computer development where OS-level graphics-specific APIs are on the way out, to be replaced with much more general languages that, by the way, can also use their many-threaded output to plot colours to a pixel based display. But I don't think Allegro needs to care.

Isn't that precisely the reason A5 should care about these things? Is compatibility with equipment that is becoming obsolete a part of the A5 mission statement? Don't we want A5 to stick around well into the future without being bogged down on future development because it's forced to be compatible with older than recent hardware and OS'es?

Thomas Harte
Quote:

Isn't that precisely the reason A5 should care about these things? Is compatibility with equipment that is becoming obsolete a part of the A5 mission statement? Don't we want A5 to stick around well into the future without being bogged down on future development because it's forced to be compatible with older than recent hardware and OS'es?

I'm sorry, I think I was unclear. I meant to say that after a decade and a bit of the 3d APIs competing to do everything they possibly can, I think that they are probably now at their last hurrah and that general purpose languages which, by the way, can also target GPUs will be the future. The CPU manufacturers need to push the world to greater parallelism anyway, so it all ends up tying together neatly.

The reason I raised this was to support my never really stated argument that supporting the OpenGL & DirectX shading languages of the last few years is probably the sum total of the shading language support that Allegro will ever need. I don't think the graphics-specific APIs are necessarily going much further (though how the branding will transfer, I don't know) and I think the cross-vendor shading languages have been sufficient for anything Allegro will want to do for many years.

If, one day, OpenGL and DirectX are about as well supported as QuickDraw is now under OS X or GDI under Vista then of course Allegro should move on, but it'll be a case of writing a new system target.

Incorporating shading language stuff into Allegro should be a target-specific task that won't escalate up and infect the whole library with complexity as the x86 assembler did in the past. It also shouldn't add too much complexity to each specific target (at least, it wouldn't for OpenGL; I left the DirectX fold long before pixel shaders came along but I'd be surprised if it was much more complicated there).

EDIT: sorry, I know I got a bit into "futurology seems a bit useless now but in 50 years it'll be the single most important profession", key points were:

  • support for shaders would be something that lives down in the bowels of the target specific code;

  • there's not much reason to support more than one set of shaders per target at the minute; and

  • I can't see that they'll need that much work to be kept up to date until such time as entire new targets need adding.

I should also add that they're all written in quite high-level languages now, so would be much easier to maintain than ye olde x86 assembly on that front too.

Milan Mimica
Quote:

Right, but my understanding was that using OpenGL with the Windows API is a painful process that often necessitates destroying your current rendering context and creating a new one if you want to do much more than have a fixed-size window that stays on one monitor.

That's only when you switch from fullscreen to windowed and vice-versa. I think Allegro should not even allow that, but it does ATM and it is broken on all platforms I've tried. We'd better drop that.

Even now, without changing anything, al_create_display() can select the appropriate OpenGL backend. It's just that there is only one.

Quote:

there's not much reason to support more than one set of shaders per target at the minute

Good. There must be someone who will say NO when someone else comes saying "hey, this circle drawing shader could be implemented 5% more efficiently using NV specifics!".

Thomas Fjellstrom
Quote:

it is broken on all platforms I've tried. We better drop that.

It just isn't implemented on X atm. In X, there is no "full screen"; all windows are equal. Just create a window that's the size of the screen and move it to the right place.

Edgar Reynaldo
Quote:

That's only when you switch from fullscreen to windowed and vice-versa. I think allegro should not even allow that, but it does ATM and it is broken on all platforms I've tried. We better drop that.

I don't get it. If A5 can create a fullscreen or windowed display, and can destroy a fullscreen or windowed display context, then why can't it do both at the same time?

Matt Smith
Quote:

...when you switch from fullscreen to windowed and vice-versa. I think allegro should not even allow that,

This is an essential feature of practically any modern PC game. It's unthinkable to disallow it.

Elias
Quote:

This is an essential feature of practically any modern PC game. It's unthinkable to disallow it.

Yes, we absolutely need to support it. However, we could destroy the OpenGL state in the process. Then users simply would have to "re-upload" all shaders themselves. In case the users are also using Allegro's OpenGL driver, and Allegro itself created textures or FBOs (or with that shader-primitives addon also shaders) - Allegro of course must take care to restore them after the switch. For example in the case of textures, it's a matter of going through all bitmaps and re-creating the texture of each in the new OpenGL context.
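
A toy model of that restore loop, with the GL calls replaced by stubs; the Bitmap struct, upload_texture and the list walk are all invented for illustration, not the real A5 internals.

```c
#include <stddef.h>

enum { W = 4, H = 4 };

/* Toy bitmap: a memory copy of the pixels (which survives the
   context switch) plus a per-context texture handle (which doesn't). */
typedef struct Bitmap {
    unsigned char pixels[W * H];
    unsigned int texture;         /* 0 = no texture in this context */
    struct Bitmap *next;
} Bitmap;

static unsigned int next_texture = 1;

/* Stand-in for glGenTextures + glTexImage2D in the new context. */
static unsigned int upload_texture(const unsigned char *pixels)
{
    (void)pixels;
    return next_texture++;
}

/* On context destruction: the old handles are stale, drop them. */
static void invalidate_all(Bitmap *list)
{
    for (Bitmap *b = list; b; b = b->next)
        b->texture = 0;
}

/* In the new context: re-upload every bitmap from its memory copy. */
static void restore_all(Bitmap *list)
{
    for (Bitmap *b = list; b; b = b->next)
        b->texture = upload_texture(b->pixels);
}
```

The user-facing ALLEGRO_BITMAP pointers never change; only the handles hidden behind them do, which is what makes the switch transparent.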

Thomas Fjellstrom
Quote:

It's unthinkable to disallow it.

Most games seem to destroy the old window/context and create a new one, due to how Windows treats fullscreen stuff specially.

Milan Mimica
Quote:

I don't get it. If A5 can create a fullscreen or windowed display, and can destroy a fullscreen or windowed display context, then why can't it do both at the same time?

Because it has to preserve bitmaps, which are gone when the display is destroyed.

Quote:

For example in the case of textures, it's a matter of going through all bitmaps and re-creating the texture of each in the new OpenGL context.

You make it sound like it's easy.
Would need to:
1. Convert all video bitmaps to memory bitmaps (easy, there is an API) EDIT: err, wrong, make a copy, not convert
2. Let the display be destroyed, but prevent it from deallocating its bitmaps, while still freeing the resources used by the bitmaps
3. Create a new display
4. Init video bitmaps for the new display
5. Upload the old contents from the memory bitmaps

EDIT: there is another way, a bit hacky, which would let us convert the memory bitmap back to a video bitmap, if the memory bitmap was a video bitmap before... The main problem is that the memory bitmap struct is smaller in size than the video bitmap one, but we don't have to truncate it (and we don't actually), so it can expand again...

Thomas Fjellstrom
Quote:

1. Convert all video bitmaps to memory bitmaps (easy, there is an API)

I was under the impression that Allegro keeps a memory cache of any "normal" allegro bitmap regardless if it has a texture associated with it.

Milan Mimica

it doesn't

Thomas Fjellstrom

It should.

Edgar Reynaldo
Quote:

1. Convert all video bitmaps to memory bitmaps (easy, there is an API) EDIT: err, wrong, make a copy, not convert
2. Let the display be destroyed, but prevent it from deallocating its bitmaps, while still freeing the resources used by the bitmaps

Isn't it the responsibility of the user to make memory bitmap copies of anything they need to preserve? Regarding #2, why does the display need to be prevented from deallocating its bitmaps?

Milan Mimica

But it gives me a better idea. There is locking, which creates a memory cache. Maybe we can lock a bitmap ... <do everything needed> ... and unlock the cache to another texture.

edit:

Quote:

Isn't it the responsibility of the user to make memory bitmap copies of anything they need to preserve?

I wish.

Quote:

why does the display need to be prevented from deallocating its bitmaps?

Because users are holding pointers to them.

Thomas Fjellstrom

I was under the impression that by default all ALLEGRO_BITMAPs were actually set to be a merger of A4's memory and video bitmaps. ALLEGRO_BITMAPs are also supposed to have user-definable usage policies to tell Allegro and the backend how to deal with the bitmap, like, say, "read little, write a lot", "write none, read a lot", etc., and that would affect the backend's storage and use of said bitmap. And that the user could, if they wanted, force any bitmap to be memory only, but that should be a minority case.

This was the plan; why was it changed?

Thomas Harte

If what Thomas Fjellstrom suggests is not the case, then I would very much suggest that it will prima facie need to be if you want to support clean switching in and out of full-screen mode. Otherwise, what will you do if all the things that were video bitmaps before you switched state no longer fit in video memory?

In any case, the user shouldn't have to care about stuff like video versus memory bitmaps; they should be driver-level issues and only exist in Allegro 3+ as a quick hack (and in my [EDIT: hindsight-equipped] mind the wrong one — an adaptation of compiled or RLE sprites should have been the means for uploading graphics to video memory) to enable some users to get some video acceleration if they try really hard. And most don't.

Kitty Cat
Quote:

I meant to say that after a decade and a bit of the 3d APIs competing to do everything they possibly can, I think that they are probably now at their last hurrah and that general purpose languages which, by the way, can also target GPUs will be the future.

Isn't that a bit like saying STL is obsolete because you have libc? The 3D APIs are designed to provide a mechanism for drawing a 3D scene. In the end, you're essentially just going to be drawing textured triangles no matter what you do (until ray-tracing becomes viable, but that's beside the point). Why would you write a "software" renderer to run on hardware, when these 3D APIs already have it done for you? Why would you want to rewrite z-buffer handling, stenciling, texturing, vertex/pixel buffers, etc? Not only that, but they also manage resources (eg. lost textures/surfaces) and memory for you.

Sure, you could basically rewrite a GL driver, or some proprietary 3D API, using CUDA/OpenCL/whatever, but that seems a bit like reinventing the wheel. A general purpose language is just that.. a general purpose coding language. And a 3D API is just that.. an API designed for handling 3D rendering. Why anyone thinks you can get rid of the latter because you have the former is lost on me..

Thomas Harte
Quote:

Isn't that a bit like saying STL is obsolete because you have libc?

No, it's not. I've phrased myself badly at least twice now, so I can see how my argument comes across like that. In the hope of doing better I'm going to try to restate myself — I hope you'll forgive me if it sounds like I'm just persistently restating myself rather than engaging with what you're saying.

Since maybe the GeForce, OpenGL and DirectX have become mainly abstractions. Their goal is to enable a programmer to efficiently use a GPU from a high-level language. In recent years GPUs have changed a lot. Both of the major APIs have had to be adapted to allow for the new capabilities of GPUs.

In my opinion, the capabilities of GPUs either have or are starting to become so far in advance of mere tools for 3d graphics that OpenGL and DirectX are not going to continue to be the de facto gateways to the GPU. The GPU will become another programmable resource that the OS manages as applications require it. One of its tasks will be 3d rendering and I agree with your point that for as long as people want to draw 3d scenes with polygons, nobody is going to suddenly deprecate the popular APIs for drawing 3d scenes with polygons.

The central topic is Allegro's future use of hardware and whether adding pixel shaders now will be maintainable in the future. My argument is that OpenGL and DirectX are unlikely to undergo any further revolutionary changes such that pixel shaders written now become hard-to-maintain legacy code within the respective DirectX and OpenGL drivers.

I am basing that on my belief that as the GPU slowly wends away from being purely for graphics, exposing its functionality means changes underneath the 3d APIs, not on top of them.

Though it is therefore possible and, according to history, probably even likely that one or both of OpenGL and DirectX will become a harmful abstraction that just obstructs efficient hardware usage at some point in the far-flung future, I think that in Allegro terms that would mean evicting complete drivers from the source, not just bits of drivers.

In other words, I'm using an unnecessarily unfocussed argument to argue that adding pixel shaders to the DirectX and OpenGL drivers is unlikely to make either more likely to become out of date and unmaintainable.

weapon_S

So are the developers of allegro aiming to define that general shader API, as Allegro has been a hegemony in the past?
I don't care; this is all way over my head. I'm out :( Before this thread, I thought shaders were only for bumpmapping and the sorts.
So did Evert write those circle routines? I didn't quite catch that.
I would regret it if Allegro dropped support for older PCs without a proper GPU. Imagine a Pong game that needed a GPU... But as I said, I'm only a user.
The small point I was trying to make before is: if there is an al_do_circle that has sub-pixel accuracy, the proc function should receive 2 float arguments, n'est-ce pas?

Matt Smith

One thing shaders can do that the plain GL API can't always do is mimic the rendering output of another 3D API. I believe it would be possible to allow paletted textures etc.

Kitty Cat
Quote:

In other words, I'm using an unnecessarily unfocussed argument to argue that adding pixel shaders to the DirectX and OpenGL drivers is unlikely to make either more likely to become out of date and unmaintainable.

I can agree with that, as long as the pixel shaders are used for "shading" pixels. I don't think I'd want to see Allegro exposing its own shader language, though.. maybe just as a pass-through to the current driver, or using Cg.

Quote:

I was under the impression that by default all ALLEGRO_BITMAPs were actually set to be a merger of A4's memory and video bitmaps.

I was under the impression that it meant it could be either video and/or memory, depending on the backend, format, size, direction of the keyboard, phase of the moon, etc. For OpenGL, there'd be no need to maintain a memory copy because it already does. AFAIK, D3D8+ does too. The hints would just be taken as that.. hints, as to whether it would be faster in RAM or VRAM (and such a position may change depending on /dev/urandom, as long as everything still works).

Quote:

There is locking, which creates a memory cache. Maybe we can lock a bitmap ... <do everything needed> ... and unlock the cache to another texture.

This would prevent the lock working as a pass-through for D3D's lock. In general, and definitely in D3D's case, you lock an object to get a pointer, then unlock that object and the changes to the data are applied as needed. The unlock is on the object, not the data. You can't lock one object, then unlock another to have the changes affect that one instead.

Thomas Fjellstrom
Quote:

I was under the impression that it meant it could be either video and/or memory, depending on the backend, format, size, direction of the keyboard, phase of the moon, etc

I remember it being a little more well defined than that.

Quote:

For OpenGL, there'd be no need to maintain a memory copy because it already does.

So you don't have to make a copy of the texture data to change it in memory? As it is, it seems you have to call some GL or DX method to fetch the bitmap data (which may actually force a download more often than not), copy that data, make the change, and re-upload. Why not just keep a copy, and upload after it's changed? That way, if the GL or DX context somehow changes or goes away, the bitmap is still there and could easily be reattached to any context at any later time.

Kitty Cat
Quote:

So you don't have to make a copy of the texture data to change it in memory?

To change it in GL, you have to read it (which admittedly is not too efficient.. bind the texture (slow), get the texture data (possibly converting it, though I'd be surprised if drivers don't keep the system copy in the host format), then write the data). I'd hope GL would know whether or not it's been written to, so as to avoid a download.

But then, changing texture data doesn't need to be fast, just efficient. There are other ways to handle it.. like for dynamic textures (eg. video playback), a combination of a discard flag and scratch memory would yield good performance. Maybe creating a memory copy only if it's locked a lot. Rendering to a texture in GL should use FBOs/pbuffers/aux-buffer-with-copy, and such an action could mark the memory copy as "dirty" which would download ("clean" it) when locked.

This also only applies to GL. D3D's texture locking semantics should automatically handle this for itself, making an Allegro-handled memory copy there even more redundant.

SiegeLord
Quote:

So are the developers of allegro aiming to define that general shader API, as Allegro has been a hegemony in the past?
I don't care; this is all way over my head. I'm out :( Before this thread, I thought shaders were only for bumpmapping and the sorts.

I'm not doing it :P The only thing I would suggest having shaders for is the internal mechanism for drawing quadrics and cubics, which cannot be done well with straight-line approximations. Also, shaders are the only way to do proper anti-aliasing. I was not suggesting a generic public API for that. Nevertheless, OGRE has a generic shader API that somehow works for both DirectX and OpenGL, so it's possible in theory.

Quote:

So did Evert write those circle routines? I didn't quite catch that.

Which ones? I've no idea who wrote the one that is in A4 right now. I wrote the one in the wiki.

Quote:

I would regret it if Allegro dropped support for older PCs without a proper GPU. Imagine a Pong game that needed a GPU... But as I said, I'm only a user.

That is an unfortunate truth right now, but so far the argument was that if you want a software device, you choose A4. I think it's silly myself, but there are not enough devs for another backend, or at least I've been led to believe that.

Quote:

The small point I was trying to make before is: if there is an al_do_circle that has sub-pixel accuracy, the proc function should receive 2 float arguments, n'est-ce pas?

Eh? You could think of the callback for the do_circle as a fragment shader. Fragment shaders get the coordinate of the final fragment (the pixel), and that necessarily is an integer, or at least that was my understanding. You would be getting into very deep doo-doo if you just passed the offset do_circle output straight to the callback; it'd be gaps/overdraw galore.

Evert
Quote:

So did Evert write those circle routines?

I didn't write any circle drawing routines that you're likely to find online anywhere. So no, I didn't.

Quote:

That is an unfortunate truth right now, but so far the argument was that if you want a software device, you choose A4. I think it's silly myself, but there are not enough devs for another backend, or at least I've been led to believe that.

Probably right - and besides, one doesn't exist now, but Windows and Linux ports didn't exist when Allegro 3 was released either. There's no reason such a software renderer can't be added once the API has stabilised and things work as they should using the other backends.

And yes, help in that department is welcome, I'm sure. :)

Matt Smith

CUDA might help here too on NV, as it can apparently share resources with either GL or DX. The GL textures might actually be owned by the CUDA context rather than the GL one.

Just a thought; no proof or code to back it up.

axilmar

allegroNG.h

Next Generation!!!!

Vanneto

Some of my suggestions of the filename:

#include <allegro_ng.h>    // axilmar's suggestion.
#include <allegro_ngx.h>   // With an X, X's mean future.
#include <allegroX.h>      // Without NG, X is just like NG.
#include <allegrou.h>      // This just sounds cool.

Quite frankly, any of these is better than the current allegro5.h. Why? Because it does not contain the 5.

There was a time when a high version number was considered very nice. Why? It meant that the program was advanced. It meant it was ape shit advanced. But as some of you old farts here would remember, these are not the old days.

A high version number means shittiness. Hard to believe, I know. But Dr. Vann Vannerhouer conducted research at the International Institute for Applied Statistics and Supervising (II-ASS).

He presented two software products to 10,000 test subjects. One was Norton Antivirus (version number 16) and the other was Firefox 2.1.5.

Let's clear one thing up: all the subjects were chosen because of their DFR. That's the Dumb Fuck Rating. Why? Well, if the subjects knew anything about computers, naturally they would first ask themselves why an AV program and a browser are being compared. Secondly, anyone with a sound brain prefers Firefox over Norton. I know, I know, not comparable. But hey, the people with a DFR over 10.5 don't know that.

If anyone is asking themselves how the DFR is calculated, it's pretty simple actually, but I think Dr. Vannerhouer himself explained it quite well:

Dr. Vannerhouer said:

The DFR rating is calculated by dividing the Stupidity of the subject by his Ignorance. Then the value is multiplied by pi.

If you want to know more about the DFR please contact Dr. Vannerhouer: vann80@gmail.com. (Yes, he is 80 years old. :))

Back to the subject matter at hand. Allegro 5, this is what the topic is actually about, yes? Well because I'm an on-topic kind of guy, lets continue.

All the subjects had no prior knowledge about the programs. They could only judge them by their version numbers. So, what happened?

9,745 people preferred Norton over Firefox because the higher version number seemed to imply importance/superiority. This was an unexpected result. Quite fascinating actually. Why did this happen? DF's. Yes. They don't know that the version number doesn't matter.

We do.

What can we conclude? It doesn't matter what the name of the header is. The only people using Allegro 5 will be people that actually know something about technology. Who cares if there is a 5 in there or not?

Well, these are my 2 cents anyway.

SiegeLord

I'm confused. You want Allegro 5 to become Allegro 5000?

Thomas Fjellstrom

Vanneto: Uh, the 5 in there isn't just for show ::) Nor do we care what the version number of Allegro actually is. You do know it took about 8 years to get from Allegro 4 to Allegro 5? It's not like we've been version jumping ::)

I'm not sure why you ranted like that, but it was pretty pointless. Allegro's version numbers actually mean something.

Evert

This is strictly my personal opinion.

Quote:

#include <allegro_ng.h> // axilmar's suggestion.

I hate it. What does "ng" really mean anyway? (I know, it's meant to be Next Generation or something like that.) What do you do when you make a new major revision? nng? Calling the new version of foo foo_new or new_foo or something like that not only looks stupid, it also paints you into a corner for when you have something even newer.

Quote:

#include <allegro_ngx.h> // With an X, X's mean future.

Worse. I hate it when people stick an X on something just to make it look cool (note: X11 is version 11 of the X Window system, which is the successor of the W window system, the X in Mac OS X is the version number).

Quote:

#include <allegroX.h> // Without NG, X is just like NG.

Even worse.

Quote:

#include <allegrou.h> // This just sounds cool.

That's just silly. Why not allegroo.h? Then the next version can be allegrooo.h.

I agree the 5 in there looks a bit silly, but it doesn't really matter one way or the other and shouldn't bother anyone.

That said, I do have my favourite letter replacement. You can probably guess (since I keep coming back to this) but that's allegrov.h (or allegroV.h if you want to make it clearer). But I'm not seriously suggesting that as an alternative name.

MiquelFire

Go with the Linux (well, I know they have it at least) library way of either file name or library version, and use allegro2.h as this is the second version of the API for Allegro. Then the next Allegro version that has to break backward compatibility can be allegro3.h

I know the suggested way to handle libraries is like that somewhere...

Edgar Reynaldo

As long as it's not

#include <allegrue.h>

then I'm fine with it. ;)

And just why is it a stiletto can't kill a grue anyway? ::)

Elias
Quote:

Go with the Linux (well, I know they have it at least) library way of either file name or library version, and use allegro2.h as this is the second version of the API for Allegro. Then the next Allegro version that has to break backward compatibility can be allegro3.h

I know the suggested way to handle libraries is like that somewhere...

It's just what we are doing - only using 5 instead of 2.

We could re-version Allegro 5 into Allegro 2, and retro-actively fix our versioning tree like this:

  • 1.0 -> 1.0.0

  • 2.0 -> 1.2.0

  • 3.0 -> 1.3.0

  • 3.12 -> 1.3.12

  • 4.0.0 -> 1.4.0

  • 4.2.0 -> 1.6.0

  • 4.3.10 -> 1.7.10

  • 4.4.0 (not released yet) -> 1.8.0

  • 4.9.5 -> 1.99.5

  • 5.0.0 (not released yet) -> 2.0.0

As you say, the fixed tree would better reflect the API changes. However, the versions historically are as they are, so Allegro 5 will actually be the 5th major API revision and using 2 as major version would just be confusing.

Thomas Fjellstrom
Quote:

Go with the Linux (well, I know they have it at least) library way of either file name or library version, and use allegro2.h as this is the second version of the API for Allegro.

That's just plain confusing. And that "library version" is JUST for the .so files. It's not supposed to be used anywhere else.

I think if we do change it, we go with allegroV or something. Unless someone can come up with something better.

Quote:

As you say, the fixed tree would better reflect the API changes.

I don't think it's a very accurate depiction. Allegro 3 was a fairly large change, from what I remember, and Allegro 4 added actual OS ports.

Evert

I don't remember if the change from 8-bit only to 8-bit and high colour graphics happened between 1 and 2 or between 2 and 3, but that changed the API (a bit, anyway).
I'm sure there are significant changes between the other releases as well; a look at the change log should clear that up.
The change from 3 to 4 changed the API in the sense that there were new platform specific functions and some new platform neutral functions (display colour depth, windowed mode, close buttons, lock_bitmap()).

So really, from an API versioning perspective, 5 is entirely right.

Elias

Yes, I meant that if the first number in the version is to reflect a major API change, then 4.x -> 5.x will actually be a much bigger change than any earlier one. E.g. the 1.0 demo source looks like it could compile in 4.3.10 with only a few minor adjustments.

Anyway, I think an Allegro 6 would only make sense if there was an API change as big as the one with A5 - so it won't happen anytime soon. Once compatibility breaks (which shouldn't be soon either), we will switch from 5.0.x to 5.1.x and so on.

Thomas Harte
Quote:

I don't remember if the change from 8-bit only to 8-bit and high colour graphics happened between 1 and 2 or between 2 and 3, but that changed the API (a bit anyway).

Between 2 and 3, but the change from 1 to 2 was even larger. Prior to version 2, sprites were a separate data structure to bitmaps. The distinction is probably why we now have a family of functions named like draw_sprite and another named like masked_blit.

EDIT: Version 2.0 was the first version I used, by the way.

axilmar
Quote:

I hate it. What does "ng" really mean anyway? (I know, it's meant to be Next Generation or something like that.) What do you do when you make a new major revision? nng? Calling the new version of foo foo_new or new_foo or something like that not only looks stupid, it also paints you into a corner for when you have something even newer.

I agree with you! My suggestion was a little joke... I was reading about Star Trek before posting that.

Actually, allegro5.h is fine.

Evert
Quote:

Anyway, I think an Allegro 6 would only make sense if there was an API change as big as the one with A5 - so it won't happen anytime soon. Once compatibility breaks (which shouldn't be soon either), we will switch from 5.0.x to 5.1.x and so on.

Good points, I agree.

Quote:

my suggestion was a little joke...I was reading about Star Trek before posting that.

Well thank god you didn't propose Allegro Enterprise then. ;)

axilmar
Quote:

Well thank god you didn't propose Allegro Enterprise then

I thought about it though ;-).

_XDnl_

allegro_ng, allegro_ngx, allegroX, allegrou, or anything else, I think that developers must keep in mind that

Allegro is the Italian for "quick, lively, bright".

I prefer to use Allegro for its simplicity....

weapon_S

Oh, good now you're talking about a subject of my level again :P
I think I caught somewhere the phrase "if you want to do that, keep allegro 4". So maybe really rename the library. Allegro Live Library? AlLive? ALL?
(That is soo fucking gay >_<) Crescendo? Power Allegro? (Pall >_< I'm going gay again) But that would imply prolonged maintenance of the "allegro 4 branch". And I guess the developers being fed up with that is the reason they changed it.
So why wasn't the naming an issue in the past???
Because they didn't have a kickass forum ;)
BTW Dr. Vannerhouer's DFR is outdated. He doesn't clearly separate heredity and environment, which is very important for error correction. I'd measure in good old stupidity. (Speaking of which: GNU stupid-eval totally chokes on this thread :-/)

Dustin Dettmer

I say keep it allegro.h. Make a macro ALLEGRO_PRE_5 or something that, when defined, does whatever is needed to handle the version incompatibility.

Macros are better for versioning than header names.

@weapon: You should be more grateful for the hard work these guys are putting in. This new version is going to do some very cool things.

SiegeLord

I think there is already an allegro.h and allegro5.h just includes that. So, if you wanted you could just use #include <allegro.h> and it should include the right header if you only have A5 installed. You should also be able to do #include <allegro5/allegro.h>. At the same time, I don't think you can do #include <allegro/allegro.h>...

And yes, we should have an allegro dev appreciation day... :D

weapon_S

Whoa, where did I come across as ungrateful? I may not seem super enthusiastic about the new Allegro, but I guess that is because I don't use most of the functionality mentioned. (I don't even use add-ons right now.)
Things I like until now: integrated thread API/events API, "magical" optimizations for blitting (the bitmap-type-thingy), modular extendability,
better GPU support and I think I like the new way screen updates are handled.
Yes, I appreciate all the work done in the past, all the work being done now, and the folks here explaining about it!

Thomas Fjellstrom
Quote:

Things I like until now: integrated thread API/events API, "magical" optimizations for blitting (the bitmap-type-thingy), modular extendability,
better GPU support and I think I like the new way screen updates are handled.

All of that stuff you say you like is pretty much the entirety of the Core part of Allegro 5, so it is a bit strange that you're unenthusiastic. Anything else is in addons, like image loading, sound loading/streaming/playing, etc.

Thread #597978. Printed from Allegro.cc