Wery Interestink.
I like.
The only thing I can think to ask is: how efficient can the dirty rect handling be for most things? Aren't dirty rects better implemented based on the usage? I'm sure it won't play with the rect list every pixel or something stupid like that. Are they updated on al_update? Or al_release_*? I just think you'd lose a lot of room for optimization using a built-in dirty rect scheme... but that's just me.
Good work.
Someone has to remind me to work on the ALSA MIDI driver (for ALSA 0.9.*/1.0...). And maybe the PCM driver if I have time. (I'm just waiting until the ALSA MIDI API stabilizes... it may have already, but I haven't heard anything about it yet...)
AFAIK dirty rectangles would be implemented on a per-primitive scale. Each time you draw a primitive on the screen, a rectangle would be created and stored. It would then be compared to the current list of rectangles and merged / split with them to reduce / eliminate overlap.
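Roughly like this, perhaps (just a sketch to illustrate the merging; AL_RECT, the fixed-size list and the grow-on-overlap policy are all invented here, not taken from the actual proposal):

typedef struct AL_RECT { int x1, y1, x2, y2; } AL_RECT;

static AL_RECT dirty[256];   /* hypothetical per-frame dirty list */
static int num_dirty = 0;

/* Record a primitive's bounding box. If it overlaps an existing
   rectangle, grow that one instead of adding a new entry, so the
   list stays short and overlap is reduced. */
void mark_dirty(int x1, int y1, int x2, int y2)
{
   int i;
   for (i = 0; i < num_dirty; i++) {
      if (x1 <= dirty[i].x2 && x2 >= dirty[i].x1 &&
          y1 <= dirty[i].y2 && y2 >= dirty[i].y1) {
         if (x1 < dirty[i].x1) dirty[i].x1 = x1;
         if (y1 < dirty[i].y1) dirty[i].y1 = y1;
         if (x2 > dirty[i].x2) dirty[i].x2 = x2;
         if (y2 > dirty[i].y2) dirty[i].y2 = y2;
         return;
      }
   }
   if (num_dirty < 256) {
      AL_RECT r;
      r.x1 = x1; r.y1 = y1; r.x2 = x2; r.y2 = y2;
      dirty[num_dirty++] = r;
   }
}

The update call would then copy only those rectangles to the screen and reset the list.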
Bob,
I like it..
I think coloured fonts are a standard requirement of the new API. Also, a user-selectable mask colour would be very useful in allowing the user to create new masks.
Wouldn't including bit masks on a separate bitmap be a little faster for masked blitting? Or is the gain negligible?
Rich
I agree with Richard on both coloured fonts (which are actually easier to implement than mono fonts) and that we should be able to use a separate mask bitmap. I think the separate mask bitmap would be slightly slower because you are reading twice as much stuff, but still...
We could even have a 'create_mask()' function which returns an RLE mask, small and efficient.
If an RLE mask was created and used, the source pixels would not need to be checked to see if they are the mask colour. All that would be needed is a decreasing run_count variable.
So this method may actually be quicker for masked blitting (and would allow different masks)!
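To illustrate (a rough sketch only; the AL_RLE_MASK layout is invented for this post, with each row encoded as alternating transparent/opaque run lengths, starting with a possibly-zero transparent run):

#include <string.h>

typedef struct AL_RLE_MASK {
   int w, h;
   const int *runs;   /* per row: skip count, copy count, skip, copy, ... */
} AL_RLE_MASK;

/* Blit src over dst through an RLE mask; both bitmaps are w*h ints. */
void masked_blit_rle(const int *src, int *dst, const AL_RLE_MASK *mask)
{
   const int *run = mask->runs;
   int x, y, n;
   for (y = 0; y < mask->h; y++) {
      x = 0;
      while (x < mask->w) {
         x += *run++;              /* transparent run: skip, no per-pixel test */
         if (x >= mask->w)
            break;
         n = *run++;               /* opaque run: one straight copy */
         memcpy(dst + y * mask->w + x, src + y * mask->w + x, n * sizeof(int));
         x += n;
      }
   }
}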
Rich
Regarding fonts:
Right now I'm using an antialiased font (as created by ttf2pcx) as an alpha mask. My font rendering routine gets the string and a texture bitmap, and the color of each drawn pixel is calculated from the color of the texture bitmap, the color of the destination, and the alpha value (as it's stored in the font).
/** Renders an antialiased font using a given texture.
    @param dest    pointer to the destination BITMAP
    @param x       x position of the text
    @param y       y position of the text
    @param font    pointer to the used font
    @param texture pointer to a bitmap holding the color values to use for the text
    @param string  the string to print
    @return the x position of the string end */
int textoutAATextured(BITMAP *dest, int x, int y, FONT *font, BITMAP *texture, const char *string);
This allows me to blit single-colored antialiased fonts, colored antialiased fonts, etc.
I've also realized that even with just 4 levels of alpha you can get really nice looking fonts. If you're using 8 levels of alpha, there's almost no loss in quality compared to the 256 levels of antialiasing.
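In pixel terms, the blend is nothing fancy (a sketch, assuming alpha runs 0-255; each of r, g and b goes through this with the texture and destination values):

/* result = dest + (texture - dest) * alpha / 255, per channel */
static int blend_channel(int texture, int dest, int alpha)
{
   return dest + ((texture - dest) * alpha) / 255;
}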
My implementation is just a trivial one, but if one of the gurus (Bob?) would have a look at it, it could become a really fast routine.
BRILLIANT
Would it be possible to get rid of 'blit' as a word and replace it with 'draw' (also losing the funny source, destination ordering)?
void al_draw(AL_BITMAP* dest, AL_BITMAP* src, int x, int y);
void al_draw_section(AL_BITMAP* dest, AL_BITMAP* src, int x, int y, int sx, int sy, int sw, int sh);
void al_draw_scaled(AL_BITMAP* dest, AL_BITMAP* src, int dx, int dy, int dw, int dh, int sx, int sy, int sw, int sh);
Then something in the BITMAP vtable tells if it's masked or RLE or compiled or not.
AL_BITMAP* al_create_masked(AL_BITMAP*);
AL_BITMAP* al_create_rle(AL_BITMAP*);
AL_BITMAP* al_create_compiled(AL_BITMAP*);
...
AL_BITMAP* bmp = al_load_bitmap("xxx.bmp", palette);
AL_BITMAP* bmp_m = al_create_masked(bmp);
...
al_draw(screen, bmp, 0, 0);   /* draw normal */
al_draw(screen, bmp_m, 0, 0); /* draw masked */
where bmp and bmp_m have the same sort of relationship as a bitmap and its sub-bitmap
Implementation could be (in terms of the Allegro 4 API; obviously this is confused/inefficient):
inline void al_draw(AL_BITMAP* d, AL_BITMAP* s, int x, int y)
{
   (s->vtable->blit)(s, d, 0, 0, x, y, s->w, s->h);
}

inline AL_BITMAP* al_create_masked(AL_BITMAP* b)
{
   AL_BITMAP* nb = create_bitmap(b->w, b->h);
   blit(b, nb, 0, 0, 0, 0, b->w, b->h); /* blit is a private API function */
   nb->vtable->blit = blit_masked;
   return nb;
}
Sorry if I've mentioned this before.
Pete
PS: Spellcaster's fonts are a great idea too.
Would font support be implemented as a module? It doesn't really belong to the absolute core functionality of Allegro, so if it can be separated, it should be separated.
I quite like the graphics API. But I, too, have objections regarding the common implementation of dirty rectangles. I think in most cases you need to write your own routines to get really good performance, but for most people a standard Allegro system may be good enough.
I still say that:
al_set_int("/gfx/color_depth", 16);
al_set_int("/gfx/refresh_rate", 70);
isn't good enough for multiheaded display purposes, unless you like lots of code of the form:
char string[30];
sprintf(string, "%s/color_depth", gfxname1);
al_set_int(string, 16);
...
Why not either:
al_set_int("/gfx", "/color_depth", 16);
or:
disp1 = al_open("/gfx");
al_set_int(disp1, "/color_depth", 16);
...
al_close(disp1);
?
Ok,
I sometimes need to use masked_blit and blit with the same bitmap, so having one command to do both would be a problem. Also, I don't see what the problem is with al_blit(source, destination, ...);
Just make sure all the drawing commands use the source, destination ordering.
Rich.
I agree.
I also use blit and masked_blit a lot on the same bitmap, normally directly after one another.
So the costs involved in switching would be quite high.
Regarding the
al_set_int("/gfx/color_depth", 16);
How do you set the color depth for a certain window? Or display?
Since multi-window/display support is going to be in Allegro 5, don't we need to do something like:
al_set_int("/gfx/display0/color_depth", 16);
The problem here is that before the call to al_create_display() it's not clear how many displays we're going to have.
al_set_int("/gfx/display0/color_depth", 16);
al_set_int("/gfx/display1/color_depth", 24);
al_set_int("/gfx/display2/color_depth", 32);
BITMAP *display0 = al_create_display(AL_GFX_AUTO, 640, 480, AL_GFX_TRIPLE_BUFFERING);
BITMAP *display1 = al_create_display(AL_GFX_AUTO, 640, 480, AL_GFX_TRIPLE_BUFFERING);
BITMAP *display2 = al_create_display(AL_GFX_AUTO, 640, 480, AL_GFX_TRIPLE_BUFFERING);
How do we set display properties (like color depth) before we know how many displays are going to be created?
spellcaster: It'll look more like this:
I still say that:
al_set_int("/gfx/color_depth", 16);
al_set_int("/gfx/refresh_rate", 70);
isn't good enough for multiheaded display purposes,
Then create your displays from the same thread, or do the section locking yourself. Allegro 4.2/5.0 may get a mini synchronization module, so you can use the "platform independent" locks yourself.
Peter Hull: blit stays. Also, there's no sense in making a copy of a bitmap just to set its masked parameter. Better to have a function which can deal with masked bitmaps, translucency and so on.
Perhaps something like
al_draw_bitmap(source, dest, AL_BITMAP_MASKED | AL_BITMAP_ADD, 0, 0, 0, 0, source->w, source->h);
spellcaster:
My implementation is just a trivial one, but if one of the gurus (Bob?) would have a look at it, it could become a really fast routine.
Probably not. I'd rather use color add effects instead - then we can re-use the blenders.
Although, I'd say that anything beyond colored fonts would need to go in an add-on.
If an RLE mask was created and used, the source pixels would not need to be checked to see if they are the mask colour. All that would be needed is a decreasing run_count variable.
So this method may actually be quicker for masked blitting (and would allow different masks)!
Sure, it would allow different masks. It'll be dog slow, though. CPUs don't like ifs; that's especially true when they can't predict whether the jump will be taken. If the CPU mispredicts the jump, there's a cost of hundreds of cycles (if not more).
If you look at the current implementation of masked_blit's SSE code, there are no conditional jumps beyond the loop management.
Ideally, we could add a function that draws a mask over a certain bitmap. Getting an MMX-optimized version of that would be fairly trivial.
User defined mask colors are still in limbo though.
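To illustrate the point about ifs, here's the C equivalent of a branchless masked span (just a sketch; the real code is SSE, and the mask colour depends on the pixel format):

#include <stdint.h>

/* Copy a span, keeping dst wherever src equals the mask colour.
   The comparison becomes an all-ones / all-zeros bitmask, so there
   is no conditional jump inside the loop. */
void masked_span(uint32_t *dst, const uint32_t *src, int n, uint32_t mask_color)
{
   int i;
   for (i = 0; i < n; i++) {
      uint32_t keep_dst = -(uint32_t)(src[i] == mask_color);
      dst[i] = (dst[i] & keep_dst) | (src[i] & ~keep_dst);
   }
}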
Bob,
I thought the RLE sprite blitting routines already in Allegro were supposed to be fast? Surely they involve ifs in the inner loops?
Where is the low-level source for the blitting routines? I can't seem to find anything besides the higher-level blit routines.
Rich.
OK then. I personally seem to use a bitmap either masked or not masked.
Bob: I meant they would share pixel data; think of it as getting a different interface to a block of data.
Rich: the only problems with blit are that (1) I don't like the word, and (2) blit and draw_sprite have their args the opposite way round.
Pete
Hmm... the problem with al_draw is that it doesn't explicitly say that it's for blitting. What I mean is that you could have al_draw_line, al_draw_rectangle, al_draw_circle, and then al_draw... it may be confusing to newbies at first what al_draw actually does.
I agree blit is a bit of a technical word for newbies too. I suppose you could call it al_copy_block or al_draw_block, but that's getting a little long-winded to type for a very common function name.
I only use blit, never draw_sprite, which explains my source, destination bitmap bias.
I think this order should be the norm because all the drawing primitives have a source parameter. So to me it's natural that functions which use two bitmaps would have the source first too, and then the destination bitmap.
Rich
it may be confusing to newbies at first what al_draw actually does.
Yes, I agree: we really should call it al_blit. After all, 'blit' is hardly too complex a word for people who program games as a hobby!
all the drawing primitives have a source parameter
No, they all have a destination parameter.
Personally, I would prefer dest, src ordering since that is mainly what is used in the C library, and it mimics assignment, but I'm not really that bothered about it, because I never have problems with draw_sprite() or blit(), and I use both.
As I said in another topic on the Allegro developers list, in Allegro 5 we could use enums instead of so many defines. E.g.:
from
#define AL_COLOR_COMPONENT_TYPE_PALETTE  0
#define AL_COLOR_COMPONENT_TYPE_ARGB     1
#define AL_COLOR_COMPONENT_TYPE_ARGB_F   2
#define AL_COLOR_COMPONENT_TYPE_ARGB_S   3
#define AL_COLOR_COMPONENT_TYPE_YUV      4
#define AL_COLOR_COMPONENT_TYPE_HSV      5
#define AL_COLOR_COMPONENT_TYPE_RAW     -1

int type;
to
enum AL_COLOR_COMPONENT_TYPE {
   AL_COLOR_COMPONENT_TYPE_PALETTE,
   AL_COLOR_COMPONENT_TYPE_ARGB,
   AL_COLOR_COMPONENT_TYPE_ARGB_F,
   AL_COLOR_COMPONENT_TYPE_ARGB_S,
   AL_COLOR_COMPONENT_TYPE_YUV,
   AL_COLOR_COMPONENT_TYPE_HSV,
   AL_COLOR_COMPONENT_TYPE_RAW
};

enum AL_COLOR_COMPONENT_TYPE type;
after all, that's what enums are for.
It also makes clearer which values a parameter can take, and it's safer, since you cannot incorrectly assign another type of variable.
al_set_int("/gfx/windowed", TRUE);
These settings will only take effect after the next call to set_gfx_mode.
So it will not be possible to write an application which supports switching between windowed and full-screen display?
One other thing: what do you intend to do about macros such as bmp_select and bmp_read_line? Will their names now fit the rest of the scheme, or will they remain confusing and separate, appearing almost designed to confuse people you don't think should be using them, as they have done before?
Oops!
Sorry, I got that source - dest thing mixed up there!
DOH!
Oh well... Rich
Javier: Good idea. I forgot about that; thanks for reminding me. I'll add it to the current codebase.
Thomas:
1) Detect key combination
2) Destroy old screen mode
3) Create new display
That should do it.
Edit:
Javier: That won't work, because of C++. Enums are strongly typed there, which means that using them in combination with the config API is out of the question.
We can't possibly add a new config entry point for every enum type.
I don't see what you mean. I just tried this program compiled with DJGPP (-Wall) and MSVC (max warning level):
#include <iostream.h>

enum AL_ENUM {
   AL_ZERO, AL_ONE
};

void myPrint(enum AL_ENUM a, int n) {
   cout << a << n << endl;
}

int main(void) {
   AL_ENUM a = AL_ZERO;
   myPrint(AL_ZERO, AL_ONE);
   myPrint(a, a);
   return 0;
}
and I didn't get any warnings or errors.
Also, in case there is such a case, we can always use (int) typecasts (but I still don't see where it could be necessary).
Ah ok - you're not using typedefs. In C, you need to use typedef, or else drag the word "enum" everywhere. If we use "typedef enum" in allegro.h, then they'll break in C++.
I still don't see the problem. I tested it with typedefs:
#include <iostream.h>

typedef enum AL_ENUM {
   AL_ZERO, AL_ONE
} AL_ENUM;

void myPrint(AL_ENUM a, int n) {
   cout << a << n << endl;
}

int main(void) {
   AL_ENUM a = AL_ZERO;
   myPrint(AL_ZERO, AL_ONE);
   myPrint(a, a);
   return 0;
}
and still no problems.
Are there any other cons to using them?
Thomas:
1) Detect key combination
2) Destroy old screen mode
3) Create new display
That should do it.
No, since changing display mode can break any video bitmaps you have, you need to destroy and recreate those as well. I was just wondering if Allegro is going to help at all with that issue. Of course, I understand that it isn't an easy thing to do automatically since, for example, you may not be able to keep as much in video memory in the new mode as in the old. But surely a large part of the work can be done for the user if memory copies are to be kept of video bitmaps anyway, which I'm sure I saw mentioned somewhere. So will there be a few new API functions for helping with this sort of thing, or will the user be entirely on their own?
Yes, I know it'll destroy all video bitmaps. But keeping memory copies and updating those pretty much halves the speed (or worse) of any operation on video bitmaps...
But keeping memory copies and updating those pretty much halves the speed (or worse) of any operation on video bitmaps...
True, but for most people, operations on video bitmaps are sub-optimal anyway (compared to most other bitmap types), and their main purpose is to enable hardware-accelerated (masked) blitting of static images. Perhaps you could offer backup and restore functions for video bitmaps, which back them up to main memory and restore them to video memory? The main argument for taking this into the API is that not all targets will require the calls to actually do anything, since, as you say, in Allegro 5 not all targets use the same memory pool for display buffers and video bitmaps.
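In API terms that could be as small as this pair (hypothetical names, not anything Bob has proposed; on targets where video bitmaps survive a mode switch, both could be no-ops):

AL_BITMAP *al_backup_video_bitmap(AL_BITMAP *vid);      /* copy the pixels to system memory */
AL_BITMAP *al_restore_video_bitmap(AL_BITMAP *backup);  /* re-create the video bitmap after the switch */

A mode switch then becomes: back up, destroy the display, create the new display, restore.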
My personal preference would be to have separate functions for all blitting types for optimum speed, but with a common interface:
void al_blit(AL_BITMAP* dest, AL_BITMAP* src, int x, int y);
void al_blit_section(AL_BITMAP* dest, AL_BITMAP* src, int x, int y, int sx, int sy, int sw, int sh);
void al_blit_scaled(AL_BITMAP* dest, AL_BITMAP* src, int dx, int dy, int dw, int dh, int sx, int sy, int sw, int sh);
void al_masked_blit(AL_BITMAP* dest, AL_BITMAP* src, int x, int y);
void al_masked_blit_section(AL_BITMAP* dest, AL_BITMAP* src, int x, int y, int sx, int sy, int sw, int sh);
void al_masked_blit_scaled(AL_BITMAP* dest, AL_BITMAP* src, int dx, int dy, int dw, int dh, int sx, int sy, int sw, int sh);
If many people are looking for a quick search & replace upgrade to Allegro 5.0, then these could be renamed to nblit?? (new blit)
This is looking really sweet so far; it gives me high hopes for Allegro 5. I agree with 'al_draw' rather than 'al_blit': it makes more sense, as it has a wider meaning.
I also like the state changes via a string. It would really make it easy for a user to edit settings at runtime via a console/scripting system.
Hey, and that 'al_main' thing. Anything that gets rid of 'END_OF_MAIN()' (ugh!).
Losing the 'screen' global is a great idea, and multi-windowing is definitely an issue I had with Allegro 4.
The only problem I have is with the differentiation between structures and functions. Hungarian is obviously not an option, so I suggest that structures be written as:
struct AL_structure_name {
//...
};
Capital 'AL_'.
Good work Bob!
I propose that rather than something like:
al_masked_blit()
You go more like:
alMaskedBlit()
It makes for cleaner-looking code.
I also suggest a
alMaskedBlit2x()
for extra-fast 2x blitting (I dunno about other peeps, but I love the 2x stretch blitting in FBlend almost as much as the translucency).
Whoops, that shows I've been coding too much: I just tried to end that last sentence with a semicolon. :P
Using alDraw is not a good idea compared to alBlit. Blitting is an operation that anyone doing anything in 2D should know about. If they don't know what a blit is, they should educate themselves. By the same token, if someone is playing with 3D and doesn't know what a matrix is used for, they need to educate themselves.
I suggested that naming style too, Cage. Bob didn't go for it. :-[
I also agree with using 'alDraw' instead of 'alBlit'.
See my earlier message in this thread:
"Hmm... the problem with al_draw is that it doesn't explicitly say that it's for blitting. What I mean is that you could have al_draw_line, al_draw_rectangle, al_draw_circle, and then al_draw... it may be confusing to newbies at first what al_draw actually does.
I agree blit is a bit of a technical word for newbies too. I suppose you could call it al_copy_block or al_draw_block, but that's getting a little long-winded to type for a very common function name."
Rich.
I think any newbie would learn quite quickly from the documentation what 'al_draw' is for. And if they don't read the docs, they should be slapped upside the head.
And I had another idea too: Allegro add-on packages should reform their APIs to conform to the new Allegro API. A package like AllegroGL having the function name:
allegro_gl_blit();
is quite long-winded. It should follow a tighter syntax, like the new Allegro:
agl_blit();
Or as I was saying before:
agl_draw();
And this is all arbitrary since I think AllegroGL should be part of mainstream Allegro.
For the love of all that's holy, call it al_blit. al_draw? WTF is that? Draw what? Besides, how long does it take to explain what "blit" means to a newbie? Not to mention that every other graphics API in existence has a blit function.
Thank you very much, 23yr3yrold! I am in complete agreement about this al_draw idea. It's an unnecessary step backwards to a less descriptive function name as well.
Hmm... let's call load_bmp al_load... and while we're at it, let's have al_scare and al_unscare too...
Do you get what I mean yet about al_draw being unclear compared to al_blit, when we would already have al_drawline and al_drawcircle?
Rich.
Actually, I'd like to retract my opinion that the new API should have function names like "myFunctionName" instead of "my_function_name". I tried coding with the former and it looks quite messy.
Also, I STRONGLY believe (at least for now ;D) that it should be al_blit, not al_draw. Listen to the pleas of the majority...
I'm as dumb as toast and I still know what a blit is. It's a BIt Transfer with an L in the middle.
Whatever.
My point was that if we were to swap the source, destination order and still call it blit, that would cause no end of confusion. So, invent a new function name with new arguments; al_draw was the first one I could think of. That's all. Anyway, I am puzzled by the argument that everyone can understand al_blit but al_draw will cause poor little Allegro users to give up in confusion. "What are we drawing with this AL_BITMAP parameter?" they will weep into their keyboards. Hm.
Pete
One of the stated aims was to reduce inconsistency in the API. Having blit(src, dest, ...) and draw_sprite(dest, src, ...) is an inconsistency. It is. You know it is. :-/
Just to make it clear: in C coding, there are unwritten laws that clearly state that capital and small letters should never be mixed.
Only Microsoft code uses the AlBlitDrawBitmap style.
and some insane programmers =)
I totally agree with the opinion that blit should not be changed to draw, which is less descriptive.
Passing flags to a blit function adds a huge load to a low-level function; we need many different ones so you don't have to pass arguments unnecessarily.
I hate filling things with stuff like "NULL, NULL, 0".
Blit actually means BLock Transfer, with an 'i' in the middle
bIt BLock Transfer, but the letters are all gobbledygook.
Funklord,
I think my code breaks the unwritten capital-letter law!
I write my functions roughly like this:
AL_block_transfer_blit();
oops..
Insane programmer..
Rich.
I'd like to propose a new function for text printing:
al_textprintf(AL_BITMAP *dst, int x, int y, AL_FONT *font, int color, int align, char *format, ...);
Nothing new here but the align parameter. It could be AL_LEFT_ALIGN, AL_CENTER_ALIGN or AL_RIGHT_ALIGN. This way we can avoid having the xxx_center and xxx_right versions of the text functions.
Great idea, spellcaster!
Why don't you have a look at the thread I made in this forum for a new masked_blit function and tell me what you think?
Danke,
Rich.
I suppose if 'align' is non-zero, then the 'x' parameter would be ignored.
Nah. The x parameter should be used the same way as in the current functions: it's either the left-most point, the center point or the right-most point.
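In usage terms (assuming the prototype above), the same x anchors differently depending on the flag:

al_textprintf(screen, 320, 8, font, col, AL_LEFT_ALIGN,   "score: %d", score);  /* 320 = left edge */
al_textprintf(screen, 320, 8, font, col, AL_CENTER_ALIGN, "score: %d", score);  /* 320 = center */
al_textprintf(screen, 320, 8, font, col, AL_RIGHT_ALIGN,  "score: %d", score);  /* 320 = right edge */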
Just to make it clear: in C coding, there are unwritten laws that clearly state that capital and small letters should never be mixed.
Only Microsoft code uses the AlBlitDrawBitmap style.
and some insane programmers =)
We're programmers. Calling us "insane programmers" is just restating the obvious.
spellcaster: Add to that separate foreground/background colors and you'd be set.
Background colors? What will they be used for?
While textout with a solid background is nice for debugging, it's not really needed for games...
And if someone really wants it, they can simply rectfill that area themselves.
I see no reason to keep the solid-background textout in the new version. Unless it will fade the antialiased font to the background color, of course.
Hmm, actually, that might be a good idea.
The biggest thing that comes to mind when I hear the graphics API is being updated is: is the DirectX code also going to be updated? I've heard one simple-minded person complain that doing so would kill NT support, but who really gives a damn? NT ain't a gaming platform... I would **LOVE** for Allegro to use at least DirectX 7. What's it using now, 3.0, hehe? Many would agree DirectX really didn't start coming around until about 6.0... 8.1 (or whatever the hell is out now) would of course be choice... This is by far the biggest thing I'm looking for in an Allegro update; I've pretty much stopped using Allegro altogether because of this. What's al_blit() even going to look like? Don't you think an al_ex_blit() is needed?
GNE is rather buggy, and I've heard bad things about libnet, but I haven't used it myself. If Allegro sported an easy-to-use networking library (add-on or not), it would be a big boost for Allegro... No matter where you stand on it, if you're choosing between, say, SDL and Allegro from a link on some website, you've got to agree you'd pick the one that "Even has TCP/IP support!" over one without...
Bloat is a terrible thing though... glad to see those damned FLI playing routines gone. The GUI looks awful, and I'm sure everyone here hates it as much as I do; update it or cut it. Of course, grabber would need it...
Hey, I'm all for the new update method. I'd go one further and have a DetectBestUpdateMethod() function... I don't wanna make the user choose, and I don't wanna write my own code to do it. It should therefore be added :)
Dreamcast port? GBA port? I'm dumb!... You can't even buy a Dreamcast anymore... maybe at a flea market... and I'm not sure how limiting the GBA is, even though it is beastly for a handheld :) Consoles/handhelds come and go too quickly; why on earth even add them to Allegro? It seems silly...
Oh, and while I'm completely off topic: I think you should be able to adjust the volume of sound samples when they're loaded, rather than just playing them at the volume needed; after a while, when you have so many, that gets confusing...
I've heard one simple-minded person complain that doing so would kill NT support, but who really gives a damn? NT ain't a gaming platform
Allegro could support both DirectX 3 and 7 at the same time, and autodetect which one is installed. In fact, you could write the DX7 driver right now.
However, no one is willing to write that DX7 driver. So right now, it's more of a "if you want it, you'll need to code it" situation.
BTW, what features of DX6/7/8 would benefit Allegro?
(I'm just asking, I really don't know all that much about DX).
I'd go one further and have a DetectBestUpdateMethod() function
Then select AL_GFX_UPDATE_AUTO
Oh, and while I'm completely off topic: I think you should be able to adjust the volume of sound samples when they're loaded
Just loop through the sample data and multiply/shift it by some value...
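For example (a sketch only; vol runs 0-256, with 256 leaving the sample unchanged, and it assumes signed 16-bit data):

/* Scale a sample's data in place. */
void scale_sample_volume(short *data, int len, int vol)
{
   int i;
   for (i = 0; i < len; i++)
      data[i] = (short)(((int)data[i] * vol) >> 8);
}

(Allegro actually stores sample data unsigned, so a real version would convert around the midpoint first.)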
Blit actually means BLock Transfer, with an 'i' in the middle
I am dumb as toast. Curses.
pete
DirectX 8 would be huge for 3D programmers... There's really too much to even go into; it's so much nicer to work with than other versions...
And even sporting 7 should give a noticeable speed gain... of course, older video cards won't get much out of it...
I have to agree with Oz. DX8 has support for programming nVidia vertex shaders, it's easier to code, and faster. Since Allegro is using DX already, will it be possible to use DX functions alongside Allegro? Or will it interfere with the library?
What are vertex shaders? How do I use them in 2D programming?
(The point here is: if they're not used in 2D, the job is up to OpenGL.)
Um... how about BLock bIt Transfer?
DirectX 8 would be huge for 3D programmers... There's really too much to even go into; it's so much nicer to work with than other versions...
Although I'm not trying to start a D3D vs OpenGL war, we already have OpenGL support (via AllegroGL). It is very likely that Allegro 5.0 will have GL support built in. Why bother with D3D 8 when OpenGL (via extensions) supports even more features than D3D exposes?
What are vertex shaders? How do I use them in 2D programming?
Vertex programs wouldn't be very useful in 2D, unless you're using them to set up data transfer for pixel shaders (which can be very useful in 2D; imagine bump-mapped sprites, for one).
Remember, blits in a 3D renderer are done via texture-mapped quads. Therefore, you can use vertex programs to play around with vertex data on textured quads. Typically, for 2D, you're not going to need vertex programs.
Speed is one reason. OGL is not even recognized by many video card manufacturers (granted, almost all do nowadays), and is just given basic support by others. DX, on the other hand, is the industry standard; every video card is designed to run the best it possibly can in DX.
I really couldn't give a rat's A$$ about DX support. I rarely (if ever) play games in Windows, and the DX games I do play are translated to OGL calls by WINE. My only concern is OpenGL. (Which is faster: some site did a benchmark under Windows and under Linux/Wine, and the game running under Linux/Wine ran faster. Chew on that.)
I would really like to see that website. I tried playing a few games I commonly play on my Windows machine under Wine about a month back, and I found them much slower, almost to the point of being unplayable. My guess would be that his machine was set up to run Linux a little better than my old Windows machine converted into an mp3 player (mostly)/Linux box.
Even so, I would think you'd care about it, since I'm sure 99% of the people who play your game will be playing it via true DX... assuming you took the time to compile a DX version, that is.
Why not add DX support to Allegro via something similar to AllegroGL, while having the OpenGL support built in? DX has its benefits, but it's not multiplatform like GL. In most cases, DX support through WINE is going to result in a very slow game (and, at the moment, an unstable one too).
ank 2: Yes, that is possible, although it requires people who are both familiar with DX and motivated enough to write the add-on.
SpongeBob SquarePants: Unfortunately, you can only compare a game running on an emulator against the same one running outside the emulator to judge the speed of the emulation, not the speed of the video drivers.
Hey Bob: WINE == Wine Is Not an Emulator
The version of Wine used for the test was called WineX, I think. Some company has been doing some serious work on the DX code.
I hate to say it, seeing how you are a fellow Bob and all, but Thomas is right :). Oh, and kudos on the new version; **GREAT** job, guys.
PS: Bob was wrong, ha ha ha ha. Looks outside... Ahh, a blue moon!!
Fine. It's an "abstraction layer translator". The thing is, depending on where you place that abstraction layer, you get yourself an emulator.
WINE is just an API. It's basically just a Win32 API wrapper; I would call it a translator.
Bump just because I WANNA
I'm not sure how limiting the GBA is, even though it is beastly for a handheld
A better-designed 32-bit processor than anything Intel has come up with, and the option of a normal frame-buffer display rather than the very backward tile/sprite display that Nintendo hardware designers (and Nintendo hardware designers alone) love. However, there is no hardware acceleration in the frame-buffered mode, unlike on your PC (or even an Atari Lynx), so doing the sort of thing most people will assume is 'right' on a GBA (side-scrolling platformers) would be suboptimal via the Allegro library.
WINE is just an API. It's basically just a Win32 API wrapper; I would call it a translator.
Well, it also translates the executable format, and probably has to do some funky stuff with respect to memory allocation.
By the way, has anyone tried combining WINE and that intel emulation BOCHS thing?
That's it, busters! Us Bobs ain't taking it no more, ya hear?! We is rising up against our oppressors, the Thomases!! AKA TOMS! They must be dealt with in an orderly fashion! THE BOBS!!! THE TIME IS NOW!! WE MUST STRIKE, O SAYETH THE LORD!
very interesting.....
The changes sound great. I think the al_draw argument follows the same line of reasoning Microsoft has followed: if they don't understand it, rather than force them to learn anything, make it easy for them so they don't strain their brains. This is programming. I am curious: how long did it take you people to understand what a blit was? One, maybe two seconds?
Besides, it doesn't "DRAW"; it copies a block of data. To me, drawing is... well... drawing... like drawing a line, a circle, etc. It isn't copying blocks of data. Also, this may be just a personal thing, but to me it just doesn't seem natural to have dest then source; my brain cringes when it isn't source first. I hate the way draw_sprite has dest then source; it's just not the way I think, and I would love to see draw_sprite changed, maybe to blit_sprite(source, dest, ...).
And get rid of 'draw' from functions that clearly move blocks of data, leaving drawing to the drawing functions.
I have actually had a friend turned off from using Allegro because of things like the source and destination ordering being inconsistent.
Anyhow, I think we should refrain from coddling new programmers and stick to what makes sense, assuming they have basic programming skill or are willing to LEARN, rather than taking the Microsoft approach <shudder>.
(Now that I think of it, why keep draw_sprite() at all? Personally, I use blit() most of the time, and it is very rare that I use sprites. Maybe the sprite functions that rotate etc. could be renamed to rotate_bitmap or something. Unless you wanted to keep draw_sprite strictly for RLE sprites, but couldn't blit handle that? Just a thought.)
Have a good one.
it just doesn't seem natural to have dest then source
Tell that to the C/C++ language. After all, this: int Foo = Stuff; doesn't set the value of Foo into Stuff; it sets the value of Stuff into Foo.
Hmmm... I know at least one assembly language that uses Dest->Src. It's called Casm.
> Support for bitmaps of alternate color spaces (hsv, yuv).
Oh, lovely ...
> al_set_int("/gfx/color_depth", 16);
Oh well, I won't reiterate my dislike of this
That's gross. If you want to remove as many API entries as possible, why not use enums instead of those strings? Errors are caught at compile time, it's faster, and it's not ugly.
> al_map_rgb(sprite, &col, 255, 128, 0)
Is it still possible to create colors in a given format without first having a BITMAP in that format? If not, I want that.
Tell that to the C/C++ language. After all, this:
int Foo = Stuff;
doesn't set the value of Foo into Stuff; it sets the value of Stuff into Foo.
No, but you take the source variable Foo and you do something to it: you put Stuff into it. So you place the source first ("int Foo"), then you do something to it ("= Stuff"). The same with blitting: you take a source, then you do something to it; you blit it to a destination. It just feels more natural (just like it feels more natural to say "we blit it there" rather than "we draw it there").
Anyhow, it is a nitpicky thing; I can live with dest-then-source or source-then-dest, it doesn't matter to me too much, though I prefer source then destination. The main point is that we should make certain that whichever is chosen, ALL functions follow that form to avoid confusion. If the standard is dest, source, then stick with that standard. No need to turn into Microsoft and make up our own.
Um, what are you smoking? In 'int Foo = Bar;' the destination is 'Foo', and the source of the data is 'Bar'. Sounds logical to me.
> al_set_int("/gfx/color_depth", 16);
Oh well, I won't reiterate my dislike of this
That's gross. If you want to remove as many API entries as possible, why not use enums instead of those strings?
Oh, you mean like having
al_set_int(AL_COLOR_DEPTH, 16);
?
You can always #define AL_COLOR_DEPTH "/gfx/color_depth", or is there something I'm missing?
The point was to allow backwards and forwards compatibility as much as possible, including access to specific drivers' modes of operation.
Errors are caught at compile time, it's faster, and it's not ugly.
I find it nice. Also, I doubt this will ever be a speed bottleneck. Blitting a 4x4 bitmap is more expensive than setting the color depth...
> al_map_rgb(sprite, &col, 255, 128, 0)
Is it still possible to create colors in a given format without first having a BITMAP in that format? If not, I want that.
Well, in Allegro 4, the screen is implicitly used.
Do you mean that you want to use al_map_rgb without having a screen mode set or having any loaded bitmaps?
I don't see how useful that would be...
> Oh, you mean like having
> al_set_int(AL_COLOR_DEPTH, 16);
> ?
> You can always #define AL_COLOR_DEPTH "/gfx/color_depth", or is there something I'm missing?
That's what I meant. It's indeed possible to use defines, which makes this better, but still not perfect. At least it removes the possibility of typos.
> The point was to allow backwards and forwards compatibility as much as possible, including access to specific drivers' modes of operation.
Could this not be achieved by using enums? The only compelling argument I've seen for using strings was that they could be changed at runtime without a parser, which is cool (well, for some of them; you don't want to allow changing "gfx/screen_width" at runtime, unless it gets dynamically bound to a set_gfx_mode call). But I digress.
> Do you mean that you want to use al_map_rgb without having a screen mode set or having any loaded bitmaps?
> I don't see how useful that would be...
That's what I would want.
An image manipulation/filtering library should not have to set a gfx mode before doing operations on an image file. This would be quite handy, and not hard at all, since all that is required is a default setup that the user can define. E.g., if I load an image in a given format, I want to save it in that same format. Hmm, thinking of this, I might want to do the operations using a format with higher precision... But in any case, I can see the need for being able to use gfx formats without a gfx mode set up.
You can use al_unpack_pixel to get the individual RGB components (or YUV or HSV or whatever), in whatever precision you want. Do the manipulation you want, then recompose them with al_map_rgb.
You said it yourself, you'll end up manipulating a BITMAP at some point, so just pass it as a param to al_map_rgb so it knows how to convert the color to that bitmap's format.
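For instance, halving the brightness of a colour without any display set might look like this (I'm guessing at al_unpack_pixel's exact signature; only al_map_rgb's shape appears in this thread):

int r, g, b;
int col;   /* assume col already holds a pixel value read from image */
al_unpack_pixel(image, col, &r, &g, &b);       /* hypothetical counterpart to al_map_rgb */
al_map_rgb(image, &col, r / 2, g / 2, b / 2);  /* recompose in image's own format */

The bitmap parameter is what tells both calls which format (RGB, BGR, ...) to use.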
I won't comment further on strings vs enums, since I guess it becomes a matter of opinion. Shawn made the original suggestion, which is available somewhere on the Allegro site. It's not perfect, but it's pretty damn good
Grr, I hate how you have to re-login every 5 minutes... I didn't find a setting for a timeout in my member customization setup :/
> You said it yourself, you'll end up manipulating a BITMAP at some point
Does this mean it's fine to use those, even if a graphics mode is not set? With the current Allegro, there are problems with RGB formats (e.g. RGB, BGR...). If it's OK, then it's perfect.
> I won't comment further on strings vs enums
Yeah, I suppose I shouldn't have commented when I said I wouldn't.
Anyway, pretty good apart from those two issues (only one, if it's possible to use different RGB formats without a gfx mode set).
Yes, it's fine to use them without setting a graphics mode. That's why there's the bitmap parameter: al_map_rgb would look at that bitmap's format (ARGB with floating-point components, for example) and map the color components to it.
I think it would be a good idea to take into account multiple windows in Allegro's graphics system. There was already a thread on this some time ago, but I think this is a good place to bring it up too.
Actually, I am now in the situation where multiple windows are needed.
I think it would be a good thing to have a few functions for that in the graphics API, so people don't have to hack about on their home OS; it would improve portability.
As for how to name such functions....
AL_WINDOW* window_handler = al_create_window(params "geometry");
al_destroy_window(AL_WINDOW* wind_hand);
al_move_window(AL_WINDOW* wind_hand);
and of course a set of functions that would bind AL_BITMAPs to their respective windows:
al_bind_bitmap(AL_WINDOW* wind_hand, AL_BITMAP* bmp);
al_unbind_bitmap(AL_WINDOW* wind_hand, AL_BITMAP* bmp);
It would be nice to have such an ability.
I think that sounds like an excellent idea, although it may complicate the AL_WINDOW-AL_BITMAP relationship. Could we just have the bitmap itself represent the window?
AL_BITMAP *hwindow = al_open_window("MyWindow");
al_close_window(hwindow);
Although I suppose you would need many more parameters for window attributes such as size.
[edit]
Although on second thought (as I read Bob's code), it looks as if this is what we're doing when we call 'al_create_display'.
[/edit]
How about:
al_set_pointer("/gfx/device", the_screen);
al_set_bool("/gfx/window_minimize", TRUE);
al_set_int("/gfx/window_x", 54);
?
Will the graphics core then be able to handle several displays without trashing the one created before?
Having several displays (with the necessary window decorations) will be fine as well, as long as it makes multiple windows possible.
As for the creation and representation of this feature: it doesn't matter how it is represented, as long as it gets represented (OK, it's a bit early over here, so I probably make no sense at all).
When thinking about this multiple-window stuff, it'll probably have quite an impact on input/output, no?
Yes, and Yes. See above and the previous threads.