Hi all.
I wrote an example using A5 with ocornut/imgui (https://github.com/ocornut/imgui).
The code is here: https://github.com/bggd/a5imgui_example
[Screenshot attached: CBUzrCFUcAAs5e5.png]
Thanks.
Hm, looks cool. Though I have to say, I'm not a fan of immediate mode guis
Yep, the GUI looks decent enough to try. Might come in handy for debugging.
That's awesome.
Do we have a space somewhere to list these A5-using projects? Might be useful for the eventual frontpage redesign.
The obvious place would be the "resources directory" right here on this page... although I guess it hasn't been updated in a decade :p So maybe the wiki would be best instead? Or add it to liballeg.org. I think we should clean that up at some point, maybe move some old stuff like DIGMID, logos, or humor to the wiki, and add more useful stuff like GUIs.
I have to say it looks great, gonna give it a try.
Thanks for posting.
This immediate-mode GUI is an interesting concept; I'd never heard of it before.
But I'm with Thomas: I think it falls down and gets cumbersome when GUIs get complex.
I don't think it makes GUI programming any easier, as its proponents claim.
It just seems to move managing widget state, layout, and event handling out of the GUI library and makes it the programmer's job to re-invent.
Reminds me of the unix dialog commandline utility.
I've never heard of immediate mode GUIs but they sound like an interesting concept. It might be easy in some ways, but really limiting in others. This guy seems to be very pro immediate mode.
It just seems to move managing widget state, layout, and event handling out of the GUI library and makes it the programmer's job to re-invent.
That's what came to my mind, too.
I wonder how everybody's GUIs are coming along these days.
I wonder how everybody's GUIs are coming along these days.
I still commit changes to my GUI as needed, and help people out who want to use it. I get at least one email per month from people who want guidance on compiling or using it, so it seems useful for some projects other than my own. Factorio uses it too, and it's doing very well, so that's pretty gratifying.
Since I ported my game to iOS, I added many touch-friendly features to Agui, like inertia scrolling, being able to scroll on anything that is already scrollable using your finger on the view, and some touch-compatibility stuff for events.
It just seems to move managing widget state, layout, and event handling
out of the GUI library and makes it the programmer's job to re-invent.
That's what came to my mind, too.
You are seeing this from the wrong angle.
- You don't "move state out of the GUI library to the application [programmer's job]". The point is to remove the very idea of state duplication and state synchronization, which is what makes retained-mode interfaces so painful to use. The only data that exists is the data you own and want to visualize or edit. The existence of data owned by the UI library means more work and more bugs.
So it is actually the opposite: retained-mode libraries force you to manage widget state. An immediate-mode interface tries to keep you from thinking about this at all.
- You don't "move event stuff out of the GUI library to the [programmer's job]". You REMOVE the notion of events. They are extremely painful to handle in C/C++ (a bit less so with the newly introduced C++11 lambdas, but still terrible). You need to declare functions, test things elsewhere, possibly in an asynchronous context where you need to store and retrieve data to operate and react. It just leads to longer code and more bugs.
How is that:
agui::Button button;
[...]
flow.add(&button);
button.setSize(80, 40);
button.setText("Push Me");
button.addActionListener(&simpleAL);
[...]
// + the code in the action listener (reacting to the button, probably needing to gather and update foreign state, maybe updating other widgets)
Spread in at least 3 locations of your code.
Is that ever going to be better than:
if (Button("Push me", ImVec2(80,40))) { /* do my stuff */ }
In a single location?
And this is a simple example, because:
- You only have 1 button. Imagine adding new members for every button and trying to name, index, or enum them.
- There is no user-state sync in this example. If you start using anything that holds a value, and you need your widget to mirror the state of your highly dynamic data and vice versa, you are in for a world of PAIN. The imgui principle lets you avoid that, because only one copy of the data exists and you don't need to sync it.
The remaining problem is layout, and that has nothing to do with whether the library exposes a retained-facing or an immediate-facing interface. imgui is about the interface provided to the user; the library is free to remember your widgets and lay them out however it sees fit.
EDIT: Not having to declare your widgets elevates debugging to a whole new level. With an implementation such as my ImGui above, you can literally create a transient widget in the middle of some unrelated code that has nothing to do with the interface, and expose a local variable in a slider with one line of code. You can trace an algorithm using ImGui::Text(), etc.
like inertia scrolling, being able to scroll on anything that is already scrollable using your finger on the view, and some touch-compatibility stuff for events
Wow, you've gotten pretty far! You should post screenshots and stuff on your GitHub page (... wait, do people do that?). I still have a ways left to go on mine. A new bug was recently introduced that causes offscreen-rendered widgets to be drawn in all white... so not quite as exciting.
Is that ever going to be better than:
if (Button("Push me", ImVec2(80,40))) { /* do my stuff */ }
In a single location?
gui.AddWidget(pushmebtn = new Button("Push me", Area(0,0,80,40)));
//...
gui.HandleEvent(system->TakeEvent());
if (pushmebtn->Pushed()) { /* do stuff */ }
How is that so much more difficult? With your code you have to cram everything you want your widget to be into your constructor call.
And with your example, I'm sure there is much more going on behind the scenes than you let on. For example, style and drawing info has to come from somewhere.
And for example, how would an immediate-mode GUI handle dynamic layout? I.e., how does it handle movement and resizing of dynamic areas?
How would something like an editable text widget work? It has to remember line positions, cursor position, selection, scrollbar positions... is the user supposed to handle all of that?
How is that so much more difficult?
You have two points of edit here, definition vs. use (instead of one), and often more with most implementations. And a button is the simplest example, since it doesn't carry state. How do you create a slider or checkbox that's in sync with some live game variables?
you have to cram everything you want your widget to be into your constructor call.
The fact that I'm using constructors (and overloaded functions and different entry points for common variations) has nothing to do with the imgui<>rmgui debate. You can use state or chained calls. I just chose what I thought was the easiest approach for my specific goal (i.e. hacking/auditing/debugging tools).
To take your example, a more imgui-like version might be:
gui.AddButton("Push me"); gui.SetArea(0,0,80,40); if (gui.Pushed()) { /* do stuff */ }
The subtle but very important differences are 1) the lack of [...], so the code is in one place, and 2) lifetime is defined by running code. What's lacking in your example is that good UIs are dynamic. You want items to appear and disappear constantly, grey out, etc. You need to sync data with widgets. You have to maintain this UI somehow, and the easiest way is to just "submit" the UI based on your existing data.
And with your example I'm sure there is much more going on behind the scenes
than you let on. For example style and drawing info has to come from somewhere.
Certainly, but that can be part of the UI state anyhow. I don't imagine the most common use is to make every button unique; you want a consistent style, so it is more likely to be decided at a more global "style" level.
By the way, I don't claim that my implementation of an imgui is feature-ful in terms of styling; it was designed for efficiency, to run a huge number of tools for games. When you run a game on PS3/PS4 with tens of thousands of things going on, plus debuggers, profilers, etc., you can't afford the UI overlay to take 3 ms to draw, so I designed ImGui carefully, down to its visuals, to run optimally. New widgets don't create additional draw calls, there's no dynamic allocation for the act of submitting or drawing widgets, etc. But you could design an imgui with more elaborate styling features.
And for example, how would an immediate mode gui handle dynamic layout?
It's a broad question, but probably similarly to what a retained-mode UI would do. As long as the system is able to uniquely identify a widget, you can do the same things you would with a retained-mode UI. If you really want features totally analogous to the typical layout features of, say, CSS or most rmguis, you can do a two-pass thing. However, there are hundreds of answers, ideas, techniques, and helpers you can think of (the same goes for retained-mode UIs; layout itself is a very open-ended problem). My claim is that layouts are better handled programmatically with good helpers.
Ie. How does it handle movement and resizing of dynamic areas?
I'm not sure what the problem or question is here.
You can try the demo here. http://www.miracleworld.net/imgui/binaries/imgui-demo-binaries-20150321.zip
EDIT the system doesn't seem to allow me to reply twice.
Elias:
>How would something like an editable text widget work? It has to remember line positions, cursor position, selection, scrollbar positions... is the user supposed to handle all of that?
No, of course not; that's the point. The GUI does that for you. For example, in ImGui, to create an edit control you can call this function:
ImGui::InputText("Name", my_buf, 256);
It handles focusing, tabbing, selection, keyboard movement, copy/cut/paste, undo/redo. I don't have multi-line text editing in there yet; I'll probably add it at some point.
I certainly see the value in it for rapid tool development on a platform like PS4, XBone, etc.
I'm still not convinced I would want to use it in a project like my game. I have some very custom behaviors that I want out of my Widgets:
For example, some of my TextBoxes, when text is appended to them, they scroll to the bottom no matter what. For my chat boxes, if the user has scrolled up to look at old chat, I just silently append new chat, and only scroll automatically if the scrollbar is at the bottom.
For my needs, I also needed to have colored text, and what's more, my textbox can parse text using custom rules (unfortunately not yet using regular expressions) and it will put out events for specific rules. For example, I can create a rule called "url" and it will detect when the user hovers a url and give me an action for that.
It's reasons like this that I wrote my own rm gui in the first place.
I have several other areas where I required much more than what standard gui events offered and I doubt it would be particularly easy for an im gui to deal with these feature requests.
That said, I think for the reasons you mentioned, I see the value in it, it seems useful for rapid tool development. But for a massive gui-oriented project like mine, I think mixing logic and presentation would lead to an ugly code base. I would probably have to subclass many widgets to get what I want too.
As for the argument of a single button or single style: if you're developing a game, I highly doubt you will have just one button. I have 4 in mine, one of which only does circle collision detection.
Players demand more and more and developers have to keep up, it is not unreasonable to want different style buttons in a game.
Another issue I see is when I click a button and each time, a new one pops up. I have no choice but to maintain an array, and possibly retain all the attributes for each custom button, so basically I end up having to create a custom copy of the state.
In my game, every server can have an indeterminable amount of tables (they are Widgets) and each table has 4 chairs. Every chair has custom rendering to render a player's name and avatar on the chair.
The player can filter certain tables based on criteria. Tables not matching the criteria must be darkened.
That seems like a lot of state that I think an imgui might have trouble with. It can be done with imgui but it seems impractical to do so.
I see the value of imgui as I said before, only for rapid tool development, and not for a practical large game project.
My game raised a lot of GUI-related challenges for me that I'm not sure I could have solved [elegantly] without having written my own API.
I might be wrong though and imguis might be practical for large game projects. If you can solve the issues I listed above I would be very interested in hearing those solutions.
I appreciate and understand the need for custom UI behaviors and rewriting your own UI. Even more so for your own game! (The same way I wouldn't advise using GTK/Qt for a game, as they wouldn't be flexible enough.)
But again this has nothing to do with the rmgui/imgui debate.
A lot of your examples seem to imply that those features would necessarily be harder to implement with an imgui.
e.g.
For example, some of my TextBoxes, when text is appended to them, they scroll to the bottom no matter what. For my chat boxes, if the user has scrolled up to look at old chat, I just silently append new chat, and only scroll automatically if the scrollbar is at the bottom.
This is already possible in mine, and it actually happens to be done in one of the UI demos, as is colored text. If you want to parse URLs within a text box and display a menu on click, I'm sure you could do that as well (neither UI would provide it by default). But I imagine you'd want some sort of markup language that's applied globally, rather than just in a text box.
My approach to customization is that I see ImGui as a basic set of helpers; you can combine the lower-level helpers to create custom widgets.
As for the argument of a single button or single style, if you're developing a game, I highly doubt you will only have just one button. I have 4 in mine. One of which only does circle collision detection.
Of course, but I meant that the 4 styles are probably defined somewhere globally rather than in each button. So all the styling settings (which can be dozens of parameters) won't be specified for each button. You would rather have an approach like:
Button("OK", style)
Rather than the impractical:
Button("OK", five million style parameters).
In ImGui the style is part of the state (akin to OpenGL), so you can do:
PushStyleCol(ImGuiCol_Button, ImColor(1.f,0.f,1.f));
PushStyleVar(ImGuiStyleVar_FrameRounding, 4.0f);
Button()
Button()
etc.
PopStyleVar();
PopStyleCol();
Which, if you started using it often, would probably lead you to create helpers:
PushMyStyleB()
Button()
Button()
PopMyStyleB()
I don't quite follow the problem with your examples.
>In my game, every server can have an indeterminable amount of tables (they are Widgets) and each table has 4 chairs. Every chair has custom rendering to render a player's name and avatar on the chair.
>The player can filter certain tables based on criteria. Tables not matching the criteria must be darkened.
It seems like all of that would be easier to do with an imgui-style library, since all you have to do is reflect the state of your server/game. But I'm not sure what you mean by "tables" and "chairs", and whether those are UI elements or sprite/3D objects.
I routinely display and filter thousands of active, dynamic items with ImGui with no issue. I suppose that depends on the implementation (mine tries to be fast) rather than on the interface paradigm. What's guaranteed is that you would have a hard time displaying a list of 10 million items with an rmgui, whereas you can do it with an imgui because the objects don't need to exist or be stored anywhere. You can even seek according to your current scrolling and just display only what you need.
Those are some good points; I'll want to take a further look at some of your examples. Do you know of any large open-source games that use imgui? I would be very curious to see the structure of the code. It worries me to mix presentation with logic, but I might not be thinking outside the box.
This is what I meant about the tables and chairs:
[Screenshot attached: the tables-and-chairs UI]
Every chair can be clicked, and every table can be clicked. The tables have custom text that changes based on the type of game. Both the Chair and Table subclass Button (but that does not count toward my 4 button styles).
I do all my GUI hookup work in the scene-begin event of my game.
You can check the demo linked above; the entirety of the features in the demo are driven from a function called void ImGui::ShowTestWindow(). It's a long function because it shows a lot of features individually.
https://github.com/ocornut/imgui/blob/master/imgui.cpp#L9219
I don't really know the state of open-source games; I've been developing proprietary console games for a long time now. I intend to replace the old, horrible, horrible UI of my emulator Meka (using Allegro, mostly written in 1998-1999!) with ImGui, so perhaps that'll be a demo. But the demo above has a fair amount of stuff.
I'm not sure your tables/chairs are really "UI" in the classic widgety sense; it's more general programming with visuals, interactions, and animations. If you start wanting detailed interactive animations, you need to manipulate state that's not going to be part of any "ui" engine, e.g. the position of your character, where they are looking, the position of the cards while they fly off the table. If you try to retrofit this data in a generic way into your UI library, it'll feel awkward and constrained in the first place, so it's perfectly reasonable to do something custom. But I don't think that custom thing classifies as a UI library.
I'm not sure what the problem or question is here.
You can try the demo here.
http://www.miracleworld.net/imgui/binaries/imgui-demo-binaries-20150321.zip
My laptop only has DX9 or 10 (?) with Vista, and only OpenGL 2. So all four demo programs crash for me. The DX ones are both missing a DLL, the OpenGL 3 one fails to load an extension, and the OpenGL one just crashes.
Well, that's very interesting. I suppose I need to test on Vista, but Visual Studio 10 should produce binaries that are Vista/XP compatible (the latter is disabled by default in the options). This is the first time I've heard of a crash, but usually people build it themselves. The DX9 demo and OpenGL 2 demo should work on old hardware. Thanks!
(Which DLLs are missing for DX9? Don't you have the DX9 runtime installed?
If you have time to try the .sln project, that would be helpful. But I'll investigate.)
The demo doesn't wait in the game loop. It's probably a high load for a laptop's iGPU.
Yes, because I'm using it to measure performance, so it runs with no limit.
I should add a wait/VSync option, on by default, and make unthrottled mode optional. Thanks!
By the way, the Unity editor UI is essentially an imgui api
http://docs.unity3d.com/ScriptReference/EditorGUI.FloatField.html
I was missing d3dx9_43 or something like that.
I got imgui and its OpenGL example program built after compiling GLFW. It's very impressive what you can do with immediate-mode GUIs. One thing: how do you handle input and events? And where do you store your data? You can't just pass everything into the function, or that defeats the purpose. Is your style global state that gets copied into every object as it was at the time the object was created?
11,000 lines is a lot to cram into one source file, and the widget constructors are a bit out of control.
It's a little tedious to learn all the different function signatures for your widgets. Wouldn't it be far easier to have a single default constructor function? In my GUI I plan to have a widget factory: you just pass in a string, and it deciphers the widget and any attributes set. Users can create their own widget factories through callbacks or forwarding functions.
One other thing: how would you go about implementing a program like this, with dynamic resizing? (You can resize the window by any corner and middle-click-drag the window (I mean the orange-bordered object), as well as resize the inner cells through their splitter handles (hover with the mouse and press the button grid to change the pointer).)
EagleTest.7z
[Screenshot attached: the resizable-cells test program]
I've attached the source for the program here :
GuiTestMain.cpp
Edit: The function being run is GuiTestMain2, to be specific.
Thanks for looking into it!
One thing: how do you handle input and events?
Not sure exactly which level you're asking about. ImGui receives inputs from the application (e.g. mouse position, mouse buttons, keyboard state) and stores them.
Everything is processed as you call the function. ImGui maintains the 'cursor position' (the current layout position). When you call Button(), it calculates the bounding box of the button on screen and tests e.g. whether the mouse is hovering over the button, so it can react to mouse inputs. If the mouse is clicking in this area, we return true. Aside from that, it pushes vertices into a buffer that can be rendered later (this buffer will represent the entire UI rendering). And it moves the cursor, e.g. to the next line.
That's actually a very simplified version of what's happening, because there are lots of details under the hood. But it doesn't "store" data per widget, or only very rarely. The only data that's stored is the transient render data that it pushes and that gets batch-drawn at the end of the frame, basically a bunch of textured triangles. It doesn't render immediately, both to allow merging draw calls for efficiency and because not touching the render target immediately lets you use ImGui within your own engine rendering to debug it.
ImGui infers a unique identifier per widget, based on the stack of items, labels, and other information. A button press is actually press+release, so it happens over multiple frames; that's where the unique identifier comes in. When you click the button, I store the unique identifier in a variable, 'ActiveID'. Next frame, if you release the mouse while hovering over the button with the same identifier, it knows you pressed and released on the same button, and that's when it returns true.
If you have time to get into the details, any of these four articles should give you a better understanding of how it works:
https://github.com/ocornut/imgui#references
And where do you store your data? You can't just pass everything into the function, or that defeats the purpose. Is your style global state that gets copied into every object as it was at the time the object was created?
Neither of those questions really applies. There are no "objects"; they don't exist. When you call Button() it handles the "logic" of a button and pushes triangles to render the button, but the button doesn't exist anywhere. There's no "Button" struct.
There are things that are stored. For example, when you click a tree node to open/collapse it, it stores a boolean associated with the ID of that tree node. Resizable columns store a float (associated with the ID of that column). But storage that persists for more than one frame is actually rather infrequent. For stuff like text-editing information (e.g. cursor position), there's only one copy, since by definition you can't type into two text fields simultaneously.
Btw, I think it's probably a little harder to write a great imgui-type library than a great rmgui-type library (even if both are hard). But I believe it's so much better for the user that it's worth doing.
11,000 lines is a lot to cram into one source file, and the widget constructors are a bit out of control.
The number of lines is a stylistic choice. The number-one priority for ImGui was to be ultra-portable and not necessitate a lot of custom build setup. Libraries are a PAIN to deal with under Windows, and it's frequent that people give up on a library just because it doesn't build, doesn't link, has conflicts with standard-library variants, etc. It's hell. So by providing one single .cpp file and no build files (only the examples have build files, not the library), the message is: you can copy this into your folder and it'll just build.
Contrast that with any major library (e.g. Allegro) that is not easy to build. Unfortunately, it's probably impossible to solve this problem well for big libraries. It's a really hard issue with no clear "winning answer". Developers already spend a lot of time improving their portable build process (as Allegro did), but it is such a time sink, full of hazards, and not the most exciting feature to work on.
For a small library shipping with no dependencies, like ImGui, it's possible to just avoid the building issue entirely. This is similar to the stb libraries, which each fit into a single .h file (image loader, TTF loader, Ogg loader, scripting languages, compressors, voxel renderer...):
https://github.com/nothings/stb/
and the widget constructors are a bit out of control
I disagree
The easiest way (until I get around to writing better web documentation) is to browse the ShowTestWindow() function.
It would be far easier to have a single default constructor function, wouldn't it?
No, because the parameters are tailored and make sense only in the context of a specific widget.
In my gui I plan to have a widget factory. You just pass in a string, and it deciphers the widget and any attributes set.
Using strings to pass variable numbers of arguments and named variables is a useful pattern in C/C++ for this sort of situation; that would work. It's what AntTweakBar used. The reason ImGui isn't doing that is that it's hard to construct settings dynamically with strings. However, I use format strings extensively (they are very powerful).
One other thing: how would you go about implementing a program like this, with dynamic resizing? (You can resize the window by any corner and middle-click-drag the window (I mean the orange-bordered object), as well as resize the inner cells through their splitter handles (hover with the mouse and press the button grid to change the pointer).)
To answer your question about mimicking the example: you can't exactly do that at the moment. I haven't added movable horizontal separators yet. It has resizable columns, but they are vertical and don't have the exact properties you'd expect for this sort of setup. The to-do list is enormous, but I'll let you know when I sort that out. It's a good example to try to mimic. Thanks!
A button press is actually press+release so that happens over multiple frames, that's where the unique identifier comes in.
In my GUI I only look for a button press, not a release. I could change it, but I don't know; I like the immediacy of the button press, instead of a two-step process.
If you have time to get into details, any of those 4 articles should give you a better understanding at how it works:
I'll take a closer look some time.
It's very interesting to remove the data from the widgets, but in some cases doesn't that make it the user's job to store the data?
How do you control your layout in imgui? Is it just a single global flow layout that all the widgets use? Or do you have others, and if so, are they difficult to create in imgui?
In my gui I only look for a button press, not a release. I could change it, but I don't know, I like the immediacy of the button press, instead of a 2 step process.
For buttons only, that's OK, but other widgets may need a press-release scheme, and then you don't want inconsistent behaviour. Notice that pretty much all modern OSes adopt this press-release approach, which also plays better with inputs that aren't very precise (e.g. touch screens). It also allows integrating other features, such as drag and drop. Again, there's no single answer; it depends on what you do, but two-step is more in line with standard OSes.
It's very interesting to remove the data from the widgets, but in some cases doesn't that make it the users job to store the data?
Hmm... yes and no. If the user needs to read and write data from a widget, in an rmgui context it usually means you'll have a copy of the data. There might be occasional cases where you need to store data, but compare that to an rmgui, where you always have to store data (you always need to store a handle to widgets).
How do you control your layout in imgui? It is just a single global flow layout that all the widgets use? Or do you have others, and if so, are they difficult to create in imgui?
That's the part that is less obvious to get right with an imgui implementation, and personally I haven't tackled layout very much yet (I hope to eventually). For the sort of tools I made, layout was rarely an issue. In practice, simple stacking horizontally or vertically, grids, or filling remaining space is generally enough. Creating resizable spaces (like I'm doing with columns) isn't a problem. What's hard to get right is when you want to measure the size of items and feed that back into the layout in a submission-order-independent way. It probably has to be done in two passes. Unity solves that by running the GUI code multiple times (e.g. maybe in the first pass it only does size measurement, then layout, etc.; just a guess). The way I think I'll solve more intricate layouts is to have layout primitives store state, with the layout only applied on the second frame. Effectively there would be a frame of lag; for a new window it might mean the window isn't shown during the first frame, and during a resizing interaction the frame of lag shouldn't be noticeable. But this is a rather open topic that I suppose people are trying to address in different ways.
In my gui I only look for a button press, not a release.
Mine has the full press-on-hover-then-release-on-hover behavior.
I'm cool like that.
In Agui, I made a mouseClick event, which requires the press and release to be on the same Widget. If it is a left mouseClick on a button, I give an actionPerformed event.
It would be silly easy to add the press-release scheme alongside my own: just make a second event, one for the widget button press and one for press-hover-release.
You guys do realize this thread nearly made all our libraries obsolete at once, don't you? I, for one, do not care. I will continue working on my retained-mode GUI regardless. I want it to reach some minimal kind of completion, and I still enjoy working on it.
And I have too much code invested in my GUI, hahah. But this is really cool once you understand its potential. I'm surprised I'd never heard of it before.
The most important thing is to keep writing your own new code as much as possible; this is how we all learn. Btw, ImGui still has lots of issues and missing features; it's not like it's making anything obsolete.
I dislike passing the properties of the widgets as parameters in your RenderWidget functions,
but that's easy to move into the gui context.
But on the whole I think this is a very elegant API.
I'm not converted to the immediate-mode paradigm,
but this has inspired me to allow immediate-mode processing of events in my gui library.
If your needs are simple (which is probably often enough the case), it really is very convenient to just handle all your events in one function vs. subclassing widgets and adding listener routines.
I'm a firm believer in "creating options" for the programmer and not dictating methodology.
Thanks.
The most important thing is to keep writing your own new code as much as possible; this is how we all learn. Btw, ImGui still has lots of issues and missing features; it's not like it's making anything obsolete.
You're too kind.
I have to admit I really like the idea of batching geometry. It would be interesting to implement the same thing in my own GUI, but I doubt I'll get to that anytime soon. It's so much more efficient from a hardware point of view.