It should be stressed that just because I'm a fan of the guy doesn't mean I do everything he says on every project. I find his alternative style a breath of fresh air, and apply the techniques where applicable.
In my current project, my objects are so bloody complex and abstract (with scripts doing the actual input) that, so far, I'm just biting the bullet and treating them individually so I can keep my head wrapped around them. Later on, I may try to switch things over to arrays of data. (Though I am using static arrays / pools, and D's template system lets me build and adjust the size of these plain arrays very easily.)
On the other hand, I'm working on a message system and a "two phase" object execution system so that all objects can be executed completely (or mostly) independently of each other, and then their "effects" are queued up and applied after all scripts have run. That way all objects can be split across as many CPU cores as you have, and the only things that have to be synced are what I call "bridges" between "domains" (a distinct set of objects and memory for a core). That's been pretty fun to design. Since my world can change, and object densities can move around, I'm still looking into my options for dynamically splitting the world into domains a la BSP/quadtrees, or something more context-aware.
I invite you to try appending to a built-in array, using built-in associative arrays, and using stateful closures without the GC enabled. You're drinking the D kool-aid a little too strongly. D doesn't have the ecosystem and users for a reason.
Again, to be noted: I'm still learning the language. I'm not saying it's perfect. I'm saying the biggest problem is simply the lack of more users. And the reason for the lack of users? It's simple. It doesn't have a big company backing it (Rust has Mozilla, Go has Google, C# has Microsoft). It isn't taught in most schools because it's a systems language. It's not "cool" because it's not a new, crappy, hipster web language... because it's a systems language. C and C++ aren't "cool" anymore but have plenty of pre-existing systems.
Here's an actual project that stripped out all GC:
(As well as the linking article on Stack Overflow about GC-less D.)
Up to 300% improvement by replacing the GC and standard lib. That's surely a lot. But it's not orders of magnitude either, which is when I would start making a Hard Decision (TM). A 300% hit in 2012 is about equivalent to me starting a project today on a modern CPU.
(Another guy did a GC-less Phobos stdlib for games back in ... 2005.)
That was way back in 2012, and things have improved. (e.g. IIRC, LDC didn't even exist then.) He didn't say it was "impossible" or even complain at all in that article. (I'm sure it was a lot of work, but so is programming in general.) When I ran some memory-intensive tests on raw data a year ago, I had about a 200% slowdown over C++ with the same code--with zero compiler tweaks or D-specific workarounds.
Yeah, 200% is a lot. But for an indie developer, the time and effort to develop is also a critical factor--not just speed of execution. There are TONS of games that have come out that were written in C# or even freaking JAVA (Minecraft?!). And yet, Minecraft is one of the most popular games of all time. (Ran like piss on my netbook when it first came out. But my netbook is in parts in a box, and Minecraft is still selling copies.)
I'm building my game in D as a real-world, full-project-sized experiment. And I'll have much more to say about building a game in D when it's done.
So my point with D is not that it's perfect. It's that it's viable. And yes, GC-less D is possible if you're willing to put the effort in and restrict your feature-set a little.
But that's a HUGE DIFFERENCE from trying to run say, C#, without a garbage collector, which is actually impossible.
If taken to the extreme, Booleans have no place in platform/data-centric development... so do you never use Booleans?
While I can't answer what you said, what blew me away in his OGRE slides (see my first post) is that he asked, "Why can't a float be used as a boolean?" (instead of a frequently called function casting a float back to bool before returning the condition value).
That blew my mind. Floats CAN, and that takes out an unnecessary conversion. Oh sure, we're "taught" in college and by fellow programmers to FEAR floating point numbers. "Never compare a float!!! (except less than or greater than!)" and "floats aren't accurate!" Yet, I ran a test (and a Google search would also confirm it) showing that a 32-bit IEEE float will actually store ANY integer up to 16,777,216 completely accurately. Nobody bothered to tell me that, yet it's a fact that can be relied on.
So sometimes, yeah, it'd be faster to leave a float as a float. But we're always taught to needlessly encapsulate things into black boxes, out of fear, as if "nobody could possibly understand these things in real situations, so we should just restrict ourselves to a subset of use cases."
And what's so refreshing about Mike Acton is he says the opposite. "Understand the architecture or you're not an engineer."
Which IS something taught in real engineering departments, like my Mechanical Engineering degree. We were never told, "Hey, physics is hard. So let's just ignore it and design things that are bigger, fatter, and more expensive than they need to be." (Enjoy getting fired!) But somehow, that mentality is pervasive in the programming community. Just like (taking it back to my first post) how Mike Acton wants to slap someone anytime they want to abuse the "Premature Optimization is the root of all evil." quote.
WAIT, I remember ONE thing related to your boolean question. Bools represent DECISIONS. An object with one bool could actually be two separate object cases. Two bools, four.
So often, instead of using bools, they'll SPLIT objects into separate arrays and/or SORT them within the same single array. Hence the "print the result and zip it." You want your BRANCH decision (for a one-bool situation) to change direction ONCE. All bool=false objects at, say, the top, and then a sudden switch to all the bool=true objects at the bottom. Or vice-versa. The point is that all the like objects are together, so the branch predictor doesn't thrash as three objects come in =true with their code, then one object with =false which uses a different set of code, and then back and forth, each time killing the branch prediction and destroying the pipeline.
Split data where possible and treat them as independent arrays.
Sort data in the other situation, so that branches only occur at the change-over point.
I've seen that mentioned in a bunch of game programmer conferences... not just Mike Acton.
Now, I'm still not sure of the best way to address objects that are FLIPPING state dynamically. You could "sort" them, but then you're still branching in that sort. However, if the sort takes the "branch hit" once, and then the rest of your code references those objects multiple times (where they'd otherwise hit those branches many times), that may work. Also, you could use a partial sorting algorithm that only "reduces" branching rather than "eliminates" it, yet still improves the overall performance.
That is, even with a program that only reads a dataset ONCE per frame, you might be able to SORT some subset of that dataset to reduce the entropy a little bit and have the additional sorting cost of that reduction be less than the branching cost of an unsorted data set. So you "clean up" your data set a little bit each time.
Then again, that would still require the main program to HAVE those if(bool) statements, whereas the splitting method removes them altogether. So like I said, I haven't figured out the best solution yet. But if they were all in the same array, reducing the entropy a little bit each frame may work great when your dataset only increases entropy a little bit each frame. (That is, some objects change bool state, but not ALL. So where they WERE sorted, they're now "mostly sorted" and you're constantly reducing that disorder by a little bit with your sort algorithm.)