Programming Paradigms and Useful Features
Kibiz0r
Member #6,203
September 2005
avatar

I've been wanting to see CQRS implemented as a first-class language feature.

So instead of "x += 3" meaning "replace x with x + 3", it means "store an event on x that adds 3 to it".

And you can link these "state change events" to a parent event, such that you can say...

move_event = my_sprite.move_right() // x += 3
move_event.undo()

...and any state change events that were produced as a result of move_right are also undone.
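
Roughly what I'm picturing, as a minimal C++ sketch (the Event and StateChange names here are just made up for illustration, not any real library):

#include <functional>
#include <vector>

// A recorded state change that knows how to reverse itself.
struct StateChange {
    std::function<void()> apply;
    std::function<void()> revert;
};

// An event groups the state changes it caused, so undoing the event
// undoes every child change, in reverse order.
struct Event {
    std::vector<StateChange> changes;
    void undo() {
        for (auto it = changes.rbegin(); it != changes.rend(); ++it)
            it->revert();
    }
};

struct Sprite {
    int x = 0;
    // Instead of mutating x directly, record the change as part of an event.
    Event move_right() {
        Event e;
        e.changes.push_back({ [this]{ x += 3; }, [this]{ x -= 3; } });
        for (auto& c : e.changes) c.apply();
        return e;
    }
};

// Usage:
//   Sprite my_sprite;
//   Event move_event = my_sprite.move_right();  // x += 3
//   move_event.undo();                          // x is back where it was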

Imagine what that would mean for a client-server architecture where two clients could normally make contradictory requests.

With state-based interaction:

Client A says "set x to 7"
Client B says "set x to 4"
Last one wins, I guess?

With event-based interaction:

Client A says "add 2 to x"
Client B says "subtract 1 from x"
This, we can make sense of!

I have seen so many client-server architectures fall apart as they get bigger, trying to work around these issues of contention. They either try to firewall the mutability of objects by giving ownership to a specific client (which does work if you can actually guarantee that separation), or they detect specific kinds of conflicts and do a "best-guess" patch to recover. And if neither one of those works, screw the client and drop their request.

It's especially hard if you need to support an offline mode, where you need a plan to get back to a sane state without losing all of your work if the server rejects one of your requests. If all of your state changes are captured as high-level ideas like "move_right" rather than "x = 7", you can easily drop that one rejected change and be in the correct state.
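
A rough sketch of the offline-mode part (hypothetical names, nothing from a real framework): keep the high-level commands in a log, mark the ones the server rejects, and rebuild state from the rest.

#include <functional>
#include <vector>

struct State { int x = 0; };

struct Command {
    std::function<void(State&)> apply;  // e.g. "move_right": s.x += 3
    bool rejected = false;              // set when the server says no
};

// Rebuild the client's state by replaying only the accepted commands;
// a rejected command is simply skipped instead of corrupting everything.
State replay(const std::vector<Command>& log) {
    State s;
    for (const auto& cmd : log)
        if (!cmd.rejected)
            cmd.apply(s);
    return s;
}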

So yeah, I wish more people knew about CQRS.

Also reactive programming, which has the same problem: it's really useful but kind of clunky to use in most popular languages, which could probably be helped by more people being interested in using it.

m c
Member #5,337
December 2004
avatar

Crazy.

Function call overhead is a bit of a thing, but inlining and other optimizations can remove the problem. Old code would use preprocessor macros instead of subroutines to reduce function call overhead. Now you can just use static inline functions and get the same executable but with better error checking and debugging.
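
For example, a minimal sketch of the difference (not from any particular codebase):

#include <cstdio>

// Old style: no call overhead, but no type checking and the classic
// double-evaluation bug if you pass in something like i++.
#define SQUARE_MACRO(x) ((x) * (x))

// Modern style: the compiler inlines this just as well, but you get real
// types, real error messages, and something a debugger can step into.
static inline int square(int x) { return x * x; }

int main() {
    std::printf("%d %d\n", SQUARE_MACRO(5), square(5));
    return 0;
}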

Also some calling conventions are much faster than others.

Have you ever tried to make your program one big giant state machine?

(\ /)
(O.o)
(> <)

Striker
Member #10,701
February 2009
avatar

There are different types of functions and compilers. However, IBM does care about function call overhead:

IBM function call overhead

And I still think you get the least function call overhead if you have no function at all. :o

bamccaig
Member #7,536
July 2006
avatar

It has been a while since I was trying to learn Haskell, but I found it very refreshing. It's very neat programming in a language that is "functionally pure". I finally figured out what "pattern matching" is, and it is indeed neat.

IIRC, there's no such thing as global state in Haskell! And things that are inherently global or impure, like the file system or the user, are abstracted into "todos" that aren't evaluated until the program actually executes (so in code and at evaluation time they are like stubs with defined "values").

The end result is that your program becomes a combination of logically defined pieces. It's a really neat way to think about programming. Of course, it does become a bit of a challenge to write real-world programs that have a lot of impure elements in them, but I never got far enough in my learning to really experience that.

I should get back around to learning Haskell again. I think I hit a brick wall where the tutorial I was using wasn't going fast enough, and when I tried to experiment on my own nothing worked. :P

Perl also has refreshing ways of thinking of and doing things.

Thomas Fjellstrom
Member #476
June 2000
avatar

All I have to say about this is that worrying about function call overhead is, in 99.999% of cases, a waste of time and technically wrong.

About the only time I can think it REALLY matters is in very, very tight loops, and on really slow and cheap MCUs. And in both cases, an optimizing compiler can and will optimize the calls away entirely by inlining the functions.

So don't worry about it. Just don't.

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

Striker
Member #10,701
February 2009
avatar

What I basically wrote is that my paradigm is that usability comes before readability, following the principle "form follows function" (Horatio Greenough).

There are two ways a program can be written: with the processor in mind or with the reader in mind. We don't need to learn assembler these days, because the compiler optimizes C so that it isn't slower anymore. The same may go for some types of functions. Many programmers write only for their own eyes; they are happy about nice-looking program text and don't think much about speed.

But reducing function call overhead is not the only way to speed things up. There are many more tricks, and all together they give the effect an enthusiastic game programmer wants. :)

bamccaig
Member #7,536
July 2006
avatar

Years ago programs used to be small enough that you could actually count the processor instructions, memory bytes, and optimize the program for the machine. These days the machines are so insanely fast, and large, that it's impossible to do this in a timely fashion. You don't need to worry about it. You can be quite wasteful and the user won't even notice. Caring about the performance of your programs is good, but you can't optimize them by avoiding useful programming constructs with negligible overhead (what you're really doing is making it more difficult to write bug free code, and buggy code is worse than slow code). It's a waste of your time to code like that. Instead, check which parts of your application are slow by running it through a profiler (e.g., valgrind) and then optimize only the slow parts with better algorithms.
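
As a toy illustration of what that workflow looks like (the functions here are invented for the example, not from any real program): once the profiler points at a hot spot, the win usually comes from a better algorithm rather than from shaving off calls.

#include <algorithm>
#include <string>
#include <unordered_set>
#include <vector>

// What the profiler flagged: a linear search inside a loop, O(n*m).
bool any_match_slow(const std::vector<std::string>& names,
                    const std::vector<std::string>& wanted) {
    for (const auto& w : wanted)
        if (std::find(names.begin(), names.end(), w) != names.end())
            return true;
    return false;
}

// Same behaviour, better algorithm: build a hash set once, roughly O(n + m).
bool any_match_fast(const std::vector<std::string>& names,
                    const std::vector<std::string>& wanted) {
    std::unordered_set<std::string> lookup(names.begin(), names.end());
    for (const auto& w : wanted)
        if (lookup.count(w))
            return true;
    return false;
}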

Chris Katko
Member #1,881
January 2002
avatar

The only time an optimize-last paradigm fails is in the "death by a thousand cuts" problem, where nothing in particular is taking all the time, but everything is taking way more than it should.

But those cases are extremely rare, and are more symptomatic of poor logical structure / algorithm selection than of things like indirect calls. For example, selecting algorithms that thrash the cache, or reading data from the hard drive while running.
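
A toy example of the cache point (not from any real project): both functions below do the same work, but the column-order one strides across memory and misses the cache on nearly every access.

#include <cstddef>
#include <vector>

const std::size_t N = 4096;  // grid is N*N ints, stored row by row

// Row-major traversal: touches memory sequentially, cache-friendly.
long long sum_rows(const std::vector<int>& grid) {
    long long total = 0;
    for (std::size_t y = 0; y < N; ++y)
        for (std::size_t x = 0; x < N; ++x)
            total += grid[y * N + x];
    return total;
}

// Column-major traversal: same result, but each step jumps N ints ahead,
// so it thrashes the cache and runs much slower.
long long sum_cols(const std::vector<int>& grid) {
    long long total = 0;
    for (std::size_t x = 0; x < N; ++x)
        for (std::size_t y = 0; y < N; ++y)
            total += grid[y * N + x];
    return total;
}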

It's easy to get sucked into the perfectionist mindset with code: the sleekest, fastest code. But nowadays, the APIs a program invokes often take up more time than the code you think you're spending your time in. I mean, even drawing a single sprite in Allegro 4 goes through plenty of if statements, and branch mispredicts cause quite a bit of slowdown.

The most important thing with programming (unless you're really doing it for a hobby centered on perfectionism) is clear code that is easy to expand and easy to fix. Optimization is a losing war. Computers get faster every year, and nobody is going to applaud SkiFree for being fast, or even remember that Fallout 1 & 2 and System Shock 2 had ridiculously slow loading times.

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

Striker
Member #10,701
February 2009
avatar

Have you ever wondered why the Windows user interface today is just as slow as, or even slower than, Windows 3.1 in the nineties? I remember I optimized 3.1 with a few tricks: used 32-bit instead of 16-bit, used a lot of RAM, and so on. I think its user interface was quicker than XP's. If you look at XP, you always wonder how a screenful of icons on the desktop can take seconds to build up in a modern OS from MS.

Now we have 3 GHz dual-core processors or better instead of the old 40 MHz 486, but it seems the programmers at MS are partly unable to make use of it. Why? Because they think like you: we don't need to care about speed, we can waste the resources?

pkrcel
Member #14,001
February 2012

Striker said:

Have you ever wondered why the Windows user interface today is just as slow as, or even slower than, Windows 3.1 in the nineties?

You clearly remember things in a heavily biased way.

Quote:

Because they think like you: we don't need to care about speed, we can waste the resources?

Maybe, but nobody here said you shouldn't care about speed at all... your argument was a bit more extreme.

It is unlikely that Google shares your distaste for capitalism. - Derezo
If one had the eternity of time, one would do things later. - Johan Halmén

Striker
Member #10,701
February 2009
avatar

Biased? Say a 3 GHz dual core is ~6000 MHz; 6000 / 40 = 150. If I have a processor that is 150 times faster and I expect the user interface to be much faster, that's biased?

pkrcel
Member #14,001
February 2012

Yes, heavily so.

They do not do the same things, and you are timing different things on different machines; it's not a meaningful comparison.

I am NOT saying modern computing does not suffer from slow load times and general lack of optimization, but to say that Win3.1 is or was FASTER is incorrect and unfair. On my 486DX, Win3.1 took AGES (= several minutes) to load and had next to NOTHING behind it (around 4 MB of occupied RAM? can't really remember), whereas my current Win7 takes (unfortunately, after 3 years of usage) some minutes to load (maybe even 3) but has a WORLD OF THINGS behind it (2+ GB of RAM usage).

If loading means dumping data into memory (more or less), memory consumption today is 1000 times what it was in the nineties... memory is faster and processors are faster... still, we float on the compromise.

Again, I don't think Windows' problems are function call overhead in any case.

EDIT: of course I do not have an SSD eh (which reminds me that r/w'ing data to disk is STILL a grudge).

It is unlikely that Google shares your distaste for capitalism. - Derezo
If one had the eternity of time, one would do things later. - Johan Halmén

Arthur Kalliokoski
Second in Command
February 2005
avatar

Striker said:

the Windows user interface today is just as slow as, or even slower than, Windows 3.1 in the nineties

I remember running Windows 3.0 on a 286 with the full 640k of regular memory plus an additional 384k of extended memory, for 1 meg total. I bought some "Sams Learn C++ in 21 Days" or something, which came with the Borland 4.1 (?) compiler. The only thing I could do with it on that 286 was browse the PDF documentation, and it took 30 seconds to scroll one line while viewing the PDFs.

They all watch too much MSNBC... they get ideas.

Thomas Fjellstrom
Member #476
June 2000
avatar

My laptop boots in less than 10 seconds. None of my older computers ever came close to this. The interface feels smoother, looks better, and can do a heck of a lot more than my older machines ever could.

If it means I have to suffer through a few function calls, well, I'll just have to live with that.

The problem isn't function call overhead, no matter how much you think it is. Complexity and bloat are at the root of the problem these days. You think Windows 3.1 can handle the 8 logical cores my laptop has? Or the 32 GB of RAM? Or the nearly 1 TB of BLAZING fast SSD space? How about the 1080p screen?

The very nature of computers these days requires more complex code, which takes more resources to run. You should see the insane code the Linux kernel has for managing memory and task scheduling.

Yes, there is bloat that causes extra slowness. Just try not to install crapware like Norton or basically any (non-OS) software that came bundled with your PC.

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

Gassa
Member #5,298
December 2004
avatar

I also have a feeling that a windowed UI nowadays, with an SSD and other good stuff, is generally slower than Win3.1 on a good Win3.1-age computer. The UI is also more feature-rich, that's for sure. However, a typical user needs a small fraction of this richness, and all the other stuff just slows things down for them.

That said, picking a single slowdown cause at random (e.g. slowdown from function calls, cache misses, excess branching, expensive instructions such as division) and optimizing it perfectly doesn't usually get you the fastest program possible. A more experienced programmer will likely start by writing a slow but programmer-friendly prototype, then profile it for bottlenecks, optimize a few of them, and have a faster version of the program (and a more manageable one, as a bonus) than a single-cause perfectionist - before the latter even gets their program working at all.

And if you think any function call takes 2KB of stack, I suggest you do the research. To help you start, here's a random article from Googling on the subject. The fact that it's about MIPS doesn't change the general picture.

Chris Katko
Member #1,881
January 2002
avatar

Striker said:

Have you ever wondered why the Windows user interface today is just as slow as, or even slower than, Windows 3.1 in the nineties? I remember I optimized 3.1 with a few tricks: used 32-bit instead of 16-bit, used a lot of RAM, and so on. I think its user interface was quicker than XP's. If you look at XP, you always wonder how a screenful of icons on the desktop can take seconds to build up in a modern OS from MS.

A better comparison is X Server.

The X server was originally designed for remote applications, and has overhead associated with it.

Then, in the '90s, everyone was complaining about the overhead, which was somewhat noticeable on 90 MHz Pentiums.

Now, in 2014, nobody cares or notices a single problem with it (except script kiddy idiots) and yet now we're all benefiting from X servers that magically work over any configuration you could possibly want.

Even the supposed replacements for X (Wayland and Mir) aren't really about performance. They exist for modern-technology reasons, like tablets, and they still support X for backwards compatibility.

Striker said:

Biased? Say a 3 GHz dual core is ~6000 MHz; 6000 / 40 = 150. If I have a processor that is 150 times faster and I expect the user interface to be much faster, that's biased?

You're either the world's greatest troll, or missing half the picture.

1) CPU frequency mostly hit the 5 GHz wall years ago, but that hasn't stopped manufacturers from drastically increasing performance with more cycle-efficient designs.

2) CPU frequency is a terrible indicator of throughput because almost all applications are I/O bound, not CPU bound.

Striker said:

I remember I optimized 3.1 with a few tricks: used 32-bit instead of 16-bit, used a lot of RAM, and so on. I think its user interface was quicker than XP's.

Never mind, you really have no idea what you're talking about. Windows 3.1, NT, and 95 were so damn slow using GDI that they had to come up with direct access to the framebuffer so games could run on them--otherwise they would lose the market. That direct access? They called it DirectX.

Actually, that reminds me of another great example: OpenGL. OpenGL is used on everything today, but when it came out in 1992, it was for CAD. It was so slow that Glide and Direct3D were introduced so that 3-D gaming could exist. But OpenGL was smart. They said, "It's easier to come down to fast gaming than it is to work upward to proper CAD." And they did it. OpenGL supported floating point and 48-bit color back in 1992, when we were running 8-bit palette color. They thought ahead, and it paid off.

Meanwhile, Glide was such a hurried piece of junk that it died after a single generation of gaming. It doesn't matter anymore that Glide versions of games ran faster. Nobody remembers that. Everyone remembers that OpenGL/DirectX looked better, and the Voodoo cards, while initially ground-breaking, couldn't keep up with the competition; 3dfx went out of business and was absorbed into nVidia.

Wikipedia said:

Glide was an effort to select primarily features that were useful for real-time rendering of 3D games. The result was an API that was small enough to be implemented entirely in late-1990s hardware. However, this focus led to various limitations in Glide, such as a 16-bit color depth limit in the display buffer.[2]

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

bamccaig
Member #7,536
July 2006
avatar

I would argue that the real waste is in powering a GUI at all. A GUI is a terribly inefficient user interface. It is quite convenient for inexperienced users, but it's very wasteful. Your computer is essentially in an infinite loop, constantly checking the state of the hardware, tracking the state of the interface, and triggering events in response to update the state of things and redraw them. A pure command shell console is far superior for most of what we do, but good luck getting average users to use one these days. Hell, it's hard to even get 90% of the Windows "IT professionals" to use one ("coincidentally", Windows' command shell is also horrible).
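
The shape of it, as a generic sketch (the helper names below are made up, not any particular toolkit's API):

#include <cstdio>

struct Event { int type; };  // stand-in for a real event record; 0 = "quit" in this toy

// Hypothetical stubs, just to show the loop's structure.
Event wait_for_next_event() { return Event{0}; }   // block/poll the OS for input
void  update_ui_state(const Event&) {}             // track what changed on screen
bool  needs_redraw() { return true; }
void  redraw() { std::puts("repaint"); }

int main() {
    bool running = true;
    while (running) {                    // the ever-spinning loop in question
        Event e = wait_for_next_event();
        if (e.type == 0) { running = false; continue; }
        update_ui_state(e);
        if (needs_redraw())
            redraw();
    }
}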

Microsoft creates terrible software. It isn't because they aren't performance oriented. It's because they're greedy and selfish. They only care about what makes them money. They only care about it if they can add it as a bullet point to an advertisement. They'd rather you spent money on faster hardware than to spend the time to optimize their software. If they do it perfectly the first time how are they going to ever sell it to you again? They want things to be slow and buggy at first because they know you'll buy it anyway. And then it's trivial to gradually improve it and make you buy it again and again. They don't even bother fixing serious bugs and misfeatures when they don't think it would make for a strong selling point (memos have leaked where managers have explicitly said not to fix things because it wouldn't be a good selling point).

Chris Katko
Member #1,881
January 2002
avatar

bamccaig said:

It's because they're greedy and selfish. They only care about what makes them money.

Compared to Comcast, they're the freaking pope. :P

Microsoft wants you to use their newest product.

Comcast wants you to use their old product, forever.

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

Striker
Member #10,701
February 2009
avatar

Bamccaig and Gassa, I agree with what you are writing.

I once was an MS-DOS command line freak, and I will never forgive Billy for almost killing MS-DOS, naturally for commercial reasons, like always. Programs used to be faster without the Windows ballast. And I loved using the batch system.

We can go even further back in history and learn from those computers. The C64 had some advantages:

- When switched on it was immediately ready, no booting at all, because it had its OS in ROM.

- In programming you could always rely on its OS structure; it couldn't be changed.

Naturally we can't go back to 64 KB of RAM. Unfortunately Commodore went the wrong way. After the C64 and C128, the world was waiting for a C1024 as competition for the Atari ST. Instead they bought Amiga, and it was a flop.

Otherwise we could now have successors of the C64 with the OS in ROM. I know, today there are a few. I believe one day they will come back to the OS in ROM, because it has many advantages. ;D

Arthur Kalliokoski
Second in Command
February 2005
avatar

Striker said:

I believe one day they will come back to the OS in ROM, because it has many advantages.

How would you patch it when the haxxors find a weakness? Or will you just stay off the interwebs?

How will you add your shiny new Whatzit gadget with USB when the ROM code has no idea?

They all watch too much MSNBC... they get ideas.

Striker
Member #10,701
February 2009
avatar

Maybe they will become technically so advanced one day that no changes are necessary?

Arthur Kalliokoski
Second in Command
February 2005
avatar

Striker said:

Maybe they will become technically so advanced one day that no changes are necessary?

It's a two way street. The only constant is change.

They all watch too much MSNBC... they get ideas.

Striker
Member #10,701
February 2009
avatar

Today in simple mathematics there are no changes. Some things never change. Or maybe in astronomical units. :)

We still have the BIOS in ROM, so it would be a process to put more of the OS in the BIOS.

Gideon Weems
Member #3,925
October 2003

bamccaig said:

I would argue that the real waste is in powering a GUI at all.

Interesting posit...

UI designers abstract underlying hardware in an effort to make the system easier to understand. Take this too far, however, and you end up demolishing the logical, mechanical system that lies underneath--and it just so happens that humans understand such systems quite naturally.

I don't see any need to distinguish GUI from CLI. They are functionally identical and feature overlapping characteristics. The key difference is their level of abstraction, but this difference is one of tradition and not definition.

The only goal of a UI is to convey desired information while requiring as little user effort as possible. Nothing else matters. If, instead of a keyboard, I had a box of magical gnomes at my desk and could tap one on the head, waking him up, and issue a verbal command--and that gnome would shrink to the size of an atom and dive inside my computer at the speed of light, moving the right boxes of bits into just the right places, executing my command--there would be no need for either a GUI or a CLI (unless "GUI" stood for "Gnome User Interface," though perhaps a better name would be VUI, for "verbal").

I therefore submit that graphical level, while historically a good barometer, is not a reliable indicator of UI efficiency. Rather, the amount of user effort is. The CLI, with its massive command set, requires more effort upfront, but pays off in the end. GUIs are the inverse of this.

Really, though, I just wanted to talk about gnomes.

Arthur Kalliokoski
Second in Command
February 2005
avatar

Striker said:

We still have the BIOS in ROM, so it would be a process to put more of the OS in the BIOS.

http://www.extremetech.com/computing/96985-demystifying-uefi-the-long-overdue-bios-replacement

They all watch too much MSNBC... they get ideas.
