Programming Paradigms and Useful Features
Chris Katko

What methods, paradigms, or language features have you seen that you find either useful, or wish the general public knew about?

For example, as I mentioned in a de-railed thread, Verilog (and apparently Objective-C) supports "Named Parameters" or "Pass by Name" (though the second term can apply to something else as well).

[window addNewControlWithTitle:@"Title"
                     xPosition:20
                     yPosition:50
                         width:100
                        height:50
                    drawingNow:TheDrawingVariable];

Notice how we can have optional parameters (and not just on the end!), the order is whatever we choose, and the intent of the call is clear and concise.

Thomas Fjellstrom

I'm pretty sure ObjC's arguments are not re-orderable. The actual name of the symbol is baked into the argument order.

Slartibartfast

Python's are reorderable.

def foo(a=4, b=10):
    print a*b

foo()
> 40

foo(b=2, a=2)
> 4

But also

def bar(**kwdargs):
    for arg in kwdargs:
        print arg, kwdargs[arg]

bar(a=9, b="cool")
>>> a 9
 b cool

Chris Katko

I'm pretty sure ObjC's arguments are not re-orderable.

Noted, I'm not actually familiar with Objective-C. Only Verilog.

Steve Terry

Not the first time I've seen named parameters; I believe OS/390 even had them.

m c

My friends from uni and I liked the idea of manually specified sequence points: that way, lines are assumed to be operating in parallel by default, which affects how you program.

I guess that would make auto-vectorizing compilers easier to build, and allow more parallel kinds of CPUs to work better, because software would then have to be engineered with parallelism in mind.

Although most of the code that I write is inherently sequential anyway, or at least the dependencies are quite complicated.

I like the eval() statement from dynamic languages; that is great. It would have been cool if C had included a C JIT compiler in the stdlib, like stdjit.h or something, that allowed compiling, linking, and executing C strings of C code at runtime, though that would make the language less portable, I'm sure. It would be nice in a few cases though.
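
Something in that direction can already be faked on a POSIX system by shelling out to the system compiler and loading the result with dlopen(). A rough sketch, purely illustrative (the file names, compiler invocation, and generated function are invented, error handling is minimal, and you'd link with -ldl):

#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

int main(void)
{
    /* Write a string of C code out to a file... */
    FILE *src = fopen("/tmp/jit_demo.c", "w");
    fprintf(src, "int twice(int x) { return 2 * x; }\n");
    fclose(src);

    /* ...compile it as a shared object... */
    if (system("cc -shared -fPIC -o /tmp/jit_demo.so /tmp/jit_demo.c") != 0)
        return 1;

    /* ...then load it and call into it at runtime. */
    void *handle = dlopen("/tmp/jit_demo.so", RTLD_NOW);
    int (*twice)(int) = (int (*)(int))dlsym(handle, "twice");
    printf("%d\n", twice(21));  /* prints 42 */

    dlclose(handle);
    return 0;
}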

I like the signal/slot thing in Qt C++.

I like the numerical ranged types in languages such as Ada.

I liked how in Watcom C you could define custom calling conventions. I found that to be cool.

I find the concept of domain-specific languages interesting, though I honestly find it silly to try to do that using C++ template metaprogramming.

Gideon Weems

This is gonna sound awful, but I like goto. I have never used it in a C program--not once. Rather, I feel safe just knowing that goto is there, ready to get me out of any jam at a moment's notice. At the same time, I hope that I never have to use it.

Goto is like keeping a fire extinguisher in the house.

Otherwise, I'm a fan of anything that saves work. If more work now means less in the long run, I'm all for it--and the less that more work is, the happier I am.

Chris Katko

goto receives too much hate from people who are "half good" at programming.

goto is a useful construct. There's a reason it exists in many languages. If goto makes a structure more obvious to the reader than rows of ifs and switch statements, then use it. If it makes sense? Use it.

Like you said, it's a "last resort", because using it often means you're losing all of the benefits of OOP. The biggest one is that it violates your intuition of scope. You have no way to easily tell if you're leaving things dangling and unfinished. But just because it "can" be bad doesn't mean it always is. Pointers can be bad in the same ways as goto, if not more, but we use them every day. There are just more typical benefits from pointers to justify their danger. So everyone in general is magically okay with pointers.

Gideon Weems

You make a good point. I seem to remember a famous programmer saying something similar about dynamic memory allocation (though I'm unable to provide a reference).

Slartibartfast
int foo(int **ppiFood)
{
    int retcode = STATUS_UNDEFINED;
    int *piFood = NULL;

    DEBUG_PRINT("foo called");

    piFood = malloc(sizeof(*piFood)*10);
    if (NULL == piFood)
    {
        DEBUG_PRINT("foo: malloc failed");
        retcode = STATUS_ERROR_FOO_MALLOC;
        goto lblFinish;
    }

    retcode = bar(piFood);
    if (FAILED(retcode))
    {
        DEBUG_PRINT("foo: bar failed");
        goto lblFinish;
    }

    *ppiFood = piFood;   /* hand the allocated buffer to the caller */
    piFood = NULL;
    retcode = STATUS_SUCCESS;

lblFinish:

    if (NULL != piFood)
    {
        free(piFood);
    }

    DEBUG_PRINT("foo returns");

    return retcode;
}

Noble use of the goto.

bamccaig

I routinely use goto in a sort of catch and finally way. Errors jump to a catch label which jumps to a finally label, but normal execution falls through to finally and returns. Saves on redundant cleanup.
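
A minimal sketch of that shape (the file name, buffer size, and "work" are made up):

#include <stdio.h>
#include <stdlib.h>

int do_work(void)
{
    int status = 0;
    char *buf = (char *)malloc(64);
    FILE *fp = fopen("data.txt", "r");

    if (buf == NULL || fp == NULL)
        goto catch_label;

    /* ... normal work here; execution falls through into the finally block ... */

finally_label:
    if (fp != NULL)
        fclose(fp);             /* cleanup runs on both the success and error paths */
    free(buf);                  /* free(NULL) is a no-op */
    return status;

catch_label:
    status = -1;                /* record the failure */
    goto finally_label;
}

int main(void)
{
    return do_work() == 0 ? 0 : 1;
}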

m c

Yeah, goto is excellent for "goto badend;" kinds of uses, and also for multi-level breaks.
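
For example, a toy multi-level break (illustrative only):

#include <stdio.h>

int main(void)
{
    int grid[4][4] = {{0}};
    grid[2][3] = 1;

    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            if (grid[y][x] == 1)
                goto found;     /* one jump exits both loops */

    printf("not found\n");
    return 0;

found:
    printf("found it\n");
    return 0;
}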

Sometimes a reverse goto is good too.

Someone once said that a reverse goto (going to a spot earlier than where you are) is never necessary, and that code can always be restructured to not need it. That MIGHT be true, but only by making the code more convoluted in some cases. I used reverse goto to good effect when I was experimenting with setcontext.

Striker

My most important paradigm is to write programs for executability, not readability. Speed goes over readability and maintainability, as I am a private person and no other people need to fiddle around with my code. I add useful comments and descriptive variable names so that I can find my way around later.

For efficiency it is important to use as few functions as possible, because a function call is expensive. It costs time and space (~2 KB each for local variables). To the compiler a method is a function; that's why with C++ you tend to use more functions. If you call a function in a loop 1000 times you end up with a "gigantic" waste of resources.

Considering that, the most efficient program would be in C with only main, no functions. Naturally you don't always need the highest speed. But in game programming it is useful to optimize the code in time-critical parts. 8-)

Chris Katko
Striker said:

For efficiency it is important to use as few functions as possible,

No no no no no no no no no!

Programs should be written to be portable and to last. System function calls take up vast amounts of time. When a single graphics drawing routine costs more than half your game logic, you should be picking a design pattern that allows the most logical and straightforward layout of the problem you're trying to solve.

If that wasn't the case, most design patterns wouldn't exist because they involve overhead.

1) Programs may be "slower" than older generation ones because of that. But they can benefit greatly from it. If speed were all that mattered, we'd still be running DOS and have direct framebuffer access.

2) System calls and system APIs are ridiculously more extensive than they used to be. So the same OLD program will take more today than it did before; for example, QMMP (an MP3 player) on first release vs. QMMP today running on PulseAudio, where PulseAudio, a software mixer for Linux, takes a bloody 10-30% of my netbook's CPU.

3) Threading and memory access can blow up your code's time much more than the sum of its individual parts.

I'm all for GOTO, but it's not for CPU reasons. It's for logical design reasons. Unless I'm running an embedded controller, or an emulator, the structure is way more important than speed of individual lines of code because you should never optimize before you need to:

Donald Knuth said:

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil"[2]

Audric
Striker said:

a function call is expensive. It costs time and space (~2 KB each for local variables).

ôÔ ??? 2 KB sounds huge. Local variables and parameters use space relative to their size, so either your sub-function is allocating a lot of things on the stack (and it would cost exactly the same space if the code was in the parent function), or you're passing raw objects as arguments instead of pointers.

Striker said:

If you call a function in a loop 1000 times you end up with a "gigantic" waste of resources.

Hmm, in terms of space, no, you re-use the exact same piece of memory 1000 times. And in terms of time, no, unless you accidentally pass a lot of data as arguments, because they get copied to the stack. Ex: passing an array of 40,000 characters (~39 KB) instead of passing the address of the first character (8 bytes).

bamccaig

The overhead of a function call is negligible. If your program is so performance sensitive that it can't handle any function calls then you're either on a very crappy embedded system or you're doing it very, very wrong.

Thomas Fjellstrom

If any of you think he was serious... You all need help.

Striker

I knew this would start a discussion, especially with those who have C++ as a holy cow. Sorry, I am not commenting on everything in detail.

Chris, different people have different paradigms. For you it is portability, for me it is to get maximum speed. That's OK.

Audric, I have read 2 KB somewhere. It will be different with different compilers. There are a few things stored, like an address to return to after the function and so on. These 2 KB get allocated on the heap or stack and later freed. But it is all activity that makes the program slower.

Bamccaig, a few thousand negligible things can add up to a sum which is not negligible anymore.

Thomas, you mean me? It's my full seriousness.

After all, if you have two programs doing the same thing, one has everything in main and the other has a few thousand function calls, you can measure a speed difference, that's sure. ;D

Edgar Reynaldo
bamccaig said:

The overhead of a function call is negligible. If your program is so performance sensitive that it can't handle any function calls then you're either on a very crappy embedded system or you're doing it very, very wrong.

This ^-^

l j

A function call will put the return address of the previous method on the stack; that's 4 or 8 bytes. When calling a method with no parameters you write 8 or 16 bytes to the stack (return address + object pointer). The stack is a fixed size and thus never dynamically reallocates. Everything on the stack is nicely packed together and, as far as I'm aware, quite cache friendly. Functions/methods also get cached. Avoiding functions is about the most stupid thing I've heard so far.

Apart from passing whole arrays by value, I cannot imagine how you'd get to the 2 KB value; debuggers will add extra information, but certainly not that much.

Chris Katko
Striker said:

After all, if you have two programs doing the same thing, one has everything in main and the other has a few thousand function calls, you can measure a speed difference, that's sure. ;D

Post the results.

An operating system pre-empting your program to run another, or an interrupt, is going to cause more of a delay.

Many people don't realize that CPUs are extremely fast... but they take a long time to "charge up": to fill the pipeline and prediction tables. And throwing away your entire pipeline every time you task switch is going to be a bigger problem than saving/restoring the function stack to cache--which costs on the order of ~20 cycles, so if you're running 500 cycles of code in a function, then you're dwarfing the overhead. So if the thing "we can't control", that is, task scheduling, dwarfs the time it costs to use a function, then by extension, the cost of functions doesn't matter. (*DUH, we're talking about most cases, not extreme 1000-deep recursion of single-line functions.)

Moreover, many compilers inline functions automatically. What's an inlined function's overhead? Zero.

bamccaig

If any of you think he was serious... You all need help.

ahem You were saying?

Chris Katko
bamccaig said:

ahem You were saying?

And even if he is, I find discussions on paradigm overhead to be both fun and informative. Though, technically, this thread was supposed to be about useful / lesser known programming paradigms more than overhead.

But this forum is so small these days all threads kind of merge together.

[edit] One sec, while I fix das code

Code for 100 million runs of 10 distance calculations on doubles:

Test case 1 - No functions:

#include <cmath>
#include <iostream>

using namespace std;

int depth = 1;
double d = 0;

int main()
{
    double x = 5;
    double y = 10;

    for(int i = 0; i < 100000000; i++)
    {
        if(depth > 0) depth++; // comparable if statement and increment (see next code)
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
    }
    cout << "Done." << endl;
}

Test cases 2, 5, and 10: recursive function calls (2, 5, or 10 deep) on each iteration of the 10 distance calculations:

#include <cmath>
#include <iostream>

using namespace std;

double d = 0;

void my_function1(int depth)
{
    if(depth > 0)
    {
        depth--;
        my_function1(depth);
    }
}

int main()
{
    double x = 5;
    double y = 10;

    for(int i = 0; i < 100000000; i++)
    {
        my_function1(2); // 5 and 10 for other cases, hardcoded so we don't incur a variable penalty.
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
        d = sqrt(x*x + y*y);
    }
    cout << "Done." << endl;
}

Compiled exactly the same.

g++ main1.cpp -o main1 
g++ main5.cpp -o main5 //etc

Results:

$ time ./main1
Done.
real    0m9.431s
user    0m9.423s
sys     0m0.000s

// 2 deep
$ time ./main2
Done.
real    0m10.724s
user    0m10.706s
sys     0m0.004s

// 5 deep
$ time ./main5
Done.
real    0m12.154s
user    0m12.135s
sys     0m0.004s

// 10 deep
$ time ./main10
Done.
real    0m15.869s
user    0m15.868s
sys     0m0.004s

And that's 10 deep, for a function computing only 10 sqrt's. I can't think of any paradigm that would be more than a couple calls deep. Someone can modify it to be indirect (virtual) calls if they like. Those will be slower. But again, they're never ten deep.

[edit]

Results with -O0 (optimizations explicitly disabled):

novous@ubuntu:~/Desktop/dev/speedtest$ time ./main1
Done.

real    0m9.939s
user    0m9.943s
sys     0m0.000s
novous@ubuntu:~/Desktop/dev/speedtest$ time ./main2
Done.

real    0m11.000s
user    0m10.993s
sys     0m0.000s
novous@ubuntu:~/Desktop/dev/speedtest$ time ./main5
Done.

real    0m12.867s
user    0m12.850s
sys     0m0.003s
novous@ubuntu:~/Desktop/dev/speedtest$ time ./main10
Done.

real    0m14.797s
user    0m14.797s
sys     0m0.000s

And even funnier, with -O3 the versions with functions are faster! Perhaps g++ doesn't expect you to avoid functions. But it's probably optimizing away the redundant calculations entirely, so be careful interpreting those results.

time ./main1
Done.

real    0m0.128s
user    0m0.128s
sys     0m0.000s
novous@ubuntu:~/Desktop/dev/speedtest$ time ./main2
Done.

real    0m0.007s
user    0m0.000s
sys     0m0.007s
novous@ubuntu:~/Desktop/dev/speedtest$ time ./main5
Done.

real    0m0.003s
user    0m0.003s
sys     0m0.000s
novous@ubuntu:~/Desktop/dev/speedtest$ time ./main10
Done.

real    0m0.003s
user    0m0.003s
sys     0m0.000s

Thomas Fjellstrom

I still don't think he's serious.

For one, the optimizer will have a fit with any sufficiently large function and it will start producing some pretty crappy code.

Polybios

I like Lua's tables.

Kibiz0r

I've been wanting to see CQRS implemented as a first-class language feature.

So instead of "x += 3" meaning "replace x with x + 3", it means "store an event on x that adds 3 to it".

And you can link these "state change events" to a parent event, such that you can say...

move_event = my_sprite.move_right() // x += 3
move_event.undo()

...and any state change events that were produced as a result of move_right are also undone.
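
A rough sketch of the idea (ChangeEvent, Sprite, and move_right are invented here; this isn't any real CQRS library):

#include <cstdio>
#include <functional>
#include <vector>

// An "event" records how to undo itself, plus the undo steps of any child
// events it caused; undoing it rolls everything back in reverse order.
struct ChangeEvent {
    std::function<void()> undo;
    std::vector<std::function<void()>> child_undos;

    void undo_all() {
        for (auto it = child_undos.rbegin(); it != child_undos.rend(); ++it)
            (*it)();
        undo();
    }
};

struct Sprite {
    int x = 0;
    ChangeEvent move_right() {
        x += 3;                                   // "store an event on x that adds 3"
        return ChangeEvent{ [this] { x -= 3; }, {} };
    }
};

int main() {
    Sprite my_sprite;
    ChangeEvent move_event = my_sprite.move_right();  // x is now 3
    move_event.undo_all();                            // x is back to 0
    std::printf("%d\n", my_sprite.x);                 // prints 0
}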

Imagine what that would mean for a client-server architecture where two clients could normally make contradictory requests.

With state-based interaction:

Client A says "set x to 7"
Client B says "set x to 4"
Last one wins, I guess?

With event-based interaction:

Client A says "add 2 to x"
Client B says "subtract 1 from x"
This, we can make sense of!

I have seen so many client-server architectures fall apart as they get bigger, trying to work around these issues of contention. They either try to firewall the mutability of objects by giving ownership to a specific client (which does work if you can actually guarantee that separation), or detect specific kinds of conflicts and do a "best-guess" patch to recover. And if neither one of those works, screw the client and drop their request.

It's especially hard if you need to support an offline mode, where you need a plan to get back to a sane state without losing all of your work if the server rejects one of your requests. If all of your state changes are captured as high-level ideas like "move_right" rather than "x = 7", you can easily drop that one rejected change and be in the correct state.

So yeah, I wish more people knew about CQRS.

Also reactive programming, which has the same problem of being really useful but kind of clunky to use in most popular languages; that could probably be helped by more people being interested in using it.

m c

Crazy.

Function call overhead is a bit of a thing, but inlining and other optimizations can remove the problem. Old code would use preprocessor macros instead of subroutines to reduce function call overhead. Now you can just use static inline functions and get the same executable, but with better error checking and debugging.
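
For example (toy code, not from any real project):

#include <stdio.h>

/* Old style: a preprocessor macro. No type checking, and the argument is
   pasted textually, so SQUARE_MACRO(i++) would evaluate i++ twice. */
#define SQUARE_MACRO(x) ((x) * (x))

/* Newer style: a static inline function. Type-checked, debuggable, and an
   optimizing compiler generates the same code as the macro. */
static inline int square(int x) { return x * x; }

int main(void)
{
    printf("%d %d\n", SQUARE_MACRO(7), square(7));  /* 49 49 */
    return 0;
}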

Also some calling conventions are much faster than others.

Have you ever tried to make your program one big giant state machine?

Striker

There are different types of functions and compilers. However, IBM cares about function call overhead:

IBM function call overhead

And I still think you get the least function call overhead if you have no functions. :o

bamccaig

It has been a while since I was trying to learn Haskell, but I found it very refreshing. It's very neat programming in a language that is "functionally pure". I finally figured out what "pattern matching" is, and it is indeed neat.

IIRC, there's no such thing as global state in Haskell! And things that are inherently global or impure, like the file system or the user, are abstracted into "todos" that aren't evaluated until the program actually executes (so in code and at evaluation time they are like stubs with defined "values").

The end result is that your program becomes a combination of logically defined pieces. It's a really neat way to think about programming. Of course, it does become a bit of a challenge to write real-world programs that have a lot of impure elements in them, but I never got far enough in my learning to really experience that.

I should get back around to learning Haskell again. I think I hit a brick wall where the tutorial I was using wasn't going fast enough, and when I tried to experiment on my own nothing worked. :P

Perl also has refreshing ways of thinking of and doing things.

Thomas Fjellstrom

All I have to say about this is that even worrying about function call overhead in 99.999% of cases is a waste of time and technically wrong.

About the only time I can think it REALLY matters is in very, very tight loops, and on really slow and cheap MCUs. And in both cases, an optimizing compiler can and will optimize function calls out entirely by inlining functions.

So don't worry about it. Just don't.

Striker

What I have basically written is that my paradigm is that usability goes over readability, following the principle "form follows function" (Horatio Greenough).

There are two ways a program can be written: from the point of view of the processor or from the point of view of the reader. We don't need to learn assembler these days, because the compiler optimizes C so that it is not slower anymore. The same may go for some types of functions. Many programmers write for their own point of view; they are happy about nice program texts and don't think much about speed.

But function overhead reduction is not the only way to speed things up. There are many more tricks, and all together they give the effect an enthusiastic game programmer wants. :)

bamccaig

Years ago, programs used to be small enough that you could actually count the processor instructions and memory bytes, and optimize the program for the machine. These days the machines are so insanely fast, and large, that it's impossible to do this in a timely fashion. You don't need to worry about it. You can be quite wasteful and the user won't even notice. Caring about the performance of your programs is good, but you can't optimize them by avoiding useful programming constructs with negligible overhead (what you're really doing is making it more difficult to write bug-free code, and buggy code is worse than slow code). It's a waste of your time to code like that. Instead, check which parts of your application are slow by running it through a profiler (e.g., valgrind) and then optimize only the slow parts with better algorithms.

Chris Katko

The only time an optimize-last paradigm fails is in the "death by a thousand cuts" problem, where nothing in particular is taking all the time, but everything is taking way more than it should.

But those cases are extremely rare, and are more symptomatic of poor logical structure / algorithm selection than of things like indirect calls. For example, selecting algorithms that thrash the cache, or reading data from the hard drive while running.

It's easy to get sucked into the perfectionist mindset with code, the sleekest, fastest code. But nowadays, the APIs a program invokes take up more of the time than you think you're spending in your own code. I mean, even drawing a single sprite in Allegro 4 has plenty of if statements. And branch mispredicts cause quite a bit of slowdown.

The most important thing with programming (unless you're really doing it for a hobby centered on perfectionism) is clear code that is easy to expand and easy to fix. Optimization is a losing war. Computers get faster every year, and nobody is going to applaud SkiFree for being fast, or even remember that Fallout 1 & 2 and System Shock 2 had ridiculously slow loading times.

Striker

Have you ever wondered why the Windows interface today is just as slow as, or even slower than, Windows 3.1 in the nineties? I remember I had optimized 3.1 with a few tricks: used 32-bit instead of 16, used more RAM, and so on. I think the user interface was quicker than XP's. If you look at XP you always wonder how a screenful of icons on the desktop can take seconds to build up in a modern OS from MS.

Now we have 3 GHz dual-core processors or better instead of the old 40 MHz 486, but it seems the programmers at MS are partly unable to make use of it. Why? Because they think like you: we don't need to care about speed, we can waste the resources?

pkrcel
Striker said:

Have you ever wondered why the Windows interface today is just as slow as, or even slower than, Windows 3.1 in the nineties?

You clearly remember things with a heavy bias.

Quote:

Because they think like you: we don't need to care about speed, we can waste the resources?

Maybe, but nobody here said you shouldn't care about speed at all.... your argument was a bit more extreme.

Striker

Biased? Say a 3 GHz dual core is ~6000 MHz; 6000 / 40 = 150. If I have a processor that is 150 times faster and I expect the user interface to be much faster, that's biased?

pkrcel

Yes, heavily so.

They do not do the same things, and you are timing different things on different machines; it is not a meaningful scenario.

I am NOT saying modern computing does not suffer from slow load times and general lack of optimization, but to say that Win3.1 is or was FASTER is incorrect and unfair. On my 486DX, Win3.1 took AGES (several minutes) to load and had next to NOTHING behind it (around 4 MB of occupied RAM? can't really remember), while my current Win7 takes (unfortunately, after 3 years of usage) some minutes to load (maybe even 3) but has a WORLD OF THINGS behind it (2+ GB of RAM usage).

If loading means dumping data into memory (...more or less...), memory consumption today is 1000 times more than in the nineties...memory is faster and processors are faster....still, we float on the compromise.

Again, I don't think that Windows problems are function call overheads in any case.

EDIT: of course I do not have an SSD eh (which reminds me that r/w'ing data to disk is STILL a grudge).

Arthur Kalliokoski
Striker said:

the Windows interface today is just as slow as, or even slower than, Windows 3.1 in the nineties

I remember running Windows 3.0 on a 286 with the full 640k of regular memory plus an additional 384k of extended memory, for 1 meg total. I bought some "Sams Learn C++ in 21 Days" book or something, which came with the Borland 4.1 (?) compiler. The only thing I could do with it on that 286 was browse the PDF documentation, and it took 30 seconds to scroll one line while viewing the PDFs.

Thomas Fjellstrom

My laptop boots in less than 10 seconds. None of my older computers ever came close to this. The interface feels smoother, looks better, and can do a heck of a lot more than my older machines ever could.

If it means I have to suffer through a few function calls, well, I'll just have to live with that.

The problem isn't function call overhead, no matter how much you think it is. Complexity and bloat are at the root of the problem these days. You think Windows 3.1 can handle the 8 logical cores my laptop has? Or the 32GB of RAM? Or the near 1TB of BLAZING fast SSD space? How about the 1080p screen?

The very nature of computers these days requires more complex code, which takes more resources to run. You should see the insane code the linux kernel has to manage memory, and task scheduling.

Yes, there is bloat that causes extra slowness. Just try not to install crapware like Norton or basically any (non OS) software that came bundled with your PC.

Gassa

I also have a feeling that windowed UI nowadays, with SSDs and other good stuff, is generally slower than Win3.1 on a good Win3.1-age computer. The UI is also more feature-rich, that's for sure. However, a typical user needs a small fraction of this richness, and all the other stuff just slows things down for them.

That said, picking a single slowdown cause at random (e.g. slowdown from function calls, cache misses, excess branching, expensive instructions such as division) and optimizing it perfectly doesn't usually get you the fastest program possible. A more experienced programmer will likely start by writing a slow but programmer-friendly prototype, then profile it for bottlenecks, optimize a few of them, and end up with a faster version of the program (and a more manageable one, as a bonus) than a single-cause perfectionist - before the latter even gets their program working at all.

And if you think any function call takes 2 KB of stack, I suggest you do the research. To help you start, here's a random article from Googling on the subject. The fact that it's about MIPS doesn't change the general picture.

Chris Katko
Striker said:

Have you ever wondered why the Windows interface today is just as slow as, or even slower than, Windows 3.1 in the nineties? I remember I had optimized 3.1 with a few tricks: used 32-bit instead of 16, used more RAM, and so on. I think the user interface was quicker than XP's. If you look at XP you always wonder how a screenful of icons on the desktop can take seconds to build up in a modern OS from MS.

A better comparison is the X server.

The X server was originally designed for remote applications, and has overhead associated with that.

Then, in the 90's, everyone was complaining about the overhead, which was somewhat noticeable on 90 MHz Pentiums.

Now, in 2014, nobody cares or notices a single problem with it (except script kiddy idiots), and yet we're all benefiting from X servers that magically work over any configuration you could possibly want.

Even the supposed replacements for X (Wayland and Mir) don't really exist for performance reasons. They exist for modern technology, like tablets, and they still support X through backwards compatibility.

Striker said:

Biased? Say a 3 GHz dual core is ~6000 MHz; 6000 / 40 = 150. If I have a processor that is 150 times faster and I expect the user interface to be much faster, that's biased?

You're either the world's greatest troll, or missing half the picture.

1) CPU frequency mostly hit the 5 GHz wall years ago, but that hasn't stopped them from increasing performance drastically with more cycle-efficient designs.

2) CPU frequency is a terrible indicator of throughput because almost all applications are I/O bound, not CPU bound.

Striker said:

I remember I had optimized 3.1 with a few tricks: used 32-bit instead of 16, used more RAM, and so on. I think the user interface was quicker than XP's.

Never mind, you really have no idea what you're talking about. Windows 3.1, NT, and 95 were so damn slow using GDI that Microsoft had to come up with direct access to the framebuffer so games could run on them--otherwise they would lose the market. That direct access? They called it DirectX.

Actually, that reminds me of another great example: OpenGL. OpenGL is used on everything today, but when it came out in 1992, it was for CAD. It was so slow that Glide and Direct3D were introduced so that 3-D gaming could exist. But OpenGL was smart. They said, "It's easier to come down to fast gaming than it is to work upward to proper CAD." And they did it. OpenGL supported floating point and 48-bit color back in 1992, when we were running 8-bit palette color. They thought ahead, and it paid off.

Meanwhile, Glide was such a hurried piece of junk that it died after a single generation of gaming. It doesn't matter anymore that Glide versions of games ran faster. Nobody remembers that. Everyone remembers that OpenGL/DirectX looked better, and the Voodoo cards--while initially ground-breaking--couldn't keep up with the competition, so 3dfx went out of business and was absorbed into nVidia.

Wikipedia said:

Glide was an effort to select primarily features that were useful for real-time rendering of 3D games. The result was an API that was small enough to be implemented entirely in late-1990s hardware. However, this focus led to various limitations in Glide, such as a 16-bit color depth limit in the display buffer.[2]

bamccaig

I would argue that the real waste is in powering a GUI at all. A GUI is a terribly inefficient user interface. It is very wasteful. It is quite convenient for inexperienced users, but it's very wasteful. Your computer is essentially in an infinite loop constantly checking the state of hardware, tracking the state of the interface, and triggering events in response to update the state of things and redraw them. A pure command shell console is far superior for most of what we do, but good luck getting average users to use one these days. Hell, it's hard to even get 90% of the Windows "IT professionals" to use one ("coincidentally" Windows' command shell is also horrible).

Microsoft creates terrible software. It isn't because they aren't performance oriented. It's because they're greedy and selfish. They only care about what makes them money. They only care about it if they can add it as a bullet point to an advertisement. They'd rather you spent money on faster hardware than to spend the time to optimize their software. If they do it perfectly the first time how are they going to ever sell it to you again? They want things to be slow and buggy at first because they know you'll buy it anyway. And then it's trivial to gradually improve it and make you buy it again and again. They don't even bother fixing serious bugs and misfeatures when they don't think it would make for a strong selling point (memos have leaked where managers have explicitly said not to fix things because it wouldn't be a good selling point).

Chris Katko
bamccaig said:

It's because they're greedy and selfish. They only care about what makes them money.

Compared to Comcast, they're the freaking pope. :P

Microsoft wants you to use their newest product.

Comcast wants you to use their old product, forever.

Striker

Bamccaig and Gassa, i agree with what you are writing.

I once was an MS-DOS command line freak and I will never forgive Billy for almost killing MS-DOS, naturally for commercial reasons, like always. Programs used to be faster without the Windows ballast. And I loved to use the batch system.

We can go even further back in history and learn from those computers. The C64 had some advantages.

- When switched on it was immediately present, no booting at all, because it had its OS in ROM.

- In programming you could always rely on its OS structure; it couldn't be changed.

Naturally we can't go back to 64 KB of RAM. Unfortunately Commodore went the wrong way. After the C64 and C128 the world was waiting for a C1024 as competition for the Atari ST. They bought Amiga and it was a flop.

Otherwise now we could have the successors of the C64 with the OS in ROM. I know, today there are a few. I believe one day they will come back to the OS in ROM, because it has many advantages. ;D

Arthur Kalliokoski
Striker said:

I believe one day they will come back to the OS in ROM, because it has many advantages.

How would you patch it when the haxxors find a weakness? Or you'll just stay off the interwebs?

How will you add your shiny new Whatzit gadget with USB when the ROM code has no idea?

Striker

Maybe they will become technically so advanced one day that no changes are necessary?

Arthur Kalliokoski
Striker said:

Maybe they will become technically so advanced one day that no changes are necessary?

It's a two way street. The only constant is change.

Striker

Today in simple mathematics there are no changes. Some things never change. Or maybe in astronomical units. :)

We still have the BIOS in ROM, so it would be a process to put more of the OS in the BIOS.

Gideon Weems
bamccaig said:

I would argue that the real waste is in powering a GUI at all.

Interesting posit...

UI designers abstract underlying hardware in an effort to make the system easier to understand. Take this too far, however, and you end up demolishing the logical, mechanical system that lies underneath--and it just so happens that humans understand such systems quite naturally.

I don't see any need to distinguish GUI from CLI. They are functionally identical and feature overlapping characteristics. The key difference is their level of abstraction, but this difference is one of tradition and not definition.

The only goal of a UI is to convey desired information while requiring as little user effort as possible. Nothing else matters. If, instead of a keyboard, I had a box of magical gnomes at my desk and could tap one on the head, waking him up, and issue a verbal command--and that gnome would shrink to the size of an atom and dive inside my computer at the speed of light, moving the right boxes of bits into just the right places, executing my command--there would be no need for either a GUI or a CLI (unless "GUI" stood for "Gnome User Interface," though perhaps a better name would be VUI, for "verbal").

I therefore submit that graphical level, while historically a good barometer, is not a reliable indicator of UI efficiency. Rather, the amount of user effort is. The CLI, with its massive command set, requires more effort upfront, but pays off in the end. GUIs are the inverse of this.

Really, though, I just wanted to talk about gnomes.

Arthur Kalliokoski
Striker said:

We still have the BIOS in ROM, so it would be a process to put more of the OS in the BIOS.

http://www.extremetech.com/computing/96985-demystifying-uefi-the-long-overdue-bios-replacement

Chris Katko

I've given up trying to counter someone's rapidly changing stances. ::)

Meanwhile, I beat the human campaign of Warcraft 2 in a day. It's amazing how crude the game is by today's standards. A 9-unit selection max, terrible AI, completely scripted single-player missions that feel completely scripted, two teams that are identical except for their graphics set (and four spells if you want to be a nitpicker), tiny maps, and a very shallow tech tree. The unit AI is so stupid that you'll lose a battle that you should win by sheer numbers, because your units are too stupid to continue attacking without you specifically helping them. And if you play on the fastest speed (which I do, because otherwise the game is boring), the combat is so quick that you don't have time to micromanage them all, while the AI gets to update its units every clock tick so they always get a targeting advantage. Oh, and the pathfinding is atrocious. It will literally just "give up" if the pathfinding graph is too long, so you have to help them get there by moving them half or a quarter of the way each time. Oh, and don't forget: no unit queuing. Man, is that annoying. You'll find yourself building multiple unit factories just so you can queue a unit on each one--not because you need them faster, but because constantly coming back to the factory every 3 seconds when a unit is created and selecting another is terrible. Oh, and no default waypoints sucks. We've become so used to unit factories having a default location. In Warcraft 2, units poop out next to the factory and that's all you can do.

It really makes me appreciate how much Age of Empires 2 murdered the competition and changed the scene. It's hilarious that back then everyone thought Age of Empires (1) was just "Microsoft ripping off Warcraft and Civilization." (Actual quote.) Now AOE2 is lauded as one of the greatest games ever made.

I still to this day haven't noticed any new RTS game come up with a feature that Age of Empires didn't already implement wonderfully.

[edited]

bamccaig

The bit that distinguishes GUIs from CLIs is that the computer recognizes millions upon millions of inputs that mean nothing to it, and thousands upon thousands more that are duplicates. I.e., there are x * y pixels on your screen and depending on your mouse's resolution they can all be clicked, but in practice any given screen only has a select few meaningful "clicks". I assert that the amount of effort required by our brains to aim a mouse is substantially more than typing a command, even for a lamer. It isn't really very difficult to learn how to use well documented commands. It's quite difficult to figure out how to use unintuitive, poorly documented commands; but then the same is true of GUIs. Not only is the GUI much more exhausting to use, but it's also much more exhausting to program, debug, and test. GUIs are substantially more complex than CLIs, and even TUIs; that complexity necessarily introduces more programming errors.

Arthur Kalliokoski

OTOH, any particular command line based program only has a set amount of "options" via getopt() or whatever. The power of the command line is based upon pipes, redirection and scripts.

bamccaig

OTOH, any particular command line based program only has a set amount of "options" via getopt() or whatever. The power of the command line is based upon pipes, redirection and scripts.

The number of subcommands and complexity of parameters available to a command line program is equivalent to, if not greater than, the number of inputs available to a graphical program. After all, at the end of the day graphical programs are implemented on top of command line programs. Even if that weren't the case there's no reason that a command line interface (in theory) ever has to accept fewer subcommands than a graphical interface.

Chris Katko

The power of the command line is based upon pipes, redirection and scripts.

There is no real reason graphical applications can't have pipes, redirection, and scripts.

VST audio programs have been chaining data together and linking GUI controls to MIDI sources since the 90's.

Arthur Kalliokoski

There is no real reason graphical applications can't have pipes, redirection, and scripts.

Well, sure, if you're going to go to the trouble of programming them in yourself, instead of letting a shell do it.

bamccaig

Well, sure, if you're going to go to the trouble of programming them in yourself, instead of letting a shell do it.

In Linux a program is a program. There is no fundamental difference between command line programs and graphical programs. All of the standard interfaces are still available for you (and if the program is launched from the GUI via a shortcut or something I imagine the standard streams are opened as /dev/null). In Windows things are a little bit different because Microsoft as a company is incompetent. Still, I think that things mostly work normally. You may need to link your program a certain way, and/or may need to code in the creation of a console window... Yes, it's far from sensible, but it's Microsoft... In any case, the actual interfaces are still standard. You just may need to coerce Windows to let you use them.

Thomas Fjellstrom

With new storage technologies like memristors, we could store ALL of our stuff "in ROM", except it's not ROM, it's non-volatile storage as fast as RAM. So everything will load and save stupid fast, and there will be no more waiting for boot; it'll already be booted, since everything is running from non-volatile media.

Chris Katko

and no more waiting for boot,

Until you have a system crash and have to reset your whole HDD! ;D

Thomas Fjellstrom

Until you have a system crash and have to reset your whole HDD! ;D

Hehe, well I assume there would be an option to "ignore" current state, and reload all processes.

pkrcel
Striker said:

They bought Amiga and it was a flop.

Okay, I officially give up.

With new storage technologies like memristors

Let's only hope that M$ and other evil companies will not be all that much more wasteful :P

I don't see any need to distinguish GUI from CLI. They are functionally identical and feature overlapping characteristics. The key difference is their level of abstraction, but this difference is one of tradition and not definition.

The command line is KING when in need of "direct access", if you let me call it that... but I think there is a need to distinguish between GUI and CLI.

Also, I don't get WHY there should not be GUIs.... it seems we think "computers" are only those tools with a keyboard and a mouse....

Striker

Pkrcel, I am disappointed that such simple facts cause you to "give up". 8-)

Slartibartfast

I find that keyboard-driven GUIs are superior to both mouse-driven GUIs and CLIs. They have the advantages that a GUI provides (discoverability and immediate, obvious/visible consequences to your actions), while operating as quickly as, or even faster than, a CLI[1].

References

  1. You can't deny that, for example, hitting Alt+Up is much quicker than typing "cd .." and hitting return, or that immediately seeing what is in a directory is quicker than typing "ls" and reading the output.

bamccaig

You can't deny that, for example, hitting Alt+Up is much quicker than typing "cd .." and hitting return, or that immediately seeing what is in a directory is quicker than typing "ls" and reading the output.

Except most of those things are not "discoverable" from the GUI. For example, I had no idea Alt+Up would move up a directory (and I don't know which file explorers support that). It would be fine if you learned a shortcut for every single function, but most software has too many functions for the available shortcuts (there's a very good reason that vi is modal and emacs is horrid :P).

In any case, there's no reason that a command line file explorer couldn't also support that same shortcut. In fact, your command shell could even be modal and allow shortcuts like that too. Essentially, everything a GUI can do a CLI can do better (except for graphics). >:(

Slartibartfast
bamccaig said:

Except most of those things are not "discoverable" from the GUI.

They are if it is a well-designed GUI :)
Something like Ctrl+F is a classic example, since it is discoverable (generally whatever button you click to "Find" has the shortcut written on it), easily memorable, and consistent across all applications.

Quote:

For example, I had no idea Alt+Up would move up a directory (and I don't know which file explorers support that).

I think all of them do. Definitely Explorer (Windows) and Thunar (XFCE).

Quote:

It would be fine if you learned a shortcut for every single function, but most software has too many functions for the available shortcuts (there's a very good reason that vi is modal and emacs is horrid :P).

And you don't enter anything into the command line in vi to move the cursor, because not everything needs to be controlled by the command line.

Quote:

In any case, there's no reason that a command line file explorer couldn't also support that same shortcut.

Indeed, and then it would stray further from being command line driven and into being keyboard driven. Then you could also add some GUI magic to make things easier (for example, an icon near files so you can immediately tell what kind of file they are, or maybe a small thumbnail next to the image so you know what is in it without opening it) and end up with a pretty nifty program :)

Quote:

Essentially, everything a GUI can do a CLI can do better (except for graphics).

Except graphics are a huge part of usability; one example is the one I gave earlier (Explorer immediately shows you the contents of the current directory), but there are others. Everything is another good example[1], since Everything shows you everything that matches your query immediately (as much as fits on the screen); as soon as you start entering your query you know how good it is, and if you need to refine it you don't need to enter another command line, you simply continue typing your query. Additionally, once you have found your result, you can navigate to it with the keyboard and hit Enter to open the relevant file. Had I used a command line I'd have to enter another command for the relevant program to open the file, as well as enter the previously found path by hand.

References

  1. Everything is a program that indexes your HD and allows extremely quick searches in several ways. It is similar to "locate" on Linux, except it indexes in real time, so it responds to changes in real time, making it twice as nice just because of that :)

bamccaig

Perhaps I should have been a bit more strict in my terminology because it isn't clear that I mean text-based (i.e., text terminal) programs. I personally don't require a command shell only. Interactive, visual programs are great. Vi is an example. The point is, the text-based interface keeps things simple. There's no question where focus is. The keyboard always inputs to the right place. Etc. It's simpler, but it can do everything a GUI can. Graphics are completely unnecessary and generally don't add any value that cannot be achieved in plain text. I would know because I use a text interface for 99% of what I do on a computer. A well designed text-based program can be discovered just as easily as a GUI.

Chris Katko

Hey guys, I'm going to de-rail this thread by asking this question:

What methods, paradigms, or language features have you seen that you find either useful, or wish the general public knew about?

Kibiz0r
Chris Katko said:

Something totally irrelevant

I wish more people were interested in having alternatives to Null. It's an outdated concept, and it speaks more to the low-level value of a pointer than to the implicit meaning we give it in context.

Depending on the context, Null can mean:
- This variable hasn't been initialized; nobody has decided what the value should be yet, so it is Null by default.
- The variable has been initialized, but the value isn't known at time of initialization, so it is left Null until the value is known.
- The value is known, but the value is "no value", so Null is deliberately assigned like a placeholder for "None".
- There was an error in calculating the value, so the value can't be known, so Null is a signal to fail gracefully.

I'd love it if a language came along that had specific kinds of Nulls for these four cases, especially if you still got the old Null behavior by default but you could opt in to being more specific.

I actually saw a talk where a guy implemented these in Ruby, and it was pretty interesting. The errors you get from relying on a more-specifically-null value really tell you a lot more about where to start debugging.
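
A rough sketch of what "more specific nulls" could look like (entirely invented, not taken from any real language or library):

#include <cstdio>

// A nullable value that, when it is "null", also remembers which kind of null it is.
enum class NullKind { NotNull, Uninitialized, Unknown, NoValue, Error };

template <typename T>
struct Nullable {
    NullKind kind;
    T value;                      // only meaningful when kind == NullKind::NotNull
};

Nullable<int> parse_age(const char *s)
{
    if (s == nullptr) return {NullKind::Unknown, 0};  // value not known yet
    if (*s == '\0')   return {NullKind::NoValue, 0};  // deliberately "no value"
    int age = 0;
    if (std::sscanf(s, "%d", &age) != 1)
        return {NullKind::Error, 0};                  // the value couldn't be computed
    return {NullKind::NotNull, age};
}

int main()
{
    Nullable<int> a = parse_age("borked");
    if (a.kind != NullKind::NotNull)
        std::printf("null, kind %d\n", static_cast<int>(a.kind));  // says *why* it's null
}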

bamccaig

That reminds me of VB's Empty, Null, Nothing nightmare. :'( Please don't tell me you're encouraging VB-ness.

Striker

The paradigma "form follows function" means more than only to speed up.

It means, the most important is the purpose of the program. Some programs have evil purposes, like viruses. Many are only time waster, like games, although you can say some are training for quick reaction or brain jogging. And there are technical, office and learning programs you need for serious work.

After Vedanta philosophy the best programs are those whose intention is to serve God. Such a program could be about herbs and healing plants. To show how senseful nature has created healing plants for all purposes together with the humans. Bear in mind that most modern pharma products are nothing but isolated active ingredients of plants. But in plants you have a natural combination which makes it easier for the body to handle without byeffect. And through byeffects of medication more people die than through car accidents.

Many programmers want to program something, but they don't know what. The result often is a time waster for the user. If you make a program that makes people realise God you are going in the direction of heaven. Thats the most important paradigma.

Arthur Kalliokoski
#include <stdio.h>

int main(void)
{
    long long int i;
    for(i=0; i<9000000000; i++)
    {
        printf("God ");
        if( !(i & 15))
        {
            printf("\n");
        }
    }
    return 0;
}

Fishcake

I found out a few years ago that you can "simulate" named parameters in C++ via the Builder pattern. Useful if you have an object that needs lots of variables to construct it:

FooBuilder builder;
builder
    .setName("Foo")
    .setDescription("Lorem ipsum dolor sit amet")
    // ...

Foo foo = builder.create();

Chris Katko

That uses Method Chaining, yes?

J-Gamer

That uses Method Chaining, yes?

Those functions return a reference to *this. So yes, that's the wiki definition of method chaining.
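
For illustration, the builder itself could look something like this (Foo's fields are invented for the example):

#include <string>

struct Foo {
    std::string name;
    std::string description;
};

class FooBuilder {
    Foo foo_;
public:
    // Each setter returns *this, which is what lets the calls be chained.
    FooBuilder& setName(const std::string& name)        { foo_.name = name; return *this; }
    FooBuilder& setDescription(const std::string& desc) { foo_.description = desc; return *this; }
    Foo create() const { return foo_; }
};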

append: This is not a language feature, but then it kind of is, since it's parsed by a meta-object compiler: Qt's signal/slot mechanism.

Kibiz0r
bamccaig said:

Please don't tell me you're encouraging VB-ness.

Totally! Who doesn't love Visual Basic?

But for real, if you could do all of the things you normally do with Null, but also infer more about why something is null based on what "kind" of Null it is, wouldn't that be nice?

SiegeLord

From my POV, algebraic datatypes have solved the null problem. E.g. in Rust you can use the Option<T> to wrap a type T. Option<T> has two possible states None and Some(T). If it's None, then it contains no value (the memory associated with T is undefined, possibly uninitialized). Additionally, you cannot get a T out of Option<T> if it's None. If it is Some(T) then it wraps a valid instance of T that you can extract with the guarantee that it is valid. I.e. null is treated differently from the valid instance of T on a type level.

In principle you could distinguish multiple None states if you wanted to, but usually a single None is sufficient for all the cases Kibiz0r mentioned.

J-Gamer
SiegeLord said:

From my POV, algebraic datatypes have solved the null problem. E.g. in Rust you can use the Option<T> to wrap a type T. Option<T> has two possible states None and Some(T). If it's None, then it contains no value (the memory associated with T is undefined, possibly uninitialized). Additionally, you cannot get a T out of Option<T> if it's None. If it is Some(T) then it wraps a valid instance of T that you can extract with the guarantee that it is valid. I.e. null is treated differently from the valid instance of T on a type level.

In C++ you have a similar thing in boost: boost::optional<T>.
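
A small usage sketch, assuming Boost is available (the function here is invented):

#include <boost/optional.hpp>
#include <iostream>
#include <vector>

// Returns the first even number, or an explicit "no value".
boost::optional<int> first_even(const std::vector<int>& v)
{
    for (int x : v)
        if (x % 2 == 0)
            return x;             // wraps the value, like Some(x)
    return boost::none;           // like None: distinct from every valid int
}

int main()
{
    std::vector<int> v{1, 3, 4};
    if (boost::optional<int> r = first_even(v))
        std::cout << *r << "\n";  // only dereferenced after the check
    else
        std::cout << "none\n";
}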
