rant about the state of c++ software
weapon_S
Member #7,859
October 2006
Quote: When has that ever been an argument to use one piece of software over another? I wish the whole point was in that sentence: I'd put it in my signature. Screw benchmarks! What does teh internetz say?
Evert
Member #794
November 2000
Quote: double distance = sqrt(dx * dx + dy * dy + dz * dz); Here's the one I get if I just follow C++ / nbody from the first link. http://shootout.alioth.debian.org/u32q/benchmark.php?test=nbody&lang=gpp&id=4
Isaac Gouy
Member #10,785
March 2009
Evert on 03/16/2009 2:17 PM There are currently 3 C++ nbody programs. http://shootout.alioth.debian.org/u32q/benchmark.php?test=nbody&lang=all
Evert
Member #794
November 2000
Point is, you're not comparing languages or compilers fairly if you're not running essentially the same code in each case. To clarify that: the code should use the same algorithm to solve the exact same problem, using only standard language features. It should also probably be optimised to the same level in both cases.
SiegeLord
Member #7,827
October 2006
Quote: Point is, you're not comparing languages or compilers fairly if you're not running essentially the same code in each case. To clarify that: the code should use the same algorithm to solve the exact same problem, using only standard language features. It should also probably be optimised to the same level in both cases.

Why? Not all algorithms translate well between languages. You wouldn't use the same algorithm in Haskell that you would in C, for example. Not all optimizations would have the same effect either. I don't think that's a problem. They do seem to disallow the graver cheating (no implementing a function in C, then calling it from language X and calling that a program in language X). The bigger problem (plaguing most benchmarks) is that their programs are unrealistic and artificial, being small and inflexible. And that is why it's called a language benchmarks game.

Isaac Gouy, please use the <quote> </quote> tags

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
Evert
Member #794
November 2000
Quote: Why? Not all algorithms translate well between languages. You wouldn't use the same algorithm in Haskell that you would in C, for example. Not all optimizations would have the same effect either.
That's not quite what I meant. Comparing random code that uses different algorithms is not a good way to compare programming languages. What you should not do is take C++ code that uses compiler-specific optimised functions and solves an N-body problem in effectively two dimensions (the example I linked to effectively does this), compare it to a FORTRAN program that solves the same problem in three dimensions using standard library functions, and then claim that the C++ code is faster, because you're not doing a fair comparison.

Quote: The bigger problem (plaguing most benchmarks) is that their programs are unrealistic and artificial, being small and inflexible. And that is why it's called a language benchmarks game.

Well yes, there is that too. The point, one way or the other, being that those benchmarks don't mean too much.
Tobias Dammers
Member #2,604
August 2002
IMO, the main reason performance-critical applications are still written in C++ is that most of them stem from a codebase from way back when C++ was simply the most advanced language available. Parsing SQL commands, optimizing queries, managing data streams and all that are a major pain to implement, and if you have an existing codebase that solves those problems well enough, you'd better stick with it rather than re-code the entire thing in a newer language.

Then there's the JIT compilation overhead; but since JIT compilation happens only at load time, it doesn't affect runtime performance once the application is fully loaded. Especially with databases, the startup time is relatively uninteresting (when the server goes down, you're screwed anyway unless you have a failover).

Yet another possibility I expect future applications to use is to split the codebase into a performance-critical lower layer (written in C++) and a higher-level 'user code' layer (written in a more comfortable language). Microsoft makes this fairly easy, although managed C++ (which can be used to glue plain C++ together with other managed code) has very ugly syntax.

I also think that your definition of 'major' (performance-critical) doesn't match what the rest of us can agree on (large code base, large user base, large number of features).

---
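The split Tobias describes usually hinges on exposing the C++ core through a C ABI, since most higher-level runtimes can bind to C but not to mangled C++ names. A minimal sketch of the idea — the function name and the buffer-processing task are made up for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical performance-critical kernel: compiled as C++, but
// exported through a C ABI so a higher-level language (C#, Java,
// Python, ...) can bind to it without knowing about name mangling.
extern "C" std::uint64_t ssd(const std::uint8_t* a,
                             const std::uint8_t* b,
                             std::size_t n) {
    // Sum of squared differences between two byte buffers -- the kind
    // of tight loop one would keep in the C++ layer.
    std::uint64_t acc = 0;
    for (std::size_t i = 0; i < n; ++i) {
        const int d = static_cast<int>(a[i]) - static_cast<int>(b[i]);
        acc += static_cast<std::uint64_t>(d * d);  // d*d fits easily in int
    }
    return acc;
}
```

A C# program could then declare this with [DllImport], a Java one through JNI, and so on, keeping only the inner loops on the C++ side.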
axilmar
Member #1,204
April 2001
Thomas Fjellstrom said: You know, except for USER BASE.

It's not that important for Allegro. There are not many users who use Allegro through less popular programming languages anyway. Writing it in C is good for marketing reasons though: "our library is accessible from every language! yeay!"

BAF said: Your argument is morphing again! You went from must be in C++ to should be in C++ in one post!

OK, if you want to be so pedantic. There is a minor difference between 'must' and 'should', and although my native language is not English, I understand your comment. Let's stick with 'should'.

Evert said:
Bear in mind that you're speaking to a non-native speaker, for whom nuances of language may not be interpreted in the same way as for you.

Exactly. In Greek, there is one word for both ('must' and 'should').

Speedo said: A quick scan with a code counter shows Allegro 4.2.2 having some 75,000 lines of code. Are YOU volunteering to rewrite all of them?

Allegro 4.9.8 is 66,000 lines of code. It's not that big. With Java, it could probably be a lot less. I would certainly volunteer, if I did not have to work a full-time job.

kazzmir said: This seems to be axilmar's main point. Garbage collection (mark and sweep) in every major high-level language with an associated VM tends to take up more time than manual memory management. Additional time hogs are error checking and reflection. Without these things any error is likely to completely crash a C/C++ program, but assuming errors don't occur then C/C++ will probably be a winner in terms of performance. Of course any sufficiently sophisticated program is likely to have a host of errors, and it seems more useful to reduce the number of errors by 5% than to increase the performance by 5%, but it depends on the application domain.

Exactly.

Quote: Anyway axilmar, why do you keep claiming that things should be written in C++? C++ is an incredibly thin veil over C, and the only other thing going for it (the template system) isn't that impressive compared to other things. Besides the fact that the template system has nothing to do with performance.

I disagree about the template system. Compare Java generics to C++ templates: autoboxing vs native values. Native values win.

Quote: If you took out closures I'm sure you could compile Scheme or ML down to C/C++ using explicit memory management. Of course without closures those languages are basically worthless.

I am not so sure about that. They both do not have structs, do they? They fake structs by using the function 'nth' for accessing the nth member of a list.
And then in Scheme, polymorphism is achieved through tagged values, which might be slower than vtables. Feel free to correct me if I am wrong.

Martin Kalbfuß said: Kill C++. There are a lot of better languages out there.

I agree! But replace it with something with equal performance. It's possible. There is no need to sacrifice performance.

Tobias Dammers said: About the GC: C# at least allows for manual memory management when needed (using unsafe code), so the argument that GC is what breaks performance isn't entirely valid.

If you manage memory manually, then you are not using the GC. So you can't really say that "the GC does not break performance".

Quote: There may even be cases where JIT compilation yields an advantage: the JIT compiler may choose to use optimizations specifically for the system it currently runs on, taking advantage of things a C++ version would have to jump through quite some hoops for (e.g. producing 64-bit code when it runs on a 64-bit CPU, taking advantage of vendor-specific CPU instructions, optimizing for memory usage or CPU clocks based on the system specs, ...).

Modern C++ compilers do most of that.

Quote: No matter what application you write, as soon as a certain critical mass in terms of program size is reached, it becomes very likely that memory management needs to be organized in some way; in C++, people usually end up implementing either a smart pointer or a garbage collector anyway.

There is only one garbage collector for C++, and that is the conservative Boehm collector. It is not possible to use the Boehm garbage collector in C++, due to preprocessor and linking issues, unless your app contains no 3rd-party libraries and very little STL (I've been there and tried it). Smart pointers are good. I've recently finished a 75,000-line project which uses smart pointers. I have no memory leaks, but I needed to copy the boost code, since my Qt library (old version, 3.0.5) is not compatible with boost.
Quote: Then there's the JIT compilation overhead; but since JIT compilation happens only at load time, it doesn't affect runtime performance once the application is fully loaded; especially with databases, the startup time is relatively uninteresting (when the server goes down, you're screwed anyway unless you have a failover).

Are you sure? Because the VM monitors the execution of virtual calls at runtime in order to replace them with an 'if'.

Quote: Yet another possibility I expect future applications to use is to split up the codebase into a performance-critical lower layer (written in C++) and a higher-level 'user code' layer (written in a more comfortable language). Microsoft makes this fairly easy, although managed C++ (which can be used to glue plain C++ together with other managed code) has very ugly syntax.

But that would make C++ a necessary tool for these apps.

Quote: I also think that your definition of 'major' (performance-critical) doesn't match what the rest of us can agree on (large code base, large user base, large number of features).

So? That does not make my definition wrong or yours wrong. We simply talk about different things.
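One caveat worth sketching next to axilmar's smart-pointer experience: reference-counted pointers leak when objects own each other in a cycle, which is exactly the case a tracing GC handles and a smart pointer does not. A minimal sketch using std::shared_ptr (boost::shared_ptr, which the post refers to, behaves the same way); the Node type is made up for illustration:

```cpp
#include <cassert>
#include <memory>

// Two nodes that own each other through shared_ptr never reach a zero
// reference count, so they leak. Making the back link a weak_ptr
// breaks the cycle.
struct Node {
    std::shared_ptr<Node> next;  // owning link
    std::weak_ptr<Node> prev;    // non-owning back link: breaks the cycle
};

bool cycle_is_collectable() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->prev = a;                // weak: does not bump a's use count
    return a.use_count() == 1;  // only the local 'a' owns the first node
}
```

Had prev been a shared_ptr, a.use_count() would be 2 and neither node would ever be destroyed.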
Tobias Dammers
Member #2,604
August 2002
axilmar said: Modern C++ compilers do most of that.

At compile time. Which, for C++, is before shipping. Halfway-compiled languages perform their final compilation step at run time, which means they can apply optimizations based on the specific system they are actually running on, instead of the system they were compiled for.

Quote: If you manage memory manually, then you are not using the GC. So you can't really say that "the GC does not break performance".

Let me rephrase then: the fact that C# offers GC, and that C#'s GC is turned on by default, doesn't necessarily break runtime performance. In C#, you are free to mix GC'ed code (a.k.a. 'managed') and manual-allocation code ('unsafe code' in C# lingo), with all the consequences this implies. However, where GC is hard to implement or use in C++ (as you state yourself, it's so hard that most people resort to smart pointers, which are typically faster but have a problem with reference cycles) and manual allocation is relatively 'easy', C# reverses the situation: it makes manual allocation slightly harder (you need to put the relevant code inside an unsafe {} block, and your application needs to acquire permission to run unsafe code) and GC'ed code very easy.

Quote: Are you sure? Because the VM monitors the execution of virtual calls at runtime in order to replace them with an 'if'.

I'm not sure, no. What the VM does at runtime is basically a secret to me. However, when and if the most basic method (a plain compilation from the IL / bytecodes into the current platform's machine code) is NOT chosen, the only obvious reason I can think of is that whatever they use instead is faster in most cases.

Quote: But that would make C++ a necessary tool for these apps.
Not necessary, no. Only if the development team chooses to use this option. It would, however, turn a 'major application written in C++' into a 'major application written in C# / Java / ..., with some low-level functions written in C++'. The fact that the Paint.NET team chose to use C# even for low-level functions shows that even things like image processing can be coded in C# efficiently enough to not mandate C++.

---
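The 'hoops' C++ has to jump through here can be made concrete: without a JIT, a program that wants to exploit the host CPU has to ship several code paths and pick one at run time. A sketch using the GCC/Clang `__builtin_cpu_supports` extension (the dot-product function and the hypothetical AVX2 variant are made up for illustration; MSVC would use `__cpuid` instead):

```cpp
#include <cassert>
#include <cstddef>

// Portable scalar fallback.
static float dot_scalar(const float* a, const float* b, std::size_t n) {
    float acc = 0.0f;
    for (std::size_t i = 0; i < n; ++i) acc += a[i] * b[i];
    return acc;
}

using dot_fn = float (*)(const float*, const float*, std::size_t);

// Choose an implementation once, based on the CPU we are actually
// running on -- the decision a JIT makes implicitly at load time.
static dot_fn pick_dot() {
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx2")) {
        // return dot_avx2;  // hypothetical hand-vectorized variant
    }
#endif
    return dot_scalar;
}

float dot(const float* a, const float* b, std::size_t n) {
    static const dot_fn impl = pick_dot();  // resolved on first call
    return impl(a, b, n);
}
```

GCC can automate this pattern with function multi-versioning (`__attribute__((target_clones))`), but the set of variants still has to be decided before shipping, which is Tobias's point.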
SiegeLord
Member #7,827
October 2006
Tobias Dammers said: At compile time. Which, for C++, is before shipping. Halfway compiled languages perform their final compilation step at run time, which means they can take optimizations based on the specific system they are actually running on, instead of the system they were compiled for.
Quote: It would, however, turn a 'major application written in C++' into a 'major application written in C# / Java / ..., with some low-level functions written in C++'.

Huh? If you are writing all of your low-level high-performance functions in C++, then you are not taking advantage of JIT compilation. Why have the JIT if you still have to use the non-JIT-compiled code for performance-critical components? Saying that you can write high-performance code in C++ and then use it from C#, and then declare that C# is high performance, is silly.

Quote: The fact that the Paint.NET team chose to use C# even for low-level functions shows that even things like image processing can be coded in C# efficiently enough to not mandate C++.

As for that, I've heard reports that Paint.NET is a slow program, so I'm not so sure about that. I haven't used it enough to check.

EDIT:

Quote: Halfway-compiled languages perform their final compilation step at run time, which means they can take optimizations based on the specific system they are actually running on, instead of the system they were compiled for.

What about OSS software? Gentoo et al? Plenty of people compile C/C++ software on end-user systems. I remember when I first used Paint.NET it did this compilation stage, and it took about as long as compiling C/C++ programs from source takes. I don't know how widespread this is, but maybe it's possible to halfway-compile C/C++ too: distribute ready-to-link object code for the non-performance-critical parts and the source to the performance-critical parts, then have the compiler compile only the latter files and the linker link everything together.

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
Tobias Dammers
Member #2,604
August 2002
SiegeLord said: Huh? If you are writing all of your low-level high-performance functions in C++, then you are not taking advantage of JIT compilation. Why have the JIT if you still have to use the non-JIT-compiled code for performance-critical components? Saying that you can write high-performance code in C++ and then use it from C#, and then declare that C# is high performance, is silly.

I didn't say you have to. I just presented it as an alternative to using C++ for everything, so as to draw the benefits from both worlds. This doesn't mean that it is impossible to use C# for everything.

Quote: What about OSS software? Gentoo et al? Plenty of people compile C/C++ software on end-user systems. I remember when I first used Paint.NET it did this compilation stage, and it took about as long as compiling C/C++ programs from source takes. I don't know how widespread this is, but maybe it's possible to halfway-compile C/C++ too: distribute ready-to-link object code for the non-performance-critical parts and the source to the performance-critical parts, then have the compiler compile only the latter files and the linker link everything together.
I have never heard of any such thing, except for Microsoft's 'managed C++', which is basically C++ with some syntax extensions for garbage-collected types that compiles into IL instead of machine code.

---
kazzmir
Member #1,786
December 2001
axilmar said: Quote: If you took out closures I'm sure you could compile Scheme or ML down to C/C++ using explicit memory management. Of course without closures those languages are basically worthless.

I am not so sure about that. They both do not have structs, do they? They fake structs by using the function 'nth' for accessing the nth member of a list. And then in Scheme, polymorphism is achieved through tagged values, which might be slower than vtables. Feel free to correct me if I am wrong.

That's almost right. Structs are made with vectors (basically constant-sized lists), and taking the nth element of a vector is basically the same as an array lookup (i.e. extremely fast). Yes, polymorphism is achieved with tagged values, but vtables don't come into play there. All data types are tagged using the lower two bits so that native types can be used (you are right about the autoboxing vs native type thing). The class systems don't use those tags and have their own vtables somewhere. (I can really only talk about mzscheme since I know the implementation; other Schemes might do things differently, but the fast ones probably do things the way I described.)
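The low-bit tagging kazzmir describes can be sketched in a few lines of C++. The tag patterns below are made up for illustration (mzscheme's actual layout differs); the point is that a small integer lives entirely inside the machine word, so arithmetic on it never touches the heap:

```cpp
#include <cassert>
#include <cstdint>

// A Scheme value is one machine word; the two low bits say what it
// holds. One pattern marks an immediate integer ("fixnum") stored
// shifted left by two; the other patterns would mark heap pointers.
using value = std::uintptr_t;

constexpr value TAG_MASK   = 0x3;  // low two bits
constexpr value TAG_FIXNUM = 0x1;  // illustrative tag for immediate ints

value make_fixnum(std::intptr_t n) {
    return (static_cast<value>(n) << 2) | TAG_FIXNUM;
}

bool is_fixnum(value v) { return (v & TAG_MASK) == TAG_FIXNUM; }

std::intptr_t fixnum_to_int(value v) {
    // Arithmetic right shift restores the sign on mainstream targets.
    return static_cast<std::intptr_t>(v) >> 2;
}
```

This is the scheme-level analogue of "native values win": a fixnum addition is a couple of ALU instructions, whereas Java's autoboxing turns the same int into a heap-allocated Integer.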