|
addition problem with double |
verthex
Member #11,340
September 2009
|
This is the output I got for the code below. It seems that counting by decimals gets fuzzy around 0 for doubles, or at least it does on my computer. What am I doing wrong?

[attached screenshot of the program's output]

#include <cmath>
#include <cstdio>
#include <iostream>
#include <iomanip>

using namespace std;

int main()
{
    double val = -1.0;

    do
    {
        cout << val << endl;
        val += 0.1;
    } while (val <= 1.0);

    return 0;
}
|
SadSido
Member #12,211
August 2010
|
Nothing is wrong with your code. Floating point arithmetic is not precise, especially when the result is around zero. |
verthex
Member #11,340
September 2009
|
SadSido said: Nothing is wrong with your code. Floating point arithmetic is not precise, especially when the result is around zero. Thanks, it's good enough, I guess.
|
Elias
Member #358
May 2000
|
You don't even need any arithmetic; many numbers simply can't be represented exactly in floating point. E.g. if you write printf("%.20f", 0.1); it prints 0.10000000000000000555, because there is no exact 0.1 IEEE floating-point value. -- |
Bob Keane
Member #7,342
June 2006
|
I thought Intel fixed the floating point bug? By reading this sig, I, the reader, agree to render my soul to Bob Keane. I, the reader, understand this is a legally binding contract and freely render my soul. |
Arthur Kalliokoski
Second in Command
February 2005
|
Bob Keane said: I thought Intel fixed the floating point bug? Sarcasm? No, they meant you can't have 0.1 in floating point any more than you can express 1/3 in decimal exactly. They all watch too much MSNBC... they get ideas. |
Billybob
Member #3,136
January 2003
|
|
Bob Keane
Member #7,342
June 2006
|
Arthur Kalliokoski said: Sarcasm? No, they meant you can't have 0.1 in floating point any more than you can express 1/3 in decimal exactly. If you look at the output, it goes from -0.1 to -1.38778e-16 to 0.1. The computer appears to have a problem with 0. As to sarcasm, I was asking sincerely if this may be a leftover from that fiasco. By reading this sig, I, the reader, agree to render my soul to Bob Keane. I, the reader, understand this is a legally binding contract and freely render my soul. |
verthex
Member #11,340
September 2009
|
Bob Keane said: I was asking sincerely if this may be a leftover from that fiasco. I'm not sure either but I would rather have my numbers go crazy around 2^64 since I never use those instead of around 0.
|
Evert
Member #794
November 2000
|
Bob Keane said: If you look at the output, it goes from -0.1 to -1.38778e-16 to 0.1. The computer appears to have a problem with 0. No it doesn't. It has a problem representing fractions that cannot be written with a finite number of terms using integer powers of two. It's the same thing as saying that 1/3 cannot be written with a finite number of terms using integer powers of 10. See also http://en.wikipedia.org/wiki/Floating_point, as well as Elias' post above. This is a well-known caveat when working with floating point numbers, and also why you should never test floating point numbers for equality but always allow for a difference of at least the machine precision (about 1e-16). verthex said: I'm not sure either but I would rather have my numbers go crazy around 2^64 since I never use those instead of around 0.
That's a rather ignorant thing to say. Floating point numbers behave the same whatever their size. |
verthex
Member #11,340
September 2009
|
Evert said: This is also the reason people like to use powers-of-two for the number of particles or the size of their computational grids. In my case that's not the reason why I need this, but I'm also in no need of super-precision, so it's OK. I need it for something around 10^-3 precision.
|
Arthur Kalliokoski
Second in Command
February 2005
|
So print out the results with %8.2lf or similar to avoid the weird bits at the end. They all watch too much MSNBC... they get ideas. |
Elias
Member #358
May 2000
|
printf("%8.2lf %8.2lf %8.2lf", 10.1f, 1000.1f, 1000000.1f); prints 10.10 1000.10 1000000.12. So no, using "%8.2lf" won't help. -- |
Arthur Kalliokoski
Second in Command
February 2005
|
gcc 4.4.4 said: 10.10 1000.10 1000000.10 They all watch too much MSNBC... they get ideas. |
Elias
Member #358
May 2000
|
Yeah, I edited it. Or just use "float" instead of "double". -- |
Arthur Kalliokoski
Second in Command
February 2005
|
Billybob said:
But it ends when it hits the edge of the universe! They all watch too much MSNBC... they get ideas. |
gillius
Member #119
April 2000
|
I like to explain it in terms of going from base 10 to base 3. Everyone knows the 1/3 problem: 1/3 isn't a number in decimal, it's a formula representing a rational number, one which could take finite digits in a base-3 system, but not in decimal. So if you have to resolve that to a number with a maximum number of digits, say 3, then you get 0.333. Well, 0.333+0.333+0.333 is 0.999. OK, if I "go to doubles" then I get 0.999999 instead. No matter how many digits I have, I will never hit 1. So computers use binary instead of decimal, and not every fraction that has a finite number of digits in decimal has a finite number of digits in binary. Therefore you have that problem. You have three real alternatives:
Gillius |
GullRaDriel
Member #3,861
September 2003
|
Always a pleasure to see you, my bad :-) "Code is like shit - it only smells if it is not yours" |
verthex
Member #11,340
September 2009
|
gillius said: The number of "correct" significant digits you can get depends on the number of operations you perform and the precision of the underlying number (float vs double). Well, thanks. I guess you might have had a round of error-propagation techniques in a physics course too, but what I needed this for was calculating sinh(x) and cosh(x), and 10^-3 is good enough.
|
gillius
Member #119
April 2000
|
Yes, it is exactly like you learn in physics. Floating-point numbers are in scientific notation, which means they are rounded to a certain number of figures. And just as with decimal math in scientific notation, you can keep track of significant figures and error from calculations. It's all the same. GullRaDriel: has it really been since May that I last posted? I used to make several posts a day back in the olden times. I've been a member for over 10 years now; that's crazy. Gillius |
Johan Halmén
Member #1,550
September 2001
|
Think of the problem like this: zero is of course a very special value, and therefore having 1E-20 instead of zero is a bigger problem than having 1.00000000000000000001 instead of 1, in some cases. But besides that, if the problem is just how it looks, take care of the formatting of the printout. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Years of thorough research have revealed that what people find beautiful about the Mandelbrot set is not the set itself, but all the rest. |
GullRaDriel
Member #3,861
September 2003
|
Gillius: Hell yeah, you're not as present now as you were before, but I guess each of us has some ghost-time / hyper-present periods. "Code is like shit - it only smells if it is not yours" |
anonymous
Member #8025
November 2006
|
verthex said: I'm not sure either but I would rather have my numbers go crazy around 2^64 since I never use those instead of around 0. They go crazy around 2^64 too. Try adding 0.1 to that.
verthex
Member #11,340
September 2009
|
anonymous said: They go crazy around 2^64 too. Try adding 0.1 to that. Isn't that overflow?
|
Evert
Member #794
November 2000
|
verthex said: Isn't that overflow?
Question: what do you think is the largest number that can be stored in a 64-bit (double precision) floating point variable? |
|