Allegro.cc - Online Community

Allegro.cc Forums » Programming Questions » addition problem with double

Credits go to SadSido for helping out!
This thread is locked; no one can reply to it.
addition problem with double
verthex
Member #11,340
September 2009
avatar

This is the output I got for the code below. It seems that counting by decimals gets fuzzy around 0 for doubles, or at least it does on my computer.

What am I doing wrong?
edit: I did find this page.

{"name":"602008","src":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/9\/e\/9e2235a137dc19c5df285ffe79ccd9d8.png","w":377,"h":313,"tn":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/9\/e\/9e2235a137dc19c5df285ffe79ccd9d8"}602008

#include <cmath>
#include <cstdio>
#include <iostream>
#include <iomanip>

using namespace std;

int main()
{
    double val = -1.0;

    do
    {
        cout << val << endl;
        val += 0.1;
    } while (val <= 1.0);

    return 0;
}

SadSido
Member #12,211
August 2010

Nothing is wrong with your code. Floating point arithmetic is not precise, especially when the result is around zero.

verthex
Member #11,340
September 2009
avatar

SadSido said:

Nothing is wrong with your code. Floating point arithmetic is not precise, especially when the result is around zero.

Thanks, it's good enough I guess.

Elias
Member #358
May 2000

You don't even need any arithmetic, many numbers simply can't be represented exactly in floating point. E.g. if you write:

printf("%.20f", 0.1);

it prints

0.10000000000000000555

because there is no 0.1 IEEE floating point value.

--
"Either help out or stop whining" - Evert

Bob Keane
Member #7,342
June 2006

I thought Intel fixed the floating point bug?

By reading this sig, I, the reader, agree to render my soul to Bob Keane. I, the reader, understand this is a legally binding contract and freely render my soul.
"Love thy neighbor as much as you love yourself means be nice to the people next door. Everyone else can go to hell. Missy Cooper.
The advantage to learning something on your own is that there is no one there to tell you something can't be done.

Arthur Kalliokoski
Second in Command
February 2005
avatar

Bob Keane said:

I thought Intel fixed the floating point bug?

Sarcasm? No, they meant you can't have 0.1 in floating point any more than you can express 1/3 in decimal exactly.

They all watch too much MSNBC... they get ideas.

Billybob
Member #3,136
January 2003

Arthur Kalliokoski said:

can express 1/3 in decimal exactly.

Sure you can:

<math>0.333\bar{3}</math>

Yay for LaTeX! 8-)

Bob Keane
Member #7,342
June 2006

Arthur Kalliokoski said:

Sarcasm? No, they meant you can't have 0.1 in floating point any more than you can express 1/3 in decimal exactly.

If you look at the output, it goes from -0.1 to -1.38778e-16 to 0.1. The computer appears to have a problem with 0. As to sarcasm, I was asking sincerely if this may be a leftover from that fiasco.

By reading this sig, I, the reader, agree to render my soul to Bob Keane. I, the reader, understand this is a legally binding contract and freely render my soul.
"Love thy neighbor as much as you love yourself means be nice to the people next door. Everyone else can go to hell. Missy Cooper.
The advantage to learning something on your own is that there is no one there to tell you something can't be done.

verthex
Member #11,340
September 2009
avatar

Bob Keane said:

I was asking sincerely if this may be a leftover from that fiasco.

I'm not sure either, but I would rather have my numbers go crazy around 2^64, which I never use, than around 0.

Evert
Member #794
November 2000
avatar

Bob Keane said:

If you look at the output, it goes from -0.1 to -1.38778e-16 to 0.1. The computer appears to have a problem with 0.

No it doesn't. It has a problem representing fractions that cannot be written with a finite number of terms using integer powers of two. It's the same thing as saying that 1/3 cannot be written with a finite number of terms using integer powers of 10.

See also http://en.wikipedia.org/wiki/Floating_point, as well as Elias' post above. This is a well-known caveat when working with floating point numbers, and also why you should never test floating point numbers for equality but always allow for a difference of at least the machine precision (about 1e-16).

verthex said:

I'm not sure either, but I would rather have my numbers go crazy around 2^64, which I never use, than around 0.

That's a rather ignorant thing to say. Floating point numbers behave the same whatever their size.
That said, you only get about 16 significant digits to work with, if you need more than that, you will get round-off errors, as you saw.
This is also the reason people like to use powers-of-two for the number of particles or the size of their computational grids.

verthex
Member #11,340
September 2009
avatar

Evert said:

This is also the reason people like to use powers-of-two for the number of particles or the size of their computational grids.

In my case that's not the reason I need this, but I'm also in no need of super precision, so it's OK. I need it for something around 10^-3 precision.

Arthur Kalliokoski
Second in Command
February 2005
avatar

So print out the results with %8.2lf or similar to avoid the weird bits at the end.

They all watch too much MSNBC... they get ideas.

Elias
Member #358
May 2000

printf("%8.2lf %8.2lf %8.2lf", 10.1f, 1000.1f, 1000000.1f);
> 10.10  1000.10 1000000.12

So no, using "%8.2lf" won't help :)

--
"Either help out or stop whining" - Evert

Arthur Kalliokoski
Second in Command
February 2005
avatar

gcc 4.4.4 said
t.c:9:31: error: invalid suffix "lf" on floating constant

#include <stdio.h>

int main(void)
{
       double a = 10.1;
       double b = 1000.1;
       double c = 1000000.1;
       
       printf("%8.2lf %8.2lf %8.2f\n",a,b,c);
       printf("%8.2lf %8.2lf %8.2lf\n",a,b,c);
       
       return 0;
}

said

10.10 1000.10 1000000.10
10.10 1000.10 1000000.10

They all watch too much MSNBC... they get ideas.

Elias
Member #358
May 2000

Yeah, I edited it. Or just use "float" instead of "double".

--
"Either help out or stop whining" - Evert

Arthur Kalliokoski
Second in Command
February 2005
avatar

Billybob said:

<math>0.333\bar{3}</math>

But it ends when it hits the edge of the universe!

They all watch too much MSNBC... they get ideas.

gillius
Member #119
April 2000

I like to explain it in terms of going from base 10 to base 3. Everyone knows the 1/3 problem. 1/3 isn't a number in decimal. It's a formula in decimal representing a rational number, which could take finite digits in a base 3 system, but not in decimal. So if you have to resolve that to a number with a maximum number of digits, say 3, then you get 0.333. Well 0.333+0.333+0.333 is 0.999. OK if I "go to doubles" then I get 0.999999 instead. No matter how many digits I have, I will never hit 1.

So computers use binary instead of decimal. And not every fraction that has a finite number of digits in decimal has a finite number of digits in binary. Therefore you have that problem.

You have three real alternatives:

  1. Don't worry about the exact result, and limit the number of calculations you do before "starting over" with "clean" numbers.

  2. Use rational math. That is, if you really know that you are going to increment by 0.1, then increment by 1 and divide by 10 when you want to "output". This solves the 1/3 problem as well: you get 1+1+1 = 3 then you divide by 3, which is 1 exactly in both binary and decimal. I actually do use this in practice. If you are summing up a huge set of integers and weights to find a weighted average, for example, sum up the numerator and denominator separately then divide in the end.

  3. Sacrifice "precision" for "correctness" -- Round your output to a certain number of significant figures (as was suggested earlier), so 0.9999 rounds to 1.000. The number of "correct" significant digits you can get depends on the number of operations you perform and the precision of the underlying number (float vs double).

Gillius
Gillius's Programming -- https://gillius.org/

GullRaDriel
Member #3,861
September 2003
avatar

<off-topic text="
OMG! Gillius has posted for the first time since May!!

Always a pleasure to see you, my bad :-)
"/>

"Code is like shit - it only smells if it is not yours"
Allegro Wiki, full of examples and articles !!

verthex
Member #11,340
September 2009
avatar

gillius said:

The number of "corrent" significant digits you can get depends on the number of operations you perform and the precision of the underlying number (float vs double).

Well, thanks. I guess you might have had a round of error propagation techniques in a physics course too, but what I needed this for was calculating sinh(x) and cosh(x), and 10^-3 precision is good enough.

gillius
Member #119
April 2000

Yes, it is exactly like you learn in physics. The floating point numbers are in scientific notation, which means they are rounded to a certain number of figures. And just with decimal math with scientific notation, you can keep track of significant figures and error from calculations. It's all the same.

GullRaDriel: has it really been since May that I last posted? I used to make several posts a day back in the olden times. I've been a member for over 10 years now; that's crazy.

Gillius
Gillius's Programming -- https://gillius.org/

Johan Halmén
Member #1,550
September 2001

Think of the problem like this:

If you have the value
0.10000000000000000004
you have 20 meaningful digits. That's a lot of info. But if you have
0.00000000000000000004
you have only one meaningful digit. Roughly speaking. Of course there is some additional info, namely the "magnitude" of the value. Perhaps it's easier to represent the values like
1.0000000000000000004E-1
and
4E-20
The latter is shorter and needs less memory. Now, when you have values like 0.3, 0.2 and 0.1, the printout may show them like that, even though the 20th or 30th decimal might be something other than zero (due to the problems with fractions that don't match your base). But then when you expect 0.0, you suddenly see the 20th or 30th decimal, because all preceding decimals are zero. Your compiler/CPU/FPU/whatever can't guess that you actually want 0.0 instead of 0.000000000000000076324. It's not that your values 0.3, 0.2 and 0.1 are more accurate than the strange values you get instead of zero. They look more accurate in printouts, but they are just rounded values after a bin-to-dec conversion.

Of course zero is a very special value and therefore having 1E-20 instead of zero is a bigger problem than having 1.00000000000000000001 instead of 1, in some cases. But besides that, if the problem is how it looks, just take care of the formatting of the printout.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Years of thorough research have revealed that the red "x" that closes a window, really isn't red, but white on red background.

Years of thorough research have revealed that what people find beautiful about the Mandelbrot set is not the set itself, but all the rest.

GullRaDriel
Member #3,861
September 2003
avatar

Gillius: Hell yeah, you're not as present now as you were before, but I guess each of us has some ghost-time / hyper-present periods.

"Code is like shit - it only smells if it is not yours"
Allegro Wiki, full of examples and articles !!

anonymous
Member #8025
November 2006

verthex said:

I'm not sure either, but I would rather have my numbers go crazy around 2^64, which I never use, than around 0.

They go crazy around 2^64 too. Try adding 0.1 to that :)

verthex
Member #11,340
September 2009
avatar

anonymous said:

They go crazy around 2^64 too. Try adding 0.1 to that

Isn't that overflow?

Evert
Member #794
November 2000
avatar

verthex said:

Isn't that overflow?

Question: what do you think is the largest number that can be stored in a 64-bit (double precision) floating point variable?
Hint: look it up.
