To understand why that's happening you have to understand the concept of "floating point". The idea is that of the 4 or 8 bytes storing the number, some bits store an exponent (effectively, where the point sits) and the rest store the significant digits. So there's only a fixed number of significant digits to work with, and once a value needs more digits than that, accuracy drops because the stored digits can no longer represent it exactly.
Standard floats use the IEEE 754 single-precision format: 1 sign bit, 8 exponent bits describing where the point sits, and 23 bits for the fraction (effectively 24 significant bits, counting the implied leading 1).
A 24-bit number has a range of 16,777,216 (2^24), so a float carries at most about 6 or 7 significant decimal digits with full accuracy.
To see this in action, try storing the number 12345.12345 into a standard float. You're going to notice that it gets rounded to the nearest value the float can actually represent.
However, you can still compare that number you just stored using == with 12345.12345 as the comparison value and get TRUE, provided the constant goes through the same conversion to a float, because it then rounds to exactly the same value. (One caveat: in C, a bare constant like 12345.12345 is a double, which keeps more digits; to get the same conversion you'd write it as 12345.12345f.)
That's probably a lot more info than you needed, but the point is, floating point isn't 100% accurate, so calculations done in floating point will always be off by just a tiny amount.
--- Kris Asick (Gemini)