Allegro.cc - Online Community

Allegro.cc Forums » Programming Questions » Looking for math library

This thread is locked; no one can reply to it.
Looking for math library
james_lohr
Member #1,947
February 2002

but you can't represent every number with powers of 1/2.

Nor can you represent every number with powers of 10: 10 is a totally arbitrary base.

Your posts still strongly suggest that you don't really know what you want because you don't seem to be able to grasp what is going on.

If you want your calculator to round to decimal values, then round to decimal values. Most calculators already do this: if you enter 1/3, then multiply the answer by 3, you will get 1 again, despite the fact that 1/3 can be expressed in neither decimal nor binary.

I still don't see strong enough grounds for using a decimal base for what you are doing. In fact any hand calculator could quite easily be running on base 2 and you'd never even know.

The only real grounds for a decimal base is financial work. It is certainly not appropriate for your purposes, and if you intend to do any sort of numerical integration, or any real number crunching for that matter, you will slow your application down 10-50 fold by using a decimal base.

weapon_S
Member #7,859
October 2006
avatar

Who says Edgar isn't doing a financial program?

Evert
Member #794
November 2000
avatar

With my current implementation using doubles, my command line calculator / function parser is too stupid to return 51 for 5.1/0.1. That's just not acceptable to me, nor should it be acceptable to anyone using a calculator program.

Utter nonsense.
When using any kind of tool, whether software or not, you have to understand its limitations. One of the limitations of computers is that floating point arithmetic has finite accuracy. There is ultimately no way around it and it's a bad idea to try to fake it in specific circumstances. It'll break down in other points.
Now, it may be appropriate to define operations on rational numbers (which is just a pair of integers), of which a decimal representation is just a subset. If that's what you want, then that's what you should implement (which is not difficult and is somewhat fun to do; it's not generally useful though).

Arthur Kalliokoski
Second in Command
February 2005
avatar

They all watch too much MSNBC... they get ideas.

orz
Member #565
August 2000

My precision requirements are that string numbers have an exact representation when converted to data. With my current implementation using doubles, my command line calculator / function parser is too stupid to return 51 for 5.1/0.1. That's just not acceptable to me, nor should it be acceptable to anyone using a calculator program.

As James Lohr said, you're simply not understanding what you see in calculators. Some of them work in base 2, some in decimal, and you generally don't notice the difference. They use different precisions, but they generally display accurate integer answers for operations whose divisors don't fit into their base evenly by a simple trick: displaying fewer digits than they calculate internally. If you divide 1 by 3 and then multiply by 3, a calculator displays 1.0 not because it internally computes the result as 1.0 (most don't) but because it displays fewer digits than it tracks internally. That is sort of the opposite of the property you asked for: the destringification of the stringification is not equal to the original number, and that loss of precision is exactly what gives them the property you like.

Arthur Kalliokoski: I wouldn't recommend actually using the BCD opcodes. It's comparable in efficiency to emulate that using normal integers (the BCD opcodes are slow on many x86s), and compiler support / etc is much better for non-BCD stuff.

Arthur Kalliokoski
Second in Command
February 2005
avatar

orz said:

Arthur Kalliokoski: I wouldn't recommend actually using the BCD opcodes. It's comparable in efficiency to emulate that using normal integers (the BCD opcodes are slow on many x86s), and compiler support / etc is much better for non-BCD stuff.

I'd rather doubt this would be a problem for a calculator app.


Edgar Reynaldo
Major Reynaldo
May 2007
avatar

verthex said:

There's LIDIA [www.cdc.informatik.tu-darmstadt.de], but I'm not sure what it could do for you.

The Introduction to LIDIA seems to indicate that it is highly specialized in things I don't have a use for, and it makes no mention of trig / power / logarithm / other standard C/C++ functions. Thanks anyway.

Arthur Kalliokoski said:

...code example...

I don't see why strtod would give different results than sscanf would. As written, calling the program with 5.1/0.1 as the argument produces 51 as the answer. The problem is, I can't rely on %lg to correctly round the number in all cases. If I change it to use %.30lg I get 50.999999999999993 as the answer. What if the answer actually was 50.999999999999993 and I used %lg and got 51 as the output instead?

I just can't rely on doubles to do what I want them to do.

The arbitrary precision lib I mentioned earlier does have transcendental functions, but some of them could be optimized with precalculated numbers in a file for identities, which I haven't bothered with.

How does your library compare to MAPM?

Nor can you represent every number with powers of 10: 10 is a totally arbitrary base.

You can represent more with powers of 10 than you can with powers of 1/2. :P If doubles used positive powers of 2 and tracked the decimal point and exponent, we wouldn't be having this discussion. :P

James Lohr said:

Your posts still strongly suggest that you don't really know what you want because you don't seem to be able to grasp what is going on.

Figure it out dude, I want an exact representation (as much as possible) when converting from string to data and back. I've said this several times. Perhaps you have poor reading comprehension, I don't know. I know what is going on, I already stated that powers of 1/2 are insufficient for my needs for data representation. Since you're so smart, why don't you enlighten me and tell me 'what is going on'.

James Lohr said:

If you want your calculator to round to decimal values, then round to decimal values.

The problem is, how much precision should I choose to lose? If I decide to round everything to 10ths then I lose a lot of precision in my operations and the user may want to specify precision in the 1000ths. The MAPM library specifically allows me to choose how many significant digits their numbers should use, which makes it a lot easier for me.

James Lohr said:

I still don't see strong enough grounds for using a decimal base for what you are doing. In fact any hand calculator could quite easily be running on base 2 and you'd never even know.

The base itself doesn't bother me, what bothers me is using negative powers of that base to represent a number, which is stupid because it usually can't be done precisely (but negative powers of base 10 are more precise than negative powers of base 2).

weapon_S said:

Who says Edgar isn't doing a financial program?

Well, actually I'm not, but thanks for the support.

Evert said:

Utter nonsense.
When using any kind of tool, whether software or not, you have to understand its limitations. One of the limitations of computers is that floating point arithmetic has finite accuracy. There is ultimately no way around it and it's a bad idea to try to fake it in specific circumstances. It'll break down in other points.

No, what is nonsense is to return an inexact number for a mathematical operation that should have an exact value. It's the implementation of floating point arithmetic that sucks here, because you can represent any rational number in any positive base using positive powers if you also track the decimal point and exponent. You shouldn't have to round the result of 5.1/0.1 to get the exact answer that obviously exists, e.g.:

5.1 e0  =  51 e-1
  /           /
0.1 e0  =   1 e-1
------------------
           51 e0

When you represent the number in positive powers of any positive base greater than 1, there is no precision lost, ever (not accounting for irrational numbers here).

orz said:

Different ones use different precision, but they generally display accurate integer answers to operations with divisors that don't fit in to their base evenly by the simple trick of displaying fewer digits than they calculate internally.

But the problem is deciding how many digits to round the answer to. 2? 10? When you have an exact answer, you don't need to round anything.

orz said:

Which is sort of like the opposite of the property you asked for - the destringification of the stringification is not equal to the original number, and that loss of precision is what gives them the property you like.

I think you meant that the other way round, but I don't want 0.1 represented as 0.099999999whatever, that's just not acceptable.

In any case, I'm leaning more and more towards using the MAPM library - it has a really nice C++ class called MAPM that wraps all the C code into nice operators and you can set a global precision level for all the operations. I'll have to give it a test run here over the next few days and report back. And its example calculator program knows what 5.1/0.1 is. ;)

Arthur Kalliokoski
Second in Command
February 2005
avatar

How does your library compare to MAPM?

I don't know; it all started when all I had was an assembler and an 8086 computer with no math chip. I'd guess that MAPM uses pure C, so I'd have an advantage for smaller sizes (a few kb per "number") because I can directly use the flags etc. in assembler, but most of the C implementations use FFTs to optimize multiplies etc., so they'd be faster there.

Quote:

You can represent more with powers of 10 than you can with powers of 1/2. :P

So switch to base 12 then. :D

Quote:

I don't want 0.1 represented as 0.099999999whatever, that's just not acceptable.

Try this on your hand held calculator:
1 / 3 = x
x * 3 = x (equal to 1 <-- or so it says)
Now subtract 1 and look at the inaccuracy it was hiding.


Edgar Reynaldo
Major Reynaldo
May 2007
avatar

Both the Windows calculator and my CASIO fx-115ES give the answer of zero.

Arthur Kalliokoski
Second in Command
February 2005
avatar

Both the Windows calculator and my CASIO fx-115ES give the answer of zero.

I'm pretty sure all the desktop calculator applets use arbitrary precision math now, but if a CASIO hides it I'm impressed! OTOH, I haven't had a good hand-held calculator for many years. I use a cheap $1.00 calculator in the cab; I'm pretty sure it'd give 0.99999999 for 1/3 * 3.


Edgar Reynaldo
Major Reynaldo
May 2007
avatar

It's a CASIO, but it's rather complex. I just opened it up after it sitting there for a year or so. It looks like it can do simple equations, do calculations on regular and complex numbers, handle matrices and vectors and some other stuff. I used to have a really nice TI-80 something graphical calculator back in high school. That thing was great til it crapped out.

Evert
Member #794
November 2000
avatar

What if the answer actually was 50.999999999999993 and I used %lg and got 51 as the output instead?

Read up on finite machine precision and the limitations that imposes. Numbers for which the difference is smaller than the machine precision are effectively the same.

Quote:

You can represent more with powers of 10 than you can with powers of 1/2.

Bullshit. They're equally arbitrary.

Quote:

No, what is nonsense is to return an inexact number for a mathematical operation that should have an exact value.

The output is only as good as the input. Your input is not infinitely precise, so neither is your output.
Again, understand the limitations of what you're working with.

Quote:

It's the implementation of floating point arithmetic that sucks here, as you can represent any rational number with any positive base using positive powers if you track the decimal point and exponent as well. You shouldn't have to round the result of 5.1/0.1 to get the exact answer that obviously exists already.

Neither of those numbers can be represented exactly using a finite number of powers of 2, so what the computer stores is not 5.1, but 5.1 +/- eps, where eps is ~1e-16 for double precision. It's not broken, you simply don't understand.
Now, again, you could implement a rational number class that allows you to store fractions exactly as a tuple of integers. That way you can work with fractions without rounding. Doesn't help you when using real numbers though.

Try this on your hand held calculator:
1 / 3 = x
x * 3 = x (equal to 1 <-- or so it says)
Now subtract 1 and look at the inaccuracy it was hiding.

Cube root of 8, divide by 2 minus 1 or something like that is also a well-known example that shows finite precision on a calculator.

james_lohr
Member #1,947
February 2002

(but negative powers of base 10 are more precise than negative powers of base 2)

facepalm

Edgar Reynaldo
Major Reynaldo
May 2007
avatar

Evert said:

Read up on finite machine precision and the limitations that imposes. Numbers for which the difference is smaller than the machine precision are effectively the same.

If the difference between them was smaller than the machine precision, then you wouldn't be able to tell them apart in the first place.

Evert said:

Edgar said:

You can represent more with powers of 10 than you can with powers of 1/2.

Bullshit. They're equally arbitrary.

The proof is simple - you can represent any power of 2 with powers of 10, but you cannot represent every power of 10 with powers of 2.

Evert said:

The output is only as good as the input. Your input is not infinitely precisise, so neither is your output.
Again, understand the limitations of what you're working with.

The input is perfectly precise, so I expect the output to be perfectly precise as well as long as it is not an irrational number. I understand the limitations of floats and doubles and that is the entire purpose of this thread - to find an implementation that doesn't suffer this problem.

Evert said:

Neither of those numbers can be represented exactly using a finite number of powers of 2, so what the computer stores is not 5.1, but 5.1 +/- eps, where eps is ~1e-16 for double precision. It's not broken, you simply don't understand.

I know they can't be represented exactly in powers of 1/2, that's the point! So yes, it is broken as far as I'm concerned, because it is perfectly reasonable to expect an exact answer when there is one! I understand that floats and doubles can't do this, so stop telling me I don't understand.

James Lohr said:

Edgar said:

(but negative powers of base 10 are more precise than negative powers of base 2)

facepalm

Can you represent 10^-1 in powers of 2? I don't think so. Can every power of 2 be represented in powers of 10 - yes! Can every power of 10 be represented in base 2 - no! Therefore powers of 10 can represent more numbers than powers of 2 can when using negative powers, hence base 10 is more precise when using negative powers to represent numbers.

Facepalm that, sophomore. :P

bamccaig
Member #7,536
July 2006
avatar


The proof is simple - you can represent any power of 2 with powers of 10, but you cannot represent every power of 10 with powers of 2.

I'm not really sure what you mean by powers of 10 vs. powers of 2... In any case, the inaccuracy of number storage basically comes down to finite storage. Infinite expansions cannot be stored in finite space, and very long numbers cannot feasibly be processed either. Your basic options are floating-point numbers, which can represent a larger range of numbers with less precision, and fixed-point numbers, which can represent a smaller range of numbers with greater precision (and perhaps more slowly, since there are no fixed-point processing units in CPUs that I'm aware of). According to Wikipedia, fixed-point data types are split into two categories: binary (base 2) and decimal (base 10). The binary ones can benefit from bitwise operations to increase performance, but suffer from worse accuracy on decimal input as a result. The decimal ones would therefore be slower, but more accurate. It all depends on exactly what you're using the numbers for.


I think what you basically want is a fixed-point data type. :) Apparently GCC partially supports a draft for one in C.[1] Of course, I don't know if the appropriate facilities are implemented to serialize and deserialize it. If you have to convert to/from a floating-point number to serialize/deserialize then you'll lose the precision anyway. :P It certainly would be nice to have fixed-point data types in all programming language standard libraries. There are some applications (financial, in particular) that can benefit a lot from the accuracy, and in those applications speed is often less important than accuracy.


Personally, I find it annoying too to get incorrect results with floating-point numbers. Perhaps that is my obsessive/perfectionist nature. I never have really wrapped my head around how to handle floating-point numbers appropriately. Though since I predominantly write business applications the floating point numbers usually represent money, where accuracy usually does matter...


Evert
Member #794
November 2000
avatar

The proof is simple - you can represent any power of 2 with powers of 10, but you cannot represent every power of 10 with powers of 2.

And neither of them does a particularly good job with powers of 3 or powers of 7.
The reason you can represent powers of 2 with a finite number of digits in base 10, and not the other way around, is that 2 is prime and 10 is not (in fact, it's 2x5, which is why base 10 can represent powers of 2 and powers of 5 neatly). If you pick a base with even more prime factors, it does even better (by that measure)!
That doesn't change the fact that both bases are equally valid choices and equally arbitrary.

Quote:

The input is perfectly precise,

No, it isn't. That's what you don't seem to get.

Quote:

I understand the limitations of floats and doubles and that is the entire purpose of this thread - to find an implementation that doesn't suffer this problem.

There isn't one, since whichever base you pick, there are rational numbers that cannot be expressed with a finite number of "decimals" (namely those that have a prime number in their factorisation that is not in your base).
Again, if you want to do operations on rational numbers without converting them to floats, write a class that handles rational numbers.

Quote:

I know they can't be represented exactly in powers of 1/2, that's the point! So yes, it is broken as far as I'm concerned, because it is perfectly reasonable to expect an exact answer when there is one! I understand that floats and doubles can't do this, so stop telling me I don't understand.

Ok, good.
You don't act as though you understand, but if you do, good.
Anyway, no, the input is not exact. Computers operate in binary. You cannot represent "5.1" exactly in binary, so the input is not exact. That's leaving completely aside that, depending on where you're coming from, "5.1" doesn't necessarily mean "51/10", but could mean "a number in the range [5.05, 5.15)".
If you want to represent "51/10" exactly, write a class that handles rational numbers. That way you can store the number exactly and calculate things without rounding errors.
Floating point numbers are not broken, they may simply not be what you're looking to use. In which case, you should be more clear about what it is that you want to do.

Quote:

Though since I predominantly write business applications the floating point numbers usually represent money, where accuracy usually does matter...

If you can't tolerate numerical noise, then you should probably use a fixed-point system.
By the way, the "issue" with floating point numbers is not an issue with accuracy, it's an issue with precision.

EDIT: just one extra remark, but first

Quote:

Therefore powers of 10 can represent more numbers than powers of 2 can when using negative powers, hence base 10 is more precise when using negative powers to represent numbers.

I'm not going to argue the point because I can't be bothered to check it, but I suspect this may not be true. Remember, after all, that the number of even natural numbers is the same as the total (odd+even) number of natural numbers (which is not intuitively obvious at all). I think the same sort of proof can show that your statement above is untrue, but as I said, I can't be bothered to check that.

My extra remark is this: independent of which base you choose, you will never be able to represent numbers like sqrt(2), pi or log(3) exactly by a finite number of negative powers of your base. If you do want to represent those "properly", you can certainly do that, but it's a lot more complicated. Computer algebra systems (CASs) like Maple and Mathematica do that. The fact that you cannot represent them exactly as a simple number using a finite number of terms has nothing at all to do with limitations of floating point, or even integer arithmetic.
Again, you have to use the correct tool for the job and understand its limitations. If I want to keep sqrt(2) as sqrt(2), I'll use pen and paper (or a CAS). If I want its numerical value, I rather have it as 1.414... to whichever number of significant digits I'm working with. Which representation is more "correct" or convenient depends on what I'm going to use it for.

Edgar Reynaldo
Major Reynaldo
May 2007
avatar

Evert said:

And neither of them does a particularly good job with powers of 3 or powers of 7.

I can deal with that. What I can't deal with is not being able to represent powers of 10 exactly, since that is what the decimal system is based on.

Evert said:

If you pick a base with even more prime number factorisations, it could do even better (by that measure)!

That doesn't change the fact that both bases are equally valid choices and equally arbitrary.

You're contradicting yourself. First you say they're equally valid and then you admit that powers of 10 can represent more numbers than powers of 2. Are powers of 10 a better representation of numbers than powers of 2 or not? Well, yes they are.

Evert said:

Edgar said:

The input is perfectly precise,

No, it isn't. That's what you don't seem to get.

What is imprecise about 5.1? It is exactly 51 10ths. The fact that a double can't represent 51 10ths is what bothers me. I get it just fine, thanks.

Evert said:

There isn't one, since whichever base you pick, there are rational numbers that cannot be expressed with a finite number of "decimals" (namely those that have a prime number in their factorisation that is not in your base).
Again, if you want to do operations on rational numbers without converting them to floats, write a class that handles rational numbers.

If they were represented as sums of any base greater than 1 with powers greater than or equal to 0 with a decimal point and exponent tracked separately, then any rational number could be represented exactly by them. The entire problem stems from the fact that negative powers are used to represent numbers less than one.

Evert said:

Computers operate in binary. You cannot represent "5.1" exactly in binary, so the input is not exact.

Yes you can represent it in binary if you use positive powers of two and track the decimal point and exponent.

"5.1" = 0x33 decimal2 e-1

Evert said:

Floating point numbers are not broken, they may simply not be what you're looking to use. In which case, you should be more clear about what it is that you want to do.

Floating point numbers are broken if they can't represent a decimal string exactly. I've said numerous times that I expect a string to be represented exactly in data format and to get the same string out when converting back. Why do I have to keep saying this?

Edit for your edit

Evert said:

I'm not going to argue the point because I can't be bothered to check it, but I suspect this may not be true. Remember, afterall, that the number of even natural numbers is the same as the total (odd+even) natural numbers (which is not intuitively obvious at all). I think the same sort of proof can show that your statement above is untrue, but as I said, I can't be bothered to check that.

Not true.

x != x + n (for n != 0)

Evert said:

My extra remark is this: independent of which base you choose, you will never be able to represent numbers like sqrt(2), pi or log(3) exactly by a finite number of negative powers of your base.

I understand I can't exactly represent irrational numbers. I'm not trying to either.

Arthur Kalliokoski
Second in Command
February 2005
avatar

If it is for financial stuff, just use ints (counting pennies), have a special atoi routine that sticks a decimal point in the right spot, and you'll only have to use doubles for stuff like compound interest (which is never exact anyway).


Matthew Leverton
Supreme Loser
January 1999
avatar

I love nerd fights.

Floating point numbers are broken if they can't represent a decimal string exactly.

Perhaps they are broken for your particular application, but they make good sense as a speed/functionality trade-off.

james_lohr
Member #1,947
February 2002

Can you represent 10^-1 in powers of 2? I don't think so. Can every power of 2 be represented in powers of 10 - yes! Can every power of 10 be represented in base 2 - no! Therefore powers of 10 can represent more numbers than powers of 2 can when using negative powers, hence base 10 is more precise when using negative powers to represent numbers.

Facepalm that, sophomore.

No.

Take n bits. How many unique real numbers can be represented in base 2? Easy: 2^n. How many unique real numbers can be represented in base 10? - Always fewer than 2^n, because you need at least 4 bits to represent a single digit.

So on a platform of bits (which happens to be what all ordinary computers use), base 2 has more precision. This is if you define precision to be: "how accurately can an arbitrary real number be represented".

Your definition of precision appears to be "how accurately can a base 10 number be represented", which makes little sense given your intended application.

Edgar Reynaldo
Major Reynaldo
May 2007
avatar

Take n bits. How many unique real numbers can be represented in base 2? Easy: 2^n. How many unique real numbers can be represented in base 10? - Always fewer than 2^n, because you need at least 4 bits to represent a single digit.

If all it takes is a little extra space to get accurate results, I'm fine with that.

James Lohr said:

So on a platform of bits (which happens to be what all ordinary computers use), base 2 has more precision. This is if you define precision to be: "how accurately can an arbitrary real number be represented".

Negative powers of 2 do not have more precision than negative powers of 10, otherwise they could represent more numbers than negative powers of 10 could, which they can't. Any real number that is rational can be represented in base 10. The same can't be said for base 2 unless you use positive powers to represent the number and track the exponent.

James Lohr said:

Your definition of precision appears to be "how accurately can a base 10 number be represented", which makes little sense given your intended application.

It makes perfect sense, given that my intended application is accurate calculation of expressions where possible.

In any case, you've babbled on enough about how I don't really know what I want, about how I can't grasp what is going on, blah blah blah. If you don't have any suggestions about how to achieve what I have plainly asked for numerous times, then stop posting in this thread already.

Evert
Member #794
November 2000
avatar

You're contradicting yourself. First you say they're equally valid and then you admit that powers of 10 can represent more numbers than powers of 2.

That's not a contradiction.

Quote:

Are powers of 10 a better representation of numbers than powers of 2 or not? Well, yes they are.

No they're not.

Quote:

Floating point numbers are broken if they can't represent a decimal string exactly.

No they're not. They would be if they couldn't represent a binary string exactly, but they can. Computers use binary, not decimal.

Quote:

I've said numerous times that I expect a string to be represented exactly in data format and to get the same string out when converting back. Why do I have to keep saying this?

Because you keep not getting that you're spouting nonsense?

Quote:

Not true.
x != x + n (for n != 0)

What's not true? The number of even integers being the same as the total number of integers? Shows your ignorance if that's what you meant.
And yes, x = x+n can be true if x is infinite.

Quote:

I understand I can't exactly represent irrational numbers. I'm not trying to either.

No, but you're trying to do something analogous.

Audric
Member #907
January 2001

I think the statement that put everybody off-track was "move from string to double and back".
I'm pretty sure it was not intended for the infinite number of decimal places that can be obtained during a computation (e.g. 1/3), but rather for the numbers that are input by humans.
The answer is a decimal floating-point or a decimal fixed-point number representation, as already proposed: not because it would be more exact than a base-2 system, but because the rounding rules land on the same digits a human would round, and thus they appear more 'natural'.

Everybody asked "what's it for" because the choice between the two (fixed or floating) depends on what kind of actual numbers you'll handle... if they are "distance in meters", it still depends if you measure the distance between electrons or between galaxies (or a mix of the two).

james_lohr
Member #1,947
February 2002

If you don't have any suggestions about how to achieve what I have plainly asked for numerous times

We're informing you that you're trying to achieve the wrong thing. We know this because you have repeatedly demonstrated your understanding to be flawed.

Quote:

Negative powers of 2 do not have more precision than negative powers of 10, otherwise they could represent more numbers than negative powers of 10 could, which they can't.

I just explained to you that, for the same amount of memory, base 2 numbers do represent more real numbers than base 10 numbers. It just so happens that base 10 can (unsurprisingly) represent decimal numbers exactly - but who cares? That is a minuscule subset of the set of real numbers. If you want to evaluate arbitrary mathematical expressions, then base 2 will do the job better on a computer than decimal will.

If they were represented as sums of any base greater than 1 with powers greater than or equal to 0 with a decimal point and exponent tracked separately, then any rational number could be represented exactly by them. The entire problem stems from the fact that negative powers are used to represent numbers less than one.

That's the type of naive approach to representing real numbers that students come up with in their introductory lessons on floating-point numbers. Usually the majority of students manage to grasp the elegance of base-2 floating point within a few lectures. Of course, there are some who don't. :P

I think your problem is that you are so used to seeing and using decimal numbers that that is what a number is to you. If you study mathematics or computer science at degree level, you will almost never see decimal numbers, and the few times that you do, they are crude approximations of real numbers (e.g. 3.14) used for crude calculations.

orz
Member #565
August 2000

Any real number that is rational can be represented in base 10. The same can't be said for base 2 unless you use positive powers to represent the number and track the exponent.

As has already been pointed out in this thread, there are rational real numbers that cannot be represented in base-10 with non-repeating digits (1/3rd was previously mentioned, but any simple integer fraction with a divisor that has any prime factor other than 2 or 5 also qualifies).

Use of "positive powers to represent the number and track the exponent" (i.e. a decimal point and/or scientific notation, aka floating point) is equally necessary or unnecessary for binary and decimal representations.
It is (sort of) true that a wider variety of numbers can be represented exactly, without repeating digits, in decimal than in binary, since 10 has more prime factors than 2. But that does not mean base 10 is more precise, and if you wanted that property you would be better off using base 210 than base 10 (210 has a wider variety of prime factors than 10 or 2). Such bases are neither more nor less precise, because for any fixed amount of information the number of values that can be represented precisely is the same.

The only significant advantage to decimal on modern computers is that it is often less complicated to interface to other decimal systems (ie input or output to/from human-readable decimal sources, standard decimal precisions/rounding rules in financial codes, etc). There are numerous strategies for dealing with the binary-decimal interface issues, and they mostly work well.

Your discussion of the subject repeatedly appears to show a misunderstanding of the concepts involved, though it is hard to tell for sure how much is miscommunication vs misunderstanding.

The binary-vs-decimal issues are almost completely unrelated to fixed-vs-floating point issues, fixed-vs-variable-vs-infinite precision issues, etc.
