On Wed, Feb 12, 2014 at 10:44 PM, Ben Finney <ben+pyt...@benfinney.id.au> wrote:
> Chris Angelico <ros...@gmail.com> writes:
>
>> On Wed, Feb 12, 2014 at 9:07 PM, Ben Finney <ben+pyt...@benfinney.id.au> wrote:
>> > That's why I think you need to be clear that your point isn't
>> > “computers don't work with real numbers”, but rather “computers work
>> > only with a limited subset of real numbers”.
>>
>> Hmm, I'm not sure that my statement is false. If a computer can work
>> with "real numbers", then I would expect it to be able to work with
>> any real number.
>
> Likewise, if you claim that a computer *does not* work with real
> numbers, then I would expect that for any real number, the computer
> would fail to work with that number.
>
> Which is why neither of those is a good statement of your position, IMO,
> and you're better off saying the *limitations* you're describing.
I think we're using different words to say the same thing here :)

What I mean is that one cannot accurately say that a computer works
with real numbers, because it cannot work with them all. Of course a
computer can work with _some_ real numbers; but only some. (An awful
lot of them, of course. A ridiculously huge number of numbers. More
numbers than you could read in a lifetime! While the number is
extremely large, it still falls pitifully short of infinity.[1])

And so we do have optimizations for some subset of reals: in
approximate order of performance, an arbitrary-precision integer type,
a limited-precision floating-point type, and two types that handle
fractions (vulgar and decimal). They're all, in a sense,
optimizations. In pure theory, we could have a single "real number"
type and do everything with that; all the other types are
approximations to that.

[1] http://tools.ietf.org/search/rfc2795
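To make that concrete, here's a minimal sketch of those four types in
Python itself; the int and float types are built in, and the two
fraction types come from the stdlib's fractions and decimal modules:

    from fractions import Fraction
    from decimal import Decimal

    # Arbitrary-precision integer: never overflows, just gets slower.
    print(10 ** 40 + 1)   # 10000000000000000000000000000000000000001

    # Limited-precision binary float: fast, but only ~53 bits of mantissa.
    print(0.1 + 0.2)      # 0.30000000000000004 -- not exactly 0.3

    # Vulgar fractions: exact for any ratio of two integers.
    print(Fraction(1, 10) + Fraction(2, 10))   # 3/10, exactly

    # Decimal fractions: exact for finite decimal expansions.
    print(Decimal("0.1") + Decimal("0.2"))     # 0.3, exactly

And all four still fall short of the reals: none of them can represent,
say, sqrt(2) or pi exactly, which is the limitation in question.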