On Wed, Jan 5, 2011 at 4:59 PM, Steven D'Aprano <[email protected]> wrote:
> Wayne Werner wrote:
>>
>> <snip>
>>
>> I never said rounding errors - I said "pesky floating point errors". When
>
> Which ARE rounding errors. They're *all* rounding errors, caused by the
> same fundamental issue -- the impossibility of representing some specific
> exact number in the finite number of bits, or digits, available.
>
> Only the specific numbers change, not the existence of the errors.

So truncation == rounding. I can agree with that, though they've always
seemed like distinct entities before: you can round up or round down, but
truncation simply removes what you don't want, which is equivalent to
rounding down at whatever precision you choose.

Round down the tens place (or truncate anything lower than 1e2):
    10000010 => 10000000

Round down the thousandths place (or truncate everything "past" 1e-2):
    1/3 = .3333(repeating) => .330

Having re-read and thought about it for a while, I think my argument simply
distills down to this: using Decimal both gives you control over your
significant figures and (at least for me) *requires* you to think about what
sort of truncation/rounding you will experience. Let's be honest - usually
the source of errors is us, the programmers, not thinking enough about
precision - and the result of this thought process is usually the
elimination not of truncation/rounding, but of our failure to account for
it. Which, to me, equates to "eliminating those pesky floating point
errors". Although, to be more accurate, I should have said "eliminates those
pesky programmer errors when dealing with floating point arithmetic."

apologetically,
Wayne
