On Sat, Jan 31, 2009 at 10:05 PM, <gslindst...@gmail.com> wrote:
> I am using the decimal module to work with money (US dollars and cents) and
> do not understand the precision. The documentation states:
>
> "The decimal module incorporates a notion of significant places so that 1.30
> + 1.20 is 2.50. The trailing zero is kept to indicate significance. This is
> the customary presentation for monetary applications."
>
> But I get:
>
>>>> from decimal import Decimal
>>>> a = Decimal('1.25')
>>>> a
> Decimal('1.25')
>>>> b = Decimal('2.50')
>>>> b
> Decimal('2.50')
>>>> a+b
> Decimal('3.8')
>
> I expect (and would like) a+b to be '3.75'. I've read through the
> getcontext() section but must be missing something. Can you help?
I get a different result; are you sure you didn't set the precision before
you did the above?

In [31]: from decimal import *
In [32]: a=Decimal('1.25')
In [33]: b=Decimal('2.50')
In [34]: a+b
Out[34]: Decimal('3.75')

The precision is the number of significant digits, not the number of
decimal places:

In [37]: getcontext().prec=1
In [38]: a+b
Out[38]: Decimal('4')

In [39]: getcontext().prec=2
In [40]: a+b
Out[40]: Decimal('3.8')

In [41]: a*Decimal('11111')
Out[41]: Decimal('1.4E+4')

The example in the docs is perhaps not the best one, because 1.3 + 1.2 works
correctly with normal floating point. 1.1 + 1.1 gives a different result
with Decimal vs floating point:

In [46]: 1.1 + 1.1
Out[46]: 2.2000000000000002

In [48]: Decimal('1.1') + Decimal('1.1')
Out[48]: Decimal('2.2')

Kent
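As a small illustrative sketch of the same point (not from the original
exchange): the context precision only affects the results of arithmetic, not
the digits already stored in a Decimal, and decimal.localcontext() lets you
lower it temporarily without changing the global context the way
getcontext().prec=2 does.

from decimal import Decimal, localcontext

a = Decimal('1.25')
b = Decimal('2.50')

# Default context: 28 significant digits, so significance is preserved.
print(a + b)                      # 3.75

# Lower the precision only inside this block; the previous context is
# restored automatically when the block exits.
with localcontext() as ctx:
    ctx.prec = 2
    print(a + b)                  # 3.8  (rounded to two significant digits)
    print(a * Decimal('11111'))   # 1.4E+4

# The stored values and the global context are unchanged afterwards.
print(a + b)                      # 3.75

Using localcontext() this way avoids surprising other code in the same
program that also does Decimal arithmetic.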