Mark Dickinson <dicki...@gmail.com> added the comment:

As Daniel says, from_float expects a float object, not a Decimal instance.
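
To illustrate (a minimal interactive sketch; the exception message shown is the one from CPython 2.7's pure-Python decimal module and may differ in other versions):

   >>> from decimal import Decimal
   >>> Decimal.from_float(2.3)             # float argument: fine
   Decimal('2.29999999999999982236431605997495353221893310546875')
   >>> Decimal.from_float(Decimal('2.3'))  # Decimal argument: rejected
   Traceback (most recent call last):
     ...
   TypeError: argument must be int or float.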

What did you want to achieve in the following line:

   self.from_float(value * decimal.Decimal(1.0))/decimal.Decimal(1.0)

?

By the way, in all current versions of Python, from_float is redundant, since you can create a Decimal directly from a float:

Python 2.7.1+ (2.7:d52b1faa7b11+, Mar 25 2011, 21:48:24) 
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import Decimal
[64140 refs]
>>> Decimal(2.3)
Decimal('2.29999999999999982236431605997495353221893310546875')
[64149 refs]

----------
nosy: +mark.dickinson
resolution:  -> invalid
status: open -> closed

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue11680>
_______________________________________