Re: [Python-Dev] Adventures with Decimal
I sense a religious fervor about this, so go ahead and do whatever you want. Please register my -1 for the following reasons:

a.) It re-introduces representation error into a module that worked so hard to overcome that very problem. The PEP explicitly promises that a transformation from a literal involves no loss of information. Likewise, it promises that context affects only operations' results.

b.) It is inconsistent with the idea of having the input specify its own precision: http://www2.hursley.ibm.com/decimal/decifaq1.html#tzeros

c.) It is both untimely and unnecessary. The module is functioning according to its tests, the specification test suite, and the PEP. Anthony should put his foot down, as this is NOT a bugfix; it is a change in concept. The Context.create_decimal() method already provides a standard-conforming implementation of the to-number conversion: http://www.python.org/peps/pep-0327.html#creating-from-context

d.) I believe it will create more problems than it would solve. If needed, I can waste an afternoon coming up with examples. Likewise, I think it will make the module more difficult to use (esp. when experimenting with the effects of changing precision).

e.) It does not eliminate the need to use the plus operation to force rounding/truncation when switching precision.

f.) To be consistent, one would need to force all operation inputs to have the context applied before their use. The standard specifically does not do this and allows operation inputs to be of a different precision than the current context (that is the reason for the plus operation).

g.) It steers people in the wrong direction. Increasing precision is generally preferable to rounding or truncating explicit inputs. I included two Knuth examples in the docs to show the benefits of bumping up precision when needed.

h.) It complicates the heck out of storage, retrieval, and input. Currently, decimal objects have a meaning independent of context. With the proposed change, the meaning becomes context dependent.

i.) After having been explicitly promised by the PEP, discussed on the newsgroup and python-dev, and released to the public, a change of this magnitude warrants a newsgroup announcement and a comment period.

A use case:
---
The first use case that comes to mind is the math.toRadians() function. When originally posted, there was an objection that the constant degToRad was imprecise in the last bit because it was expressed as the ratio of two literals that the compiler would have rounded, resulting in a double rounding.

Link to the rationale for the spec:
---
http://www2.hursley.ibm.com/decimal/IEEE-cowlishaw-arith16.pdf

See the intro to section 4, which says: The digits in decimal are not significands; rather, the numbers are exact. The arithmetic on those numbers is also exact unless rounding to a given precision is specified.

Link to the discussion relating decimal design rationale to schoolbook math:
---
I can't find this link. If someone remembers, please post it.

Okay, I've said my piece. Do what you will.

Raymond

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
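[A quick sketch of the distinction Raymond is drawing, using the decimal module as it exists today, where the plain constructor stays context-free and Context.create_decimal() is the context-aware conversion; variable names here are illustrative:]

```python
from decimal import Decimal, getcontext

getcontext().prec = 4

# The plain constructor is context-free: every digit of the literal
# is preserved, regardless of the current precision.
exact = Decimal("1.1000000000000001")
assert str(exact) == "1.1000000000000001"

# The context-aware constructor rounds to the context's precision.
rounded = getcontext().create_decimal("1.1000000000000001")
assert str(rounded) == "1.100"
```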
Re: [Python-Dev] Adventures with Decimal
Addenda:

j.) The same rules would need to apply to all forms of the Decimal constructor, so Decimal(someint) would also need to truncate/round if it has more than precision digits -- likewise with Decimal(fromtuple) and Decimal(fromdecimal). All are problematic. Integer conversions are expected to be exact but may not be after the change. Conversion from another decimal should be idempotent, but implicit rounding/truncation will break that. The fromtuple/totuple round-trip can get broken. You generally specify a tuple when you know exactly what you want.

k.) The biggest client of all these methods is the Decimal module itself. Throughout the implementation, the code calls the Decimal constructor to create intermediate values. Every one of those calls would need to be changed to specify a context. Some of those cases are not trivially changed (for instance, the hash method doesn't have a context, but it needs to check whether a decimal value is exactly an integer so it can hash to that value). Likewise, how do you use a decimal value as a dictionary key when the equality check is context dependent (change precision and lose the ability to reference an entry)?

Be careful with this proposed change. It is a can of worms. Better yet, don't do it. We already have a context-aware constructor method if that is what you really want.

Raymond
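[The exactness, idempotence, and round-trip properties point (j) defends can be checked in a few lines with today's context-free constructor; the value chosen here is arbitrary:]

```python
from decimal import Decimal, getcontext

getcontext().prec = 4

d = Decimal("123456789")       # more digits than the context precision
assert Decimal(123456789) == d  # integer conversion is exact
assert Decimal(d) == d          # Decimal-from-Decimal is idempotent

t = d.as_tuple()                # (sign, digits, exponent)
assert Decimal(t) == d          # fromtuple/totuple round-trip is lossless
```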
[Python-Dev] PEP 344: Explicit vs. Implicit Chaining
Guido van Rossum wrote: Do we really need both __context__ and __cause__?

Well, it depends whose needs we're trying to meet. If we want to satisfy those who have been asking for chaining of unexpected secondary exceptions, then we have to provide that on some attribute. If we also want to provide the facility that Java and C# provide with initCause/InnerException, then we need a separate attribute devoted to explicit chaining. The Java and C# documentation is clear that the cause/inner exception is to be set only on an exception that is caused by or a direct result of the primary exception, which i've taken as a sign that this is an important distinction. I wanted to give a shot at making both camps happy. If the two were unified, we'd still be better off than we are now, but we should be aware that we would not be providing the functionality that Java and C# provide.

-- ?!ng
Re: [Python-Dev] Adventures with Decimal
Raymond Hettinger wrote: Be careful with this proposed change. It is a can of worms. Better yet, don't do it. We already have a context aware constructor method if that is what you really want.

And don't forget that 'context-aware construction' can also be written:

    val = +Decimal(string_repr)

Cheers, Nick.

-- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://boredomandlaziness.blogspot.com
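[Nick's one-liner works because unary plus, like every operation, applies the current context; a minimal demonstration with an arbitrary literal:]

```python
from decimal import Decimal, getcontext

getcontext().prec = 4

# Construction is exact; the unary plus then rounds to the context.
val = +Decimal("1.234567890")
assert str(val) == "1.235"
```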
Re: [Python-Dev] PEP 344: Exception Chaining and Embedded Tracebacks
Ka-Ping Yee wrote: [...]
(a) ban string exceptions
(b) require all exceptions to derive from Exception
(c) ban bare except:
(d) eliminate sys.exc_*

I think somewhere in this list should be:
(?) Remove string exceptions from the Python stdlib
and perhaps:
(?) Make Exception a new style class

Bye, Walter Dörwald
Re: [Python-Dev] Adventures with Decimal
[Tim and Raymond are slugging it out about whether Decimal constructors should respect context precision] Tim, I find Raymond's arguments to be much more persuasive. (And that's even BEFORE I read his 11-point missive.) I understood the concept that *operations* are context-dependent, but decimal *objects* are not, and thus it made sense to me that *constructors* were not context-dependent. On the other hand, I am NOT a floating-point expert. Can you educate me some? What is an example of a case where users would get wrong results because constructors failed to respect context precision? (By the way... even if other constructors begin to respect context precision, the constructor from tuple should NOT -- it exists to provide low-level access to the implementation. I'll express no opinion on the constructor from Decimal, because I don't understand the issues.) -- Michael Chermside
Re: [Python-Dev] PEP 344: Exception Chaining and Embedded Tracebacks
Michael Hudson wrote: Walter Dörwald [EMAIL PROTECTED] writes: Ka-Ping Yee wrote: [...] (a) ban string exceptions (b) require all exceptions to derive from Exception (c) ban bare except: (d) eliminate sys.exc_* I think somewhere in this list should be: (?) Remove string exceptions from the Python stdlib I think this is done, more or less. There's one in test_descr, I think (probably testing that you can't raise str-subclasses but can raise strs). There are a few appearances in docstrings, and the Demo, Mac/Tools and Tools directories. Those should be rather simple to fix. The only problem might be if there is code that catches these exceptions. and perhaps: (?) Make Exception a new style class I have a patch for this on SF, of course. http://www.python.org/sf/1104669 it seems. Bye, Walter Dörwald
Re: [Python-Dev] Adventures with Decimal
[Michael Chermside] Tim, I find Raymond's arguments to be much more persuasive. (And that's even BEFORE I read his 11-point missive.) I understood the concept that *operations* are context-dependent, but decimal *objects* are not, and thus it made sense to me that *constructors* were not context-dependent. On the other hand, I am NOT a floating-point expert. Can you educate me some?

Sorry, I can't make more time for this now. The short course is that a module purporting to implement an external standard should not deviate from that standard without very good reasons, and should make an effort to hide whatever deviations it thinks it needs to indulge (e.g., make them harder to spell). This standard provides 100% portable (across HW, across OSes, across programming languages) decimal arithmetic, but of course that's only across standard-conforming implementations. That the decimal constructor here deviates from the standard appears to be just an historical accident (despite Raymond's current indefatigable rationalizations wink). Other important implementations of the standard didn't make this mistake; for example, Java's BigDecimal(java.lang.String) constructor follows the rules here: http://www2.hursley.ibm.com/decimalj/deccons.html

Does it really need to be argued interminably that deviating from a standard is a Big Deal? Users pay for that eventually, not implementors. Even if a standard is wrong (and leaving aside that I believe this standard asks for the right behavior here), users benefit from cross-implementation predictability a lot more than they can benefit from a specific implementation's non-standard idiosyncrasies.
Re: [Python-Dev] PEP 344: Explicit vs. Implicit Chaining
[Guido van Rossum] Do we really need both __context__ and __cause__? [Ka-Ping Yee] Well, it depends whose needs we're trying to meet. If we want to satisfy those who have been asking for chaining of unexpected secondary exceptions, then we have to provide that on some attribute. If we also want to provide the facility that Java and C# provide with initCause/InnerException, then we need a separate attribute devoted to explicit chaining. The Java and C# documentation is clear that the cause/inner exception is to be set only on an exception that is caused or a direct result of the primary exception, which i've taken as a sign that this is an important distinction. I wanted to give a shot at making both camps happy. If the two were unified, we'd still be better off than we are now, but we should be aware that we would not be providing the functionality that Java and C# provide. But what difference does it make in practice? In first approximation, the only time the context is interesting is when a traceback is printed. Since you propose to print __context__ when __cause__ isn't set, from the POV of the user reading the traceback the effect is the same as if there was only one link (let's call it __cause__) and the APIs for setting it simply override the default. (PS I'm still thinking about the equivalence of the chaining algorithms; I've got a proof sketch in my head but getting the details in email is taking time.) -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] PEP 344: Exception Chaining and Embedded Tracebacks
[Guido] Here's a bunch of commentary: [Ping] Thanks. Sorry it's taken me a couple of days to get back to this. I think i'm caught up on the mail now. No problem! snip Also, in that same example, according to your specs, the TypeError raised by bar() has the ZeroDivisionError raised in foo() as its context. Do we really want this? I don't think it's absolutely necessary, though it doesn't seem to hurt. We agree that if the TypeError makes it up to foo's frame, it should have the ZeroDivisionError as its __context__, right? Yes. If so, do i understand correctly that you want the __context__ to depend on where the exception is caught as well as where it is raised? I think so, but that's not how I think about it. IMO the only time when the context becomes *relevant* is when a finally/except clause is left with a different exception than it was entered. In your thinking, is this mainly a performance or a cleanliness issue? Hard to say; the two are often difficult to separate for me. The performance in this case bothers me because it means unnecessary churn when exceptions are raised and caught. I know I've said in the past I don't care about the performance of exceptions, but that's not *quite* true, given that they are quite frequently used for control flow (e.g. StopIteration). I don't know how to quantify the performance effect though (unless it means that exceptions would have to be *instantiated* sooner than currently; in those cases where exception instantiation is put off, it is put off for one reason, and that reason is performance!) The cleanliness issue is also important to me: when I have some isolated code that raises and successfully catches an exception, and that code happens to be used by a logging operation that is invoked from an exception handler, why would the context of the exception have to include something that happened way deeper on the stack and that is totally irrelevant to the code that catches *my* exception? 
Basically i was looking for the simplest description that would guarantee ending up with all the relevant tracebacks reported in chronological order. I thought it would be more complicated if we had to keep modifying the traceback on the way up, but now that i've re-learned how tracebacks are constructed, it's moot -- we're already extending the traceback on the way through each frame. I have a proposal for the implicit chaining semantics that i'll post in a separate message so it isn't buried in the middle of this one. OK, looking forward to it. Do we really need new syntax to set __cause__? Java does this without syntax by having a standard API initCause() (as well as constructors taking a cause as argument; I understand why you don't want to rely on that -- neither does Java). That seems more general because it can be used outside the context of a raise statement. I went back and forth on this. An earlier version of the PEP actually proposes a 'setcause' method. I eventually settled on a few reasons for the raise ... from syntax: 1. (major) No possibility of method override; no susceptibility to manipulation of __dict__ or __getattr__; no possibility of another exception happening while trying to set the cause. Hm; the inability to override the method actually sounds like a major disadvantage. I'm not sure what you mean with __dict__ or __getattr__ except other ways to tweak the attribute assignment; maybe you forgot about descriptors? Another exception could *still* happen and I think the __context__ setting mechanism will take care of it just fine. 2. (moderate) There is a clear, distinct idiom for exception replacement requiring that the cause and effect must be identified together at the point of raising. Well, nothing stops me from adding a setCause() method to my own exception class and using that instead of the from syntax, right? 
I'm not sure why it is so important to have a distinct idiom, and even if we do, I think that a method call will do just fine:

    except EnvironmentError, err:
        raise MyApplicationError("boo hoo").setCause(err)

3. (minor) No method namespace pollution.
4. (minor) Less typing, less punctuation.

The main thing is that handling exceptions is a delicate matter, so it's nice to have guarantees that the things you're doing aren't going to suddenly raise more exceptions. I don't see that there's all that much that can go wrong in setCause(). After all, it's the setCause() of the *new* exception (which you can know and trust) that could cause trouble; a boobytrap in the exception you just caught could not possibly be set off by simply using it as a cause. (It can cause the traceback printing to fail, of course, but that's not a new issue.) Why insert a blank line between chained tracebacks? Just to make them easier to read. The last line of each traceback is important because it identifies the
Re: [Python-Dev] Adventures with Decimal
Does it really need to be argued interminably that deviating from a standard is a Big Deal?

The word "deviate" inaccurately suggests that we do not have a compliant method which, of course, we do. There are two methods, one context aware and the other context free. The proposal is to change the behavior of the context-free version, treat it as a bug, and alter it in the middle of a major release. The sole argument resembles bible thumping.

Now for a tale. Once upon a time, one typed the literal 1.1 but ended up with the nearest representable value, 1.1000000000000001. The representation error monster terrorized the land and there was much sadness. From the mists of Argentina, a Paladin set things right. The literal 1.1 became representable and throughout the land the monster was believed to have been slain. With their guard down, no one thought twice when a Zope sorcerer had the bright idea that long literals like 1.1000000000000001 should no longer be representable and should implicitly jump to the nearest representable value, 1.1. Thus the monster arose like a Phoenix. Because it was done in a bugfix release, without a PEP, and no public comment, the citizens were caught unprepared and faced an eternity dealing with the monster so valiantly assailed by the Argentine.

Bible thumping notwithstanding, this change is both unnecessary and undesirable. Implicit rounding in the face of explicit user input to the contrary is a bad idea. Internally, the implementation relies on the existing behavior, so it is not easily changed. Don't do it.

Raymond
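[The representation-error monster of the tale is easy to summon for comparison; this sketch uses today's decimal module, where Decimal-from-float (available since Python 3.2) exposes a float's exact binary value:]

```python
from decimal import Decimal

# The binary float written as 1.1 is not exactly 1.1 ...
exact_float = Decimal(1.1)   # exact binary value of the float
assert exact_float != Decimal("1.1")
assert str(exact_float).startswith("1.100000000000000088")

# ... while the decimal literal is stored exactly, with no rounding.
assert str(Decimal("1.1")) == "1.1"
```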
Re: [Python-Dev] Adventures with Decimal
On 5/20/05, Michael Chermside [EMAIL PROTECTED] wrote: In other words, Java's behavior is much closer to the current behavior of Python, at least in terms of features that are user-visible. The default behavior in Java is to have infinite precision unless a context is supplied that says otherwise. So the constructor that takes a string converts it faithfully, while the constructor that takes a context obeys the context.

Are we hitting that point where the most important players (Python and Java, ;) implement the standard almost fully compliantly, and then the standard revises *that* behaviour?

For the record, I'm -0 for changing the actual behaviour: I'd really like to implement exactly the Spec, but I think the practical reasons we have not to do it are more important.

.Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/
Re: [Python-Dev] PEP 344: Explicit vs. Implicit Chaining
On May 20, 2005, at 4:31 AM, Ka-Ping Yee wrote:
Guido van Rossum wrote: Do we really need both __context__ and __cause__?
Well, it depends whose needs we're trying to meet. If we want to satisfy those who have been asking for chaining of unexpected secondary exceptions, then we have to provide that on some attribute.

I still don't see why people think the python interpreter should be automatically providing __context__. To me it seems like it'll just clutter things up for no good reason. If you really want the other exception, you can access it via the local variable in the frame where it was first caught. Of course right now you don't get a traceback, but the proposal fixes that.

>>> def test():
...     try:
...         1/0
...     except Exception, e:
...         y
...
>>> test()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 5, in test
NameError: global name 'y' is not defined
>>> pdb.pm()
> <stdin>(5)test()
(Pdb) locals()
{'e': <exceptions.ZeroDivisionError instance at 0x73198>}

James
Re: [Python-Dev] Adventures with Decimal
[Raymond Hettinger] The word "deviate" inaccurately suggests that we do not have a compliant method which, of course, we do. There are two methods, one context aware and the other context free. The proposal is to change the behavior of the context-free version, treat it as a bug, and alter it in the middle of a major release.

I didn't suggest changing this for 2.4.2. Although, now that you mention it ... wink.

The sole argument resembles bible thumping.

I'm sorry, but if you mentally reduced everything I've written about this to "the sole argument", rational discussion has become impossible here. In the meantime, I've asked Mike Cowlishaw what his intent was, and what the standard may eventually say. I didn't express a preference to him. He said he'll think about it and try to get back to me by Sunday.
[Python-Dev] First PyPy (preview) release
The PyPy 0.6 release

*The PyPy Development Team is happy to announce the first public release of PyPy after two years of spare-time and half a year of EU-funded development. The 0.6 release is eminently a preview release.*

What it is and where to start
-----------------------------

Getting started: http://codespeak.net/pypy/index.cgi?doc/getting_started.html
PyPy Documentation: http://codespeak.net/pypy/index.cgi?doc
PyPy Homepage: http://codespeak.net/pypy/

PyPy is an MIT-licensed reimplementation of Python written in Python itself. The long-term goals are an implementation that is flexible and easy to experiment with and retarget to different platforms (also non-C ones), and in which high performance can be achieved through high-level implementations of dynamic optimisation techniques.

The interpreter and object model implementations shipped with 0.6 can be run on top of CPython and implement the core language features of Python as of CPython 2.3. PyPy passes around 90% of the Python language regression tests that do not depend deeply on C extensions. Some of that functionality is still made available by PyPy piggy-backing on the host CPython interpreter.

Double interpretation and abstractions in the code base make PyPy running on CPython quite slow (around 2000x slower than CPython); this is expected. This release is intended for people who want to get a feel for what we are doing: playing with the interpreter, perusing the code base, and possibly joining in the fun and the efforts.

Interesting bits and highlights
-------------------------------

The release is also a snapshot of our ongoing efforts towards low-level translation and experimenting with unique features.

* By default, PyPy is a Python version that works completely with new-style-class semantics. However, support for old-style classes is still available. Implementations, mostly as user-level code, of their metaclass and instance objects are included and can be re-made the default with the ``--oldstyle`` option.
* In PyPy, bytecode interpretation and object manipulations are well separated between a bytecode interpreter and an *object space* which implements operations on objects. PyPy comes with experimental object spaces augmenting the standard one through delegation:

  * an experimental object space that does extensive tracing of bytecode and object operations;

  * the 'thunk' object space that implements lazy values and a 'become' operation that can exchange object identities.

  These spaces already give a glimpse of the flexibility potential of PyPy. See demo/fibonacci.py and demo/sharedref.py for examples of the 'thunk' object space.

* The 0.6 release also contains a snapshot of our translation efforts to lower-level languages. For that we have developed an annotator which is capable of inferring type information across our code base. The annotator is already capable of successfully type-annotating basically *all* of the PyPy code base, and is included with 0.6.

* From type-annotated code, low-level code needs to be generated. Backends for various targets (C, LLVM, ...) are included; they are all incomplete to some degree and have been, and are, quite in flux. What is shipped with 0.6 is able to deal with more or less small/medium examples.

Ongoing work and near term goals
--------------------------------

Generating low-level code is the main area we are hammering on in the next months; our plan is to produce a PyPy version in August/September that does not need to be interpreted by CPython anymore and will thus run considerably faster than the 0.6 preview release.

PyPy has been a community effort from the start, and it would not have got that far without the coding and feedback support from numerous people. Please feel free to give feedback and raise questions.
contact points: http://codespeak.net/pypy/index.cgi?contact
contributor list: http://codespeak.net/pypy/index.cgi?doc/contributor.html

have fun,

Armin Rigo, Samuele Pedroni, Holger Krekel, Christian Tismer, Carl Friedrich Bolz

PyPy development and activities happen as an open source project and with the support of a consortium funded by a two-year EU IST research grant. Here is a list of partners of the EU project:

Heinrich-Heine University (Germany), AB Strakt (Sweden)
merlinux GmbH (Germany), tismerysoft GmbH (Germany)
Logilab Paris (France), DFKI GmbH (Germany)
ChangeMaker (Sweden)
Re: [Python-Dev] Adventures with Decimal
It looks like if you pass in a context, the Decimal constructor still ignores that context:

>>> import decimal as d
>>> d.getcontext().prec = 4
>>> d.Decimal("1.234567890123456789012345678901234567890123456789", d.getcontext())
Decimal("1.234567890123456789012345678901234567890123456789")

I think this is contrary to what some here have claimed (that you could pass an explicit context to cause it to round according to the context's precision).

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Adventures with Decimal
[Guido]
It looks like if you pass in a context, the Decimal constructor still ignores that context:

>>> import decimal as d
>>> d.getcontext().prec = 4
>>> d.Decimal("1.234567890123456789012345678901234567890123456789", d.getcontext())
Decimal("1.234567890123456789012345678901234567890123456789")

I think this is contrary to what some here have claimed (that you could pass an explicit context to cause it to round according to the context's precision).

I think Michael Chermside said that's how a particular Java implementation works. Python's Decimal constructor accepts a context argument, but the only use made of it is to possibly signal a ConversionSyntax condition.
Re: [Python-Dev] Adventures with Decimal
Guido writes: It looks like if you pass in a context, the Decimal constructor still ignores that context

No, you just need to use the right syntax. The correct syntax for converting a string to a Decimal using a context object is to use the create_decimal() method of the context object:

>>> import decimal
>>> decimal.getcontext().prec = 4
>>> decimal.getcontext().create_decimal("1.234567890")
Decimal("1.235")

Frankly, I have no idea WHAT purpose is served by passing a context to the decimal constructor... I didn't even realize it was allowed!

-- Michael Chermside
Re: [Python-Dev] Adventures with Decimal
[Guido]
It looks like if you pass in a context, the Decimal constructor still ignores that context:

>>> import decimal as d
>>> d.getcontext().prec = 4
>>> d.Decimal("1.234567890123456789012345678901234567890123456789", d.getcontext())
Decimal("1.234567890123456789012345678901234567890123456789")

I think this is contrary to what some here have claimed (that you could pass an explicit context to cause it to round according to the context's precision).

[Tim]
I think Michael Chermside said that's how a particular Java implementation works. Python's Decimal constructor accepts a context argument, but the only use made of it is to possibly signal a ConversionSyntax condition.

You know that, but Raymond seems confused. From one of his posts (point (k)):

    Throughout the implementation, the code calls the Decimal constructor to create intermediate values. Every one of those calls would need to be changed to specify a context.

But passing a context doesn't help for obtaining the desired precision.

PS I also asked Cowlishaw and he said he would ponder it over the weekend. Maybe Raymond can mail him too. ;-)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Adventures with Decimal
[Michael Chermside] Frankly, I have no idea WHAT purpose is served by passing a context to the decimal constructor... I didn't even realize it was allowed!

Quoth the docs for the Decimal constructor:

    The context precision does not affect how many digits are stored. That is determined exclusively by the number of digits in value. For example, Decimal("3.00000") records all five zeroes even if the context precision is only three.

The purpose of the context argument is determining what to do if value is a malformed string. If the context traps InvalidOperation, an exception is raised; otherwise, the constructor returns a new Decimal with the value of NaN.

Raymond
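[The docs' claim is easy to confirm with the current module; a short sketch, noting that only the explicit plus operation applies the context:]

```python
from decimal import Decimal, getcontext

getcontext().prec = 3

d = Decimal("3.00000")
assert str(d) == "3.00000"  # all five zeroes kept despite prec=3
assert str(+d) == "3.00"    # the context applies only to operations
```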
Re: [Python-Dev] Adventures with Decimal
Michael Chermside wrote:
> Frankly, I have no idea WHAT purpose is served by passing a context to the decimal constructor... I didn't even realize it was allowed!

As Tim pointed out, it's solely to control whether or not ConversionSyntax errors are exceptions:

    Py> decimal.Decimal("a")
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "c:\python24\lib\decimal.py", line 571, in __new__
        self._sign, self._int, self._exp = context._raise_error(ConversionSyntax)
      File "c:\python24\lib\decimal.py", line 2266, in _raise_error
        raise error, explanation
    decimal.InvalidOperation
    Py> context = decimal.getcontext().copy()
    Py> context.traps[decimal.InvalidOperation] = False
    Py> decimal.Decimal("a", context)
    Decimal("NaN")

I'm tempted to suggest deprecating the feature, and say if you want invalid strings to produce NaN, use the create_decimal() method of Context objects. That would mean the standard construction operation becomes genuinely context-free. Being able to supply a context, but then have it be mostly ignored, is rather confusing.

Doing this may also fractionally speed up Decimal creation from strings in the normal case, as the call to getcontext() could probably be omitted from the constructor.

Cheers, Nick.

--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
http://boredomandlaziness.blogspot.com
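Nick's suggested replacement already works today: Context.create_decimal() gives the NaN-or-exception choice without involving the Decimal constructor at all. A sketch (the malformed input string is an arbitrary example):

```python
import decimal

# A default Context traps InvalidOperation, so a malformed string raises.
try:
    decimal.Context().create_decimal("not a number")
except decimal.InvalidOperation:
    print("raised InvalidOperation")

# With the trap disabled, the same conversion quietly yields a NaN.
ctx = decimal.Context()
ctx.traps[decimal.InvalidOperation] = False
print(ctx.create_decimal("not a number"))  # NaN
```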
Re: [Python-Dev] Adventures with Decimal
[Guido]
> You know that, but Raymond seems confused. From one of his posts (point (k)):

[Raymond]
> Throughout the implementation, the code calls the Decimal constructor to create intermediate values. Every one of those calls would need to be changed to specify a context.

[Facundo] The point here, I think, is that intermediate Decimal objects are created, and the whole module assumes that the context does not affect those intermediate values. If you change this and start using the context at Decimal creation time, you'll have to be aware of that in a lot of parts of the code. OTOH, you can change that and run the test cases, and see how badly it explodes (or not ;-).

Bingo! That is point (k) from the big missive.

Raymond
Re: [Python-Dev] Adventures with Decimal
> It looks like if you pass in a context, the Decimal constructor still ignores that context:
>
> >>> import decimal as d
> >>> d.getcontext().prec = 4
> >>> d.Decimal("1.234567890123456789012345678901234567890123456789", d.getcontext())
> Decimal("1.234567890123456789012345678901234567890123456789")
>
> I think this is contrary to what some here have claimed (that you could pass an explicit context to cause it to round according to the context's precision).

That's not the way it is done. The context passed to the Decimal constructor is *only* used to determine what to do with a malformed string (whether to raise an exception or set a flag).

To create a decimal with a context, use the Context.create_decimal() method:

    >>> import decimal as d
    >>> d.getcontext().prec = 4
    >>> d.getcontext().create_decimal("1.234567890123456789012345678901234567890123456789")
    Decimal("1.235")

Raymond
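Raymond's two conversion paths sit side by side in the current module as well; a condensed sketch (shorter literal, same idea):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4
d = Decimal("1.234567890123456789")  # constructor: exact, context ignored
print(d)                             # 1.234567890123456789
print(getcontext().create_decimal("1.234567890123456789"))  # 1.235
print(+d)                            # 1.235 -- unary plus also applies the context
```

create_decimal() rounds on the way in; the plus operation rounds an already-exact value after the fact. Both honor the context where the constructor does not.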
Re: [Python-Dev] Adventures with Decimal
On Fri, May 20, 2005, Raymond Hettinger wrote:
> k.) The biggest client of all these methods is the Decimal module itself. Throughout the implementation, the code calls the Decimal constructor to create intermediate values. Every one of those calls would need to be changed to specify a context. Some of those cases are not trivially changed (for instance, the hash method doesn't have a context but it needs to check to see if a decimal value is exactly an integer so it can hash to that value). Likewise, how do you use a decimal value for a dictionary key when the equality check is context dependent (change precision and lose the ability to reference an entry)?

I'm not sure this is true, and if it is true, I think the Decimal module is poorly implemented. There are two uses for the Decimal() constructor:

* copy constructor for an existing Decimal instance (or passing in a tuple directly to mimic the barebones internal)
* conversion constructor for other types, such as string

Are you claiming that the intermediate values are being constructed as strings and then converted back to Decimal objects? Is there something else I'm missing? I don't think Tim is claiming that the copy constructor needs to obey context, just string conversions.

Note that comparison is not context-dependent, because context only applies to results of operations, and the spec's comparison operator (equivalent to cmp()) only returns (-1, 0, 1) -- guaranteed to be within the precision of any context. ;-)

Note that hashing is not part of the standard, so whatever makes most sense in a Pythonic context would be appropriate. It's perfectly reasonable for Decimal's __int__ method to be unbounded, because Python ints are unbounded.

All these caveats aside, I don't have a strong opinion about what we should do. Overall, my sentiments are with Tim that we should fix this, but my suspicion is that it probably doesn't matter much.
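Aahz's points about comparison and hashing can be illustrated with today's module. A sketch, with the caveat that the cross-type hash guarantee shown here is how modern Python 3 behaves, not necessarily the 2.4 module under discussion:

```python
from decimal import Decimal, getcontext

getcontext().prec = 4
a = Decimal("123456789")  # far more digits than the context precision
assert a == 123456789              # equality does not consult the context
assert hash(a) == hash(123456789)  # integral Decimals hash like ints (Python 3)

table = {a: "entry"}
getcontext().prec = 2              # changing precision does not break lookup
print(table[Decimal("123456789")])  # entry
```

Because neither equality nor hashing depends on the context, the dictionary-key worry in point (k) does not arise in practice.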
--
Aahz ([EMAIL PROTECTED])           * http://www.pythoncraft.com/
"The only problem with Microsoft is they just have no taste." --Steve Jobs