Re: [Python-Dev] Adventures with Decimal

2005-06-01 Thread Mike Cowlishaw
> Raymond Hettinger wrote:
> > IMO, user input (or
> > the full numeric strings in a text data file) is sacred and presumably
> > done for a reason -- the explicitly requested digits should not be
> > thrown away without good reason.
>
> I still don't understand what's so special about the
> input phase that it should be treated sacredly, while
> happily desecrating the result of any *other* operation.

The 'difference' here is, with unlimited precision decimal 
representations, there is no input phase.  The decimal number can 
represent the value, sign, and exponent in the character string the user 
provided _exactly_, and indeed it could be implemented using strings as 
the internal representation -- in which case the 'construction' of a new 
number is simply a string copy operation.

There is no operation taking place, as no narrowing is necessary.
This is quite unlike (for example) converting an ASCII string "1.01" to
a binary floating-point double, which has a fixed precision and no
base-5 component.
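
For concreteness, a minimal sketch of that point using Python's decimal
module (the digit string is invented for illustration):

    from decimal import Decimal, getcontext

    getcontext().prec = 5                 # a deliberately small context
    d = Decimal("1.100000000000000001")   # construction keeps every digit
    print(d)                              # 1.100000000000000001 -- exact
    print(+d)                             # 1.1000 -- an *operation* rounds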

mfc


Re: [Python-Dev] Adventures with Decimal

2005-05-23 Thread Greg Ewing
Raymond Hettinger wrote:
> Did you see Mike Cowlishaw's posting where he described why he took our
> current position (wysiwyg input) in the spec, in Java's BigDecimal, and
> in Rexx's numeric model?

Yes, it appears that you have channeled him correctly
on that point, and Tim hasn't. :-)

But I also found it interesting that, while the spec
requires the existence of a context for each operation,
it apparently *doesn't* mandate that it must be kept
in a global variable, which is the part that makes me
uncomfortable.

Was there any debate about this choice when the Decimal
module was being designed? It seems to go against
EIBTI, and even against Mr. Cowlishaw's own desire
for WYSIWYG, because WYG depends not only on what
you can see, but on a piece of hidden state as well.

Greg



Re: [Python-Dev] Adventures with Decimal

2005-05-23 Thread Nick Coghlan
Raymond Hettinger wrote:
> > Py> decimal.Decimal("a", context)
> > Decimal("NaN")
> >
> > I'm tempted to suggest deprecating the feature, and say if you want
> > invalid strings to produce NaN, use the create_decimal() method of
> > Context objects.
>
> The standard does require a NaN to be produced.

In that case, I'd prefer to see the behaviour of the Decimal constructor 
(InvalidOperation exception, or NaN result) always governed by the current 
context.

If you want to use a different context (either to limit the precision, or to 
alter the way malformed strings are handled), you invoke creation via that 
context, not via the standard constructor.

 Unless something is shown to be wrong with the current implementation, I
 don't think we should be in a hurry to make a post-release change.

The fact that the BDFL (and others, me included) were at least temporarily 
confused by the ability to pass a context in to the constructor suggests there 
is an interface problem here.

The thing that appears to be confusing is that you *can* pass a context in to 
the Decimal constructor, but that context is then almost completely ignored. It 
gives me TOOWTDI concerns,  even though passing the context to the constructor 
does, in fact, differ slightly from using the create_decimal() method (the 
former does not apply the precision, as Guido discovered).

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://boredomandlaziness.blogspot.com


Re: [Python-Dev] Adventures with Decimal

2005-05-23 Thread Michael Chermside
I'd like to respond to a few people, I'll start with Greg Ewing:

Greg writes:
> I don't see how it
> helps significantly to have just the very first
> step -- turning the input into numbers -- be
> exempt from this behaviour. If anything, people
> are going to be even more confused. "But it
> can obviously cope with 1.101,
> so why does it give the wrong answer when I add
> something to it?"

As I see it, there is a meaningful distinction between constructing
Decimal instances and performing arithmetic with them. I even think
this distinction is easy to explain to users, even beginners. See,
it's all about the program doing what you tell it to.

If you type in this:
x = decimal.Decimal("1.13")
as a literal in your program, then you clearly intended for that
last decimal place to mean something. By contrast, if you were to
try passing a float to the Decimal constructor, it would raise an
exception expressly to protect users from accidentally entering
something slightly off from what they meant.

On the other hand, in Python, if you type this:
z = x + y
then what it does is completely dependent on the types of x and y.
In the case of Decimal objects, it performs a perfect arithmetic
operation then rounds to the current precision.

The simple explanation for users is "Context affects *operations*,
but not *instances*." This explains the behavior of operations, of
constructors, and also explains the fact that changing precision
doesn't affect the precision of existing instances. And it's only
6 words long.
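
A minimal sketch of that six-word rule in action (digit strings invented
for illustration):

    from decimal import Decimal, getcontext

    x = Decimal("1.23456789")    # the instance keeps all nine digits
    getcontext().prec = 4
    print(x)                     # 1.23456789 -- instances are unaffected
    print(x + 0)                 # 1.235      -- operations round to context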

> But I also found it interesting that, while the spec
> requires the existence of a context for each operation,
> it apparently *doesn't* mandate that it must be kept
> in a global variable, which is the part that makes me
> uncomfortable.
>
> Was there any debate about this choice when the Decimal
> module was being designed?

It shouldn't make you uncomfortable. Storing something in a global
variable is a BAD idea... it is just begging for threads to mess
each other up. The decimal module avoided this by storing a SEPARATE
context for each thread, so different threads won't interfere with
each other. And there *is* a means for easy access to the context
objects... decimal.getcontext().

Yes, it was debated, and the debate led to changing from a global
variable to the existing arrangement.
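
For example (a sketch; the thread names and precisions are arbitrary):

    import decimal
    import threading

    def worker(prec, out, key):
        # each thread sees its own context, so this assignment
        # cannot disturb the other thread's precision
        decimal.getcontext().prec = prec
        out[key] = decimal.Decimal(1) / decimal.Decimal(3)

    out = {}
    a = threading.Thread(target=worker, args=(6, out, "a"))
    b = threading.Thread(target=worker, args=(12, out, "b"))
    a.start(); b.start(); a.join(); b.join()
    print(out["a"])   # 0.333333
    print(out["b"])   # 0.333333333333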

--
As long as I'm writing, let me echo Nick Coghlan's point:
> The fact that the BDFL (and others, me included) were at least temporarily
> confused by the ability to pass a context in to the constructor suggests
> there is an interface problem here.
>
> The thing that appears to be confusing is that you *can* pass a context in
> to the Decimal constructor, but that context is then almost completely
> ignored.

Yeah... I agree. If you provide a Context, it should be used. I favor changing
the behavior of the constructor as follows:

    def Decimal(data, context=None):
        result = Existing_Version_Of_Decimal(data)
        if context is not None:
            result = context.plus(result)
        return result

In other words, make FULL use of the context in the constructor if a context
is provided, but make NO use of the thread context when no context is
provided.

--
One final point... Thanks to Mike Cowlishaw for chiming in with a detailed
and well-considered explanation of his thoughts on the matter.

-- Michael Chermside



Re: [Python-Dev] Adventures with Decimal

2005-05-23 Thread Aahz
On Mon, May 23, 2005, Greg Ewing wrote:

> But I also found it interesting that, while the spec requires the
> existence of a context for each operation, it apparently *doesn't*
> mandate that it must be kept in a global variable, which is the part
> that makes me uncomfortable.
>
> Was there any debate about this choice when the Decimal module was
> being designed?

Absolutely.  First of all, as Michael Chermside pointed out, it's
actually thread-local.  But even without that, we were still prepared to
release Decimal with global context.  Look at Java: you have to specify
the context manually with every operation.  It was a critical design
criterion for Python that this be legal::

    >>> x = Decimal('1.2')
    >>> y = Decimal('1.4')
    >>> x*y
    Decimal("1.68")

IOW, constructing Decimal instances might be a bit painful, but *using*
them would be utterly simple.
-- 
Aahz ([EMAIL PROTECTED])   * http://www.pythoncraft.com/

The only problem with Microsoft is they just have no taste. --Steve Jobs


[Python-Dev] Adventures with Decimal

2005-05-22 Thread Mike Cowlishaw
Several people have pointed me at this interesting thread, and
both Tim and Raymond have sent me summaries of their arguments.
Thank you all!  I see various things I have written have caused
some confusion, for which I apologise.

The 'right' answer might, in fact, depend somewhat on the
programming language, as I'll try and explain below, but let me
first try and summarize the background of the decimal specification
which is on my website at:

  http://www2.hursley.ibm.com/decimal/#arithmetic


Rexx

Back in 1979/80, I was writing the Rexx programming language,
which has always had (only) decimal arithmetic.  In 1980, it was
used within IBM in over 40 countries, and had evolved a decimal
arithmetic which worked quite well, but had some rather quirky
arithmetic and rounding rules -- in particular, the result of an
operation had a number of decimal places equal to the larger of
the number of decimal places of its operands.

Hence 1.23 + 1.27 gave 2.50.
This had some consequences that were quite predictable, but
were unexpected by most people.  For example, 1.2 x 1.2 gave 1.4,
and you had to suffix a 0 to one of the operands (easy to do in
Rexx) to get an exact result: 1.2 x 1.20 = 1.44.

By 1981, much of the e-mail and feedback I was getting was related
to various arithmetic quirks like this.  My design strategy for
the language was more-or-less to 'minimise e-mail' (I was getting
350+ every day, as there were no newsgroups or forums then) --
and it was clear that the way to minimise e-mail was to make the
language work the way people expected (not just in arithmetic).

I therefore 'did the research' on arithmetic to find out what it
was that people expected (and it varies in some cases, around the
world), and then changed the arithmetic to match that.  The result
was that e-mail on the subject dropped to almost nothing, and
arithmetic in Rexx became a non-issue: it just did what people
expected.

Its strongest feature is, I think, that what you see is what
you've got -- there are no hidden digits, for example.  Indeed,
in at least one Rexx interpreter the numbers are, literally,
character strings, and arithmetic is done directly on those
character strings (with no conversions or alternative internal
representation).

I therefore feel, quite strongly, that the value of a literal is,
and must be, exactly what appears on the paper.  And, in a
language with no constructors (such as Rexx), and unlimited
precision, this is straightforward.  The assignment

  a = 1.1001

is just that; there's no operation involved, and I would argue
that anyone reading that and knowing the syntax of a Rexx
assignment would expect the variable a to have the exact value of
the literal (that is, say a would then display 1.1001).

The Rexx arithmetic does have the concept of 'context', which
mirrors the way people do calculations on paper -- there are some
implied rules (how many digits to work to, etc.) beyond the sum
that is written down.  This context, in Rexx, is used to change
the way in which arithmetic operations are carried out, and does
not affect other operations (such as assignment).



Java

So what should one do in an object-oriented language, where
numbers are objects?  Java is perhaps a good model, here.  The
Java BigDecimal class originally had only unlimited precision
arithmetic (the results of multiplies just got longer and longer)
and only division had a mechanism to limit (round) the result in
some way, as it must.

By 1997, it became obvious that the original BigDecimal, though
elegant in its simplicity, was hard to use.  We (IBM) proposed
various improvements and built a prototype:

  http://www2.hursley.ibm.com/decimalj/

and this eventually became a formal Java Specification Request:

  http://jcp.org/aboutJava/communityprocess/review/jsr013/index.html

which led to the extensive enhancements in BigDecimal that were
shipped last year in Java 5:

  http://java.sun.com/j2se/1.5.0/docs/api/java/math/BigDecimal.html

In summary, for each operation (such as a.add(b)) a new method was
added which takes a context: a.add(b, context).  The context
supplies the rounding precision and rounding mode.  Since the
arguments to an operation can be of any length (precision), the
rounding rule is simple: the operation is carried out as though to
infinite precision and is then rounded (if necessary).  This rule
avoids double-rounding.
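
Python's decimal module exposes the same per-operation style through
Context methods, so the idiom has a direct analogue (a sketch; the
values and precision are invented):

    import decimal

    ctx = decimal.Context(prec=5, rounding=decimal.ROUND_HALF_EVEN)
    a = decimal.Decimal("1.2345678")
    b = decimal.Decimal("2.3456789")
    # carried out as though to infinite precision, then rounded once
    print(ctx.add(a, b))        # 3.5802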

Constructors were not a point of debate.  The constructors in the
original BigDecimal always gave an exact result (even when
constructing from a binary double) so those were not going to
change.  We did, however, almost as an afterthought, add versions
of the constructors that took a context argument.

The model, therefore, is essentially the same as the Rexx one:
what you see is what you get.  In Java, the assignment:

  BigDecimal a = new BigDecimal(1.1001);

ends up with a having an object with the value you see in the
string, and for it 

Re: [Python-Dev] Adventures with Decimal

2005-05-22 Thread Greg Ewing
Raymond Hettinger wrote:

> IMO, user input (or
> the full numeric strings in a text data file) is sacred and presumably
> done for a reason -- the explicitly requested digits should not be
> thrown away without good reason.

I still don't understand what's so special about the
input phase that it should be treated sacredly, while
happily desecrating the result of any *other* operation.

To my mind, if you were really serious about treating
precision as sacred, the result of every operation
would be the greater of the precisions of the
inputs. That's what happens in C or Fortran - you
add two floats and you get a float; you add a float
and a double and you get a double; etc.

> Truncating/rounding a
> literal at creation time doesn't work well when you are going to be
> using those values several times, each with a different precision.

This won't be a problem if you recreate the values
from strings each time. You're going to have to be
careful anyway, e.g. if you calculate some constants,
such as degreesToRadians = pi/180, you'll have to
make sure that you recalculate them with the desired
precision before rerunning the algorithm.

> Remember, the design documents for the spec state a general principle:
> the digits of a decimal value are *not* significands, rather they are
> exact, and all arithmetic on them is exact with the *result* being
> subject to optional rounding.

I don't see how this is relevant, because digits in
a character string are not digits of a decimal value
according to what we are meaning by decimal value
(i.e. an instance of Decimal). In other words, this
principle only applies *after* we have constructed a
Decimal instance.

-- 
Greg Ewing, Computer Science Dept, +--+
University of Canterbury,  | A citizen of NewZealandCorp, a   |
Christchurch, New Zealand  | wholly-owned subsidiary of USA Inc.  |
[EMAIL PROTECTED]  +--+


Re: [Python-Dev] Adventures with Decimal

2005-05-21 Thread Greg Ewing
Raymond Hettinger wrote:

> From the mists of Argentina, a Paladin set things right.  The literal
> 1.1 became representable and throughout the land the monster was
> believed to have been slain.

I don't understand. Isn't the monster going to pop
right back up again as soon as anyone does any
arithmetic with the number?

I don't see how you can regard what Decimal does
as "schoolbook arithmetic" unless the teacher is
reaching over your shoulder and blacking out any
excess digits after everything you do.

And if that's acceptable, I don't see how it
helps significantly to have just the very first
step -- turning the input into numbers -- be
exempt from this behaviour. If anything, people
are going to be even more confused. "But it
can obviously cope with 1.101,
so why does it give the wrong answer when I add
something to it?"

Greg



Re: [Python-Dev] Adventures with Decimal

2005-05-21 Thread Paul Moore
On 5/21/05, Raymond Hettinger [EMAIL PROTECTED] wrote:
> A root difference is that I believe we have both a compliant
> implementation (using Context.create_decimal) and a practical context
> free extension in the form of the regular Decimal constructor.

Please forgive an intrusion by someone who has very little knowledge
of floating point pitfalls.

My mental model of Decimal is "pocket calculator arithmetic" (I
believe this was originally prompted by Tim, as I had previously been
unaware that calculators used decimal hardware). In that model, fixed
precision is the norm - it's the physical number of digits the box
displays. And setting the context is an extremely rare operation - it
models swapping to a different device (something I do do in real life,
when I have an 8-digit box and am working with numbers bigger than
that - but with Decimal, the model is a 28-digit box by default, and
that's big enough for me!)

Construction models typing a number in, and this is where the model
breaks down. On a calculator, you physically cannot enter a number
with more digits than the precision, so converting a string with
excess precision doesn't come into it. And yet, Decimal('...') is the
obvious constructor, and should do what people expect.

In many ways, I could happily argue for an exception if the string has
too many digits. I could also argue for truncation (as that's what
many calculators actually do - ignore any excess typing). No
calculator rounds excess input, but I can accept it as what they might
well do if was physically possible. And of course, in a practical
sense, I'll be working with 28-digit precision, so I'll never hit the
situation in any case, and I don't care :-)

> A second difference is that you see harm in allowing any context free
> construction while I see greater harm from re-introducing representation
> error when that is what we were trying to fix in the first place.

The types of rounding errors (to use the naive term deliberately)
decimal suffers from are far more familiar to people because they use
calculators. With a calculator, I'm *used* to (1/3) * 3 not coming out
as exactly 1. And indeed we have

>>> (Decimal(1)/Decimal(3))*Decimal(3)
Decimal("0.9999999999999999999999999999")

Now try that with strings:

>>> (Decimal("1")/Decimal("3"))*Decimal("3")
Decimal("0.9999999999999999999999999999")
>>> (Decimal("1.0")/Decimal("3.0"))*Decimal("3.0")
Decimal("0.9999999999999999999999999999")

Nope, I don't see anything surprising.

After a bit more experimentation, I'm unable to make *anything*
surprise me, using either Decimal() or getcontext().create_decimal().
Of course, I've never bothered typing enough digits that I care about
(trailing zeroes don't count!) to trigger the rounding behaviour of
the constructor that matters here, but I don't ever expect to in real
life.

Apologies for the rambling discussion - it helped me as a non-expert
to understand what the issue is here. Having done so, I find that I am
unable to care. (Which is good, because I'm not the target audience
for the distinction :-))

So, to summarise, I can't see that a change would affect me at all. I
mildly favour Tim's position - because Raymond's seems to be based on
practicality for end users (where Tim's is based on convenience for
experts), and I can't see any practical effect on me to Tim's change.

OTOH, if end user impact were the driving force, I'd rather see
Decimal(string) raise an Inexact exception if the string would be
rounded:

>>> # Remember, my argument is that I'd never do the following in
>>> # practice, so this is solely for a highly unusual edge case!
>>> decimal.getcontext().prec = 5

>>> # This confuses me - it silently gives the wrong answer in my
>>> # mental model.
>>> Decimal("1.23456789") * 2
Decimal("2.4691")

>>> c = decimal.getcontext().copy()
>>> c.traps[decimal.Inexact] = True

>>> # This does what I expect - it tells me that I've done something wrong!
>>> c.create_decimal("1.23456789") * 2
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "C:\Apps\Python24\lib\decimal.py", line 2291, in create_decimal
    return d._fix(self)
  File "C:\Apps\Python24\lib\decimal.py", line 1445, in _fix
    ans = ans._round(prec, context=context)
  File "C:\Apps\Python24\lib\decimal.py", line 1567, in _round
    context._raise_error(Inexact, 'Changed in rounding')
  File "C:\Apps\Python24\lib\decimal.py", line 2215, in _raise_error
    raise error, explanation
decimal.Inexact: Changed in rounding

I hope this helps,
Paul.


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Raymond Hettinger
I sense a religious fervor about this so go ahead and do whatever you
want.

Please register my -1 for the following reasons:

a.) It re-introduces representation error into a module that worked so
hard to overcome that very problem.  The PEP explicitly promises that a
transformation from a literal involves no loss of information.
Likewise, it promises that context just affects operations' results.

b.) It is inconsistent with the idea of having the input specify its own
precision:  http://www2.hursley.ibm.com/decimal/decifaq1.html#tzeros

c.) It is both untimely and unnecessary.  The module is functioning
according to its tests, the specification test suite, and the PEP.
Anthony should put his foot down as this is NOT a bugfix, it is a change
in concept.  The Context.create_decimal() method already provides a
standard conforming implementation of the to-number conversion.
http://www.python.org/peps/pep-0327.html#creating-from-context .

d.) I believe it will create more problems than it would solve.  If
needed, I can waste an afternoon coming up with examples.  Likewise, I
think it will make the module more difficult to use (esp. when
experimenting with the effect of results of changing precision).

e.) It does not eliminate the need to use the plus operation to force
rounding/truncation when switching precision (see the sketch after this
list).

f.) To be consistent, one would need to force all operation inputs to
have the context applied before their use.  The standard specifically
does not do this and allows for operation inputs to be of a different
precision than the current context (that is the reason for the plus
operation).

g.) It steers people in the wrong direction.  Increasing precision is
generally preferable to rounding or truncating explicit inputs.  I
included two Knuth examples in the docs to show the benefits of bumping
up precision when needed. 

h.) It complicates the heck out of storage, retrieval, and input.
Currently, decimal objects have a meaning independent of context.  With
the proposed change, the meaning becomes context dependent.

i.) After having been explicitly promised by the PEP, discussed on the
newsgroup and python-dev, and released to the public, a change of this
magnitude warrants a newsgroup announcement and a comment period.
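
The sketch referred to in point (e): unary plus is the spec's explicit
way to apply the current context to an existing value (the digit string
is invented for illustration):

    from decimal import Decimal, getcontext

    x = Decimal("1.23456789012345678901234567890")  # kept exactly
    getcontext().prec = 5
    x = +x           # explicitly apply the new context to the old value
    print(x)         # 1.2346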



A use case:
---
The first use case that comes to mind is in the math.toRadians()
function.  When originally posted, there was an objection that the
constant degToRad was imprecise to the last bit because it was expressed
as the ratio of two literals that the compiler would have rounded,
resulting in a double rounding.

Link to rationale for the spec:
---
http://www2.hursley.ibm.com/decimal/IEEE-cowlishaw-arith16.pdf
See the intro to section 4, which says:  "The digits in decimal are not
significands; rather, the numbers are exact.  The arithmetic on those
numbers is also exact unless rounding to a given precision is
specified."

Link to the discussion relating decimal design rationale to schoolbook
math

---
I can't find this link.  If someone remembers, please post it.



Okay, I've said my piece.
Do what you will.



Raymond


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Raymond Hettinger
Addenda:

j.) The same rules would need to apply to all forms of the Decimal
constructor, so Decimal(someint) would also need to truncate/round if it
has more than precision digits -- likewise with Decimal(fromtuple) and
Decimal(fromdecimal).  All are problematic.  Integer conversions are
expected to be exact but may not be after the change.  Conversion from
another decimal should be idempotent but implicit rounding/truncation
will break that.  The fromtuple/totuple round-trip can get broken.  You
generally specify a tuple when you know exactly what you want.

k.) The biggest client of all these methods is the Decimal module
itself.  Throughout the implementation, the code calls the Decimal
constructor to create intermediate values.  Every one of those calls
would need to be changed to specify a context.  Some of those cases are
not trivially changed (for instance, the hash method doesn't have a
context but it needs to check to see if a decimal value is exactly an
integer so it can hash to that value).  Likewise, how do you use a
decimal value for a dictionary key when the equality check is context
dependent (change precision and lose the ability to reference an entry)?


Be careful with this proposed change.  It is a can of worms.
Better yet, don't do it.  We already have a context aware
constructor method if that is what you really want.



Raymond


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Nick Coghlan
Raymond Hettinger wrote:
> Be careful with this proposed change.  It is a can of worms.
> Better yet, don't do it.  We already have a context aware
> constructor method if that is what you really want.

And don't forget that 'context-aware-construction' can also be written:

    val = +Decimal(string_repr)

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://boredomandlaziness.blogspot.com


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Michael Chermside
[Tim and Raymond are slugging it out about whether Decimal constructors
 should respect context precision]

Tim, I find Raymond's arguments to be much more persuasive. (And that's
even BEFORE I read his 11-point missive.) I understood the concept that
*operations* are context-dependent, but decimal *objects* are not, and
thus it made sense to me that *constructors* were not context-dependent.

On the other hand, I am NOT a floating-point expert. Can you educate
me some? What is an example of a case where users would get wrong
results because constructors failed to respect context precision?

(By the way... even if other constructors begin to respect context
precision, the constructor from tuple should NOT -- it exists to provide
low-level access to the implementation. I'll express no opinion on the
constructor from Decimal, because I don't understand the issues.)

-- Michael Chermside



Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Tim Peters
[Michael Chermside]
> Tim, I find Raymond's arguments to be much more persuasive.
> (And that's even BEFORE I read his 11-point missive.) I
> understood the concept that *operations* are context-
> dependent, but decimal *objects* are not, and thus it made
> sense to me that *constructors* were not context-dependent.
>
> On the other hand, I am NOT a floating-point expert. Can you
> educate me some?

Sorry, I can't make more time for this now.  The short course is that
a module purporting to implement an external standard should not
deviate from that standard without very good reasons, and should make
an effort to hide whatever deviations it thinks it needs to indulge
(e.g., make them harder to spell).  This standard provides 100%
portable (across HW, across OSes, across programming languages)
decimal arithmetic, but of course that's only across
standard-conforming implementations.

That the decimal constructor here deviates from the standard appears
to be just an historical accident (despite Raymond's current
indefatigable rationalizations <wink>).  Other important
implementations of the standard didn't make this mistake; for example,
Java's BigDecimal(java.lang.String) constructor follows the rules
here:

http://www2.hursley.ibm.com/decimalj/deccons.html

Does it really need to be argued interminably that deviating from a
standard is a Big Deal?  Users pay for that eventually, not
implementors.  Even if a standard is wrong (and leaving aside that I
believe this standard asks for the right behavior here), users benefit
from cross-implementation predictability a lot more than they can
benefit from a specific implementation's non-standard idiosyncrasies.


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Raymond Hettinger
> Does it really need to be argued interminably that deviating from a
> standard is a Big Deal?

The word "deviate" inaccurately suggests that we do not have a compliant
method which, of course, we do.  There are two methods, one context
aware and the other context free.  The proposal is to change the
behavior of the context free version, treat it as a bug, and alter it in
the middle of a major release.  The sole argument resembles bible
thumping.

Now for a tale.  Once upon a time, one typed the literal 1.1 but ended
up with the nearest representable value, 1.1000000000000001.  The
representation error monster terrorized the land and there was much
sadness.

From the mists of Argentina, a Paladin set things right.  The literal
1.1 became representable and throughout the land the monster was
believed to have been slain.  With their guard down, no one thought
twice when a Zope sorcerer had the bright idea that long literals like
1.1001 should no longer be representable and should
implicitly jump to the nearest representable value, 1.1.  Thus the
monster arose like a phoenix.  Because it was done in a bugfix release,
without a PEP, and with no public comment, the citizens were caught
unprepared and faced an eternity dealing with the monster so valiantly
assailed by the Argentine.

Bible thumping notwithstanding, this change is both unnecessary and
undesirable.  Implicit rounding in the face of explicit user input to
the contrary is a bad idea.  Internally, the implementation relies on
the existing behavior so it is not easily changed.  Don't do it.



Raymond


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Facundo Batista
On 5/20/05, Michael Chermside [EMAIL PROTECTED] wrote:

> In other words, Java's behavior is much closer to the current behavior
> of Python, at least in terms of features that are user-visible. The
> default behavior in Java is to have infinite precision unless a context
> is supplied that says otherwise. So the constructor that takes a string
> converts it faithfully, while the constructor that takes a context
> obeys the context.

Are we hitting that point where the most important players (Python and
Java, ;) implement the standard almost fully compliantly, and then the
standard revises *that* behaviour?

For the record, I'm -0 for changing the current behaviour: I'd really
like to implement the Spec exactly, but I think the practical reasons
we have for not doing so are more important.

.Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Tim Peters
[Raymond Hettinger]
> The word "deviate" inaccurately suggests that we do not have
> a compliant method which, of course, we do.  There are two
> methods, one context aware and the other context free.  The
> proposal is to change the behavior of the context free version,
> treat it as a bug, and alter it in the middle of a major release.

I didn't suggest changing this for 2.4.2.  Although, now that you
mention it ... <wink>.

> The sole argument resembles bible thumping.

I'm sorry, but if you mentally reduced everything I've written about
this to "the sole argument", rational discussion has become impossible
here.

In the meantime, I've asked Mike Cowlishaw what his intent was, and
what the standard may eventually say.  I didn't express a preference
to him.  He said he'll think about it and try to get back to me by
Sunday.


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Guido van Rossum
It looks like if you pass in a context, the Decimal constructor still
ignores that context:

>>> import decimal as d
>>> d.getcontext().prec = 4
>>> d.Decimal("1.234567890123456789012345678901234567890123456789",
... d.getcontext())
Decimal("1.234567890123456789012345678901234567890123456789")

I think this is contrary to what some here have claimed (that you
could pass an explicit context to cause it to round according to the
context's precision).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Tim Peters
[Guido]
> It looks like if you pass in a context, the Decimal constructor
> still ignores that context:
>
> >>> import decimal as d
> >>> d.getcontext().prec = 4
> >>> d.Decimal("1.234567890123456789012345678901234567890123456789",
> ... d.getcontext())
> Decimal("1.234567890123456789012345678901234567890123456789")
>
> I think this is contrary to what some here have claimed (that you
> could pass an explicit context to cause it to round according to the
> context's precision).

I think Michael Chermside said that's how a particular Java
implementation works.

Python's Decimal constructor accepts a context argument, but the only
use made of it is to possibly signal a ConversionSyntax condition.


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Michael Chermside
Guido writes:
> It looks like if you pass in a context, the Decimal constructor still
> ignores that context

No, you just need to use the right syntax. The correct syntax for
converting a string to a Decimal using a context object is to use
the create_decimal() method of the context object:

>>> import decimal
>>> decimal.getcontext().prec = 4
>>> decimal.getcontext().create_decimal("1.234567890")
Decimal("1.235")

Frankly, I have no idea WHAT purpose is served by passing a context
to the decimal constructor... I didn't even realize it was allowed!

-- Michael Chermside



Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Guido van Rossum
> [Guido]
> > It looks like if you pass in a context, the Decimal constructor
> > still ignores that context:
> >
> > >>> import decimal as d
> > >>> d.getcontext().prec = 4
> > >>> d.Decimal("1.234567890123456789012345678901234567890123456789",
> > ... d.getcontext())
> > Decimal("1.234567890123456789012345678901234567890123456789")

[Tim]
> I think Michael Chermside said that's how a particular Java
> implementation works.
>
> Python's Decimal constructor accepts a context argument, but the only
> use made of it is to possibly signal a ConversionSyntax condition.

You know that, but Raymond seems confused.  From one of his posts (point (k)):

> Throughout the implementation, the code calls the Decimal
> constructor to create intermediate values.  Every one of those calls
> would need to be changed to specify a context.

But passing a context doesn't help for obtaining the desired precision.

PS I also asked Cowlishaw and he said he would ponder it over the
weekend. Maybe Raymond can mail him too. ;-)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Raymond Hettinger
[Michael Chermside]
> Frankly, I have no idea WHAT purpose is served by passing a context
> to the decimal constructor... I didn't even realize it was allowed!

Quoth the docs for the Decimal constructor:

    The context precision does not affect how many digits are stored.
    That is determined exclusively by the number of digits in value. For
    example, Decimal("3.00000") records all five zeroes even if the
    context precision is only three.

    The purpose of the context argument is determining what to do if
    value is a malformed string. If the context traps InvalidOperation,
    an exception is raised; otherwise, the constructor returns a new
    Decimal with the value of NaN.





Raymond


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Nick Coghlan
Michael Chermside wrote:
> Frankly, I have no idea WHAT purpose is served by passing a context
> to the decimal constructor... I didn't even realize it was allowed!

As Tim pointed out, it's solely to control whether ConversionSyntax
errors are exceptions or not:

Py> decimal.Decimal("a")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "c:\python24\lib\decimal.py", line 571, in __new__
    self._sign, self._int, self._exp = context._raise_error(ConversionSyntax)
  File "c:\python24\lib\decimal.py", line 2266, in _raise_error
    raise error, explanation
decimal.InvalidOperation
Py> context = decimal.getcontext().copy()
Py> context.traps[decimal.InvalidOperation] = False
Py> decimal.Decimal("a", context)
Decimal("NaN")

I'm tempted to suggest deprecating the feature, and say if you want
invalid strings to produce NaN, use the create_decimal() method of
Context objects.  That would mean the standard construction operation
becomes genuinely context-free.  Being able to supply a context, but
then have it be mostly ignored, is rather confusing.

Doing this may also fractionally speed up Decimal creation from strings in the 
normal case, as the call to getcontext() could probably be omitted from the 
constructor.

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://boredomandlaziness.blogspot.com


Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Raymond Hettinger
[Guido]
> You know that, but Raymond seems confused.  From one of his posts
> (point (k)):

[Raymond]
> Throughout the implementation, the code calls the Decimal
> constructor to create intermediate values.  Every one of those calls
> would need to be changed to specify a context.

[Facundo]
> The point here, I think, is that intermediate Decimal objects are
> created, and the whole module assumes that the context does not affect
> those intermediate values. If you change this and start using the
> context at Decimal creation time, you'll have to be aware of that in a
> lot of parts of the code.
>
> OTOH, you can change that and run the test cases, and see how badly it
> explodes (or not, ;).

Bingo!

That is point (k) from the big missive.


Raymond



Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Raymond Hettinger
> It looks like if you pass in a context, the Decimal constructor still
> ignores that context:
>
> >>> import decimal as d
> >>> d.getcontext().prec = 4
> >>> d.Decimal("1.234567890123456789012345678901234567890123456789",
> ... d.getcontext())
> Decimal("1.234567890123456789012345678901234567890123456789")
>
> I think this is contrary to what some here have claimed (that you
> could pass an explicit context to cause it to round according to the
> context's precision).

That's not the way it is done.  The context passed to the Decimal
constructor is *only* used to determine what to do with a malformed
string (whether to raise an exception or set a flag).

To create a decimal with a context, use the Context.create_decimal()
method:

>>> import decimal as d
>>> d.getcontext().prec = 4
>>> d.getcontext().create_decimal("1.234567890123456789012345678901234567890123456789")
Decimal("1.235")



Raymond



Re: [Python-Dev] Adventures with Decimal

2005-05-20 Thread Aahz
On Fri, May 20, 2005, Raymond Hettinger wrote:

> k.) The biggest client of all these methods is the Decimal module
> itself.  Throughout the implementation, the code calls the Decimal
> constructor to create intermediate values.  Every one of those calls
> would need to be changed to specify a context.  Some of those cases are
> not trivially changed (for instance, the hash method doesn't have a
> context but it needs to check to see if a decimal value is exactly an
> integer so it can hash to that value).  Likewise, how do you use a
> decimal value for a dictionary key when the equality check is context
> dependent (change precision and lose the ability to reference an entry)?

I'm not sure this is true, and if it is true, I think the Decimal module
is poorly implemented.  There are two uses for the Decimal() constructor:

* copy constructor for an existing Decimal instance (or passing in a
tuple directly to mimic the barebones internal)

* conversion constructor for other types, such as string

Are you claiming that the intermediate values are being constructed as
strings and then converted back to Decimal objects?  Is there something
else I'm missing?  I don't think Tim is claiming that the copy
constructor needs to obey context, just string conversions.

Note that comparison is not context-dependent, because context only
applies to results of operations, and the spec's comparison operator
(equivalent to cmp()) only returns (-1,0,1) -- guaranteed to be within
the precision of any context.  ;-)
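
A quick sketch of that point with the module as shipped (the values are
invented for illustration):

    from decimal import Decimal, getcontext

    getcontext().prec = 3
    a = Decimal("1.000001")
    b = Decimal("1.000002")
    print(a == b)    # False -- the comparison sees the exact values,
                     # not values rounded to the 3-digit context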

Note that hashing is not part of the standard, so whatever makes most
sense in a Pythonic context would be appropriate.  It's perfectly
reasonable for Decimal's __int__ method to be unbounded because Python
ints are unbounded.

All these caveats aside, I don't have a strong opinion about what we
should do.  Overall, my sentiments are with Tim that we should fix this,
but my suspicion is that it probably doesn't matter much.
-- 
Aahz ([EMAIL PROTECTED])   * http://www.pythoncraft.com/

The only problem with Microsoft is they just have no taste. --Steve Jobs


Re: [Python-Dev] Adventures with Decimal

2005-05-19 Thread Facundo Batista
On 5/18/05, Raymond Hettinger [EMAIL PROTECTED] wrote:


> >>> from decimal import getcontext
> >>> context = getcontext()
> >>> x = context.create_decimal('3.104')
> >>> y = context.create_decimal('2.104')
> >>> z = context.create_decimal('0.000')
> >>> context.prec = 3
> >>> x + y
> Decimal("5.21")
> >>> x + z + y
> Decimal("5.20")

My point here is to always remind everybody that Decimal solves the
problem with binary floating point, but not with representation
issues. If you don't have enough precision (for example to represent
one third), you'll get mysterious results.

That's why, IMO, the Spec provides two traps, one for Rounded, and one
for Inexact, to be aware of what exactly is happening.
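
For instance (a sketch; the precision and input are arbitrary), trapping
Inexact makes the information loss loud instead of silent:

    import decimal

    ctx = decimal.getcontext()
    ctx.prec = 5
    ctx.traps[decimal.Inexact] = True
    try:
        ctx.create_decimal("1.2345678")   # would have to drop digits
    except decimal.Inexact:
        print("digits were about to be thrown away")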


> As for why the normal Decimal constructor is context free, PEP 327
> indicates discussion on the subject, but who made the decision and why
> is not clear.

There was no decision. Originally the context didn't get applied at
creation time. And then the situation arose where it would be nice
to be able to apply it at creation time (for situations when it would
be costly not to), so a method in the context was born.

.Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/


Re: [Python-Dev] Adventures with Decimal

2005-05-19 Thread Tim Peters
[Raymond Hettinger]
> For brevity, the above example used the context free
> constructor, but the point was to show the consequence
> of a precision change.

Yes, I understood your point.  I was making a different point:
changing precision isn't needed _at all_ to get surprises from a
constructor that ignores context.  Your example happened to change
precision, but that wasn't essential to getting surprised by feeding
strings to a context-ignoring Decimal constructor.  In effect, this
creates the opportunity for everyone to get surprised by something only
experts should need to deal with.

There seems to be an unspoken "wow, that's cool!" kind of belief that
because Python's Decimal representation is _potentially_ unbounded,
the constructor should build an object big enough to hold any argument
exactly (up to the limit of available memory).  And that would be
appropriate for, say, an unbounded rational type -- and is appropriate
for Python's unbounded integers.

But Decimal is a floating type with fixed (albeit user-adjustable)
precision, and ignoring that mixes arithmetic models in a
fundamentally confusing way.  I would have no objection to a named
method that builds a "big as needed to hold the input exactly" Decimal
object, but it shouldn't be the behavior of the
everyone-uses-it constructor.  It's not an oversight that the IBM
standard defines no operations that ignore context (and note that
string->float is a standard operation):  it's trying to provide a
consistent arithmetic, all the way from input to output.  Part of
consistency is applying the rules everywhere, in the absence of
killer-strong reasons to ignore them.

Back to your point, maybe you'd be happier if a named (say)
apply_context() method were added?  I agree unary plus is a
funny-looking way to spell it (although that's just another instance
of applying the same rules to all operations).


Re: [Python-Dev] Adventures with Decimal

2005-05-19 Thread Raymond Hettinger
[Tim suggesting that I'm clueless and dazzled by sparkling lights]
> There seems to be an unspoken "wow, that's cool!" kind of belief
> that because Python's Decimal representation is _potentially_
> unbounded, the constructor should build an object big enough to
> hold any argument exactly (up to the limit of available memory).
> And that would be appropriate for, say, an unbounded rational
> type -- and is appropriate for Python's unbounded integers.

I have no such thoughts but do strongly prefer the current design. I
recognize that it allows a user to specify an input at a greater
precision than the current context (in fact, I provided the example).

The overall design of the module and the spec is to apply context to the
results of operations, not their inputs.  In particular, the spec
recognizes that contexts can change and rather than specifying automatic
or implicit context application to all existing values, it provides the
unary plus operation so that such an application is explicit.  The use
of extra digits in a calculation is not invisible as the calculation
will signal Rounded and Inexact (if non-zero digits are thrown away).

One of the original motivating examples was "schoolbook arithmetic"
where the input string precision is incorporated into the calculation.
IMO, input truncation/rounding is inconsistent with that motivation.
Likewise, input rounding runs contrary to the basic goal of eliminating
representation error.

With respect to integration with the rest of Python (everything beyond
that spec but needed to work with it), I suspect that altering the
Decimal constructor is fraught with issues such as the
string-to-decimal-to-string roundtrip becoming context dependent.  I
haven't thought it through yet but suspect that it does not bode well
for repr(), pickling, shelving, etc.  Likewise, I suspect that traps
await multi-threaded or multi-context apps that need to share data.
Also, adding another step to the constructor is not going to help the
already disastrous performance.

I appreciate efforts to make the module as idiot-proof as possible.
However, that is a pipe dream.  By adopting and exposing the full
standard instead of the simpler X3.274 subset, using the module is a
non-trivial exercise and, even for experts, is a complete PITA.  Even a
simple fixed-point application (money, for example) requires dealing
with quantize(), normalize(), rounding modes, signals, etc.  By default,
outputs are not normalized so it is difficult even to recognize what a
zero looks like.  Just getting output without exponential notation is
difficult.  If someone wants to craft another module to wrap around and
candy-coat the Decimal API, I would be all for it.  Just recognize that
the full spec doesn't have a beginner mode -- for better or worse, we've
simulated a hardware FPU.
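
For example, the fixed-point boilerplate being alluded to looks roughly
like this (a sketch; the prices are invented):

    from decimal import Decimal, ROUND_HALF_UP

    price = Decimal("19.99")
    total = price * 3                     # exact: 59.97
    # pin the result to two places with an explicit rounding mode
    print(total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))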

Lastly, I think it is a mistake to make a change at this point.  The
design of the constructor survived all drafts of the PEP,
comp.lang.python discussion, python-dev discussion, all early
implementations, sandboxing, the Py2.4 alpha/beta, cookbook
contributions, and several months in the field.  I say we document a
recommendation to use Context.create_decimal() and get on with life.



Clueless in Boston



P.S.  With 28 digit default precision, the odds of this coming up in
practice are slim (when was the last time you typed in a floating point
value with more than 28 digits; further, if you had, would it have
ruined your day if your 40 digits were not first rounded to 28 before
being used?).  IOW, the bug tracker lists hundreds of bigger fish to fry
without having to change a published API (pardon the mixed metaphor).


Re: [Python-Dev] Adventures with Decimal

2005-05-19 Thread Tim Peters
Sorry, I simply can't make more time for this.  Shotgun mode:

[Raymond]
> I have no such thoughts but do strongly prefer the current
> design.

How can you strongly prefer it?  You asked me whether I typed floats
with more than 28 significant digits.  Not usually <wink>.  Do you?
If you don't either, how can you strongly prefer a change that makes
no difference to what you do?

> ...
> The overall design of the module and the spec is to apply
> context to the results of operations, not their inputs.

But string->float is an _operation_ in the spec, as it has been since
1985 in IEEE-754 too.  The float you get is the result of that
operation, and is consistent with normal numeric practice going back
to the first time Fortran grew a distinction between double and single
precision.  There too the common practice was to write all literals as
double-precision, and leave it to the compiler to round off excess
bits if the assignment target was of single precision.  That made it
easy to change working precision via fiddling a single IMPLICIT (a
kind of type declaration) line.  The same kind of thing would be
pleasantly applicable for decimal too -- if the constructor followed
the rules.
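
A rough rendering of that discipline in Python (lit() is a hypothetical
helper, not part of the module):

    import decimal

    def lit(s):
        # one place to set the "working precision" of every literal
        return decimal.getcontext().create_decimal(s)

    decimal.getcontext().prec = 8          # fiddle this single line to
    x = lit("2.718281828459045235360287")  # rework all literals
    print(x)                               # 2.7182818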

> In particular, the spec recognizes that contexts can change
> and rather than specifying automatic or implicit context
> application to all existing values, it provides the unary plus
> operation so that such an application is explicit.  The use
> of extra digits in a calculation is not invisible as the
> calculation will signal Rounded and Inexact (if non-zero digits
> are thrown away).

Doesn't change that the standard rigorously specifies how strings are
to be converted to decimal floats, or that our constructor
implementation doesn't do that.

> One of the original motivating examples was "schoolbook
> arithmetic" where the input string precision is incorporated
> into the calculation.

Sorry, doesn't ring a bell to me.  Whose example was this?

> IMO, input truncation/rounding is inconsistent with that
> motivation.

Try keying more digits into your hand calculator than it can hold <0.5 wink>.

> Likewise, input rounding runs contrary to the basic goal of
> eliminating representation error.

It's no surprise that an exact value containing more digits than
current precision gets rounded.  What _is_ surprising is that the
decimal constructor doesn't follow that rule, instead making up its
own rule.  It's an ugly inconsistency at best.

> With respect to integration with the rest of Python (everything
> beyond that spec but needed to work with it), I suspect that
> altering the Decimal constructor is fraught with issues such
> as the string-to-decimal-to-string roundtrip becoming context
> dependent.

Nobody can have a reasonable expectation that string -> float ->
string is an identity for any fixed-precision type across all strings.
That's just unrealistic.  You can expect string -> float -> string to
be an identity if the string carries no more digits than current
precision.  That's how a bounded type works.  Trying to pretend it's
not bounded in this one case is a conceptual mess.

> I haven't thought it through yet but suspect that it does not
> bode well for repr(), pickling, shelving, etc.

The spirit of the standard is always to deliver the best possible
approximation consistent with current context.  Unpickling and
unshelving should play that game too.  repr() has a special desire for
round-trip fidelity.

> Likewise, I suspect that traps await multi-threaded or multi-
> context apps that need to share data.

Like what?  Thread-local context precision is a reality here, going
far beyond just string->float.

> Also, adding another step to the constructor is not going to
> help the already disastrous performance.

(1) I haven't found it to be a disaster.  (2) Over the long term, the
truly speedy implementations of this standard will be limited to a
fixed set of relatively small precisions (relative to, say, 100,
not to 28 <wink>).  In that world it would be unboundedly more
expensive to require the constructor to save every bit of every input:
rounding string->float is a necessity for speedy operation over the
long term.

> I appreciate efforts to make the module as idiot-proof as
> possible.

That's not my interest here.  My interest is in a consistent,
std-conforming arithmetic, and all fp standards since IEEE-754
recognized that string->float is an operation much like every other
fp operation.  Consistency helps by reducing complexity.  Most users
will never bump into this, and experts have a hard enough job without
gratuitous deviations from a well-defined spec.  What's the _use case_
for carrying an unbounded amount of information into a decimal
instance?  It's going to get lost upon the first operation anyway.

> However, that is a pipe dream.  By adopting and exposing the
> full standard instead of the simpler X3.274 subset, using the
> module is a non-trivial exercise and, even for experts, is a
> complete PITA.

Rigorous numeric programming is a 

Re: [Python-Dev] Adventures with Decimal

2005-05-19 Thread Guido van Rossum
I know I should stay out of here, but isn't Decimal() with a string
literal as argument a rare case (except in examples)? It's like
float() with a string argument -- while you *can* write float("1.01"),
nobody does that. What people do all the time is parse a number out of
some larger context into a string, and then convert the string to a
float by passing it to float(). I assume that most uses of the
Decimal() constructor will be similar. In that case, it makes total
sense to me that the context's precision should be used, and if the
parsed string contains an insane number of digits, it will be rounded.

I guess the counter-argument is that because we don't have Decimal
literals, Decimal("12345") is used as a pseudo-literal, so it actually
occurs more frequently than float("12345"). Sure. But the same
argument applies: if I write a floating point literal in Python (or C,
or Java, or any other language) with an insane number of digits, it
will be rounded.

So, together with the 28-digit default precision, I'm fine with
changing the constructor to use the context by default. If you want
all the precision given in the string, even if it's a million digits,
set the precision to the length of the string before you start; that's
a decent upper bound. :-)
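
Taken literally, that suggestion is (a sketch; the digit string is
invented):

    import decimal

    s = "1.23456789012345678901234567890123456789"
    decimal.getcontext().prec = len(s)    # crude but sufficient upper bound
    d = decimal.getcontext().create_decimal(s)
    print(d)                              # every digit preserved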

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)