"Martin v. Löwis" <[EMAIL PROTECTED]> writes:

> Kristján V. Jónsson schrieb:
>> I can't see how this situation is any different from the re-use of
>> low ints.  There is no fundamental law that says that ints below 100
>> are more common than others, yet experience shows that this is so,
>> and so they are reused.
>
> There are two important differences:
> 1. it is possible to determine whether the value is "special" in
>    constant time, and also fetch the singleton value in constant
>    time for ints; the same isn't possible for floats.

I don't think you mean "constant time" here, do you?  I think most of
the code posted so far has been constant time, at least in terms of
instruction count, though some of it might indeed be fairly slow on
some processors -- conversion from double to integer on the PowerPC
involves a round trip through memory, for example.  Even so,
everything should be fairly efficient compared to an allocation, even
with PyMalloc.
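
To be concrete, the sort of test I mean is something like this sketch
(the function name and the choice of cached range are invented for
illustration; this is not the code that was actually posted):

    /* A hypothetical "is this double one of the cached values?"
       test, assuming the cache would hold the non-negative
       integral values below 100.  Constant instruction count:
       one range check plus one double->int->double round trip
       (the conversion that is slow on the PowerPC). */
    static int
    is_cacheable_value(double d)
    {
        int i;
        if (!(d >= 0.0 && d < 100.0))   /* also rejects NaNs */
            return 0;
        i = (int)d;                     /* truncate */
        return (double)i == d;          /* exact integers only;
                                           note -0.0 passes too */
    }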

> 2. it may be that there is a loss of precision in reusing an existing
>    value (although I'm not certain that this could really happen).
>    For example, could it be that two values compare equal under
>    ==, yet are different values? I know this can't happen for
>    integers, so I feel much more comfortable with that cache.

I think the only case is the two zeros: +0.0 and -0.0 compare equal
under ==, yet are distinct values, which is unfortunate given that
zero is the most compelling value to cache...

I don't know of a reliable and fast way to distinguish +0.0 from -0.0.
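The closest I can get is to compare bit patterns -- a sketch, assuming
IEEE-754 doubles, which is exactly the portability assumption I'm
unsure about (C99's signbit() would also do it, but we can't assume
C99; the function name here is mine):

    #include <string.h>

    /* +0.0 and -0.0 compare equal with ==, but under IEEE 754
       they differ in the sign bit, so their byte patterns
       differ.  memcmp sidesteps the == comparison entirely. */
    static int
    is_minus_zero(double d)
    {
        double plus_zero = 0.0;
        return d == 0.0 && memcmp(&d, &plus_zero,
                                  sizeof(double)) != 0;
    }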

Cheers,
mwh

-- 
  The bottom tier is what a certain class of wanker would call
  "business objects" ...                      -- Greg Ward, 9 Dec 1999