Kristján V. Jónsson wrote:
> I can't see how this situation is any different from the re-use of
> low ints. There is no fundamental law that says that ints below 100
> are more common than others, yet experience shows that this is so,
> and so they are reused.
There are two important differences:

1. It is possible to determine in constant time whether an int value is
   "special", and also to fetch the singleton value in constant time;
   the same isn't possible for floats.

2. There may be a loss of precision in reusing an existing value
   (although I'm not certain that this could really happen). For
   example, could it be that two values compare equal under ==, yet
   are different values? I know this can't happen for integers, so I
   feel much more comfortable with that cache.

> Rather than view this as a programming error, why not simply
> accept that this is a recurring pattern and adjust Python to be more
> efficient when faced with it? Surely a lot of karma lies that way?

I'm worried about the penalty this causes in terms of run-time cost.
Also, how do you choose which values to cache?

Regards,
Martin

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
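[A sketch of the constant-time property mentioned in point 1, assuming CPython's small-int cache; the exact cached range is an implementation detail, not a language guarantee. int("...") is used only to sidestep compile-time constant sharing.]

```python
# CPython keeps a cache of small ints (-5 through 256 in recent
# versions): any operation producing one of these values returns the
# shared singleton, so checking "is this value special?" and fetching
# the cached object are both constant-time range checks.

small_a = int("100")
small_b = int("100")
print(small_a is small_b)   # True on CPython: both are the cached singleton

big_a = int("300")
big_b = int("300")
print(big_a is big_b)       # typically False: 300 lies outside the cache
```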
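[On the question in point 2: for floats the answer is in fact yes. IEEE-754 distinguishes positive and negative zero, yet == treats them as equal, so a float cache keyed on == could silently substitute one for the other. A minimal illustration:]

```python
# Two distinct float values that compare equal under ==.
import math

a = 0.0
b = -0.0

print(a == b)                  # True: == considers them equal
print(math.copysign(1.0, a))   # 1.0
print(math.copysign(1.0, b))   # -1.0: the sign bit differs
```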