[issue4024] float(0.0) singleton
Changes by Kristján Valur Jónsson <krist...@ccpgames.com>:

superseder: -> Intern certain integral floats for memory savings and performance

___
Python tracker <rep...@bugs.python.org>
http://bugs.python.org/issue4024
___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Terry J. Reedy <tjre...@udel.edu> added the comment:

I have 3 comments for future readers who might want to reopen.

1) This would have little effect on calculation with numpy.

2) According to sys.getrefcount, when '' appears, 3.0.1 has 1200 duplicate references to 0 and 1 alone, and about 2000 to all of them. So small int caching really needs to be done by the interpreter. Are there *any* duplicate internal references to 0.0 that would help justify this proposal?

3) It is (or certainly was) standard in certain Fortran circles to NAME constants as Raymond suggested. One reason given was to ease conversion between single and double precision. In Python, named constants in functions would ease conversion between, for instance, float and decimal.

nosy: +tjreedy
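Terry's point about interpreter-held references can be illustrated from Python itself. This is a small sketch (CPython-specific behavior): small ints are cached by the interpreter, while every runtime-constructed 0.0 is a distinct object, which is the duplication the proposal targets.

```python
import sys

# CPython caches small ints (-5..256), so every occurrence of 0 is the
# same object; sys.getrefcount reports how widely it is shared.
a = int("0")          # built dynamically to avoid constant folding
b = int("0")
print(a is b)         # True: both are the one cached int 0
print(sys.getrefcount(0) > 2)  # True: the interpreter holds many references

# floats get no such cache: each 0.0 parsed at runtime is a new object
x = float("0.0")
y = float("0.0")
print(x is y)         # False
```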
Changes by STINNER Victor <victor.stin...@haypocalc.com>:

nosy: -haypo
Changes by Raymond Hettinger <rhettin...@users.sourceforge.net>:

resolution: -> rejected
status: open -> closed
Georg Brandl <[EMAIL PROTECTED]> added the comment:

Will it correctly distinguish between +0.0 and -0.0?

nosy: +georg.brandl
lplatypus <[EMAIL PROTECTED]> added the comment:

No, it won't distinguish between +0.0 and -0.0 in its present form, because these two have the same value according to the C equality operator. This should be easy to adjust, e.g. we could exclude -0.0 by changing the comparison

    if (fval == 0.0)

into

    static double positive_zero = 0.0;
    ...
    if (!memcmp(&fval, &positive_zero, sizeof(double)))
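The distinction being discussed can be demonstrated from Python: +0.0 and -0.0 compare equal, yet their underlying bit patterns differ, which is exactly what a byte-wise comparison of the raw double would detect.

```python
import math
import struct

pos, neg = 0.0, -0.0
print(pos == neg)  # True: IEEE equality cannot see the sign bit

# the bit patterns differ, which a memcmp on the raw bytes would catch
print(struct.pack("<d", pos) == struct.pack("<d", neg))  # False

# copysign propagates the sign bit even for zeros
print(math.copysign(1.0, neg))  # -1.0
```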
STINNER Victor <[EMAIL PROTECTED]> added the comment:

Maybe we need more hardcoded floats, i.e. a cache of common float values. Example of pseudocode:

    def is_cacheable_float(value):
        return abs(value) in (0.0, 1.0, 2.0)

    def create_float(value):
        try:
            return cache[value]
        except KeyError:
            obj = float(value)
            if is_cacheable_float(value):
                cache[value] = obj
            return obj

Since some (most?) programs don't use float, the cache is created on demand and not at startup. Since the goal is speed, only a benchmark can answer my question (is Python faster using such a cache?) ;-) Instead of is_cacheable_float(), an RCU cache might be used.

nosy: +haypo
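A runnable sketch of the cache Victor proposes, with the helper named consistently (all names here are illustrative, not from the patch). One subtlety from earlier in the thread: -0.0 compares and hashes equal to 0.0, so it must be kept out of the cache or it would silently become +0.0.

```python
import math

_cache = {}

def _is_cacheable(value):
    # cache only a few common values; exclude -0.0, which compares and
    # hashes equal to 0.0 but must remain a distinct object
    if value == 0.0 and math.copysign(1.0, value) < 0.0:
        return False
    return abs(value) in (0.0, 1.0, 2.0)

def create_float(value):
    """Return a shared object for cached values, an ordinary float otherwise."""
    if _is_cacheable(value):
        try:
            return _cache[value]
        except KeyError:
            obj = float(value)
            _cache[value] = obj
            return obj
    return float(value)
```

Because -0.0 is excluded, `create_float(-0.0)` still yields a value whose sign bit is negative, answering Georg's concern for this sketch at least.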
Christian Heimes <[EMAIL PROTECTED]> added the comment:

Please use copysign(1.0, fval) == 1.0 instead of your memcmp trick. It's the canonical way to check for negative zero. copysign() is always available because we have our own implementation if the platform doesn't provide one.

We might also want to special-case 1.0 and -1.0.

I have to check with Guido and Barry whether we can get the optimization into 2.6.1 and 3.0.1. It may have to wait until 2.7 and 3.0.

assignee: -> christian.heimes
nosy: +christian.heimes
priority: -> normal
versions: +Python 3.0
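The copysign check Christian describes translates directly to Python; `is_positive_zero` is a helper name invented here for illustration, not part of the patch.

```python
import math

def is_positive_zero(x):
    # copysign propagates the sign bit even for zeros, where the
    # comparison operators cannot distinguish +0.0 from -0.0
    return x == 0.0 and math.copysign(1.0, x) == 1.0

print(is_positive_zero(0.0))   # True
print(is_positive_zero(-0.0))  # False
print(is_positive_zero(5.0))   # False
```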
Raymond Hettinger <[EMAIL PROTECTED]> added the comment:

I question whether this should be done at all. Making the creation of a float even slightly slower is bad. This is on the critical path for all floating-point intensive computations.

If someone really cares about the memory savings, it is not hard to take a single instance of a float and use it everywhere:

    ZERO = 0.0
    arr = [ZERO if x == 0.0 else x for x in arr]

That technique also works for 1.0 and -1.0 and pi and other values that may commonly occur in a particular app. Also, the technique is portable to implementations other than CPython.

I don't mind this sort of optimization for immutable containers but feel that floats are too granular. Special cases aren't special enough to break the rules.

If the OP is insistent, then at least this should be discussed with the numeric community, who will have better insight into whether the speed/space trade-off makes sense in applications beyond the OP's original case. Tim, any insights?

assignee: christian.heimes -> tim_one
nosy: +rhettinger, tim_one
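Raymond's application-level interning idiom can be demonstrated end to end; the data and helper name below are made up for the example.

```python
ZERO = 0.0

def intern_zeros(arr):
    # replace every zero-valued element with one shared object,
    # so the list holds many references instead of many floats
    return [ZERO if x == 0.0 else x for x in arr]

data = [float(s) for s in "0.0 3.5 0.0 0.0 7.1".split()]
shared = intern_zeros(data)
print(sum(1 for x in shared if x is ZERO))  # 3
```

Note that `x == 0.0` also matches -0.0, so any negative zeros in the input would be collapsed into the positive ZERO by this idiom.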
New submission from lplatypus <[EMAIL PROTECTED]>:

Here is a patch to make PyFloat_FromDouble(0.0) always return the same float instance. This is similar to the existing optimization in PyInt_FromLong(x) for small x.

My own motivation is that the patch reduces memory by several megabytes for a particular in-house data processing script, but I think that it should be generally useful assuming that zero is a very common float value, and at worst almost neutral when this assumption is wrong. The minimal performance impact of the test for zero should be easily recovered by fewer memory allocation calls. I am happy to look into benchmarking if you require empirical performance data.

components: Interpreter Core
files: python_zero_float.patch
keywords: patch
messages: 74224
nosy: ldeller
severity: normal
status: open
title: float(0.0) singleton
type: resource usage
versions: Python 2.6
Added file: http://bugs.python.org/file11686/python_zero_float.patch
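A rough way to see the kind of saving the patch targets (object sizes are CPython- and platform-specific): duplicated zeros each pay full object overhead, while a shared instance costs one object plus cheap references.

```python
import sys

# each distinct float carries full object overhead (typically 24 bytes
# on 64-bit CPython), so duplicated zeros pay for every copy
zeros = [float("0.0") for _ in range(1000)]
print(zeros[0] is zeros[1])          # False: all distinct objects
print(sys.getsizeof(zeros[0]))       # per-object cost of one float

# with a shared instance, the list stores 1000 references to one object
zero = 0.0
shared = [zero] * 1000
print(shared[0] is shared[999])      # True
```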