Yury Selivanov added the comment:

> But the next question is then the overhead on the "slow" path, which
> requires a benchmark too! For example, use a subtype of int.
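A minimal sketch of the kind of slow-path benchmark being asked about: timing the same arithmetic on a plain int and on an int subtype, since an int subclass cannot take any int fast path and must go through full binary-op dispatch. The names here (MyInt, bench, N) are illustrative, not from any patch, and the absolute numbers will vary by interpreter build and CPU.

```python
import timeit

class MyInt(int):
    """An int subclass: binary ops on it go through the slow path."""
    pass

N = 10_000

def bench(x):
    total = 0
    for _ in range(N):
        total = total + x * 2  # exercises binary-op dispatch on x
    return total

fast = timeit.timeit(lambda: bench(1), number=100)
slow = timeit.timeit(lambda: bench(MyInt(1)), number=100)
print(f"plain int: {fast:.4f}s  int subtype: {slow:.4f}s")
```

Comparing the two timings shows whether a fast path for exact ints penalizes subclasses.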
telco is such a benchmark (although it's very unstable). It uses decimals extensively. I've tested it many times on three different CPUs, and it doesn't seem to become any slower.

> Discarding Numpy because it's "overkill" sounds misguided to me. That's
> like discarding asyncio because it's "less overkill" to write your own
> select() loop. It's often far more productive to use the established,
> robust, optimized library rather than tweak your own low-level code.

Don't get me wrong, numpy is simply amazing! But if you have a 100,000-line application that happens to have a few FP-related calculations here and there, you won't use numpy (unless you had experience with it before).

My opinion on this: numeric operations in Python (and any general-purpose language) should be as fast as we can make them.

> > Python 2 is much faster than Python 3 on any kind of numeric
> > calculations.
>
> Actually, it shouldn't really be faster on FP calculations, since the
> float object hasn't changed (as opposed to int/long). So I'm skeptical
> of FP-heavy code that would have been made slower by Python 3 (unless
> there's also integer handling in that, e.g. indexing).

But it is faster. That's visible on many benchmarks. Even simple timeit one-liners can show that. Probably it's because such benchmarks usually combine floats and ints, i.e. "2 * smth" instead of "2.0 * smth".

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21955>
_______________________________________
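A quick way to check the "2 * smth" vs "2.0 * smth" hypothesis mentioned above is to time both expressions with timeit: the int constant forces mixed int/float arithmetic, while the float constant keeps the operation purely in floats. This is only a sketch of the measurement setup; the exact timings depend on the interpreter version and CPU.

```python
import timeit

# Mixed int/float multiplication vs pure float multiplication.
mixed = timeit.timeit("2 * smth", setup="smth = 1.0", number=1_000_000)
pure = timeit.timeit("2.0 * smth", setup="smth = 1.0", number=1_000_000)
print(f"2 * smth:   {mixed:.4f}s")
print(f"2.0 * smth: {pure:.4f}s")
```

Both expressions produce the same float result; any timing gap would come from the int operand having to be coerced (or dispatched) before the float multiplication.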