On 2/22/2018 8:36 PM, Python wrote:
On Thu, Feb 22, 2018 at 11:29:53PM +1100, Chris Angelico wrote:
The idea of the Fibonacci benchmark is to test how effectively an
implementation manages large numbers of recursive function calls. Thus,
fib(36) would normally involve 48,315,633 calls.

This version does only 37, giving a misleading impression.

Not overly misleading; the point of it is to show how trivially easy
it is to memoize a function in Python.

This does not appear to me at all to be the point of the article.  The
point of the article seems to be that the Julia benchmarks compare
unfairly the performance of Python to Julia, because they do not use
algorithms that suit "the best way to use Python."  But I have to
agree with bartc: that is not at all the point of the benchmarks.  The
point of the benchmarks is to compare the performance of a particular
algorithm or set of algorithms across a variety of languages.

It's fine to say that this method of computing Fibonacci sequences is
inefficient; but anyone who's spent any time on the problem knows that
recursive function calls are not the most efficient way to calculate
them in most languages.  So it must follow logically that the point of
the benchmark is not to prove that Julia is faster than Python at
solving Fibonacci sequences, but rather that Julia is faster than
Python at solving the class of problems that would be implemented in a
similar manner to this particular implementation of Fibonacci.

It is no secret that Python's interpreted function calls are slower than function calls compiled to machine code, and that Python's arbitrary-precision integer arithmetic is slower than machine-int arithmetic. So there is almost no point in combining both in a naive, drastically inefficient algorithm and declaring that Python is slower.
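For concreteness, the naive recursion under discussion can be instrumented with a call counter (a sketch; the benchmark's exact code may differ):

```python
calls = 0

def fib(n):
    """Naive doubly recursive Fibonacci (fib(0)=0, fib(1)=1),
    instrumented to count how many calls the recursion makes."""
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

result = fib(20)   # 21,891 calls; in general the count is 2*fib(n+1) - 1,
                   # which for n = 36 gives the 48,315,633 quoted above.
```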

As to the vague 'class of problems implemented in a similar manner': any function f of count N that depends on values of f for counts < N can be memoized in Python the same way fibonacci is. Everything said about Python versus Julia for fib applies to such similar problems. Much the same is true for functions that depend on a pair of counts, such as the number of combinations of N things taken K at a time; the time benefit per unit of space used is just a bit smaller.
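With functools.lru_cache, the memoization is a single decorator line, and the same pattern works unchanged for a two-argument function such as combinations (a sketch, not the article's exact code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoized Fibonacci: each fib(k) is computed once, so fib(36)
    makes 37 distinct calls instead of 48,315,633."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def comb(n, k):
    """Combinations C(n, k) via Pascal's rule, memoized on the pair (n, k)."""
    if k == 0 or k == n:
        return 1
    return comb(n - 1, k - 1) + comb(n - 1, k)
```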

Now consider a general recursive problem: a graph with N nodes and E edges, and a recursive node function whose value at a node depends on the function's values at some set args(node) of other nodes. Example: the minimum length of a path from node a to node b. In the field of graph algorithms, it is completely routine to save f(node) for intermediate nodes once calculated. The idea of naively implementing the recursive definition, in any non-trivial practical situation, without saving intermediate values, is pretty ludicrous.
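The shortest-path example can be memoized the same way, provided the recursion is well-founded; the sketch below assumes an acyclic graph (with cycles one would reach for Dijkstra or similar instead), and the graph itself is a made-up example:

```python
from functools import lru_cache

# Hypothetical weighted DAG: EDGES[node] maps each successor to an edge weight.
EDGES = {
    "a": {"b": 1, "c": 4},
    "b": {"c": 2, "d": 5},
    "c": {"d": 1},
    "d": {},
}

def min_path_length(start, goal):
    """Minimum total weight of a path from start to goal in a DAG,
    via the recursive definition with memoized intermediate nodes."""
    @lru_cache(maxsize=None)
    def f(node):
        if node == goal:
            return 0
        # Shortest path from node: best edge plus best continuation.
        return min((w + f(nxt) for nxt, w in EDGES[node].items()),
                   default=float("inf"))
    return f(start)
```

Each f(node) is computed once and reused, exactly the saving of intermediate values described above.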

Fib can be viewed as a function on a directed graph where all but the 2 base nodes have two incoming and two outgoing edges.

Terry Jan Reedy

