On Thu, 22 Feb 2018 17:53:30 +0000, bartc wrote:
> On 22/02/2018 16:00, Steven D'Aprano wrote:
>> On Thu, 22 Feb 2018 12:03:09 +0000, bartc wrote:
>>> The idea of the Fibonacci benchmark is to test how effectively an
>>> implementation manages large numbers of recursive function calls.
>>> Then, fib(36) would normally involve 48,315,633 calls.
>>> This version does only 37, giving a misleading impression.
>> Who cares if it only does 37? There is *absolutely no benefit* to
>> those additional 48,315,596 calls, and thanks to the benchmark I
>> discovered as much. I want to know what is the fastest implementation
>> of Fibonacci I can write. I'm not married to the idea that I have to
>> make a ludicrous 48 million function calls just to get the 36th
>> Fibonacci number.
> As I said, people keep missing the point. The fact this uses a grossly
> inefficient way of calculating Fibonacci seems to blind them to any
> other considerations.
> The actual result is irrelevant, so long as it's correct. The important
> thing is those 50 million calls.
Why do you care about the 50 million calls? That's crazy -- the important
thing is *calculating the Fibonacci numbers as efficiently as possible*.
I don't believe for a second that you would write code like this:
# need a Fibonacci number, but the implementation we have
# is too efficient, only taking 50 calls instead of 50 million
# so call the function a million times
for i in range(1000000):
    num = fibonacci(40)
so why on earth do you care about the 50 million calls?
The only reason I care about them is to reduce the number as much as
possible. Why do you care about them?
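To make the point concrete, here is a minimal sketch of the kind of efficient Fibonacci I mean (my own illustration, not the benchmark's exact code), assuming the usual convention fib(0) = 0, fib(1) = 1:

```python
def fibonacci(n):
    # Iterate n times, carrying the last two values along.
    # One function call, n loop iterations -- no recursive call tree.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

With this, `fibonacci(36)` is a single call and 36 additions, instead of 48 million calls.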
> If you want to know how well one language implementation deals with
> function calls compared to another, you will prefer a test that is doing
> 48 million function calls and little else, rather than one that does
> only three dozen.
I don't care about "how well one language implementation deals with
function calls compared to another". I care about writing the fastest
idiomatic code I can in the language I'm using. If I were writing Julia
code, I would write the best Julia code I could. If I were writing Python
code, I would write the best Python code I could.
If I'm writing C or Pascal, that would mean writing loops like this:
arr := some array
for i := 1 to len(arr):
    value := arr[i]
If I'm writing Ruby or Python, that means *NOT* writing loops like that,
but writing them like this:
for value in arr:
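The difference between the two loop shapes can be shown side by side; a small sketch with made-up data (the contents don't matter, only the loop shape does):

```python
arr = [10, 20, 30]  # hypothetical data for illustration

# C/Pascal style transplanted into Python: index arithmetic.
# It works, but it is slower and less readable than it needs to be.
total = 0
for i in range(len(arr)):
    total += arr[i]

# Idiomatic Python: iterate over the values directly.
total = sum(value for value in arr)
```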
If the Julia coders have decided, for whatever reason, that they want to
show off how super-efficient function recursion is in Julia by using a
doubly-recursive implementation of the Fibonacci sequence, I'm not bound
by their choice. Their motive is to highlight the advantages of Julia. My
motive is to show how Python can be just as fast.
When beginners complain that "Python is too slow", nine times out of ten
they are writing non-idiomatic, inefficient code. Python is not Java;
nor is it Julia.
> And one that is harder for a compiler to optimise, as you want to know
> how fast those 48 million calls can be made, not how well a compiler can
> avoid doing them!
I don't give a flying fox about how fast the compiler can do those 48
million calls, I want to know how to get Fibonacci numbers as fast as I
can, and if that means avoiding making the calls, then you better believe
that I will avoid making those calls.
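For instance, keeping the recursive shape but memoising it collapses the call tree: the function body executes only 37 times for fib(36), once per distinct argument from 0 to 36. A sketch of what I mean (my own, not the benchmark's exact code):

```python
from functools import lru_cache

calls = 0  # counts actual executions of the body, i.e. cache misses

@lru_cache(maxsize=None)
def fib(n):
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

result = fib(36)
# calls is now 37, not 48,315,633 -- each fib(k) is computed once.
```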
Why are you benchmarking code you would never use in real code?
I would be a really shitty programmer if I insisted on writing Julia code
in Python just because the Julia developers write Julia code in Julia.
Just as I would be a shitty programmer if I insisted that Julia
programmers must write *Python* code in Julia.
Write code that is natural and efficient for the language you are using.
Otherwise you end up making really ludicrous comparisons between
languages that have *zero* relationship to real-world code that anyone in
their right mind would actually use.
> By running a different program? Fine, then here's my submission for the
> N=20 case used in the tests:
> def fib(n): return 6765
The program has to be *correct*, and your program is not.
When comparing two compilers, you are ALWAYS comparing two different
programs. Either the compilers generate precisely the same machine/byte
code, in which case there is no point in comparing them, or they generate
different machine/byte code, in which case you're comparing two different
programs.
> The fact is that the vast majority of integer calculations don't need to
> use big integers (pretty much 100% of mine). Probably most don't even
> need 64 bits, but 32 bits.
And here we have the World According To Bart again: "since *I* don't need
more than 32 bits, then obviously it is a FACT that nobody else could
possibly need them either!"
Yes Bart, the reason that so many languages have support for Bignums is
that everyone except you is an idiot who loves writing slow code for
absolutely no reason at all.