On 23/02/2018 01:27, Steven D'Aprano wrote:
> On Thu, 22 Feb 2018 17:53:30 +0000, bartc wrote:

>> The actual result is irrelevant, so long as it's correct. The important
>> thing is those 50 million calls.

> Why do you care about the 50 million calls?

Because we are interested in comparing call efficiency across languages, NOT in Fibonacci numbers.

> That's crazy -- the important thing is *calculating the Fibonacci
> numbers as efficiently as possible*.

> I don't believe for a second that you would write code like this:
>
>      # need a Fibonacci number, but the implementation we have

We don't. If Fibonacci causes problems for you because you can't see past the inefficiency, then we might use the Ackermann function instead.

(But I've found Ackermann can cause stack overflow in some languages.)
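
For reference, a minimal sketch of the two-argument Ackermann function as
used for this kind of call-heavy test (my own illustration, not code posted
earlier in the thread):

    import sys

    # The recursion depth grows quickly; CPython's default limit (~1000)
    # is too low for arguments such as ackermann(3, 8).
    sys.setrecursionlimit(20_000)

    def ackermann(m, n):
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))   # 9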

> so why on earth do you care about the 50 million calls?

> The only reason I care about them is to reduce the number as much as
> possible. Why do you care about them?

Suppose you're responsible for implementing the call mechanism of a new language, and you want to make it efficient. What program do you test it with: one that contains zero calls (clearly, that will be the most efficient!), or one that contains lots of calls?


> arr := some array
> for i := 1 to len(arr):
>      value := arr[i]
>      process(value)
>
> If I'm writing Ruby or Python, that means *NOT* writing loops like that,
> but writing them like this:
>
> for value in arr:
>      process(value)

Suppose it turned out that in Python, /this/ was actually faster:

  for i in range(len(arr)):
      value = arr[i]
      process(value)

What then?
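
(That's checkable rather than hypothetical. A quick sketch along these
lines using timeit, with process() as a stand-in no-op, would settle it:)

    import timeit

    arr = list(range(1000))

    def process(x):
        pass   # stand-in for whatever per-element work is done

    direct = timeit.timeit(
        "for value in arr: process(value)",
        globals=globals(), number=10_000)

    indexed = timeit.timeit(
        "for i in range(len(arr)):\n    value = arr[i]\n    process(value)",
        globals=globals(), number=10_000)

    print("direct: ", direct)
    print("indexed:", indexed)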

> I don't give a flying fox about how fast the compiler can do those 48
> million calls, I want to know how to get Fibonacci numbers as fast as I
> can, and if that means avoiding making the calls, then you better believe
> that I will avoid making those calls.

OK then: if you are at all interested in Julia vs. Python (and it was you who posted that article!), then by all means compare a version of the benchmark that uses as few calls as possible. But what exactly is that benchmark then highlighting? Loops?
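
For what it's worth, the few-calls version is trivial (a sketch; it gives
the same answers while turning the test into a loop-and-arithmetic
benchmark):

    def fib_iter(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib_iter(36))   # 14930352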


>> By running a different program? Fine, then here's my submission for the
>> N=20 case used in the tests:
>>
>>     def fib(n): return 6765

> The program has to be *correct*, and your program is not.

OK, then do as I did: keep a global tally of how many calls it makes, and print that as the result. Then the correct output for 'fibcalls(36)' should be:

  48315633

If yours shows only 37, then /your/ program is wrong. (BTW, this value will always fit in 64 bits, for the foreseeable future.)
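
Something along these lines (my reconstruction of the counting version; the
exact code wasn't reposted here):

    calls = 0

    def fib(n):
        global calls
        calls += 1   # tally every call; a faked version can't get this right
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    def fibcalls(n):
        global calls
        calls = 0
        fib(n)
        return calls

    print(fibcalls(36))   # 48315633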

> When comparing two compilers, you are ALWAYS comparing two different
> programs.

That's part of the art of doing it properly.


> Either the compilers generate precisely the same machine/byte code, in
> which case there is no point in comparing them[1], or they generate
> different machine/byte code, in which case you're comparing two different
> running programs.

So the people who worked for years on PyPy never once bothered to compare results with running the same source program under CPython, because there would be no point? OK, I never knew that...

>> The fact is that the vast majority of integer calculations don't need to
>> use big integers (pretty much 100% of mine). Probably most don't even
>> need 64 bits, but 32 bits.

> And here we have the World According To Bart again: "since *I* don't need
> more than 32-bits, then obviously it is a FACT that nobody else could
> need them".

Do you have better statistics than me? If so, please share. I guess the proportion of calculations that /can/ be done in 64 bits is less than 100% but more than 0%, so somewhere in between.

I'm saying it's nearer to 100% than to 0%; you're saying... what?

As examples of quantities that can be represented in 64 bits:

 Character codes or code points
 Integer pixel values
 Most enumerations
 For-loop counting (if it needs more than 64 bits, then we might be
   waiting a while for it to finish)
 Array indexing
 Numbers of people, cars, guns, countries, houses, books, iPhones, etc.
   on Earth
 ... endless examples of numbers less than 9000000000000000000 used
   in programming

Most calculations on these can also be done in 64 bits (except multiply and power, where the ceiling might be 32 bits or lower).
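
(To make the multiply point concrete: the product of two n-bit values can
need 2n bits, so operands must stay around 32 bits for the product to fit
in a signed 64-bit word. Python's own ints don't overflow, so this sketch
just checks against the 64-bit limit:)

    INT64_MAX = 2**63 - 1

    a = 2**31 - 1                 # fits in 32 bits
    print(a * a <= INT64_MAX)     # True: the product still fits in 64 bits

    b = 2**32                     # just past 32 bits
    print(b * b <= INT64_MAX)     # False: the product no longer fits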

> Yes Bart, the reason that so many languages have support for Bignums is
> that everyone except you is an idiot who loves writing slow code for
> absolutely no reason at all.

If bignum support is a handicap even when bignums are not needed, then it is something to consider.

--
bartc
--
https://mail.python.org/mailman/listinfo/python-list
