On Fri, Mar 08, 2019 at 10:16:02PM +0100, Martin Bammer wrote:
> Hi,
> 
> what about the idea that the interpreter preallocates and 
> preinitializes the tuples and dicts for function calls where possible 
> when loading a module?

That's an implementation detail. CPython may or may not use tuples and 
dicts to call functions, but I don't think that's specified by the 
language. So we're talking about a potential optimization of one 
interpreter, not a language change.

If the idea survives cursory discussion here, the Python-Dev mailing 
list is probably a better place to discuss it further.


> Before calling a function then the interpreter would just need to update
> the items which are dynamic and then call the function.

As Greg points out, that would be unsafe when using threads. Let's say 
you have two threads, A and B, and both call function spam(). A wants to 
call spam(1, 2) and B wants to call spam(3, 4). Because of the 
unpredictable order in which threaded code runs, we might have:

    A sets the argument tuple to (1, 2)
    B sets the argument tuple to (3, 4)
    B calls spam()
    A calls spam()  # Oops! A's call sees (3, 4), not (1, 2)

and mysterious, difficult-to-reproduce errors occur.
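
Here's a rough simulation of that failure mode, using an ordinary 
module-level list to stand in for the hypothetical pre-allocated 
argument tuple (the names _spam_args and call_spam are just made up 
for the illustration):

    import threading, time

    # Hypothetical pre-allocated "argument tuple", shared by every
    # caller of spam() in this module (a list so it can be mutated).
    _spam_args = [None, None]

    def spam(a, b):
        return (a, b)

    results = []

    def call_spam(x, y):
        # Step 1: fill in the shared argument slots.
        _spam_args[0] = x
        _spam_args[1] = y
        # Simulate an inconveniently timed thread switch here; the
        # other caller can now overwrite the slots before we call.
        time.sleep(0.001)
        # Step 2: spam() may receive the other thread's arguments.
        results.append(spam(*_spam_args))

    threads = [threading.Thread(target=call_spam, args=(1, 2)),
               threading.Thread(target=call_spam, args=(3, 4))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # We expect {(1, 2), (3, 4)}, but with the shared slots we are
    # likely to see the same pair twice, or a mixed pair like (1, 4).
    print(results)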

It may be possible to solve this with locks, but that would probably 
slow code down horribly.

[...]
> Without the optimization the interpreter would need to:
> 
> - create new tuple (allocate memory)
> - write constant into first tuple index.
> - create dict (allocate memory)
> - add key+value
> - add key+value
> - call function

Sure, and that happens at runtime, just before the function is called. 
But the same series of allocations would have to occur under your idea 
too; it would just happen when the module loads. And then the 
pre-allocated tuples and dicts would hang around forever, wasting 
memory. Even if it turns out that the function never actually gets 
called:

    for x in sequence:
        if condition(x):  # always returns False!
            function(...)

the compiler will have pre-allocated the memory to call it.
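
For what it's worth, you can watch CPython doing that packing each time 
the call runs by disassembling the caller with the dis module (just a 
sketch; 'function' and 'caller' are made-up names, and the exact 
opcodes vary from one CPython version to another):

    import dis

    def function(a, key1=None, key2=None):
        return a

    def caller(x):
        # A call mixing a constant positional argument with keyword
        # arguments, roughly the shape of the steps quoted above.
        return function(1, key1=x, key2=x)

    # The argument packing shows up in the bytecode for the call, and
    # it is done at call time, every time that line runs.
    dis.dis(caller)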

So I suspect this idea is going to be very memory hungry. Trading off 
memory for speed might be worthwhile, but it is a trade-off that will 
make certain things worse rather than better.
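
To put a very rough number on that, here's the sort of 
back-of-the-envelope figure you'd get for a single call site that keeps 
a pre-built positional tuple and keyword dict alive (sizes vary with 
the Python version and build, and this ignores the objects inside 
them):

    import sys

    # One pre-built argument tuple plus one keyword dict, kept alive
    # for the life of the module for a single call site.
    args = (1, 2)
    kwargs = {"key1": 2, "key2": 3}
    per_site = sys.getsizeof(args) + sys.getsizeof(kwargs)
    print(per_site, "bytes or so per call site, multiplied by every",
          "call site in every module you import")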


> If this idea is possible to implement I assume the function calls would
> receive a great speed improvement.

Well, it might decrease the overhead of calling a function, but that 
overhead is usually only a small proportion of the total time spent in 
the function. So it might not help as much as you expect, except in the 
case where you have lots and lots of function calls, each of which does 
only a tiny amount of work.
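
If you want a feel for how big that call overhead actually is, a crude 
timeit comparison along these lines will show it (just a sketch; 'tiny' 
is a made-up example function):

    from timeit import timeit

    def tiny(x, y):
        return x + y

    # Compare calling a tiny function with evaluating the same
    # expression with no call at all.  For work this small the call
    # overhead is most of the cost; for functions doing real work it
    # fades into the noise.
    n = 1_000_000
    with_call = timeit("tiny(a, b)", setup="a, b = 1, 2",
                       globals=globals(), number=n)
    no_call = timeit("a + b", setup="a, b = 1, 2", number=n)
    print("call:   ", with_call)
    print("no call:", no_call)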

But that has to be balanced against the slowdown that occurs when the 
module loads, when the same memory allocations (but not deallocations) 
would occur. Starting up Python is already pretty slow compared to other 
languages, and this would probably make it worse.

Even if it became a net win for some applications, for others it would 
likely be a net loss. My guess is that it would probably hurt the cases 
which are already uncomfortably slow, while benefiting the cases that 
don't need much optimization.

But that's just a guess, and not an especially educated guess at that.


-- 
Steven
