On Sun, 25 Feb 2018 20:22:17 -0800, Rick Johnson wrote:

> So of course, speed is not and should not be the
> primary concern, but to say that execution speed is of _no_ concern is
> quite absurd indeed.

I'm pretty sure that nobody here has said that speed is of no concern.

Rather, I would argue that the position we're taking is that *in general* 
Python is fast enough for the sorts of tasks we use it for (except, of 
course, when it isn't, in which case you have our sympathy, and if we can 
suggest some solutions we will).

Of course we'd take speed optimizations if they were free, but they're 
never free:

- they often require more memory to run;

- they usually require more code, which adds to the maintenance burden
  and increases the interpreter bloat;

- and increases the risk of bugs;

- somebody has to write, debug, document and maintain the code, 
  and developer time and effort is in short supply;

- or the optimization requires changes to the way Python operates;

- even if we wanted to make that change, it will break backwards
  compatibility;

- and often what people imagine is a simple optimization (because
  they haven't tried it) isn't simple at all;

- or simply doesn't work;

- and most importantly, just saying "Make it go faster!" doesn't work,
  we actually need to care about the details of *how* to make it faster.

(We tried painting Go Faster stripes on the server, and it didn't work.)

There's no point suggesting major changes to Python that require going 
back to the drawing board, to Python 0.1 or earlier, and changing the 
entire execution and memory model of the language.

That would just mean we've swapped from a successful, solid, reliable 
version 3.6 of the language to an untried, unstable, unproven, bug-ridden 
version 0.1 of New Python.

And like New Coke, it won't attract new users (they're already using 
Javascript or Go or whatever...) and it will alienate existing users (if 
they wanted Javascript or Go they'd already be using it).

There have been at least two (maybe three) attempts to remove the GIL 
from CPython. They've all turned out to increase complexity by a huge 
amount, and not actually provide the hoped-for speed increase. Under many 
common scenarios, the GIL-less CPython actually ran *slower*.

(I say "hoped for", but it was more wishful thinking than a rational 
expectation that removing the GIL would speed things up. I don't believe 
any of the core developers were surprised that removing the GIL didn't 
increase speeds much, if at all, and sometimes slowed things down. The 
belief that the GIL slows Python down is mostly due to a really 
simplistic understanding of how threading works.)
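To make that last point concrete, here's a minimal sketch (my own illustration, not from any of the GIL-removal experiments) showing why threads don't speed up CPU-bound Python code under the GIL: two threads doing pure bytecode work take about as long as, and often longer than, running the same work sequentially, because only one thread executes bytecode at a time and thread switching adds overhead.

```python
import threading
import time

def count_down(n):
    # Pure CPU-bound work: under the GIL, only one thread
    # runs Python bytecode at any given moment.
    while n > 0:
        n -= 1

N = 5_000_000

# Sequential: two chunks of work, one after the other.
start = time.perf_counter()
count_down(N)
count_down(N)
sequential = time.perf_counter() - start

# Threaded: the same two chunks, one per thread.
start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
```

On CPython the threaded version is typically no faster than the sequential one. Threads do help when the work is I/O-bound (the GIL is released while waiting on I/O), which is why "remove the GIL" is not the free speed-up people imagine for CPU-bound code.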

Besides, if you want Python with no GIL so you can run threaded code, why 
aren't you using IronPython or Jython?

Another example: Unladen Swallow. That (mostly) failed to meet 
expectations because the compiler tool chain being used wasn't mature 
enough. If somebody were to re-do the project again now, the results 
might be different.

But it turns out that not many of the people who want a faster Python 
care enough to actually invest money in it. Everyone wants to just stamp 
their foot and get it for free.

(If it weren't for the EU government funding it, PyPy would probably be 
languishing in oblivion. Everyone wants a faster Python so long as they 
don't have to pay for it, or do the work.)

A third example: Victor Stinner's FatPython, which seemed really 
promising in theory, but turned out to not make enough progress fast 
enough and he lost interest.

I have mentioned FatPython here a number of times. All you people who 
complain that Python is "too slow" and that the core devs ought to do 
something about it... did any of you volunteer any time to the FatPython 
project?



-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list
