On 2/9/2012 3:27 PM, Glenn Linderman wrote:
On 2/9/2012 11:53 AM, Mike Meyer wrote:
On Thu, 9 Feb 2012 14:19:59 -0500
Brett Cannon <br...@python.org> wrote:
On Thu, Feb 9, 2012 at 13:43, PJ Eby <p...@telecommunity.com> wrote:
Again, the goal is fast startup of command-line tools that only use a
small subset of the overall framework; doing disk access for lazy imports
goes against that goal.

Depends on whether you consider the stat calls to be the overhead, or the
actual disk read/write to load the data. Anyway, this is going to lead to a
discussion/argument over design parameters which I'm not up to having, since
I'm not actively working on a lazy loader for the stdlib right now.
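For context, a "lazy import" in this sense defers both the path search and
the file read until some attribute of the module is actually used. A minimal,
purely illustrative sketch (Python 3; not the stdlib loader under discussion)
might look like this:

    import importlib
    import types

    class LazyModule(types.ModuleType):
        """Stand-in that defers the real import until first attribute use."""

        def __init__(self, name):
            super().__init__(name)
            self._real = None

        def __getattr__(self, attr):
            if self._real is None:
                # The path search and disk read happen here, not at startup.
                self._real = importlib.import_module(self.__name__)
            return getattr(self._real, attr)

    # 'json' is not actually located or loaded until json.dumps is touched.
    json = LazyModule("json")
    print(json.dumps({"lazy": True}))
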
For those of you not watching -ideas, or ignoring the "Python TIOBE
-3%" discussion, this would seem to be relevant to any discussion of
reworking the import mechanism:

http://mail.scipy.org/pipermail/numpy-discussion/2012-January/059801.html

"For 32k processes on BlueGene/P, importing
100 trivial C-extension modules takes 5.5 hours, compared to 35
minutes for all other interpreter loading and initialization.
We developed a simple pure-Python module (based on knee.py, a
hierarchical import example) that cuts the import time from 5.5 hours
to 6 minutes."

So what is the implication here?  That building a cache of module
locations (cleared when a new module is installed) would be more
effective than optimizing the search for modules on every invocation of
Python?
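
For concreteness, such a cache could be little more than a persisted mapping
from module name to file location, consulted by a finder before the normal
sys.path search. The sketch below is purely illustrative: the cache file name
is made up, and it uses today's importlib finder API rather than anything
that was on the table in this thread:

    import importlib.abc
    import importlib.util
    import json
    import os
    import sys

    CACHE_FILE = os.path.expanduser("~/.module-location-cache.json")  # hypothetical

    class CachedPathFinder(importlib.abc.MetaPathFinder):
        """Answer imports from a saved name->path map before sys.path is searched."""

        def __init__(self):
            try:
                with open(CACHE_FILE) as f:
                    self.locations = json.load(f)
            except (OSError, ValueError):
                self.locations = {}

        def find_spec(self, fullname, path=None, target=None):
            location = self.locations.get(fullname)
            if location and os.path.exists(location):
                return importlib.util.spec_from_file_location(fullname, location)
            return None  # fall through to the normal path search

    # Installed ahead of the default machinery; a real version would also
    # record newly found locations and clear the cache when packages are
    # installed or removed.
    sys.meta_path.insert(0, CachedPathFinder())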


--
Terry Jan Reedy
