On Thu, Feb 9, 2012 at 17:00, PJ Eby <p...@telecommunity.com> wrote:

> On Thu, Feb 9, 2012 at 2:53 PM, Mike Meyer <m...@mired.org> wrote:
>
>> For those of you not watching -ideas, or ignoring the "Python TIOBE
>> -3%" discussion, this would seem to be relevant to any discussion of
>> reworking the import mechanism:
>>
>> http://mail.scipy.org/pipermail/numpy-discussion/2012-January/059801.html
>>
> Interesting.  This gives me an idea for a way to cut stat calls per
> sys.path entry per import by roughly 4x, at the cost of a one-time
> directory read per sys.path entry.
>
> That is, an importer created for a particular directory could, upon first
> use, cache a frozenset(listdir()), and the stat().st_mtime of the
> directory.  All the filename checks could then be performed against the
> frozenset, and the st_mtime of the directory only checked once per import,
> to verify whether the frozenset() needed refreshing.
>

I actually contemplated this back in 2006 when I first began importlib for
use at Google to get around NFS's crappy stat performance. Never got around
to it as compatibility with import.c turned out to be a little tricky. =)
Your solution below, PJE, is more-or-less what I was considering (although
I also considered variants that didn't stat the directory when you knew
your code wasn't changing stuff behind your back).
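
For the archives, the shape of what I was considering looks roughly like
the following (hypothetical names, untested, and not the actual importlib
API):

    import os

    class CachedDirectory:
        """Cache a directory's listing, invalidated by the directory's
        mtime."""

        def __init__(self, path):
            self.path = path
            self._mtime = -1.0
            self._contents = frozenset()

        def contents(self):
            mtime = os.stat(self.path).st_mtime
            if mtime != self._mtime:
                # One listdir() per *change* to the directory, instead of
                # several stat() calls per import attempt.
                self._contents = frozenset(os.listdir(self.path))
                self._mtime = mtime
            return self._contents

        def contains(self, filename):
            # One set lookup instead of a stat() or failed open().
            return filename in self.contents()

A finder for a sys.path entry would then test candidate filenames against
the cached set instead of hitting the filesystem for each one.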


>
> Since a failed module lookup takes at least 5 stat checks (pyc, pyo, py,
> directory, and compiled extension (pyd/so)), this cuts it down to only 1,
> at the price of a listdir().  The big question is how long does a listdir()
> take, compared to a stat() or failed open()?   That would tell us whether
> the tradeoff is worth making.
>

Actually it's .pyc OR .pyo, .py, the directory (which can lead to another
set of checks for __init__.py and __pycache__), .so, and module.so (or
whatever suffix your platform uses for extension modules).
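
To make that concrete, here is roughly the set of names probed for
"import foo" in a single sys.path directory (illustrative only; the exact
suffixes and search order vary by platform and Python version, and
CachedDirectory is the hypothetical class from the sketch above):

    candidates = [
        "foo",           # package directory (then __init__.py/__pycache__)
        "foo.py",        # source
        "foo.pyc",       # bytecode ("foo.pyo" under -O)
        "foo.so",        # extension module
        "foomodule.so",  # older extension-module naming
    ]
    # With the cache, each probe is one set lookup instead of a stat():
    cache = CachedDirectory("/some/sys.path/entry")  # placeholder path
    found = [name for name in candidates if name in cache.contents()]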


>
> I did some crude timeit tests on frozenset(listdir()) and trapping failed
> stat calls.  It looks like, for a Windows directory the size of the 2.7
> stdlib, you need about four *failed* import attempts to overcome the
> initial caching cost, or about 8 successful bytecode imports.  (For Linux,
> you might need to double these numbers; my tests showed a different ratio
> there, perhaps due to the Linux stdlib I tested having nearly twice as many
> directory entries as the directory I tested on Windows!)
>
> However, the numbers are much better for application directories than for
> the stdlib, since they are located earlier on sys.path.  Every successful
> stdlib import in an application is equal to one failed import attempt for
> every preceding directory on sys.path, so as long as the average directory
> on sys.path isn't vastly larger than the stdlib, and the average
> application imports at least four modules from the stdlib (on Windows, or 8
> on Linux), there would be a net performance gain for the application as a
> whole.  (That is, there'd be an improved per-sys.path entry import time for
> stdlib modules, even if not for any application modules.)
>

Does this comment take into account the number of modules loaded just to
start the interpreter to begin with? That's already something like 48
modules loaded by Python 3.2 as it is.
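
(For reference, a rough way to count what interpreter startup pulls in;
the exact number varies by version and platform:

    python3 -c "import sys; print(len(sys.modules))"

Each of those imports would hit the cache path being discussed here.)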


>
> For smaller directories, the tradeoff actually gets better.  A directory
> one seventh the size of the 2.7 Windows stdlib has a listdir() that's
> proportionately faster, but failed stats() in that directory are *not*
> proportionately faster; they're only somewhat faster.  This means that it
> takes fewer failed module lookups to make caching a win - about 2 in this
> case, vs. 4 for the stdlib.
>
> Now, these numbers are with actual disk or network access abstracted away,
> because the data's in the operating system cache when I run the tests.
>  It's possible that this strategy could backfire if you used, say, an NFS
> directory with ten thousand files in it as your first sys.path entry.
>  Without knowing the timings for listdir/stat/failed stat in that setup,
> it's hard to say how many stdlib imports you need before you come out
> ahead.  When I tried a directory about 7 times larger than the stdlib,
> creating the frozenset took 10 times as long, but the cost of a failed stat
> didn't go up by very much.
>
> This suggests that there's probably an optimal directory size cutoff for
> this trick; if only there were some way to check the size of a directory
> without reading it, we could turn off the caching for oversize directories,
> and get a major speed boost for everything else.  On most platforms, the
> stat().st_size of the directory itself will give you some idea, but on
> Windows that's always zero.  On Windows, we could work around that by using
> a lower-level API than listdir() and simply stop reading the directory if
> we hit the maximum number of entries we're willing to build a cache for,
> and then call it off.
>
> (Another possibility would be to explicitly enable caching by putting a
> flag file in the directory, or perhaps by putting a special prefix on the
> sys.path entry, setting the cutoff in an environment variable, etc.)
>
> In any case, this seems really worth a closer look: in non-pathological
> cases, it could make directory-based importing as fast as zip imports are.
>  I'd be especially interested in knowing how the listdir/stat/failed stat
> ratios work on NFS - ISTM that they might be even *more* conducive to this
> approach, if setup latency dominates the cost of individual system calls.
>
> If this works out, it'd be a good example of why importlib is a good idea;
> i.e., allowing us to play with ideas like this.  Brett, wouldn't you love
> to be able to say importlib is *faster* than the old C-based importing?  ;-)
>

Yes, that would be nice. =)
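
For anyone who wants to reproduce the crude timings above, here is a
rough, untested sketch; the directory path is a placeholder and the OS
cache will dominate the numbers:

    import os
    import timeit

    d = "/usr/lib/python3.2"  # stand-in for a stdlib-sized directory

    def build_cache():
        frozenset(os.listdir(d))

    def failed_stat():
        try:
            os.stat(os.path.join(d, "no_such_module.py"))
        except OSError:
            pass

    print("cache build:", timeit.timeit(build_cache, number=1000))
    print("failed stat:", timeit.timeit(failed_stat, number=1000))

The break-even point is then roughly the cache-build cost divided by the
per-call savings on stat().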

Now there are a couple things to clarify/question here.

First is that if this were used on Windows or OS X (i.e. the OSs we support
that typically have case-insensitive filesystems), then this approach would
be a massive gain, as we already call os.listdir() when PYTHONCASEOK isn't
defined in order to verify the case of a matched filename; take your 5 stat
calls and add in 5 listdir() calls and that's what you get on Windows and
OS X right now. Linux doesn't have this check, so you would still
potentially be paying a penalty there.
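
(That existing check boils down to something like the sketch below, which
is where those extra listdir() calls come from; this is a simplification
of what import.c actually does:

    import os

    def case_ok(directory, filename):
        # On a case-insensitive filesystem, stat()/open() will happily
        # match "FOO.py" when asked for "foo.py", so the only way to
        # confirm the exact case is to read the directory.
        return filename in os.listdir(directory or os.getcwd())

A directory-contents cache would give this check for free.)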

Second is variance in filesystems. Are we guaranteed that a directory's
mtime is updated before (or at least as soon as) a change to a file within
it is made? If not, there is a small race condition there, which would
suck. We also have the issue of granularity; Antoine has already had to add
the source file size to .pyc files in Python 3.3 to combat crappy mtime
granularity when generating bytecode. If we get file mod -> import -> file
mod -> import, are we guaranteed that the second import will know there was
a modification if the first three steps occur fast enough to fit within the
granularity of an mtime value?
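
A quick way to see the granularity problem on a given filesystem:

    import os

    with open("probe.tmp", "w") as f:
        f.write("a")
    first = os.stat("probe.tmp").st_mtime
    with open("probe.tmp", "w") as f:
        f.write("b")
    second = os.stat("probe.tmp").st_mtime
    os.remove("probe.tmp")
    # On a filesystem with e.g. one-second mtime granularity this prints
    # 0.0, i.e. the two writes are indistinguishable by timestamp.
    print(second - first)

The same concern applies to the directory's mtime in the caching scheme.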

I was going to say something about __pycache__, but it actually doesn't
affect this. Since you would have to stat the directory anyway, you might
as well just check the directory for the file you want, to keep it simple.
Only if you consider __pycache__ to be immutable except for what the
interpreter puts in that directory during execution could you optimize that
step (in which case you could stat the directory once and never care again,
as the set would simply be updated by import whenever a new .pyc file was
written).
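
In code terms that optimization is tiny: when import itself writes a new
.pyc, update the cached set in place instead of re-listing (again using
the hypothetical CachedDirectory from the earlier sketch):

    def record_written_file(cache, filename):
        # Import wrote this file itself, so there is no need for another
        # listdir(); frozensets are immutable, so build a new one.
        cache._contents = cache._contents | {filename}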

Having said all of this, implementing the idea would be trivial using
importlib if you don't try to optimize the __pycache__ case. It's just a
question of whether people are comfortable with the semantic change to
import. This could also be shipped in importlib as something for people to
opt into when desired, if we are too worried about changing import
semantics by default.
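
To illustrate the opt-in route: assuming a CachingFinder class that
implements the finder protocol on top of the directory cache sketched
earlier (again, hypothetical names), wiring it in would look something
like:

    import os
    import sys

    def caching_path_hook(path):
        # Only handle plain directories; everything else falls through
        # to the other hooks (e.g. zipimport).
        if not os.path.isdir(path):
            raise ImportError("only directories are supported")
        return CachingFinder(path)  # hypothetical finder class

    sys.path_hooks.insert(0, caching_path_hook)
    sys.path_importer_cache.clear()  # drop finders made by the old hooks

Nothing about default import semantics changes; only code that installs
the hook gets the caching behavior.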