On Wed, Dec 02, 2009 at 03:17:10PM -0800, Brock Pytlik wrote:
> Ok, this is probably just a nit, but if chunksize ever got small, or
> namelist was really big, it could make a difference, so...
> 
> Could you change 686-687 to be:
> nameslice = namelist[chunksz:]
> del namelist[chunksz:]
> 
> and set chunksz to be -101 instead of 100?
> 
> Here's why:
> def f(lst):
>     while lst:
>         a = lst[:100]
>         del lst[:100]
> 
> def g(lst):
>     while lst:
>         a = lst[-101:]
>         del lst[-101:]
> 
> timeit.Timer("<func>(l)", "from __main__ import <func>; l = range(0,X)").timeit()
> 
> Time in secs to run for different values of X
> X            f        g
> 1000         0.23     0.24
> 10000       0.23     0.23
> 100000      0.30     0.21
> 1000000     3.30     0.22
> 10000000  120+       0.36

This seems like a premature optimization, when considering all the other
places that the transport currently performs list operations.  At the
moment the largest install I can do from ipkg is 1500 pkgs and 300000
files.  For manifests, this is quite a ways away from the 1000000
necessary to make things go slowly.  The files would be more concerning,
except that they're appended to a list, using a procedure like the old
version of prefetch.
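For what it's worth, appending is the cheap direction for a Python list: list.append is amortized O(1), so building the file list by appending shouldn't show the quadratic behaviour that repeated deletes from the front can. A minimal sketch (the names here are mine, not anything in ipkg):

```python
def build_filelist(n):
    # append n items one at a time; each append is amortized O(1),
    # so the whole build is linear in n
    files = []
    for i in range(n):
        files.append("file-%d" % i)
    return files

files = build_filelist(300000)
print(len(files))  # 300000
```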

Just out of curiosity, I tried to reproduce your results, but found
that, at least on python 2.6.2, a list of ten million items was faster
to slice forwards than backwards.  In my case, f took 0.20 seconds,
whereas g took 0.35 seconds.
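For anyone who wants to try this on their own interpreter, here is a self-contained version of the comparison (timings vary by Python version and machine, which may explain the discrepancy; the list size here is smaller than the ten-million case above so it finishes quickly):

```python
import timeit

def f(lst):
    # consume 100-item chunks from the front; each del shifts the
    # remaining items left, so the whole loop can go quadratic
    while lst:
        a = lst[:100]
        del lst[:100]

def g(lst):
    # consume 101-item chunks from the back; each del just truncates
    while lst:
        a = lst[-101:]
        del lst[-101:]

n = 200000
t_f = timeit.timeit(lambda: f(list(range(n))), number=1)
t_g = timeit.timeit(lambda: g(list(range(n))), number=1)
print("f: %.2fs  g: %.2fs" % (t_f, t_g))
```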

I'm not opposed to making the transport faster, but I think we should
figure out where the slow parts actually are first.
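If it does turn out to matter, one way to sidestep the front-versus-back question entirely is to walk the list in chunks without mutating it at all; a sketch (the helper name `chunked` is mine, not anything in ipkg):

```python
from itertools import islice

def chunked(iterable, size):
    # yield successive lists of at most `size` items; no slicing or
    # deleting on the source list, so the walk is O(n) total
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

namelist = ["pkg-%d" % i for i in range(250)]
sizes = [len(c) for c in chunked(namelist, 100)]
print(sizes)  # [100, 100, 50]
```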

-j
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss
