On Wed, Dec 02, 2009 at 04:12:46PM -0800, Brock Pytlik wrote:
> Ok, maybe they've fixed this in python 2.6. And, as long as chunksz
> stays big and doesn't go to, say, 1, we're probably unlikely to hit
> this in practice. The comment says that it's fixed at 100 for now,
> but gives no indication what likely values for it might be once it's
> unfixed. In any case, since the code was basically identical, I
> didn't see a reason to leave an n^2 scaling issue sitting around
> without a reason for it. Of course, if this is fixed in 2.6, it
> doesn't matter.
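For reference, the n^2 pattern under discussion is presumably the usual one:
repeatedly trimming a chunk off the front of a Python list shifts every
remaining element, so total work grows quadratically with list length, while
walking an index over the same list stays linear. A minimal sketch (both
helper names are hypothetical, not from the pkg code):

```python
def chunks_quadratic(items, chunksz):
    """Yield chunks by destructively trimming the list head (O(n^2))."""
    items = list(items)
    while items:
        yield items[:chunksz]
        del items[:chunksz]        # shifts every remaining element

def chunks_linear(items, chunksz):
    """Yield the same chunks by walking an index instead (O(n))."""
    for i in range(0, len(items), chunksz):
        yield items[i:i + chunksz]
```

Both produce identical chunks; only the cost differs, and with a large
chunksz the difference is unlikely to show up in practice.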

I don't know if this is fixed in 2.6 or not, but I wasn't able to
replicate your results.  In general, I expect the chunk size to be large.
For the data operations, the transport temporarily chooses a chunk size
of 10 until it visits all of the repositories.  This is so that we test
the performance of all mirrors, but don't stick with one that's too
slow.
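A minimal sketch of that probe step, assuming a hypothetical
fetch_chunk(mirror, size) callable standing in for the real transport and a
hypothetical PROBE_CHUNK constant for the temporary size of 10:

```python
import time

PROBE_CHUNK = 10    # small chunk used while sampling each mirror
BIG_CHUNK = 100     # assumed larger steady-state size once a mirror is chosen

def choose_mirror(mirrors, fetch_chunk):
    """Download one small chunk from every mirror, timing each,
    then return the fastest mirror for the remaining transfers."""
    timings = {}
    for m in mirrors:
        start = time.time()
        fetch_chunk(m, PROBE_CHUNK)
        timings[m] = time.time() - start
    return min(timings, key=timings.get)
```

The point of the small probe size is that the cost of sampling a slow mirror
is bounded, while still giving every mirror a chance to win.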

I'm planning on doing the same thing for prefetch manifests.  Basically
you'd make N 10-chunk downloads, where N is the number of origins.  Then
you'd pick a bigger default size and the fastest origin you can find.
I'm not going to implement this without testing.  If a variable chunk
size turns out to be painful, I'll replace the list manipulation routines
with the wad that enables the variable sizing.

-j
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss