>  You are saying we shouldn't go over the max-connections the server
> admin. gave us to a single host? ... if so, I agree :).

Yep, that's what I always took for granted; maybe there was
just some confusion.

> If you want to make max_connections _just_ global, and then that be
> a hard upper limit on the number of download connections ... that's
> fine by me.

Exactly, see below.

>  My main concern was that if a repo. config. for max-connections was
>  set to N and we make N+1 connections for that repo. ... life is not
>  going to be pleasant.

Fewer knobs is often better.

>  This is likely to get "complicated" long term. ... where people will
> want to say things like "I don't mind if you do 666 connections, but
> don't go past these X,Y,Z mirrors (top N according to MirrorManager,
> or whatever) unless you have to."

I have been thinking about this for a while, and came up with
the following solution:

input:
* single global max_connections option.
* priority-sorted mirror groups
* list of download requests

algo:
* always max out the total number of connections
* assign mirrors dynamically: just use the 'best' mirror
  that has a free download slot (rough sketch after the
  list of benefits below)

This has a number of benefits:
* no per-repo options or heuristics needed
* better handling of mirror failures
* single queue means downloads start in the same order
  as urlgrab() calls (easier debugging?)
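
Here is a minimal Python sketch of that idea, just to make the
scheduling loop concrete.  None of these names come from urlgrabber;
Mirror, Request, schedule() and host_limit are made up for
illustration, and the per-host limit is my assumption about how the
server admin's cap would be represented.

    from collections import deque

    class Mirror:
        def __init__(self, url, host_limit=3):
            self.url = url
            self.host_limit = host_limit   # assumed per-host cap from the server admin
            self.active = 0                # connections currently open to this mirror

    class Request:
        def __init__(self, relative_url, mirrors):
            self.relative_url = relative_url
            self.mirrors = mirrors         # priority-sorted mirror group for this request

    def schedule(queue, max_connections, total_active):
        """Start as many downloads as the single global limit allows."""
        started = []
        while queue and total_active < max_connections:
            req = queue[0]
            # 'first fit': pick the best (highest-priority) mirror that still
            # has a free slot, so we only leave the first mirror when it is full.
            mirror = next((m for m in req.mirrors if m.active < m.host_limit), None)
            if mirror is None:
                break                      # head of the queue is blocked; keep FIFO order
            queue.popleft()
            mirror.active += 1
            total_active += 1
            started.append((req, mirror))  # caller actually opens the connection
        return started, total_active

The early break when the head request has no free mirror is what keeps
downloads starting in urlgrab() order, at the cost of occasionally
leaving a connection slot idle for a moment.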

> The "obvious" first step is "don't leave the first mirror" for each
> MirrorManager repo. and "unto max-connections" for everything else.

Using 'first fit' seems natural there, as it does not leave the
first mirror, obviously :)  I can even update/re-sort
the mirror list during downloading.
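
For example (again just a sketch, on top of the Mirror class above;
the 'failures' attribute is hypothetical and not anything in
mirror.py), re-sorting the list when a mirror misbehaves makes
'first fit' drift to the next-best mirror automatically:

    def demote_mirror(mirrors, bad_mirror):
        """Re-sort so a failing mirror drops below the healthy ones."""
        bad_mirror.failures = getattr(bad_mirror, "failures", 0) + 1
        mirrors.sort(key=lambda m: getattr(m, "failures", 0))

Since list.sort() is stable, mirrors with the same failure count keep
their original priority order.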

Doing the mirror selection at urlgrab() time, with per-mirror
queueing, seemed natural given the per-host limits and the way
mirror.py works, but it was a mistake.  The MirrorGroup (MG) should
just tag the request with that MG and add it to the global list.
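
Roughly like this; this is not mirror.py's real interface, just the
queueing idea, reusing the Request class from the first sketch and an
assumed module-level GLOBAL_QUEUE:

    from collections import deque

    GLOBAL_QUEUE = deque()                   # one queue shared by all mirror groups

    class MirrorGroupSketch:
        """Stand-in for mirror.py's MirrorGroup; only the queueing is shown."""
        def __init__(self, mirrors):
            self.mirrors = mirrors           # priority-sorted, may be re-sorted later

        def urlgrab(self, relative_url):
            req = Request(relative_url, self.mirrors)
            req.mirror_group = self          # tag the request with its MG
            GLOBAL_QUEUE.append(req)         # single queue => FIFO start order
            return req

The scheduler above then drains GLOBAL_QUEUE, so mirror selection
happens at download time rather than at urlgrab() time.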