On 2016-01-20, Michael Lambert <[email protected]> wrote:
>> On 19 Jan 2016, at 03:57, Erling Westenvik <[email protected]> wrote:
>>
>>> On Tue, Jan 19, 2016 at 01:26:15AM -0600, Luke Small wrote:
>>> then it converts all the parsed http and ftp mirrors into http and ftp
>>> downloads, reducing them to a non-redundant list of http mirrors (it
>>> has to, in order to easily call ftp on them). It downloads SHA256 from
>>> each mirror while the parent times how long the transfer takes. If one
>>> takes too long, it kills the ftp call and moves on to the next. Then it
>>> sorts the results and puts the winner in /etc/pkg.conf
>>
>> So the program basically makes network connections to potentially some
>> 120 servers all across the world, and the "winner" is picked based on
>> how fast each one served a 1.9K text file?
>
> Which isn't even a big enough transfer to get TCP out of slow start.

Speaking of which, you'll probably get a bigger speed improvement
by switching to HTTP/1.1 with keepalives and picking *any* mirror
on the same continent than by fine-tuning from the current list...

That is probably more interesting to implement too :-)
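For reference, the scheme quoted at the top of the thread (fetch SHA256
from each mirror, drop the slow ones, sort, keep the winner) can be
sketched roughly as below. This is a minimal Python sketch, not Luke's
actual program, which forks and shells out to ftp(1); the `fetch`
callback and all names here are hypothetical, and the timeout stands in
for the parent killing a slow child process.

```python
import time

def fastest_mirror(mirrors, fetch, timeout=5.0):
    """Return the mirror that served its SHA256 file fastest, or None.

    `fetch(url, timeout)` is any callable that downloads the URL and
    raises on failure; a slow or failed mirror is simply skipped.
    """
    results = []
    for base in mirrors:
        url = base.rstrip("/") + "/SHA256"
        start = time.monotonic()
        try:
            fetch(url, timeout)
        except Exception:
            continue                      # unreachable mirror: skip it
        elapsed = time.monotonic() - start
        if elapsed <= timeout:            # too slow: discard, like killing ftp
            results.append((elapsed, base))
    results.sort()                        # fastest elapsed time first
    return results[0][1] if results else None
```

As Erling and Michael point out, though, a single 1.9K transfer never
gets TCP out of slow start, so the measured times mostly reflect
connection setup latency rather than sustained throughput.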
