(Accidentally sent private reply).

Tony Godshall wrote:
> On 10/17/07, Matthias Vill <[EMAIL PROTECTED]> wrote:
>> Tony Godshall wrote:
>>> If it was me, I'd have it default to backing off to 95% by default and
>>> have options for more aggressive behavior, like the multiple
>>> connections, etc.
>> I don't like a default back-off rule. I often encounter downloads
>> whose speed fluctuates a lot. The idea that the first few seconds
>> might happen to give quite a bad speed, and that I'd then stay capped
>> when I could get much more out of the connection, is just not
>> satisfying.
>
> You might be surprised, but I totally agree with you.  A default
> backoff rule would only make sense if the measuring was better.  E.g.
> a periodic ramp-up/back-off behavior to achieve 95% of the maximum
> measured rate.
>
>>> I'm surprised multiple connections would buy you anything, though.  I
>>> guess I'll take a look through the archives and see what the argument
>>> is.  Does one tcp connection back off on a lost packet and the other
>>> one gets to keep going?  Hmmm.
>> I guess you get improvements if e.g. on your side you have more free
>> bandwidth than on the source side. Having two connections then means
>> that you get almost twice the download speed, because you have two
>> connections competing for free bandwidth, and ideally every
>> connection made to the server is equally fast.
>>
>> So in cases, where you are the only one connecting, you probably win
>> nothing.
>
> Ah, I get it.  People want to defeat sender rate-limiting or other QOS
> controls.
>
> The opposite of nice.  We could call it --mean-mode.  Or --meanness n,
> where n=2 means I want to have two threads/connections, i.e. twice as
> mean as the default.
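For concreteness, the periodic ramp-up/back-off idea quoted above could be sketched roughly like this. This is not actual Wget code; the class and all of its names are invented purely for illustration:

```python
# Hypothetical sketch (not Wget code): a controller that tracks observed
# throughput and suggests capping the transfer at 95% of the peak rate
# measured so far, while still probing upward until a peak is known.

class RateController:
    """Suggest a bandwidth cap at 95% of the best throughput seen."""

    def __init__(self, backoff=0.95):
        self.backoff = backoff
        self.peak_bps = 0.0

    def observe(self, bytes_received, seconds):
        # Record the throughput of the last measurement window.
        rate = bytes_received / seconds
        self.peak_bps = max(self.peak_bps, rate)

    def cap(self):
        # No cap until we have measured anything; afterwards, throttle
        # to 95% of the peak observed rate.
        if self.peak_bps == 0.0:
            return float("inf")
        return self.backoff * self.peak_bps

ctl = RateController()
ctl.observe(1_000_000, 1.0)   # 1 MB/s window
ctl.observe(2_000_000, 1.0)   # 2 MB/s window becomes the new peak
print(int(ctl.cap()))         # 95% of 2,000,000 B/s = 1900000
```

Periodically resetting `peak_bps` would make this adapt to falling link speed as well, which addresses the "only measured during a slow start" objection.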

Oh, I think you misunderstand. I have no intention of providing such a
thing. That's what "download accelerators" are for, and as much as some
people may want Wget to be one, I'm against it.

However, multiple simultaneous connections to _different_ hosts could
be very beneficial, as latency on one server won't mean we sit around
waiting for it before downloading from others. And up to two
connections to the same host will also be supported, though probably
only for separate downloads (that way, we can be sending requests on
one connection while we're downloading on another). The HTTP spec says
that clients _should_ have a maximum of two connections to any one
host, so we appear to be justified in doing that. However, it will
absolutely not be done by default. Among other things, multiple
connections will destroy the way we currently do logging, which in and
of itself is a good reason not to do it, apart from niceness.
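To illustrate the two-connections-per-host limit concretely, here's a rough sketch of what such bookkeeping might look like. Again, this is not Wget code; the registry and its names are made up for illustration:

```python
# Hypothetical sketch: a registry enforcing the HTTP/1.1 guideline of
# at most two simultaneous connections per host. Names are illustrative.

from collections import defaultdict

MAX_PER_HOST = 2

class ConnectionRegistry:
    def __init__(self):
        self.active = defaultdict(int)

    def try_acquire(self, host):
        # Permit a new connection only while the host is under the cap.
        if self.active[host] >= MAX_PER_HOST:
            return False
        self.active[host] += 1
        return True

    def release(self, host):
        self.active[host] -= 1

reg = ConnectionRegistry()
print(reg.try_acquire("example.com"))  # True
print(reg.try_acquire("example.com"))  # True
print(reg.try_acquire("example.com"))  # False: cap of two reached
reg.release("example.com")
print(reg.try_acquire("example.com"))  # True again after a release
```

A caller that gets `False` would simply queue the download against a different host, or wait, which is exactly the "don't sit around on one slow server" behavior described above.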

> No, well, actually, I guess there can be cases where bad upstream
> configurations result in a situation where more connections don't
> necessarily mean one is taking more than one's fair share of
> bandwidth, but I bet this option will result in more harm than
> good.  Perhaps it should be one of those things that one can do
> oneself if one must but is generally frowned upon (like making a
> version of wget that ignores robots.txt).

You do know that Wget already can be configured to ignore robots.txt, right?

Yeah, I'm already cringing at the idea that people will alter the "two
connections per host" limit to higher values. Even if we limit it to
_one_ per host, though, as long as we're including support for multiple
connections of any sort, it'd be easy to modify Wget to allow them for
the same host.

And multiple connections to multiple hosts will obviously be
beneficial, avoiding bottlenecks and the like. Plus, the planned
(plugged-in) support for Metalink, using multiple connections to
different hosts to obtain the _same_ file, could be very nice for large
and/or very popular downloads.
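A Metalink-style multi-source fetch of a single file might be planned roughly like this (hypothetical sketch; the mirror URLs and function names are made up, but the `Range` header values follow the standard HTTP byte-range syntax):

```python
# Hypothetical sketch: split one file's bytes across several mirrors,
# each fetched with an HTTP Range request, as a Metalink-style
# multi-source download would. Mirror URLs here are invented.

def plan_ranges(file_size, mirrors):
    """Assign each mirror a contiguous byte range (HTTP Range header)."""
    chunk = file_size // len(mirrors)
    plan = []
    for i, url in enumerate(mirrors):
        start = i * chunk
        # The last mirror takes any remainder bytes.
        end = file_size - 1 if i == len(mirrors) - 1 else start + chunk - 1
        plan.append((url, f"bytes={start}-{end}"))
    return plan

mirrors = ["http://a.example/f.iso",
           "http://b.example/f.iso",
           "http://c.example/f.iso"]
for url, rng in plan_ranges(1000, mirrors):
    print(url, rng)
# http://a.example/f.iso bytes=0-332
# http://b.example/f.iso bytes=333-665
# http://c.example/f.iso bytes=666-999
```

Since each range goes to a different host, this stays within the two-connections-per-host guideline while still parallelizing the download.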

--
Micah J. Cowan
Programmer, musician, typesetting enthusiast, gamer...
http://micah.cowan.name/

