> Jim Wright wrote:
> > I think there is still a case for attempting percent limiting.  I agree
> > with your point that we can not discover the full bandwidth of the
> > link and adjust to that.  The approach discovers the current available
bandwidth and adjusts to that.  The usefulness is in trying to be
> > unobtrusive to other users.
>
> Does it really fit that description, though? Given that it runs
> full-bore for 15 seconds (not that that's very long)...

I guess it depends on the type of users you are sharing with and the
upstream switches and routers.

My experience is that with some routers and switches, a single user
wget'ing an ISO can cause web-browsing users to see slow response
times.  For that kind of application, 15 seconds of full-rate transfer
won't make much difference, and in fact there's a big backoff after
that first 15 seconds.
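The probe-then-back-off behavior described above (run unthrottled for a measurement window, then cap throughput at a percentage of the rate observed) could be sketched roughly as below. This is a hedged illustration only: `limited_copy`, `probe_seconds`, and `percent` are hypothetical names, not wget's actual implementation.

```python
import time

def limited_copy(read_chunk, write_chunk, probe_seconds=15.0, percent=50,
                 chunk_size=16 * 1024):
    """Copy data from read_chunk() to write_chunk().

    Runs unthrottled for probe_seconds to measure the currently
    available bandwidth, then caps throughput at `percent` of the
    measured rate for the rest of the transfer.
    """
    start = time.monotonic()
    probed = 0          # bytes moved during the probe window
    cap = None          # bytes/sec limit, set once the probe ends
    sent_after = 0      # bytes moved since the probe ended
    limit_start = None  # when throttling began

    while True:
        data = read_chunk(chunk_size)
        if not data:
            break
        write_chunk(data)
        now = time.monotonic()
        if cap is None:
            probed += len(data)
            elapsed = now - start
            if elapsed >= probe_seconds:
                # Back off to a fraction of the observed rate.
                cap = (probed / elapsed) * (percent / 100.0)
                limit_start = now
        else:
            sent_after += len(data)
            # Sleep just long enough that sent_after / elapsed <= cap.
            target = sent_after / cap
            elapsed = now - limit_start
            if target > elapsed:
                time.sleep(target - elapsed)
    return probed + sent_after
```

Note that this only discovers the bandwidth *available at probe time*; as the thread points out, it cannot learn the link's full capacity, and competing traffic during the probe skews the cap.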

OTOH if you are sharing with latency-sensitive apps (VoIP, realtime
control, etc.) and a wget bogs your app down, you had better fix your
switches and routers: you will be affected just as much by anybody
streaming YouTube from a web browser.  This patch is not a solution
for that use case, and I agree that there really isn't one that an app
like wget can reasonably implement (without delving into nonportable
OS stuff).

Tony
