> > >> - --limit-rate will find your version handy, but I want to hear from
> > >> them. :)

> > > I would appreciate and have use for such an option.  We often access
> > > instruments in remote locations (think a tiny island in the Aleutians)
> > > where we share bandwidth with other organizations.

> > A limitation in percentage doesn't make sense if you don't know
> > exactly how much bandwidth is available.  Trying to determine full
> > bandwidth and backing off from there is IMHO doomed to failure because
> > the initial speed Wget gets can be quite different from the actual
> > link bandwidth, at least in a shared-link scenario.  A --limit-percent
> > implemented as proposed here would only limit the retrieval speed to
> > the specified fraction of the speed Wget happened to get at the
> > beginning of the download.  That is not only incorrect, but also quite
> > non-deterministic.

> > If there were a way to query the network for the connection speed, I
> > would support the "limit-percent" idea.  But since that's not
> > possible, I think it's better to stick with the current --limit-rate,
> > where we give the user an option to simply tell Wget how much
> > bandwidth to consume.

> I think there is still a case for attempting percent limiting.  I agree
> with your point that we cannot discover the full bandwidth of the
> link and adjust to that.  This approach discovers the currently
> available bandwidth and adjusts to that.  The usefulness lies in
> trying to be unobtrusive to other users.

Network conditions do change, and my initial all-out estimate is
certainly not ideal, but it works in many situations.
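
That initial all-out estimate could be sketched roughly as below.  This
is only an illustration, not wget code; `read_chunk`,
`measure_initial_rate`, and `percent_limit` are hypothetical names.
The idea is to read a burst at full speed, time it, and cap the rest of
the transfer at a fraction of the measured rate:

```python
import time

def measure_initial_rate(read_chunk, sample_bytes=65536):
    """Read an initial burst at full speed and estimate link rate in bytes/s."""
    start = time.monotonic()
    got = 0
    while got < sample_bytes:
        chunk = read_chunk()        # hypothetical callback; falsy value = EOF
        if not chunk:
            break
        got += len(chunk)
    elapsed = max(time.monotonic() - start, 1e-6)   # avoid division by zero
    return got / elapsed

def percent_limit(initial_rate, percent):
    """--limit-percent as proposed: a fixed fraction of the one-shot estimate.

    Note the objection quoted above: this pins the cap to whatever speed
    the download happened to start at, which may not reflect capacity."""
    return initial_rate * percent / 100.0
```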

Another way would be to transfer in a more bursty mode rather than
metering at small granularity; that way one could measure the rate of
each burst and take the fastest as the channel capacity.  I worry that
that might be more harmful to those sharing the channel in cases like
Hrvoje's than the initial-burst measurement, and in fact I am thinking
that one could cache the initial-measure value and use it for future
downloads.
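
A bursty transfer along those lines might look like the following
sketch (`read_burst` is a hypothetical callback, not a wget function):
each burst runs at full link speed, so timing it yields a fresh
capacity sample, and idling between bursts keeps the average at the
target rate.

```python
import time

def bursty_transfer(read_burst, burst_bytes, target_rate):
    """Alternate full-speed bursts with idle gaps.

    Each burst is a capacity sample (bytes / burst duration); the idle
    time after a burst is chosen so the burst averages out to
    target_rate.  Returns the best capacity estimate seen, which could
    be cached for future downloads."""
    capacity = 0.0
    while True:
        t0 = time.monotonic()
        n = read_burst(burst_bytes)          # hypothetical: returns 0 at EOF
        if n == 0:
            return capacity
        dt = max(time.monotonic() - t0, 1e-6)
        capacity = max(capacity, n / dt)     # fastest burst so far
        # Idle long enough that this burst averages out to target_rate.
        time.sleep(max(n / target_rate - dt, 0.0))
```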

An alternative to a bursty mode would be to start at full speed,
ramp down until we hit the desired rate, then ramp back up until we
bump the link's ceiling, and down again.  That way we could update the
max-rate estimate periodically and recover from any error that might
have occurred in the initial estimate.  Any thoughts on this behavior?
Less harmful?  More?
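
That sawtooth behavior could be sketched as a small controller (the
class and its fields are illustrative names only, assuming the caller
feeds it periodic throughput measurements): ramp the allowed rate down
to the user's target, then probe upward until measured throughput stops
keeping pace, refreshing the capacity estimate on each cycle.

```python
class SawtoothLimiter:
    """Oscillate the allowed rate between the configured target and the
    probed ceiling, updating the max-rate estimate each cycle."""

    def __init__(self, target_rate, step=0.1):
        self.target = target_rate
        self.capacity = 0.0
        self.allowed = None          # None = initial full-speed phase
        self.rising = False
        self.step = step

    def update(self, measured_rate):
        """Feed the latest measured throughput; returns the new rate cap."""
        self.capacity = max(self.capacity, measured_rate)
        if self.allowed is None:
            self.allowed = self.capacity     # end of initial full-speed probe
        if self.rising:
            if measured_rate >= self.allowed * 0.9:
                self.allowed *= 1 + self.step   # keeping pace: probe higher
            else:
                self.rising = False             # bumped the ceiling; estimate refreshed
        else:
            self.allowed = max(self.allowed * (1 - self.step), self.target)
            if self.allowed <= self.target:
                self.rising = True              # reached desired rate; probe again
        return self.allowed
```

Whether the repeated full-speed probes are gentler on channel-sharers
than one long initial burst is exactly the open question above.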

