Indeed.

On 10/10/07, Hrvoje Niksic <[EMAIL PROTECTED]> wrote:
> Jim Wright <[EMAIL PROTECTED]> writes:
>
> > I think there is still a case for attempting percent limiting.  I
> > agree with your point that we can not discover the full bandwidth of
> > the link and adjust to that.  The approach discovers the current
> > available bandwidth and adjusts to that.  The usefullness is in
> > trying to be unobtrusive to other users.
>
> The problem is that Wget simply doesn't have enough information to be
> unobtrusive.  Currently available bandwidth can and does change as new
> downloads are initiated and old ones are turned off.  Measuring
> initial bandwidth is simply insufficient to decide what bandwidth is
> really appropriate for Wget; only the user can know that, and that's
> what --limit-rate does.
My patch (and the doc change in my patch) doesn't claim to be totally
unobtrusive - it has a particular, documented behavior, which is to try to
be less obtrusive than your typical get-it-for-me-right-now-as-fast-as-you-can
download. Obviously people who require the level of unobtrusiveness you
define shouldn't be using it. Then again, people who require that level
probably need to get routers that implement a little more "fairness" or QoS,
so that one TCP connection can't lock out other users.

My patch just does automatically what I used to do manually: start obtrusive
and then scale back to less obtrusive for the rest of the download. Even
competent non-sysadmin people often are not apprised of the technical details
of the networks they use, but they may still want to be reasonably nice to,
for example, the other people using the wifi at the cybercafe. It's certainly
a step above the naive behavior - the naive user doesn't even know how much
bandwidth they're taking (their typical tools, like MSIE, don't even tell
them!).

Tony G
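For anyone skimming the thread, the "start obtrusive, then scale back"
behavior being discussed can be sketched roughly like this. This is a
minimal Python sketch, not wget's actual C implementation; the function
name, the probe size, and the 50% fraction are all illustrative
assumptions, not values from the patch:

```python
import time

def download_with_backoff(read_chunk, probe_bytes=256 * 1024,
                          fraction=0.5, chunk_size=16 * 1024):
    """Download unthrottled for an initial probe window, measure the
    achieved rate, then throttle the rest of the transfer to a
    fraction of that measured rate.

    read_chunk(n) is any callable returning up to n bytes, or b"" at EOF.
    """
    data = bytearray()
    start = time.monotonic()
    # Phase 1: unthrottled probe to estimate currently available bandwidth.
    while len(data) < probe_bytes:
        chunk = read_chunk(chunk_size)
        if not chunk:
            return bytes(data)          # transfer finished inside the probe
        data += chunk
    elapsed = max(time.monotonic() - start, 1e-6)
    limit = (len(data) / elapsed) * fraction   # bytes/sec cap for the rest
    # Phase 2: throttled remainder; sleep whenever we are ahead of the cap.
    phase_start = time.monotonic()
    phase_bytes = 0
    while True:
        chunk = read_chunk(chunk_size)
        if not chunk:
            return bytes(data)
        data += chunk
        phase_bytes += len(chunk)
        expected = phase_bytes / limit          # time this much data "should" take
        actual = time.monotonic() - phase_start
        if expected > actual:
            time.sleep(expected - actual)
```

As the quoted objection points out, the rate measured during the probe is
only a snapshot: if other users stop or start transfers later, the cap does
not re-adjust, which is exactly the limitation being debated.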