Because of the way the always_rest logic has been restructured, if a non-fatal error occurs on the initial attempt, subsequent retries forget about always_rest and clobber the existing file. Ouch.
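
To make the failure mode concrete, it boils down to something like this (a simplified sketch with made-up names, not the actual http_loop code):

/* Simplified sketch of the clobbering failure mode; made-up names,
   not the actual Wget source.  */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for one download attempt: returns false on a non-fatal error.  */
static bool
try_download (const char *file, long restval)
{
  printf ("GET %s with Range: bytes=%ld-\n", file, restval);
  return false;                 /* pretend this attempt hits a non-fatal error */
}

int
main (void)
{
  /* Because of -c, the restart offset is taken from the existing file.  */
  long restval = 12345;

  for (int attempt = 0; attempt < 3; attempt++)
    {
      if (try_download ("file.bin", restval))
        break;
      /* The bug: the retry path starts over with a zero offset instead of
         keeping (or re-reading) the existing file's size, so the next
         attempt truncates the partial file that -c was supposed to save.  */
      restval = 0;
    }
  return 0;
}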

Also, the behavior of -c when downloading from a server that does not support ranges has changed since 1.9.1. (Or seems to have, from looking at the code; I haven't actually tested.) Previously, Wget would bail in such situations:

Continued download failed on this file, which conflicts with `-c'.
Refusing to truncate existing file `%s'

Now, it will re-download the whole file and discard bytes until it gets to the right position (roughly what the sketch below does). I think this change deserves explicit mention in the NEWS file. There's an entry about the new logic when Range requests fail, but I don't think it's obvious that this affects -c.
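
Roughly, the new code path amounts to the following skip loop (again a sketch with made-up names, not the actual source):

/* Rough sketch of the new behavior: the server ignored our Range header
   and is sending the whole body, so read and throw away the first
   `restval' bytes before resuming the write.  Names are made up.  */
#include <stdio.h>

static void
skip_already_downloaded (FILE *sock, long restval)
{
  char buf[8192];
  long skipped = 0;

  while (skipped < restval)
    {
      size_t want = sizeof buf;
      if ((long) want > restval - skipped)
        want = (size_t) (restval - skipped);

      size_t got = fread (buf, 1, want, sock);
      if (got == 0)
        break;                  /* connection closed before we got there */
      skipped += (long) got;    /* bytes read and discarded */
    }
  /* ...everything after this point is appended to the existing file.  */
}

int
main (void)
{
  /* Demo only: treat stdin as the "socket" and skip the first 10 bytes.  */
  skip_already_downloaded (stdin, 10);
  return 0;
}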

Also, I think the old behavior was useful in some situations. If you're short on bandwidth, it might not be worth it to re-get the whole file, especially when it's a popular file and there's likely to be another mirror that does support Range. What would you think of an option to disallow start-over retries?


