hi,

if I observed correctly, wget behaves this way:

errors are classified into two classes: critical and non-critical errors.

when a non-critical error (e.g. a timeout) occurs, wget retries, continuing at the byte where the last transmission stopped
(if configured that way).
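For reference, a minimal local sketch of that resume-at-byte logic (the file names and contents here are made up; the real invocation would just be `wget --continue --tries=20 <URL>`):

```shell
# Simulate wget's resume behaviour locally.
# The real flags would be something like:
#   wget --continue --tries=20 <URL>
# (-c / --continue resumes at the size of the partial file)

# the full "remote" file
printf 'abcdefghij' > remote.bin

# the partial local file: the first 4 bytes already arrived
head -c 4 remote.bin > partial.bin

# resume: append only the bytes after the current local size
offset=$(wc -c < partial.bin)
tail -c +"$((offset + 1))" remote.bin >> partial.bin

# verify the resumed file matches the original
cmp remote.bin partial.bin && echo "resume ok"
```

The point is that the server only has to send bytes `offset` and onward; wget asks for them with an HTTP Range request.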


if a critical error (like access denied or file not found) occurs, wget stops.

I have now experienced, for the very first time, wget OVERWRITING a partially retrieved file because of a bug.
It was a pain in the ass, because I had already been waiting 1:30 hours to download the first half of 600 MB.


Wget tried to continue, but the server answered with the remaining file size (I believe), not the complete file size. So wget got confused and restarted from scratch.
No, sorry... I do not have the logfile any more, but I can get the link (it was a FileFront download).
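If my guess is right, the confusion would be between two response headers: on a ranged request, Content-Length gives only the *remaining* byte count, while the complete size is the last field of Content-Range. A small illustration with hypothetical headers (these are made up, not from the actual server response):

```shell
# Hypothetical 206 response for a resumed download that had already
# fetched 300 MB (314572800 bytes) of a 600 MB (629145600 byte) file:
headers='HTTP/1.1 206 Partial Content
Content-Length: 314572800
Content-Range: bytes 314572800-629145599/629145600'

# Content-Length is only the REMAINING byte count ...
echo "$headers" | awk '/^Content-Length:/ { print $2 }'

# ... the total size is the part after the "/" in Content-Range.
echo "$headers" | awk -F'/' '/^Content-Range:/ { print $2 }'
```

A client that compares its local file size against Content-Length instead of the Content-Range total would wrongly conclude the file changed and start over.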


This makes me think about some other possible behaviours:

- wget could be forced to always retry.
Since yesterday, and in the past, I have experienced many false stops because of a bad server or connection. It should probably delay the retry in the critical case.
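As far as I know, wget's documented options already get close to "always retry, with a delay" (exact behaviour may vary by version; the URL below is just a placeholder):

```shell
# --tries=0 (or 'inf') retries indefinitely on non-fatal errors,
# --waitretry=SECONDS backs off (1s, 2s, ... up to SECONDS) between retries,
# --retry-connrefused additionally retries on "connection refused".
wget --continue --tries=0 --waitretry=30 --retry-connrefused \
     http://example.com/big-file.bin
```

What is missing is exactly the case above: treating a seemingly "critical" server answer as retryable after a delay.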


- if wget decides that continuing is not possible due to a server limitation, it should NOT delete the file, but create a diff, if that seems appropriate.
(Is the diff command able to work on binary files?)
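To my own question: GNU diff only reports "Binary files ... differ", but cmp does work byte-by-byte on binary data (and tools like xdelta or rdiff produce real binary deltas). A quick check (file names and contents are just examples):

```shell
# two small binary files differing in one byte
printf '\000\001\002\003' > a.bin
printf '\000\001\377\003' > b.bin

# cmp handles binary data; -l lists each differing offset
# together with the two byte values in octal
cmp -l a.bin b.bin || true
```

So for a "keep the partial file and record the difference" feature, cmp or a delta tool would be the right building block, not diff.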



Jan
