Unbelievable!

This was with Web Downloader for X using only a 5 kilobyte rollback on each
failed retry.
In fact, I split the download into 10 threads and killed the threads over 30
times.
Are there any plans to include rollback support with a user-specified value
in lftp or wget?
Web Downloader for X is great, unless you want to run your downloads in a
console :).
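
For reference, here is a rough sketch of the kind of rollback-on-resume I
mean. It is only an illustration (the URL, chunk size and rollback size are
made up, and none of this is what lftp or wget actually does): before
appending resumed data, re-fetch the last N bytes of the partial file with an
HTTP Range request and compare them against what is already on disk,
rewriting the overlap region if they differ.

    import os
    import urllib.request

    URL = "http://example.com/file.iso"   # hypothetical URL
    DEST = "file.iso.part"
    ROLLBACK = 5 * 1024                    # user-specified rollback, e.g. 5 KB

    def resume_with_rollback(url, dest, rollback):
        # Assumes the server honors HTTP Range requests.
        size = os.path.getsize(dest) if os.path.exists(dest) else 0
        start = max(size - rollback, 0)
        req = urllib.request.Request(url, headers={"Range": "bytes=%d-" % start})
        with urllib.request.urlopen(req) as resp, \
                open(dest, "r+b" if size else "wb") as f:
            # Re-download the last `rollback` bytes and compare them with what
            # is already on disk; a mismatch means the earlier data is suspect.
            overlap = resp.read(size - start)
            f.seek(start)
            if f.read(size - start) != overlap:
                # Overlap differs: rewrite it from the fresh data instead of
                # trusting the bytes from the interrupted transfer.
                f.seek(start)
                f.truncate()
                f.write(overlap)
            # Append the remainder of the file in chunks.
            f.seek(0, os.SEEK_END)
            while True:
                chunk = resp.read(64 * 1024)
                if not chunk:
                    break
                f.write(chunk)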



[war@p300 x]$ /usr/bin/time md5sum -c file.iso.md5
file.iso: OK
22.56user 5.62system 0:36.38elapsed 77%CPU (0avgtext+0avgdata
0maxresident)k
0inputs+0outputs (113major+13minor)pagefaults 0swaps
[war@p300 x]$





Justin Piszcz wrote:

> I was curious if lftp or wget will ever support a rollback feature and
> somehow verify that the bytes are correct where a file has been resumed.
>
> Why?  This is stated below:
>
> LFTP VS WGET EXPERIMENT:
>
> PROBLEM: With lftp, many of my downloads get corrupted.
>          This is because the connection between my satellite link and my
>          ISP gets severed, causing the FTP transfer to resume.  Regular
>          connection breaks are normal on a satellite link; they may last
>          1ms or less, but they still force a resume, and that causes the
>          corruption.
>
> QUESTION: Does lftp corrupt files more often than, say, wget?
>
> TEST LFTP: With each 700MB pull with lftp, about 6% of the files are bad.
>            This means 3 to 4 re-downloads.  I've downloaded over 1 terabyte
>            of data and the average stays around 6%; the more resumes, the
>            greater the percentage of file problems.
>
> TEST WGET: First 700MB file transfer: 0.00% corruption.
>            Second 700MB file transfer: 0.00% corruption.
>            Third 700MB file transfer: 5.00% corruption.
>            After several more 700MB pulls, it is about the same.
>
> POINT: It is a single-character error.  I've done multiple splits and diffs
>        and found only 1 character different from the original.  This is,
>        however, catastrophic for binary files.
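
For what it's worth, here is a rough sketch of how one could locate that
single differing byte between a good and a bad copy, along the lines of the
split-and-diff approach described above.  The file names are just
placeholders.

    # Minimal sketch: walk two copies of the same file in parallel and report
    # every offset where the bytes differ.  File names are placeholders.
    def diff_offsets(good_path, bad_path, chunk_size=64 * 1024):
        offsets = []
        with open(good_path, "rb") as good, open(bad_path, "rb") as bad:
            pos = 0
            while True:
                a = good.read(chunk_size)
                b = bad.read(chunk_size)
                if not a and not b:
                    break
                for i, (x, y) in enumerate(zip(a, b)):
                    if x != y:
                        offsets.append(pos + i)
                if len(a) != len(b):
                    # One file is shorter: record where it ends and stop.
                    offsets.append(pos + min(len(a), len(b)))
                    break
                pos += len(a)
        return offsets

    print(diff_offsets("file.iso.good", "file.iso.bad"))

On a 700MB file with a single flipped byte this should print exactly one
offset.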
