Downloading files with particular extensions does, indeed, require parsing
the web pages that link to them. Those pages could, however, be read in
memory rather than written to disk themselves. So I, too, wish that program
option were available (unfortunately, I am not able to do the
Dear developers of wget,
If you find some free time, would you please implement a feature that
progressively downloads a file that is still growing on the remote site?
Though it might be obvious, let me explain what I mean.
In the real world, some files grow on the remote filesystem, e.g.
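Until such a feature exists, the behaviour being requested can at least be simulated. The sketch below is purely illustrative (plain file reads stand in for HTTP requests, and the "remote" file is just a local temp file; `fetch_new_bytes` is a made-up helper, not anything in wget): it polls a file that keeps growing and fetches only the bytes appended since the previous poll, the way `tail -f` follows a log.

```python
import os
import tempfile

# Illustrative simulation only -- plain file reads stand in for HTTP
# requests, and the "remote" file is just a local temp file.
def fetch_new_bytes(path, offset):
    """Return (data, new_offset): the bytes appended past `offset`."""
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read()
    return data, offset + len(data)

remote = tempfile.NamedTemporaryFile(delete=False)
remote.close()

downloaded = b""
offset = 0
for chunk in (b"first\n", b"second\n", b"third\n"):
    with open(remote.name, "ab") as f:  # the file grows on the "remote" side
        f.write(chunk)
    data, offset = fetch_new_bytes(remote.name, offset)
    downloaded += data                  # append only what is new

os.unlink(remote.name)
print(downloaded)  # -> b'first\nsecond\nthird\n'
```

Over HTTP, the same effect would require Range requests (`bytes=<offset>-`) and a server that honours them.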
On Dec 27, 2011, at 4:36 PM, Keisial wrote:
On 27/12/11 17:05, Michal Tausk wrote:
The --ignore-length option is (logically) not taken into consideration
when --continue is used, because --continue needs to compute the difference
in size between the downloaded file and the remote file. However, the file is
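That size comparison can be sketched as follows. This is an illustrative model of the arithmetic, not wget's actual code, and `resume_range` is a hypothetical helper name; the point is that when the remote length is ignored or unknown, there is no difference to compute and nothing to resume from.

```python
# Illustrative model of the size arithmetic behind --continue; not wget's
# real implementation.  resume_range is a hypothetical helper.
def resume_range(local_size, remote_size):
    """Range header value for the missing suffix, or None if nothing to do."""
    if remote_size is None:           # length ignored/unknown: cannot resume
        return None
    if local_size >= remote_size:     # local copy already complete
        return None
    return f"bytes={local_size}-"     # ask for everything past what we have

print(resume_range(4096, 10240))      # -> bytes=4096-
print(resume_range(10240, 10240))     # -> None
print(resume_range(4096, None))       # -> None
```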
Michal Tausk wrote:
If you can put Wireshark on it, check to see which FIN comes over first. I
bet Keisial is right: the server is telling wget "I'm done" by sending
the FIN.
Hope this helps
pedz
--
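For reference, the FIN behaviour suspected above is easy to observe even without Wireshark. The snippet below is a minimal local sketch (it uses a socket pair rather than a real HTTP connection, so everything in it is illustrative): once the peer closes, recv() returns an empty byte string, which is exactly how a client such as wget sees the server's FIN as end-of-stream.

```python
import socket

server, client = socket.socketpair()
server.sendall(b"partial body")   # only part of the data ever arrives
server.close()                    # peer closes: a FIN goes to the client

chunks = []
while True:
    data = client.recv(4096)
    if not data:                  # b'' means the peer sent FIN; stream over
        break
    chunks.append(data)
client.close()

received = b"".join(chunks)
print(received)                   # -> b'partial body'
```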
I can try that, but you are both probably right. Even so, it should