Hello!

I have Wget version 1.10.2 running on a PC (Cygwin on
Windows XP; 768 MB RAM; 1150 MB virtual memory). I was
downloading files from a list of URLs when one of the
files turned out to be unexpectedly large, and I got
the following:

Command and subsequent error:

=============
$ wget --execute=input=c:/test.txt --execute=dir_prefix=c:/data
15:33:57 URL:http://goldenspud.com/webrog/archives/category/geek-stuff/python/ [335607029] -> "c:/data/index.html" [1]
wget: realloc: Failed to allocate 536870912 bytes; memory exhausted.
=============

The above URL is the only entry in the input file,
test.txt. An index.html file of size 327,720 KB was
downloaded to the "c:/data" directory. In another
attempt, the same file was downloaded as 420,609 KB,
but the run failed with the same error.

Firstly, this file's size is well under the 2 GB limit,
so ideally there should be no problem. Incidentally,
536870912 bytes is exactly 512 MB, so it looks as if
Wget is growing an in-memory buffer by doubling (256 MB
to 512 MB) and holding the whole file in memory. Should
I change any settings?

Secondly, I do not actually want to download this large
file, but as far as I know there is no way to make Wget
skip files above a given size. Can you suggest a quick
change to the code (or a workaround) to skip large
files?
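
In the meantime, the only workaround I can think of is
to check the size myself with a HEAD request before
handing a URL to Wget. A rough Python sketch of what I
mean, assuming the server reports Content-Length (the
100 MB cutoff is just an example I picked):

=============
import urllib.request

MAX_BYTES = 100 * 1024 * 1024  # example cutoff: skip anything over 100 MB

def too_large(url):
    """HEAD the URL and report whether Content-Length exceeds the cutoff."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        size = resp.headers.get("Content-Length")
    # If the server does not report a size, let the download proceed.
    return size is not None and int(size) > MAX_BYTES
=============

Still, a real size-limit option in Wget itself would be
much nicer.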

Thirdly, if I have many URL entries in the input file,
the wget process ends at this URL once the error
occurs, and the remaining URLs are never fetched. Is
there a way to "catch" this error and continue?
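
The best I have come up with so far is to drive Wget
from a small wrapper, one invocation per URL, so that a
failure on one URL cannot end the whole run. A minimal
Python sketch (the paths match my command above, and it
could be combined with the size check from the previous
sketch):

=============
import subprocess

with open("c:/test.txt") as urls:  # the same input file as above
    for line in urls:
        url = line.strip()
        if not url:
            continue
        # One wget invocation per URL: if this one dies with
        # "memory exhausted", the loop still moves on.
        result = subprocess.run(["wget", "--directory-prefix=c:/data", url])
        if result.returncode != 0:
            print(f"wget failed on {url} (exit {result.returncode}); continuing")
=============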

Any help is appreciated!

Thanks,
Rahul.


                