Hello,

I am a former computer tech, and I have followed the
wget instructions closely. I am using wget 1.9 in
conjunction with the wgetgui program.

I have confirmed that resumption works with smaller
binary files of up to 2.3 gigabytes.

What happens is this: when I download the Wikipedia
database, which is about 8 gigabytes, the download
proceeds and remains resumable up to about the 4
gigabyte mark. Past that point, when I attempt to
resume, the internet connection appears to be working,
but the file just sits there and does not increase in
size.
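To be precise about "just sits there": the file's size
on disk stops changing entirely. Here is a minimal
Python sketch of the kind of check I mean; the
filename is only a placeholder for my actual partial
file:

    import os
    import time

    # Placeholder name for the partially downloaded dump.
    PART_FILE = "wikipedia-dump.xml.bz2"

    # Poll the on-disk size once a minute; if it never
    # changes, the resumed transfer has stalled rather
    # than merely slowed down.
    last_size = -1
    while True:
        size = os.path.getsize(PART_FILE)
        print(time.strftime("%H:%M:%S"), size, "bytes")
        if size == last_size:
            print("  no growth since the last check")
        last_size = size
        time.sleep(60)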

I theorize that the data stream is being corrupted,
and my next step will be to "shave" pieces off the end
of the file, in increments of several megabytes, until
I reach the uncorrupted part (a sketch of that step
follows).
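A minimal Python sketch of that shaving step, assuming
a placeholder filename and a 4 megabyte increment
(both are illustrations, not my actual values):

    import os

    # Placeholder name for the partially downloaded dump.
    PART_FILE = "wikipedia-dump.xml.bz2"

    # Amount to shave off the end before each resume attempt.
    SHAVE_BYTES = 4 * 1024 * 1024  # several megabytes

    size = os.path.getsize(PART_FILE)
    new_size = max(0, size - SHAVE_BYTES)

    # Truncate the file in place; "wget -c" should then
    # resume from the new, smaller offset.
    with open(PART_FILE, "r+b") as f:
        f.truncate(new_size)

    print("shaved", size - new_size, "bytes; file is now",
          new_size)

After each truncation I would retry the resume with
wget's continue option and repeat until the download
starts growing again.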

Please let me know at this email address what is going
on and why this is happening, as I am not a developer
and am not currently subscribed to the mailing list,
but I do need wget working properly to get the
database.

Thanks,

Jonathan.
