The expectation from the man page, and from straightforward reasoning, is that
the --no-clobber option will simply skip existing files and continue on to
download the remaining, non-existing files. That is not what I see. The truth
is that the man page, for all its words, fails to clarify the semantics well
enough.
If I can't have wget just continue an existing website download (or part
thereof) with -nc, there must be some other way to do this that has nothing to
do with timestamps. The simple and straightforward criterion should be
"file doesn't exist ==> download it now". If bailing out on the first
already-existing file is the intended behaviour, the design is deficient and
wget is less than useful.
I hope this is a bug, or at least that there is some other way to achieve the
simple behaviour I am expecting ("file doesn't exist ==> download it now").
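In the meantime, the "file doesn't exist ==> download it now" rule can be
approximated outside of wget with a small shell wrapper. This is only a sketch,
not wget's actual -nc semantics: it assumes you already have a flat list of
URLs (it does no recursive link extraction), and the local-path derivation
simply mirrors wget's default host/path directory layout.

```shell
# Hypothetical workaround: fetch a URL only if its local copy is missing.
# The function name and the url-list.txt file are assumptions, not part
# of wget itself.
fetch_if_missing() {
  url="$1"
  file="${url#*://}"            # strip the scheme to get wget's default local path
  if [ -e "$file" ]; then
    echo "skipping $file (already there)"
  else
    wget -x "$url"              # -x (--force-directories) recreates the directory tree
  fi
}

# Usage: feed it a pre-collected list of URLs, one per line:
#   while read -r url; do fetch_if_missing "$url"; done < url-list.txt
```

This sidesteps the question of what -nc is supposed to do in recursive mode,
at the cost of having to produce the URL list yourself (e.g. from a previous
wget log).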
This was my command line:
wget -erobots=off -r http://pure-data.cvs.sourceforge.net/pure-data/doc/tutorials/
I interrupted the first download because I thought something was going wrong,
then decided to continue, so I entered this command line:
wget -erobots=off -nc -r http://pure-data.cvs.sourceforge.net/pure-data/doc/tutorials/
wget bailed out on the first file:
File `pure-data.cvs.sourceforge.net/pure-data/doc/tutorials/index.html' already there; not retrieving.
Aborted
??