Hi,
I've just come across the following remark in the wget manual page (1.10.2),
about the -c option:
> Wget has no way of verifying that the local file is really a valid prefix of
> the remote file.
This is not quite true. It could at least check the remote and local
file time stamps
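The check suggested above could be sketched as follows. This is a hypothetical helper, not wget's actual code, and it assumes the server sends a standard `Last-Modified` header:

```python
# Sketch of the proposed check: before resuming with -c, compare the
# local file's mtime with the remote Last-Modified date. If the remote
# file changed after the partial local copy was written, the local file
# is unlikely to be a valid prefix of it. Hypothetical helper only.
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def safe_to_resume(local_mtime_utc: datetime, last_modified_header: str) -> bool:
    remote_mtime = parsedate_to_datetime(last_modified_header)
    # Resuming is only plausible if the remote file has not been
    # modified since the local partial copy was written.
    return remote_mtime <= local_mtime_utc

local = datetime(2006, 5, 2, 12, 0, 0, tzinfo=timezone.utc)
print(safe_to_resume(local, "Mon, 01 May 2006 10:00:00 GMT"))  # True: remote is older
print(safe_to_resume(local, "Wed, 03 May 2006 10:00:00 GMT"))  # False: remote changed
```

This would not prove the local file is a valid prefix, but it would catch the common case where the remote file was replaced between runs.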
Hello,
Current wget versions seem to have broken -c:
wget --cut-dirs=1 -m -nH ftp://ftp.gnu.org/gnu/wget/
wget --cut-dirs=1 -m -nH -c ftp://ftp.gnu.org/gnu/wget/
The second .listing contains duplicate entries and wget tries to download each
file twice. It is smart enough to detect that the fi
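The duplicate-entry symptom can be checked mechanically. A sketch (hypothetical, and assuming the filename is the last whitespace-separated field of each LIST line, which is a simplification of real FTP listing formats):

```python
# Find filenames that appear more than once in a .listing file.
# Hypothetical diagnostic, not part of wget.
from collections import Counter

def duplicate_names(listing_lines):
    # Last whitespace-separated field of a LIST line is usually the filename.
    names = [line.split()[-1] for line in listing_lines if line.strip()]
    return [n for n, c in Counter(names).items() if c > 1]

listing = [
    "-rw-r--r-- 1 ftp ftp 1234 May 26 2001 wget-1.7.tar.gz",
    "-rw-r--r-- 1 ftp ftp 1234 May 26 2001 wget-1.7.tar.gz",
    "-rw-r--r-- 1 ftp ftp  999 May 26 2001 wget-1.6.tar.gz",
]
print(duplicate_names(listing))  # ['wget-1.7.tar.gz']
```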
Lukasz Bolikowski wrote:
>
> Hello!
>
> I think the -c option in wget results in misleading output.
> I have been downloading
>
> ftp://ftp.kernel.org/pub/dist/superrescue/v2/superrescue-2.0.0a.iso.gz
>
> which is 514596847 bytes long. I aborted the downloading
Hello!
I think the -c option in wget results in misleading output.
I have been downloading
ftp://ftp.kernel.org/pub/dist/superrescue/v2/superrescue-2.0.0a.iso.gz
which is 514596847 bytes long. I aborted the downloading after 246345728
bytes and then ran:
wget -c --passive-ftp -o super-log
Henrik van Ginhoven <[EMAIL PROTECTED]> writes:
> On Sat, May 26, 2001 at 01:56:41PM +0200, Hrvoje Niksic wrote:
> > Sp00l <[EMAIL PROTECTED]> (d'uh, damn mail) writes:
> > > Aah.. Yes, you are right as always. However, once wget determines the
> > > (index.html) file to be completely downloaded,
On Sat, May 26, 2001 at 01:56:41PM +0200, Hrvoje Niksic wrote:
> Sp00l <[EMAIL PROTECTED]> (d'uh, damn mail) writes:
> > Aah.. Yes, you are right as always. However, once wget determines the
> > (index.html) file to be completely downloaded, shouldn't it start going
> > through the file for links
> > The server does not support continued downloads, which conflicts with `-c'.
> But that's true, isn't it?
> The server didn't respond with a `Range' header, hence "continued
> download" doesn't work. Since you specified that the download should
> be continued, Wget refuses to truncate your file
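The behaviour described here follows from HTTP/1.1 range semantics: a server that honours a `Range: bytes=N-` request replies `206 Partial Content` with a `Content-Range` header, while a plain `200 OK` means the body starts again from byte zero. A small sketch of the decision (a hypothetical helper illustrating the rule, not Wget's code):

```python
# Decide whether a response to "Range: bytes=N-" actually continues the
# download. Per HTTP/1.1, only 206 with a Content-Range header means the
# server resumed; 200 means it is sending the whole file from the start.
def server_resumed(status: int, headers: dict) -> bool:
    return status == 206 and "Content-Range" in headers

# Resuming the 514596847-byte file from the thread at byte 246345728:
print(server_resumed(206, {"Content-Range": "bytes 246345728-514596846/514596847"}))  # True
print(server_resumed(200, {"Content-Length": "514596847"}))  # False: -c cannot proceed
```

In the False case, appending the 200 response body to the partial file would corrupt it, which is why refusing to continue is the safe behaviour.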
Henrik van Ginhoven <[EMAIL PROTECTED]> writes:
> $ src/wget -d -c -r -np http://ftp.sunet.se/pub/gnu/wget/
I assume that ftp.sunet.se/pub/gnu/wget/index.html already exists at
this point.
> /.../
> ---request begin---
> GET /pub/gnu/wget/ HTTP/1.0
> User-Agent: Wget/1.7-pre1
> Host: ftp.sunet.
On Sat, May 26, 2001 at 01:07:46PM +0200, Hrvoje Niksic wrote:
>
> But Wget *thinks* that the server doesn't support it. Sending a debug
> log of (the relevant part of) the Wget run would probably help in
> determining what went wrong.
oops, forgot that (actually, I was afraid I had missed some n
Henrik van Ginhoven <[EMAIL PROTECTED]> writes:
> Neither English nor networking are my native languages, but with
> ``continued downloads'' I take it wget means ``continue on a file
> where you left off'', which in this case would be untrue, because
> sunet.se does support it.
But Wget *thinks*
Just tried out the latest and greatest CVS version of wget, and I'm a bit
confused about the way the --continue option works, or doesn't work, now.
After doing a mirror with -np -r on http://ftp.sunet.se/pub/gnu/wget/, I run
the same command again, this time with -c, and I get:
...