Hi all,

When using wget recently to grab some "free" MP3s from Mike Baas'
website, I had to kill it halfway through to do some other downloading.

When I continued, I used `wget -c ...` but I got "strange" behaviour:

$ wget -c "http://68.106.74.139/mp3/Mike_Baas/4/mp3/Mike_Baas_-_4_-_01_-_Why_Can't_I_Slow_The_World_Down.mp3"
--10:38:16--  http://68.106.74.139/mp3/Mike_Baas/4/mp3/Mike_Baas_-_4_-_01_-_Why_Can't_I_Slow_The_World_Down.mp3
           => `Mike_Baas_-_4_-_01_-_Why_Can't_I_Slow_The_World_Down.mp3'
Connecting to 68.106.74.139:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13,870,871 (13M) [audio/mpeg]

50% [=================>                   ] 13,870,871     3.67M/s    ETA 00:04

10:38:19 (3.69 MB/s) - `Mike_Baas_-_4_-_01_-_Why_Can't_I_Slow_The_World_Down.mp3' saved [13870871/13870871]

As you can see, wget didn't continue; instead it downloaded the
complete file again.  The file must also be cached by squid, because I
certainly don't get 3.67 MB/s on our shared 512k line :)

So the question is: why does wget download it from the beginning?
Even if the server can't resume, wget shouldn't do this; it should
just die.

And why does the progress bar stop at 50%, even though the full length
was downloaded and the mp3 sounds complete?

many thanks,
-- 
Iain Buchanan <iaindb at netspace dot net dot au>

Loneliness is a terrible price to pay for independence.

-- 
[email protected] mailing list
