On Tue, 15 Feb 2011 00:22:48 -0500
James Vega <[email protected]> wrote:

> On Mon, Dec 27, 2010 at 02:02:08PM +1030, Karl Goetz wrote:
> > I've just noticed dget doesn't resume downloads. On packages like
> > linux or openoffice this can be quite a problem if you drop out
> > halfway through. It would be great if it could download using
> > --continue for wget, or whatever the curl equivalent is.
> 
> This does sound useful.  As described below, though, dget
> removes/backs up an existing file if its hash sum isn't what it
> expected (which would be the case for a partial download).
> 
> When given a URL, the corresponding on-disk file is always
> removed/backed up and re-downloaded.  If that file was a
> .dsc/.changes, the hash sums of the files listed within are compared
> against the corresponding hash sums of the on-disk files, and any
> that don't match are removed/backed up and re-downloaded.
> 
> Maybe it would be good to consolidate the "mis-matched hash" behavior
> into an option where one can choose "remove & re-download", "backup &
> re-download", or "continue download"?

I'd think the last two are more important, but all three would be
nice, I suppose.
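
For what it's worth, resuming is essentially an HTTP Range request
that starts at the size of the partial file; that's what wget's
--continue (and curl's -C -) do. A minimal sketch of the idea, in
Python rather than dget's actual code, with purely illustrative names:

    import os
    import urllib.request

    def resume_download(url, dest):
        """Fetch url into dest, resuming from any partial file on disk."""
        offset = os.path.getsize(dest) if os.path.exists(dest) else 0
        req = urllib.request.Request(url)
        if offset:
            # Ask the server for only the bytes we are still missing.
            req.add_header("Range", "bytes=%d-" % offset)
        with urllib.request.urlopen(req) as resp:
            # 206 means the server honoured the Range header and sent
            # the tail of the file; 200 means it ignored it, so start over.
            mode = "ab" if resp.status == 206 else "wb"
            with open(dest, mode) as out:
                while True:
                    chunk = resp.read(64 * 1024)
                    if not chunk:
                        break
                    out.write(chunk)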

> My concern with that is "continue download" doesn't ensure the
> existing file really contains the correct information.  It just
> continues downloading from an offset based on the existing file.  If
> the existing contents are wrong somehow, then the entire file would
> need to be re-downloaded anyway.

That's a good point. I wonder how often it happens, and what the
potential issue is.

E.g., if someone tried resuming because they are bandwidth-constrained,
how would they feel about a small corruption causing issues?
Would they be able to finish it with rsync to flip the right bits, or
would it need to be re-downloaded?
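
Thinking about it more: resuming alone can't tell whether the bytes
already on disk are good, but dget already knows the expected checksums
from the .dsc/.changes, so it could verify once the resumed download
finishes and only fall back to a full re-download on a mismatch. A
rough sketch of that check, with an illustrative helper name and
assuming a SHA256 sum is available:

    import hashlib
    import os

    def verify_or_restart(path, expected_sha256):
        """Return True if path matches the checksum listed in the
        .dsc/.changes; otherwise remove it so the next attempt
        starts from scratch."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(64 * 1024), b""):
                h.update(chunk)
        if h.hexdigest() == expected_sha256:
            return True
        # Resumed on top of a corrupt partial file: only a full
        # re-download (or an rsync-style delta transfer, if the
        # mirror offers one) will help at this point.
        os.remove(path)
        return False

As for rsync: if the mirror happens to export the archive over rsync,
its delta algorithm would transfer only the blocks that differ, so a
small corruption wouldn't cost the whole file; over plain HTTP there's
no such option and a full re-download is the only fix, as far as I can
tell.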
kk

-- 
Karl Goetz, (Kamping_Kaiser / VK5FOSS)
Debian contributor / gNewSense Maintainer
http://www.kgoetz.id.au
No, I won't join your social networking group

