On Mon, Aug 03, 2009 at 05:54:11PM -0500, Shawn Walker wrote:
> Danek Duvall wrote:
>> On Mon, Aug 03, 2009 at 03:08:43PM -0700, Brock Pytlik wrote:
>>
>>> It makes perfect sense to me why it's done this way, except that if
>>> I'm a user, seeing the amount of stuff I've downloaded go down would
>>> confuse me. Is there a way we could either change the output, or
>>> possibly resume the download in place (at 56MB rather than starting
>>> from scratch)?
>>
>> Assuming that's the problem, then the Range: header is the solution,
>> assuming that our server stack supports it.
>
> cherrypy supports it; as does Apache.
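
For what it's worth, the mechanics are pretty simple: the client asks for
the bytes it's missing, and a server that supports ranges answers with
206 Partial Content (otherwise it just sends 200 and the whole file
again).  A rough Python sketch, with a made-up URL, offset, and filename:

import urllib.request

# Hypothetical values; in practice these would come from the interrupted
# transfer.
url = "http://pkg.example.org/file/0/abc123"
already_have = 56 * 1024 * 1024   # bytes already sitting on disk

req = urllib.request.Request(url,
    headers={"Range": "bytes=%d-" % already_have})
with urllib.request.urlopen(req) as resp:
    if resp.status == 206:
        # Server honored the Range: header; append the remainder.
        with open("abc123.partial", "ab") as f:
            f.write(resp.read())
    else:
        # Status 200: the server ignored the range and sent the whole
        # file, so we'd have to start over anyway.
        pass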

Libcurl supports this too, but only for HTTP, FILE, and FTP (not HTTPS,
apparently).  The catch for the client is that if mirrors are configured
and it encounters an error, our policy has been to defer the failed
requests and then retry at a different location.  Because partially
downloaded files always cause verification errors, we try to delete them
as soon as the transfer fails.  However, I'm not opposed to providing
some kind of threshold for large files where we preserve the incomplete
files instead of starting anew.  Last time I looked, the depot didn't
have any files larger than 40MB, so we probably don't need this feature
immediately.
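
If we do add a threshold like that, the client-side logic could be as
simple as the sketch below (the threshold, paths, and function name are
invented; the real thing would go through the existing transport code
rather than raw pycurl):

import os
import pycurl

# Hypothetical cutoff: only bother resuming files bigger than this.
RESUME_THRESHOLD = 10 * 1024 * 1024

def fetch(url, path):
    """Download url to path, resuming a large partial file if present."""
    have = os.path.getsize(path) if os.path.exists(path) else 0
    if 0 < have < RESUME_THRESHOLD:
        # Small partial file: cheaper to throw it away and start clean.
        os.unlink(path)
        have = 0

    with open(path, "ab") as f:
        c = pycurl.Curl()
        c.setopt(pycurl.URL, url)
        c.setopt(pycurl.WRITEDATA, f)
        if have:
            # Ask the server for just the remainder of the file.
            c.setopt(pycurl.RESUME_FROM_LARGE, have)
        try:
            c.perform()
        finally:
            c.close()

The resumed file would still have to pass the usual checksum verification
afterward, so a corrupt partial download gets caught the same way it does
today.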

-j