On Mon, Aug 03, 2009 at 04:04:13PM -0700, Brock Pytlik wrote:
> Roughly, that question is, "when I download stuff using a web browser
> (or any other download program I can think of at the moment), progress
> is monotonic, why isn't ours?" Now, the answer could be: a) they don't
> retry, so if they time out, they just give up; b) they change the goal
> (though I've never seen them do that); c) if they're restarting from
> scratch, they don't update until they've passed what they've reported
> before; or d) they use the Range header to resume downloads where they
> left off, if the problem was only a timeout.

To answer the web browser question: it's a) and e), where e) is that
they don't verify the content they download.
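
As a rough illustration of the kind of content check a browser skips
(a sketch only; the hash algorithm, file name, and digest value below
are made-up examples, not the actual pkg code):

    import hashlib

    def verify_download(path, expected_hex, algorithm="sha1"):
        # Hash the retrieved bytes and compare against the expected
        # digest; a mismatch means the download can't be trusted.
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(64 * 1024), b""):
                h.update(chunk)
        return h.hexdigest() == expected_hex

    # Hypothetical usage; the path and digest are placeholders.
    if not verify_download("payload.tmp",
                           "da39a3ee5e6b4b0d3255bfef95601890afd80709"):
        print("digest mismatch, discarding download")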

> In any case, I'll file an RFE to resume downloading a partial file
> using the Range header, as that might still be useful for people who
> are on thin pipes and have bandwidth restrictions. Sound reasonable?

Ok, but using the Range header is somewhat orthogonal to keeping the
progress tracker from rolling backwards.  The progress tracker gets
updated by the libcurl framework as the download occurs, so if a
download fails we remove the progress accrued by that transaction.
There's a tradeoff between having your progress updated as the download
progresses and the previous approach, where we only updated the progress
once the download finished.  Now we're coupled to the framework, but on
a per-request rather than a per-file basis.
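
To make that concrete, here is a minimal sketch of the per-request
accounting described above.  The tracker class and download helper are
invented for illustration (the real code goes through the libcurl-based
transport): progress is accrued chunk by chunk while the transfer runs,
and a failed transaction backs out what it had reported, which is the
backwards movement being discussed.

    import urllib.request

    class ProgressTracker(object):
        def __init__(self):
            self.bytes_done = 0

        def accrue(self, nbytes):
            self.bytes_done += nbytes
            print("progress: %d bytes" % self.bytes_done)

        def rollback(self, nbytes):
            # A failed transaction gives back the progress it reported,
            # so the displayed total can move backwards before a retry.
            self.bytes_done -= nbytes

    def download(url, dest, tracker):
        accrued = 0
        try:
            with urllib.request.urlopen(url) as resp, \
                 open(dest, "wb") as out:
                while True:
                    chunk = resp.read(64 * 1024)
                    if not chunk:
                        break
                    out.write(chunk)
                    accrued += len(chunk)
                    tracker.accrue(len(chunk))  # updated as the transfer runs
        except OSError:
            tracker.rollback(accrued)           # remove this transaction's share
            raise

The print call is just a stand-in for whatever the real tracker does
with each update; a retry then starts a fresh transaction from zero.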

-j
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss
