There are a few splitfiles out there (*cough*movie central*cough*) that
are barely retrievable because of impatience.

The first segment returns fine, but each segment after that has a higher
and higher loss rate.  In fact, this is exactly the problem we had with
splitfiles when they were retrieved linearly (the first blocks were
always found, the later blocks were gone).

The problem is REALLY aggravated when people retry over and over... they
heal segment 1 a lot, so it has close to 100% retrievability, but
segments 2, 3, and 4 lose more and more blocks.

"obviously", we should retrieve the segments in random order.  That has
the obvious problem that we can't send the file until the download is
complete, anyway.
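
To be concrete, here's a minimal sketch of what I mean, in made-up Java
(fetchSegment() is hypothetical, standing in for whatever actually
requests and FEC-decodes one segment; it's not anything in fred).  The
only change from linear retrieval is shuffling the segment order first:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch only: fetchSegment() is a stand-in for whatever actually
// requests the blocks of one segment and reinserts healing blocks.
public class RandomOrderFetch {

    public static void fetchAllSegments(int segmentCount) {
        List<Integer> order = new ArrayList<Integer>();
        for (int i = 0; i < segmentCount; i++)
            order.add(Integer.valueOf(i));
        // Shuffle so aborted/retried downloads spread their requests (and
        // their healing) across all segments instead of always hammering
        // segment 1.
        Collections.shuffle(order);
        for (Integer seg : order)
            fetchSegment(seg.intValue());
    }

    private static void fetchSegment(int index) {
        // placeholder: request data/check blocks, FEC-decode, heal
    }
}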

So, let's step back a second on this.  Browser timeouts are already a
killer on large files, with the download session possibly running for
hours without the browser getting a single byte.  So why not spool the
file to local holding and initiate the download to the browser once the
whole thing is there?
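
Roughly what I'm picturing, sketched in made-up Java (none of these
names exist in fred, and I'm assuming fixed-size segments to keep it
short): segments get written into a spool file under store/tmp in
whatever order they complete, and the browser response only starts once
every segment is marked done.

import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of spool-to-disk-first; assumes equal-sized segments for brevity.
public class SplitfileSpool {

    private final RandomAccessFile spool;
    private final boolean[] done;
    private final int segmentSize;

    public SplitfileSpool(String path, int segmentCount, int segmentSize)
            throws IOException {
        this.spool = new RandomAccessFile(path, "rw");
        this.done = new boolean[segmentCount];
        this.segmentSize = segmentSize;
    }

    // Segments can arrive in any order; each is written at its own offset.
    public synchronized void writeSegment(int index, byte[] data)
            throws IOException {
        spool.seek((long) index * segmentSize);
        spool.write(data);
        done[index] = true;
    }

    // Only start the HTTP response (or let wget grab the file) once the
    // whole thing is on disk.
    public synchronized boolean isComplete() {
        for (int i = 0; i < done.length; i++)
            if (!done[i]) return false;
        return true;
    }
}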

I'd prefer this solution, actually... this way I can simply wget the
resulting file to wherever I want it (or copy the segments from
store/tmp) and be able to check the progress from anywhere (by knowing
the splitfile ID URL)

--Dan