On Saturday 13 February 2010 19:00:16 xor wrote:
> First of all, I like your calculations very much, and I wonder why nobody
> calculated this before FEC was implemented. If I understood this correctly,
> then a 700 MiB file with block success rate p=0.58 will have a 48% total
> success chance. This sucks...
> 
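
The 48% figure checks out. Here is a quick sketch to reproduce it, assuming
the usual splitfile layout of 32 KiB blocks in segments of 128 data + 128
check blocks, where a segment decodes if any 128 of its 256 blocks are
fetched and the file succeeds only if every segment decodes (those
parameters are my assumption, they aren't stated in this thread):

import math

def segment_success(p, n=256, k=128):
    # chance that at least k of the n blocks in a segment are retrieved,
    # with each block fetched independently with probability p
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def file_success(p, file_mib, block_kib=32, data_per_seg=128):
    # simple segmenting: the whole file fails if any one segment fails
    data_blocks = math.ceil(file_mib * 1024 / block_kib)
    segments = math.ceil(data_blocks / data_per_seg)
    return segment_success(p) ** segments

print(file_success(0.58, 700))   # ~0.48 for a 700 MiB file at p=0.58
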
> On Saturday 13 February 2010 20:09:46 Evan Daniel wrote:
> > For files of 20 segments (80 MiB) or more, we move to the
> > double-layered interleaved scheme.  I'm working on the interleaving
> > code still (it isn't optimal for all numbers of data blocks yet).  The
> > simple segmenting scheme is better for smaller files, and the
> > interleaved scheme for large ones.  At 18 segments, the segmentation
> > does better.  By 20 segments, the interleaved code is slightly better.
> >  By 25 segments, the difference is approaching a 1.5x reduction in
> > failure rates.  (Details depend on block success rate.  I'll post them
> > on the bug report shortly.)
> >
> 
> I wonder why you do not want the interleaved scheme for all multi-segment
> files. Why the arbitrary cutoff at 80 MiB?

I think this is NOT arbitrary.  It's the point where interleaving starts to
make a difference in retrieval rates.
Files under 80 MiB should work just as well with the simpler scheme...
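
To put rough numbers on where the cutoff lands: under the simple scheme,
per-segment failures compound across the file, so whole-file failure grows
about linearly with segment count. A standalone sketch with the same assumed
parameters as above (the interleaved scheme itself isn't modeled here; the
real comparison is the numbers Evan is posting to the bug report):

import math

def seg_ok(p, n=256, k=128):
    # same per-segment model as the sketch above: need any k of n blocks
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

f = 1 - seg_ok(0.58)             # per-segment failure, ~0.4% at p=0.58
for segs in (1, 18, 20, 25):
    print(segs, 1 - (1 - f) ** segs)
# -> roughly 0.4%, 7%, 8%, 10%: since 1 - (1 - f)^n =~ n*f, a 25-segment
#    file fails ~25x as often as a single segment, so large files feel
#    the problem far more than small ones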

> It would suck if people then started to artificially bloat 50 MiB files up
> to 80 MiB to improve their success rates...

Ed
