On Sat, Feb 13, 2010 at 7:00 PM, xor <xor at gmx.li> wrote:
> First of all, I like your calculations very much and I wonder why nobody
> calculated this before FEC was implemented. If I understood this correctly
> then a 700 MiB file with block success rate p=0.58 will have a 48% total
> success chance. This sucks...

Thanks :)

That's correct.  However, in some ways it isn't *quite* that bad.
There's a reason I picked p=0.58: it's nice and dramatic.  But the
curve is fairly steep there :)  At p=0.60, the file success rate is
92%; at p=0.61, it's 97%.

There are two ways to look at the success rate.  One option is to
pick some block success rate and look at the resulting file success
rates for a variety of files.  By this metric, the interleaved coding
is a stunning improvement.  The other approach is to pick a target
file success rate and see what block success rate is required to
reach it in different scenarios.  At 700 MiB, we need p=0.61 to get
97% with simple segments; with interleaved coding, we need only
p=0.56.

The latter approach is probably the more directly meaningful one.
The improvement from interleaved coding is significant and useful,
but modest.  What it says is that instead of being able to lose 39%
of blocks and still expect to recover the file, we can now expect to
recover it after losing 44%.  If we assume that individual blocks are
well modeled as having a half-life to disappearance, then it takes
about 1.17 times longer for the file to become inaccessible if we
use the interleaved coding.

Evan Daniel
