On Mon, Feb 15, 2010 at 8:37 AM, VolodyA! V Anarhist <Volodya at whengendarmesleeps.org> wrote:
> Matthew Toseland wrote:
>>
>> On Sunday 14 February 2010 04:29:31 Evan Daniel wrote:
>>>
>>> On Sat, Feb 13, 2010 at 7:00 PM, xor <xor at gmx.li> wrote:
>>>>
>>>> First of all, I like your calculations very much and I wonder why nobody
>>>> calculated this before FEC was implemented. If I understood this
>>>> correctly then a 700 MiB file with block success rate p=0.58 will have
>>>> a 48% total success chance. This sucks...
>>>
>>> Thanks :)
>>>
>>> That's correct. However, in some ways it isn't *quite* that bad.
>>> There's a reason I picked p=0.58: it's nice and dramatic. But the
>>> curve is fairly steep there :) At p=0.60, it's a 92% success rate. At
>>> p=0.61, it's 97%.
>>>
>>> There are two ways to look at the success rate. One option is to pick
>>> some block success rate and look at the resulting file success rates
>>> for a variety of files. By this metric, the interleaved coding is a
>>> stunning improvement. The other approach is to pick a target file
>>> success rate and see what block success rates are required to reach it
>>> in different scenarios. So at 700 MiB, we need p=0.61 to get 97% with
>>> simple segments. With interleaved coding, we need p=0.56.
>>>
>>> The latter approach is probably more directly meaningful. The
>>> improvement from interleaved coding is significant and useful, but
>>> modest. It says that instead of being able to lose 39% of blocks and
>>> still expect to recover the file, we can now expect to recover the
>>> file after losing 44%. If we assume that individual blocks are well
>>> modeled as having a half-life to disappearance, then it takes 1.17
>>> times longer for the file to become inaccessible if we use the
>>> interleaved coding.
>>
>> However, in practice, it appears that many files do stall at close to
>> 100%. This could be a client layer bug, but it could be that we have
>> lost a segment.
>
> Isn't it exactly the expected behaviour when the last segment is
> significantly smaller than the rest?
It's also the expected behavior when all segments are the same size and
only one is incomplete. A 100 MiB file (25 segments) with 24 segments
downloaded and the last one at 100 of its 128 required blocks will show
99%. Distinguishing the two cases requires a download progress monitor
that shows per-segment details.

Evan Daniel
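P.S. For anyone who wants to check the numbers quoted above, here is a
back-of-the-envelope Python sketch (mine, not Freenet code). It assumes
32 KiB blocks, simple segments of k=128 data blocks expanded to n=256
blocks, independent block retrievals at probability p, and a segment
that decodes iff at least k of its n blocks come back; a 700 MiB file is
then 22400 blocks, i.e. 175 full segments:

```python
from math import comb

def segment_success(p, n=256, k=128):
    """P(at least k of n blocks are retrievable), X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def file_success(p, segments, n=256, k=128):
    """Every segment must decode; segments are assumed independent."""
    return segment_success(p, n, k) ** segments

# File success probability at the block rates discussed above
# (p=0.58 should come out near 48%, p=0.61 near 97%).
for p in (0.58, 0.60, 0.61):
    print(p, round(file_success(p, 175), 3))

# The 99%-stall example: 24 complete segments plus 100 of 128 required
# blocks in the last, assuming progress is shown as fetched required
# blocks over total required blocks.
print(round((24 * 128 + 100) / (25 * 128), 3))  # 0.991, displayed as 99%
```

The steep curve is visible immediately: a three-point change in block
success rate moves the file success rate by roughly fifty points.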
