On Tue, Mar 31, 2009 at 11:19 PM, Gregory Maxwell <gmaxwell at gmail.com> wrote:
> On Tue, Mar 31, 2009 at 10:59 AM, Matthew Toseland
> <toad at amphibian.dyndns.org> wrote:
>> My understanding is that the blocks available are no longer random, right?
>> We need to be able to fetch random blocks, or at least whatever strategy we
>> adopt needs to provide the same probability of any given block being
>> selected, averaged over many fetches, so that some blocks do not fall out
>> more quickly than others. This is a problem with LDPC, correct?
>
> From your perspective block LDPC should work just like the RS code does:
>
> The publisher encodes N blocks into M blocks and submits them into the 
> network.
> The client fetches some random subset of M, as soon as he has ~N of the M he
> can reconstruct the original file.
>
> So no block is special.
>
> In the RS case N is always sufficient. For block LDPC you may need somewhere
> between N and N + ε of the blocks; the page I linked to links to a paper with
> calculations about the size of ε.

The current code depends on this fact.
Making the number of required blocks variable makes this not a plug-and-go change.
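To make that concrete, here is a rough Java sketch of the completion check a
fetcher would need. This is not Freenet's actual SplitFileFetcher code; the
Decoder interface and all names are made up for illustration. With RS the
segment is decodable as soon as any N distinct blocks have arrived; with block
LDPC the fetcher has to keep requesting until an iterative decode attempt
succeeds, which typically needs N plus a variable number of extra blocks.

    import java.util.BitSet;

    // Hypothetical interface, for illustration only.
    interface Decoder {
        /** Feed one received block; returns true once the segment can be decoded. */
        boolean addBlock(int index, byte[] data);
    }

    class RSDecoder implements Decoder {
        private final int n;                    // data blocks in the segment
        private final BitSet have = new BitSet();
        RSDecoder(int n) { this.n = n; }
        public boolean addBlock(int index, byte[] data) {
            have.set(index);
            return have.cardinality() >= n;     // any N distinct blocks suffice
        }
    }

    class LdpcDecoder implements Decoder {
        private final int n;
        private int received = 0;
        LdpcDecoder(int n) { this.n = n; }
        public boolean addBlock(int index, byte[] data) {
            received++;
            // N blocks are usually NOT enough for block LDPC; decoding only
            // succeeds once roughly N + ε blocks are in, and ε varies per fetch.
            return received >= n && tryIterativeDecode();
        }
        private boolean tryIterativeDecode() {
            // Placeholder: a real implementation would run a peeling /
            // belief-propagation pass and report whether all N data blocks
            // were recovered.
            return false;
        }
    }

The point is that the existing request scheduler can stop requesting at a fixed,
known count with RS, whereas LDPC turns "how many blocks do I need" into a
runtime question.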

> The advantage being that RS is slow and becomes much slower as M increases
> and you're forced to use wider arithmetic. This means that practically

Downloading chunks is always the bottleneck.
I believe the RS code overhead is much cheaper than downloading ε extra blocks.
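Back-of-the-envelope sketch of that claim, in Java. All figures below are
assumptions, not measurements of Freenet's FEC codec or of real node bandwidth;
the 128-block / 32 KiB segment shape is the usual splitfile layout.

    // Rough estimate only; throughput and bandwidth numbers are assumed.
    public class OverheadEstimate {
        public static void main(String[] args) {
            int dataBlocks = 128;                 // typical splitfile segment
            int blockSize = 32 * 1024;            // 32 KiB CHK payload
            double rsDecodeBytesPerSec = 20e6;    // assumed RS decode throughput
            double downloadBytesPerSec = 30e3;    // assumed per-request download rate
            int extraBlocks = 3;                  // assumed LDPC overhead (ε)

            double rsDecodeTime = (double) dataBlocks * blockSize / rsDecodeBytesPerSec;
            double extraDownloadTime = (double) extraBlocks * blockSize / downloadBytesPerSec;

            System.out.printf("RS decode of one segment: ~%.2f s%n", rsDecodeTime);
            System.out.printf("Downloading %d extra blocks: ~%.1f s%n",
                    extraBlocks, extraDownloadTime);
        }
    }

Under those assumptions the RS decode costs a fraction of a second per segment,
while fetching even a few extra blocks costs seconds; the comparison only flips
if decode throughput is far worse or the extra-block overhead is near zero.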
[..]
