On Tue, Mar 31, 2009 at 11:19 PM, Gregory Maxwell <[email protected]> wrote:
> On Tue, Mar 31, 2009 at 10:59 AM, Matthew Toseland
> <[email protected]> wrote:
>> My understanding is that the blocks available are no longer random, right? We need
>> to be able to fetch random blocks, or at least whatever strategy we adopt
>> needs to be able to provide the same probability of any given block being
>> selected, when averaged over many fetches, so that some blocks do not fall
>> out more quickly than other blocks. This is a problem with LDPC, correct?
>
> From your perspective block LDPC should work just like the RS code does:
>
> The publisher encodes N blocks into M blocks and submits them into the 
> network.
> The client fetches some random subset of M, as soon as he has ~N of the M he
> can reconstruct the original file.
>
> So no block is special.
>
> In the RS case N is always sufficient. For block LDPC you may need somewhere
> between N and N+ε of the blocks; the page I linked to links to a paper with
> calculations about the size of ε.

The current code depends on this fact: it assumes any N blocks are always sufficient.
Making the number of required blocks variable means this is not a plug-and-go change.
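
To make the difference concrete, here is a minimal sketch (hypothetical
interfaces, not Freenet's actual splitfile classes) of how the "do we have
enough blocks yet?" check changes. With RS it is a fixed counter comparison
known in advance; with block LDPC it depends on which blocks arrived, so the
check has to consult the decoder itself:

    // Hypothetical interfaces for illustration only -- not Freenet's real API.
    interface Decoder {
        void addBlock(int index, byte[] data);
        boolean isDecodable(); // can we reconstruct the original file yet?
    }

    class RSDecoder implements Decoder {
        private final int n;  // number of data blocks in the segment
        private int received; // distinct blocks fetched so far

        RSDecoder(int n) { this.n = n; }

        public void addBlock(int index, byte[] data) {
            received++; // store the block for later decoding
        }

        // With Reed-Solomon, ANY n distinct blocks suffice, so the check
        // is a comparison against a constant -- the fact the current code
        // is built around.
        public boolean isDecodable() {
            return received >= n;
        }
    }

    class LDPCDecoder implements Decoder {
        public void addBlock(int index, byte[] data) {
            // run the peeling decoder incrementally as blocks arrive
        }

        // With block LDPC, n blocks are only *probably* enough; whether we
        // can decode depends on which blocks arrived, so the caller cannot
        // simply compare a counter against a constant any more.
        public boolean isDecodable() {
            return false; // stub: true once peeling resolves all data blocks
        }
    }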

> The advantage being that RS is slow and becomes much slower as M increases
> and you're forced to use wider arithmetic. This means that practically

Downloading chunks is always the bottleneck.
I believe the RS decoding overhead is much cheaper than downloading ε extra blocks.
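
A rough back-of-envelope supports this. All the numbers below are ones I am
assuming purely for illustration (the real figures depend on the node's
bandwidth and on the RS implementation in use):

    // All constants below are ASSUMED for illustration; substitute measured values.
    public class OverheadEstimate {
        public static void main(String[] args) {
            final int n = 128;                // data blocks per segment (assumed)
            final int blockSize = 32 * 1024;  // 32 KiB per block (assumed)
            final double rsDecodeMBps = 20.0; // assumed RS decode throughput
            final double downloadKBps = 50.0; // assumed node download rate
            final double epsilon = 0.03;      // assumed LDPC reception overhead

            double segmentBytes = (double) n * blockSize;
            double rsDecodeSec  = segmentBytes / (rsDecodeMBps * 1024 * 1024);

            long extraBlocks     = (long) Math.ceil(epsilon * n);
            double extraFetchSec = extraBlocks * blockSize / (downloadKBps * 1024);

            System.out.printf("RS decode of one segment:      ~%.2f s%n", rsDecodeSec);
            System.out.printf("Fetching %d extra LDPC blocks: ~%.2f s%n",
                              extraBlocks, extraFetchSec);
        }
    }

With those (made-up) numbers the extra fetches cost roughly an order of
magnitude more time than the decode, which is exactly the point: on a slow
network, CPU overhead is cheap relative to extra downloads.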
[..]