On Tue, Mar 31, 2009 at 10:59 AM, Matthew Toseland
<toad at amphibian.dyndns.org> wrote:
> My understanding is the blocks available is no longer random, right? We need
> to be able to fetch random blocks, or at least whatever strategy we adopt
> needs to be able to provide the same probability of any given block being
> selected, when averaged over many fetches, so that some blocks do not fall
> out more quickly than other blocks. This is a problem with LDPC, correct?

From your perspective, block LDPC should work just like the RS code does:

The publisher encodes N blocks into M blocks and submits them into the network.
The client fetches some random subset of the M blocks; as soon as it has ~N of
them it can reconstruct the original file.

So no block is special.
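To make that model concrete, here is a toy sketch in Python of the
encode-N-into-M / recover-from-any-N flow, using a naive Reed-Solomon-style
code over a prime field (polynomial evaluation plus Lagrange interpolation).
It is purely illustrative: the "blocks" are small integers rather than real
data, the function names are made up, and none of this reflects Freenet's
actual FEC implementation.

    # Toy sketch only: an RS-style code over a prime field, with "blocks"
    # represented as small integers.  Not Freenet's FEC code.
    import random

    P = 2**61 - 1  # prime modulus; all arithmetic is in GF(P)

    def encode(data, m):
        """Treat the n data values as coefficients of a degree-(n-1)
        polynomial and return its value at x = 1..m.  Any n of these m
        points determine the polynomial, hence the data."""
        def poly(x):
            acc = 0
            for c in reversed(data):
                acc = (acc * x + c) % P
            return acc
        return [(x, poly(x)) for x in range(1, m + 1)]

    def decode(points, n):
        """Recover the n coefficients from any n (x, y) points by
        Lagrange interpolation over GF(P)."""
        points = points[:n]
        coeffs = [0] * n
        for i, (xi, yi) in enumerate(points):
            basis = [1]   # coefficients of prod_{j != i} (x - xj)
            denom = 1
            for j, (xj, _) in enumerate(points):
                if j == i:
                    continue
                nxt = [0] * (len(basis) + 1)
                for t, b in enumerate(basis):
                    nxt[t] = (nxt[t] - xj * b) % P
                    nxt[t + 1] = (nxt[t + 1] + b) % P
                basis = nxt
                denom = denom * (xi - xj) % P
            scale = yi * pow(denom, -1, P) % P
            for t in range(n):
                coeffs[t] = (coeffs[t] + scale * basis[t]) % P
        return coeffs

    data = [3, 1, 4, 1, 5]                 # N = 5 source "blocks"
    shares = encode(data, 9)               # M = 9 encoded blocks
    subset = random.sample(shares, 5)      # the client fetches any 5 of them
    assert decode(subset, 5) == data       # ...and reconstructs the file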

In the RS case, N blocks are always sufficient. For block LDPC you may need
somewhere between N and N+ε of the blocks; the page I linked to links to a
paper with calculations about the size of ε.
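For intuition about where that overhead comes from, here is a toy peeling
(iterative erasure) decoder in Python of the general kind used for
LDPC/fountain codes. The degree distribution is invented for the sketch, so
the measured overhead is far larger than a properly designed code would give;
it only illustrates the mechanism by which somewhat more than N symbols can be
needed before decoding completes.

    # Toy peeling (iterative erasure) decoder.  Each coded symbol is the XOR
    # of a few randomly chosen source blocks; a symbol with exactly one
    # still-unknown block recovers that block, which may unlock others.
    # The degree distribution here is made up, so the overhead is much worse
    # than a real LDPC/fountain code; it only illustrates the mechanism.
    import random

    def symbols_needed(n, rng, max_degree=3):
        recovered = set()
        pending = []            # per symbol: the source indices still unknown
        received = 0
        while len(recovered) < n:
            received += 1
            deg = rng.randint(1, max_degree)
            unknown = set(rng.sample(range(n), deg)) - recovered
            if unknown:
                pending.append(unknown)
            changed = True
            while changed:      # peel until no degree-1 symbols remain
                changed = False
                for s in pending:
                    if len(s) == 1:
                        idx = s.pop()
                        recovered.add(idx)
                        for other in pending:
                            other.discard(idx)
                        changed = True
                pending = [s for s in pending if s]
        return received

    rng = random.Random(1)
    n = 200
    runs = [symbols_needed(n, rng) / n for _ in range(10)]
    print("symbols received / n:", [round(r, 2) for r in runs])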

The advantage of block LDPC is that RS is slow, and becomes much slower as M
increases and you're forced to use wider arithmetic. In practice this means
most applications must break large files up and code in sub-groups, so rather
than being able to recover using any N blocks from the entire file, you must
have X blocks from subgroup 1, X blocks from subgroup 2, etc.
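Here is a quick Monte Carlo sketch in Python (all parameters made up for
illustration) of why that matters for reliability: with the same total
redundancy, a whole-file code survives random block loss much better than a
segmented one, because a single unlucky segment is enough to break the
segmented decode.

    # Monte Carlo sketch, parameters made up: compare whole-file coding
    # (any N of the M encoded blocks suffice) against per-segment coding
    # (each segment must keep n_seg of its m_seg blocks), when every
    # encoded block independently survives with probability p.
    import random

    def simulate(trials=2000, segments=8, n_seg=16, m_seg=32, p=0.55, seed=0):
        rng = random.Random(seed)
        N, M = segments * n_seg, segments * m_seg
        whole_ok = seg_ok = 0
        for _ in range(trials):
            survived = [rng.random() < p for _ in range(M)]
            if sum(survived) >= N:                      # whole-file code
                whole_ok += 1
            if all(sum(survived[s * m_seg:(s + 1) * m_seg]) >= n_seg
                   for s in range(segments)):           # segmented code
                seg_ok += 1
        return whole_ok / trials, seg_ok / trials

    whole, seg = simulate()
    print("P(recover) whole-file: %.3f   segmented: %.3f" % (whole, seg))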

If the important limit in Freenet is I/O then this might all be moot. Block
LDPC should be basically neutral I/O-wise vs RS, so the only advantage would
be the ability to have longer correction windows (the whole file), which would
improve reliability. Since the transmission units in Freenet are large (as
opposed to 1400-byte IP datagrams), perhaps there isn't much gain there.

In any case, I just thought it would be worth your consideration.
