On Tuesday 31 March 2009 17:33:20 Daniel Cheng wrote:
> On Tue, Mar 31, 2009 at 11:19 PM, Gregory Maxwell <gmaxwell at gmail.com> wrote:
> > On Tue, Mar 31, 2009 at 10:59 AM, Matthew Toseland
> > <toad at amphibian.dyndns.org> wrote:
> >> My understanding is that the blocks available are no longer random, right? We
> >> need to be able to fetch random blocks, or at least whatever strategy we adopt
> >> needs to be able to provide the same probability of any given block being
> >> selected, when averaged over many fetches, so that some blocks do not fall
> >> out more quickly than other blocks. This is a problem with LDPC, correct?
> >
> > From your perspective block LDPC should work just like the RS code does:
> >
> > The publisher encodes N blocks into M blocks and submits them into the network.
> > The client fetches some random subset of M; as soon as he has ~N of the M he
> > can reconstruct the original file.
> >
> > So no block is special.
> >
> > In the RS case N is always sufficient. For block LDPC you may need someplace
> > between N and N+ε of the blocks; the page I linked to links to a paper with
> > calculations about the size of ε.
>
> The current code depends on this fact.
> Making the number of blocks variable makes this not a plug-and-go change.
>
> > The advantage being that RS is slow and becomes much slower as M increases
> > and you're forced to use wider arithmetic. This means that practically
>
> Downloading chunks is always the bottleneck.
> I believe the RS code overhead is much faster than downloading the ε extra blocks.
> [..]
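(To make the "any ~N of M" property above concrete, here is a toy Reed-Solomon sketch in Python over the prime field GF(257); real implementations such as Freenet's use GF(2^8) with table-driven arithmetic, and all names below are illustrative, not Freenet's actual API. It shows that any N of the M encoded blocks reconstruct the original N symbols exactly, with no block being special.)

```python
# Toy RS erasure code over GF(257): encode N data symbols into M blocks
# by polynomial evaluation; decode from ANY N blocks by Lagrange
# interpolation. Illustrative sketch only, not Freenet's FEC code.
import random

P = 257  # prime modulus, so plain modular arithmetic forms a field

def rs_encode(data, M):
    """Treat `data` (length N) as polynomial coefficients; encoded
    block i is the polynomial evaluated at x = i, for i in 0..M-1."""
    return [sum(c * pow(x, k, P) for k, c in enumerate(data)) % P
            for x in range(M)]

def poly_mul(a, b):
    """Multiply two coefficient lists mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def rs_decode(shares, N):
    """Recover the N data symbols from any N (index, value) pairs by
    Lagrange interpolation: p(x) = sum_i y_i * L_i(x)."""
    assert len(shares) >= N
    xs, ys = zip(*shares[:N])
    coeffs = [0] * N
    for i in range(N):
        num = [1]       # numerator polynomial prod_{j != i} (x - x_j)
        denom = 1
        for j in range(N):
            if j == i:
                continue
            num = poly_mul(num, [-xs[j] % P, 1])
            denom = denom * (xs[i] - xs[j]) % P
        scale = ys[i] * pow(denom, P - 2, P) % P  # divide via Fermat inverse
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

N, M = 4, 8
data = [17, 42, 99, 7]
blocks = rs_encode(data, M)
# Any random subset of N of the M blocks reconstructs the data exactly:
picked = random.sample(list(enumerate(blocks)), N)
assert rs_decode(picked, N) == data
```

An LDPC-style code replaces the dense polynomial structure with sparse parity checks, which is why decode is much cheaper but may need N+ε blocks rather than exactly N before the equations become solvable.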
Very possibly, especially for rare files. However, large segments and all the seeks involved are a serious problem here; if decode could be done progressively, it would be a big gain...