On Tuesday 31 March 2009 17:33:20 Daniel Cheng wrote:
> On Tue, Mar 31, 2009 at 11:19 PM, Gregory Maxwell <[email protected]> wrote:
> > On Tue, Mar 31, 2009 at 10:59 AM, Matthew Toseland
> > <[email protected]> wrote:
> >> My understanding is that the blocks available are no longer random, right? We need
> >> to be able to fetch random blocks, or at least whatever strategy we adopt
> >> needs to be able to provide the same probability of any given block being
> >> selected, when averaged over many fetches, so that some blocks do not fall
> >> out more quickly than other blocks. This is a problem with LDPC, correct?
> >
> > From your perspective block LDPC should work just like the RS code does:
> >
> > The publisher encodes N blocks into M blocks and submits them into the network.
> > The client fetches some random subset of M; as soon as he has ~N of the M he
> > can reconstruct the original file.
> >
> > So no block is special.
> >
> > In the RS case N is always sufficient. For block LDPC you may need somewhere
> > between N and N+ε of the blocks; the page I linked to links to a paper with
> > calculations about the size of ε.
>
> The current code depends on this fact.
> Making the number of blocks variable makes this not a plug-and-go change.
>
> > The advantage being that RS is slow and becomes much slower as M increases
> > and you're forced to use wider arithmetic. This means that practically
>
> Downloading chunks is always the bottleneck.
> I believe the RS decode overhead is much cheaper than downloading ε extra blocks.
> [..]
Very possibly, especially for rare files. However, large segments and all the seeks involved are a serious problem here; if decoding can be done progressively then it would be a big gain...
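To make the fetch model from the quoted text concrete, here is a minimal Java sketch of the "any ~N of M blocks" strategy. The ErasureDecoder interface and both decoder classes are hypothetical illustrations, not Freenet's actual FEC API; treating the LDPC overhead as a fixed ε is also a simplification, since in practice the required overhead is probabilistic.

// FetchSketch.java -- sketch of the "fetch any ~N of M" model discussed above.
// Hypothetical types for illustration only; not Freenet's real FEC classes.
import java.util.BitSet;
import java.util.Random;

interface ErasureDecoder {
    /** @return true once enough distinct blocks have arrived to reconstruct the data. */
    boolean canDecode(int blocksReceived);
}

class RsDecoder implements ErasureDecoder {
    private final int n;
    RsDecoder(int n) { this.n = n; }
    public boolean canDecode(int blocksReceived) {
        return blocksReceived >= n;           // any N of the M blocks always suffice
    }
}

class LdpcDecoder implements ErasureDecoder {
    private final int n;
    private final int epsilon;                // extra blocks beyond N (simplified as a constant)
    LdpcDecoder(int n, int epsilon) { this.n = n; this.epsilon = epsilon; }
    public boolean canDecode(int blocksReceived) {
        return blocksReceived >= n + epsilon; // may need slightly more than N
    }
}

public class FetchSketch {
    /** Fetch random distinct blocks out of M until the decoder is satisfied. */
    static int blocksFetched(int m, ErasureDecoder decoder, Random rng) {
        BitSet have = new BitSet(m);
        int received = 0;
        while (!decoder.canDecode(received)) {
            int block = rng.nextInt(m);       // uniform choice: no block is "special"
            if (!have.get(block)) {
                have.set(block);
                received++;
            }
        }
        return received;
    }

    public static void main(String[] args) {
        int n = 128, m = 256;                 // e.g. a 128-block segment encoded into 256 blocks
        Random rng = new Random();
        System.out.println("RS needs:   " + blocksFetched(m, new RsDecoder(n), rng) + " blocks");
        System.out.println("LDPC needs: " + blocksFetched(m, new LdpcDecoder(n, 4), rng) + " blocks");
    }
}

The trade-off in the thread falls straight out of the sketch: with a uniform random fetch both codes treat every block symmetrically, and the only extra cost of the LDPC-style code is downloading roughly ε more blocks, which has to be weighed against RS's decode arithmetic growing heavier as M increases.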
