On Tuesday 31 March 2009 15:02:39 Gregory Maxwell wrote:
> 2009/3/31 Matthew Toseland <toad at amphibian.dyndns.org>:
> [snip]
> > *should* be around 4 times slower, but this has not yet been tested. Larger
> > segments should increase reliability (vivee: how much?). Assuming that 16-bit
> > codecs achieve around 175MB/sec, this is very tempting...
> [snip]
>
> Rather than switching to a wider RS code you should consider using an
> LDPC based block erasure code.
>
> http://planete-bcast.inrialpes.fr/article.php3?id_article=7
> http://www.rfc-editor.org/rfc/rfc5170.txt
>
> Unlike RS these codes are not optimal (meaning you need slightly more
> data than the theoretical minimum), but they are vanishingly close to
> optimal and *significantly* faster for large numbers of blocks.
My understanding is that the blocks available for decoding are no longer an arbitrary random subset, right? We need to be able to fetch random blocks, or at least whatever strategy we adopt needs to give every block the same probability of being selected, averaged over many fetches, so that some blocks do not fall out of the network more quickly than others. This is a problem with LDPC, correct?
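To illustrate what I mean by the uniformity requirement, here is a toy sketch (not Freenet code; the segment size n, the number of blocks needed k, and the class name are all made up for the example). It simulates many fetches where any k of n blocks suffice to decode, as with RS, selecting the k blocks uniformly at random each time, and then tallies how often each block was requested. With this strategy every block is hit with probability k/n, so none falls behind the others:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

/**
 * Illustrative sketch only: simulate many fetches of a segment of n
 * blocks where any k suffice to decode (the RS case), and tally how
 * often each block is requested.  Uniform random selection gives every
 * block the same expected hit rate, k/n per fetch.
 */
public class BlockSelectionSketch {
    public static void main(String[] args) {
        final int n = 256;       // total blocks in the segment (hypothetical)
        final int k = 128;       // blocks needed to decode (hypothetical)
        final int fetches = 100000;
        long[] hits = new long[n];
        Random rng = new Random();

        List<Integer> indices = new ArrayList<Integer>();
        for (int i = 0; i < n; i++) indices.add(i);

        for (int f = 0; f < fetches; f++) {
            // Pick k distinct blocks uniformly at random for this fetch.
            Collections.shuffle(indices, rng);
            for (int i = 0; i < k; i++) hits[indices.get(i)]++;
        }

        // Every block should be selected roughly fetches * k / n times.
        double expected = (double) fetches * k / n;
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (long h : hits) { min = Math.min(min, h); max = Math.max(max, h); }
        System.out.printf("expected %.0f, min %d, max %d%n", expected, min, max);
    }
}

The question is whether an LDPC decoder, where a given subset of blocks may or may not be decodable depending on the code structure, still lets us keep the per-block selection probabilities this flat.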