On Tuesday 31 March 2009 15:02:39 Gregory Maxwell wrote:
> 2009/3/31 Matthew Toseland <toad at amphibian.dyndns.org>:
> [snip]
> > *should* be around 4 times slower, but this has not yet been tested. Larger
> > segments should increase reliability (vivee: how much?). Assuming that 16-bit
> > codecs achieve around 175MB/sec, this is very tempting...
> [snip]
>
> Rather than switching to a wider RS code you should consider using an
> LDPC based block erasure code.
>
> http://planete-bcast.inrialpes.fr/article.php3?id_article=7
> http://www.rfc-editor.org/rfc/rfc5170.txt
>
> Unlike RS these codes are not optimal (meaning you need slightly more
> data than the theoretical minimum), but they are vanishingly close to
> optimal and *significantly* faster for large numbers of blocks.
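To put rough numbers on the figures quoted above, a minimal sketch; the 32KiB block size and 128-block segment are illustrative assumptions for the arithmetic, not a statement of the actual splitfile parameters, and only the 175MB/sec and "4 times slower" figures come from the thread:

// Back-of-the-envelope estimate only; block and segment sizes are assumed.
public class FecCostEstimate {
    public static void main(String[] args) {
        final long blockSize = 32 * 1024;                 // assumed 32KiB blocks
        final int dataBlocks = 128;                       // assumed blocks per segment
        final long segmentBytes = blockSize * dataBlocks; // 4MiB of payload

        // Figures from the thread: ~175MB/sec for a 16-bit RS codec,
        // with 16-bit assumed to be roughly 4x slower than 8-bit.
        final double rs16 = 175e6; // bytes/sec
        final double rs8 = 4 * rs16;

        System.out.printf("8-bit RS encode:  ~%.1f ms per segment%n",
                1000.0 * segmentBytes / rs8);
        System.out.printf("16-bit RS encode: ~%.1f ms per segment%n",
                1000.0 * segmentBytes / rs16);
    }
}

Even at 175MB/sec, a segment of that size costs only a few tens of milliseconds of CPU to encode.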
I do not see that we have a problem with speed here in terms of CPU usage; the main problem is disk I/O as far as I can see, which will surely get worse with more blocks? Also, there is an excellent reason not to use LDPC (or fountain codes generally), but if I mention it Ian will kill me. :)
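To make the disk-versus-CPU point concrete, a minimal sketch; the seek time and block count are illustrative assumptions, only the 175MB/sec codec figure comes from the thread:

// Rough comparison of CPU cost vs disk cost per segment, under assumed numbers.
public class CpuVsDiskEstimate {
    public static void main(String[] args) {
        final long segmentBytes = 128L * 32 * 1024;  // assumed 4MiB segment
        final double rs16 = 175e6;                   // bytes/sec, from the thread
        final int blocksTouched = 256;               // assumed data+check blocks
        final double seekMs = 8.0;                   // assumed average seek time

        double cpuMs = 1000.0 * segmentBytes / rs16;
        double diskMs = blocksTouched * seekMs;      // worst case: one seek per block

        System.out.printf("CPU (16-bit RS): ~%.0f ms per segment%n", cpuMs);
        System.out.printf("Disk (seek-bound): ~%.0f ms per segment%n", diskMs);
    }
}

Under those assumptions the seek-bound disk cost is roughly two orders of magnitude larger than the CPU cost, and it scales with the number of blocks touched, not with the codec speed.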