On Tuesday 31 March 2009 15:02:39 Gregory Maxwell wrote:
> 2009/3/31 Matthew Toseland <[email protected]>:
> [snip]
> > *should* be around 4 times slower, but this has not yet been tested. Larger
> > segments should increase reliability (vivee: how much?). Assuming that 16-bit
> > codecs achieve around 175MB/sec, this is very tempting...
> [snip]
> 
> Rather than switching to a wider RS code you should consider using an
> LDPC based block erasure code.
> 
> http://planete-bcast.inrialpes.fr/article.php3?id_article=7
> http://www.rfc-editor.org/rfc/rfc5170.txt
> 
> Unlike RS these codes are not optimal (meaning you need slightly more
> data than the theoretical minimum), but they are vanishingly close to
> optimal and *significantly* faster for large numbers of blocks.
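
To illustrate the scaling Gregory describes, here is a toy cost model in Java.
This is not Freenet code, and the block size, redundancy, and LDPC degree below
are numbers I picked purely for illustration. Each RS check block is a linear
combination of all k data blocks, so encoding cost grows with k * (n - k);
sparse LDPC encoding touches each block only a small constant number of times,
so its cost grows roughly linearly in n.

// Toy cost model, not Freenet code: rough symbol-operation counts for
// encoding one segment of k data blocks into n total blocks.
public class ErasureCostSketch {

    // Classic RS encoding computes each of the (n - k) check blocks as a
    // linear combination of all k data blocks: O(k * (n - k)) symbol ops.
    static long rsOps(int k, int n, int symbolsPerBlock) {
        return (long) k * (n - k) * symbolsPerBlock;
    }

    // Sparse LDPC encoding XORs each block into a small fixed number of
    // check blocks (the average degree of the parity matrix), so the cost
    // is roughly O(n * d) with d a small constant.
    static long ldpcOps(int n, int symbolsPerBlock, int avgDegree) {
        return (long) n * avgDegree * symbolsPerBlock;
    }

    public static void main(String[] args) {
        int symbolsPerBlock = 32 * 1024; // 32 KiB blocks, one byte per symbol (assumed)
        int avgDegree = 3;               // sparse-matrix degree (assumed, small constant)
        for (int k : new int[] {128, 256, 1024, 4096}) {
            int n = 2 * k; // 100% redundancy, purely for illustration
            System.out.printf("k=%4d  RS ops=%,15d  LDPC ops=%,13d%n",
                    k, rsOps(k, n, symbolsPerBlock),
                    ldpcOps(n, symbolsPerBlock, avgDegree));
        }
    }
}

With these made-up numbers the gap is roughly 20x at k=128 and climbs into the
hundreds by k=4096, which is the point about large numbers of blocks.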

I do not see that we have a problem with speed here in terms of CPU usage; the 
main problem is disk I/O as far as I can see, and that will surely get worse 
with more blocks?
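
To put rough numbers on that (all assumed, nothing here is measured; the
segment size, block size, and seek time are guesses):

// Back-of-envelope only (assumed numbers, nothing measured): compare the
// CPU time to RS-decode a segment against the disk time to read its
// blocks if every block costs a seek.
public class CpuVsDiskSketch {
    public static void main(String[] args) {
        int blocks = 256;           // blocks per segment (assumed)
        double blockKiB = 32.0;     // block size in KiB (assumed)
        double codecMBps = 175.0;   // 16-bit RS throughput quoted above
        double seekMs = 10.0;       // commodity disk seek time (assumed)

        double segmentMB = blocks * blockKiB / 1024.0;
        double cpuSeconds = segmentMB / codecMBps;
        double diskSeconds = blocks * seekMs / 1000.0;

        System.out.printf("segment %.1f MB: CPU decode %.3f s, disk seeks %.1f s%n",
                segmentMB, cpuSeconds, diskSeconds);
        // With these numbers: ~0.046 s of CPU against ~2.6 s of seeks.
        // Doubling the number of blocks doubles the seek cost but barely
        // moves the CPU cost, which is the point made above.
    }
}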

Also, there is an excellent reason not to use LDPC (or fountain-style codes 
generally), but if I mention it Ian will kill me. :)
