As discussed, I was able to get some information from Justin Chapweske, one of the chief engineers at openCOLA, which owned the FEC codebase before Onion Networks acquired it.
He had the following to say in response to your queries:

===

The #1 most important thing is to make sure that the native/JNI FEC codes are being used. They are many times faster than the pure Java ones. Secondly, doing progressive decoding using FecFile will save you a lot of headaches: as long as your decoding speed keeps up with your reception speed, you'll experience no delay due to decoding.

As far as "using less redundancy" as Matthew suggests, I assume he's talking about decreasing the "N" value. In truth, the value of N has no impact on performance unless N > 256. If N is larger than 256, you take a 4x speed reduction, because 16-bit codes must be used rather than 8-bit codes.

The #1 impact on performance is the value of K, so if you're having problems with K=32, you can try K=16, but that means you'll have to split your files into even more pieces.

The other thing to look at is your I/O performance. If you're doing a lot of random I/O, your files may become highly fragmented and may actually end up being the bottleneck in your system. We've successfully used FEC for applications well over 100 Mbps, so high-speed data transfers are certainly possible, but you have to pay very close attention to your I/O patterns.

Besides that, you could try LDPC codes, but their non-deterministic nature will absolutely kill you on I/O performance. Even though they'll give you a boost on the CPU side, you'll quickly hit a wall with respect to I/O.

Hope this helps.

-Justin

===

--
Ken Snider
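For context, the parameter trade-offs Justin describes can be sketched in a few lines of Java. This is a hypothetical helper, not part of the Onion Networks API: `symbolBits` captures the rule that N > 256 forces 16-bit codes (the quoted 4x slowdown), and `redundancy` expresses the extra transmission overhead as the ratio of parity blocks (N - K) to source blocks K.

```java
// Hypothetical illustration of the K/N trade-offs described above.
// K = number of source blocks a file is split into;
// N = total encoded blocks (any K of them reconstruct the file).
public class FecParams {

    // Symbol width the codec must use: 8-bit codes suffice up to N = 256,
    // beyond that 16-bit codes are required (roughly 4x slower).
    static int symbolBits(int n) {
        return n > 256 ? 16 : 8;
    }

    // Fraction of extra data sent relative to the original file,
    // e.g. K=32, N=64 doubles the transmitted volume (redundancy 1.0).
    static double redundancy(int k, int n) {
        return (double) (n - k) / k;
    }

    public static void main(String[] args) {
        System.out.println(symbolBits(256));      // 8
        System.out.println(symbolBits(512));      // 16
        System.out.println(redundancy(32, 64));   // 1.0
    }
}
```

Note that lowering K (e.g. 32 to 16) speeds up encoding/decoding but multiplies the number of pieces per file, which is exactly the I/O-fragmentation pressure the email warns about.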
