Matthew Toseland wrote:
> According to the docs, decoding cannot begin until we have a full
> segment. So the opportunities for progressive decoding are limited, but
> we can use 4MB segments and decode one segment while fetching the next.
> This should provide significantly improved performance relative to what
> we do now, at a higher CPU cost. Small files *should* decode in a very
> few seconds even on slow CPUs with the pure java code.
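
(To make the fetch/decode overlap concrete, I picture the fetcher doing
something roughly like the sketch below -- made-up names only
(fetchSegmentBlocks, fecDecode), not the real splitfile code:)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PipelinedSegmentFetch {
    // Stand-ins for whatever the real segment fetch and FEC decode calls are.
    static byte[][] fetchSegmentBlocks(int segment) { return new byte[0][]; }
    static byte[] fecDecode(byte[][] blocks) { return new byte[0]; }

    public static void fetchAndDecode(int segmentCount) throws Exception {
        ExecutorService decoder = Executors.newSingleThreadExecutor();
        Future<byte[]> pendingDecode = null;
        for (int seg = 0; seg < segmentCount; seg++) {
            // Fetch segment N on this thread while the previous segment
            // is still being decoded on the worker thread.
            byte[][] blocks = fetchSegmentBlocks(seg);
            if (pendingDecode != null)
                pendingDecode.get();   // previous segment has finished decoding
            pendingDecode = decoder.submit(() -> fecDecode(blocks));
        }
        if (pendingDecode != null)
            pendingDecode.get();       // last segment
        decoder.shutdown();
    }
}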
Switching to 4MB segments will also cut the effectiveness of the FEC, since each check block will cover fewer other blocks. To be honest, the effectiveness of the FEC probably isn't a weakness: 50% redundancy is a lot of margin for error.

I wish we/I could talk with a patent lawyer about the Tornado codes patent. There's no way they can be patenting the idea of XORing data blocks together to produce check blocks, can they? They've got to be patenting some other ideas, like the cascading of their FEC and the method of choosing which blocks to XOR. Those features matter for the purposes Tornado codes are used for, but Freenet should[1] be nearly immune to them, because the client gets to choose which blocks to request instead of having the server just broadcast information. (A toy sketch of the bare XOR operation I mean is below, after my sig.)

Anyway, I'm back in a small way for the moment. I'm still working on re-implementing the node in OCaml, probably using the MLDonkey project's codebase as at least a huge library of p2p networking code.

Thelema
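
The bare XOR operation I mean is nothing more than this (a toy
illustration, obviously not the real codec, and nothing like the
cascading structure they describe):

public class XorCheckBlock {
    // Build one check block by XORing a set of equal-sized data blocks.
    static byte[] makeCheckBlock(byte[][] dataBlocks) {
        byte[] check = new byte[dataBlocks[0].length];
        for (byte[] block : dataBlocks)
            for (int i = 0; i < check.length; i++)
                check[i] ^= block[i];
        return check;
    }

    // Recover a single missing data block: XOR the check block with every
    // data block we do have; what remains is the missing block.
    static byte[] recoverMissing(byte[] checkBlock, byte[][] presentBlocks) {
        byte[] missing = checkBlock.clone();
        for (byte[] block : presentBlocks)
            for (int i = 0; i < missing.length; i++)
                missing[i] ^= block[i];
        return missing;
    }
}

If anything is patentable there, it has to be in which data blocks feed
which check blocks and how the layers cascade, not in the XOR itself.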
