Robert Bihlmeyer <robbe at debian.org> writes:

> Gianni Johansson <giannijohansson at attbi.com> writes:
> 
> > C. Within a segment all data blocks must be the same size and all
> > check blocks must be the same size.  The check block and data block
> > sizes are not required to be the same however.  Smaller trailing
> > blocks must be zero padded to the required length.
> 
> Ugh, do we need this bloat on the wire? An exception for the last
> data block would be preferable.
> 
The exception for the last data block is still assumed.  What GJ's
arguing for is being able to have data blocks of 256K and check blocks
of 128K, or something like that.  It's a nice generalization, but
IMNSHO, it doesn't gain that much.

Having check blocks smaller than data blocks means that to correct one
missing key, you're going to have to retrieve more than one check
block.  And if the check blocks are bigger...  well, I guess you lose
most of the advantages of splitfiles; might as well just make all the
blocks bigger.
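The cost argument above can be sketched with a little arithmetic. This is only an illustration; the function name and the 256K/128K sizes are taken from the example earlier in the thread, not from any spec:

```python
import math

def check_blocks_needed(data_block_size, check_block_size):
    """Hypothetical helper: how many check blocks must be fetched
    to reconstruct one missing data block, assuming check blocks
    cover the data block end to end."""
    return math.ceil(data_block_size / check_block_size)

# 256K data blocks with 128K check blocks: one lost data block
# now costs two check-block retrievals instead of one.
print(check_blocks_needed(256 * 1024, 128 * 1024))  # 2
```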

> > 0) Deprecate the BlockSize field, since check blocks are not necessarily the
> > same size as data blocks and blocks may be different sizes across segments.
> 
> I'd still like to know the data and check block sizes for a segment
> beforehand. Will I be able to deduce these?
> 
At the moment, the only good solution to your problem is to include
offsets for each block, and this is a pretty wasteful method, so we're
not going with it.  The next best solution (that still allows variable
block sizes) is to have an optional field that gives the size of
blocks when they're the same size, and to just have clients "deal with
it" when it's not known.
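In client code, "deal with it" might look something like the sketch below. The dict-of-strings layout and the field name `BlockSize` are assumptions for illustration, not the actual metadata wire format:

```python
def segment_block_size(fields):
    """Return the common block size for a segment, or None when the
    optional BlockSize field is absent and sizes must be discovered
    as blocks are retrieved (illustrative sketch only)."""
    value = fields.get("BlockSize")
    return int(value) if value is not None else None

# With the optional field present, the client can preallocate;
# without it, it falls back to sizing blocks as they arrive.
print(segment_block_size({"BlockSize": "262144"}))  # 262144
print(segment_block_size({}))                       # None
```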

> > 1) Add an AlgoName field. This is the name for the decoder and encoder 
> > implementation, that can be used to decode or re-encode the file. 
> > This replaces decoder.name and decoder.encoder in the previous
> > implementation.
> 
> The Metadata spec should probably list the known values for this
> field, and give links to further documentation. A genuine registry for
> these may be needed, but I doubt that there will be more than 3
> algorithms used in general ...
> 
I think that having multiple algorithms is going to result in either
no one using FEC or clients implementing their own FEC that's only
compatible with other copies of that same client.  Trying to
standardize on OnionNetworks' code is going to result in orphaning
platforms that code doesn't run on.  (I know there's a Java version of
their code; I also know that there are a lot of platforms Java doesn't
run on.)

> > * SplitFile.Graph is currently not being used and is not implemented.
> 
> Delivering the blueprint for reassembly with every splitfile has its
> appeal, but is probably more redundancy than it's worth.
> 
Actually, it's worth quite a lot in terms of future growth: as better
methods of deciding how to XOR data blocks together to produce check
blocks are discovered, they can be used without requiring all client
authors to rewrite their code.

The methods being developed that don't use Graph have two
disadvantages:
1) they are not extensible to future advances in FEC technology, and
2) they cannot correct anywhere near as many error patterns.
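Since Graph describes which data blocks get XORed into each check block, the underlying primitive is just a block-wise XOR. A minimal sketch, assuming equal-length blocks (point C above requires zero-padding trailing blocks, so that holds within a segment); the function name is mine, not from the spec:

```python
def xor_blocks(blocks):
    """XOR equal-length blocks together.  XORing all data blocks in
    a group yields a parity check block; XORing the check block with
    the surviving data blocks recovers a single missing one."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"abcd", b"efgh", b"ijkl"]
check = xor_blocks(data)                        # one check block over three
recovered = xor_blocks([check, data[0], data[2]])
assert recovered == data[1]                     # missing block restored
```

A Graph field would generalize this by letting the metadata say which subsets of data blocks feed each check block, instead of hard-coding one scheme in every client.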

> -- 
> Robbe

Thelema
-- 
E-mail: thelema314 at bigfoot.com                        Raabu and Piisu
GPG 1024D/36352AAB fpr:756D F615 B4F3 BFFC 02C7  84B7 D8D7 6ECE 3635 2AAB

_______________________________________________
devl mailing list
devl at freenetproject.org
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/devl
