Most of this sounds pretty good.

Firstly, a stupid question - is there any reason to separate "data
blocks" and "check blocks"?  As far as the FEC encoder/decoder knows,
they're just blocks, right?  I mean, that's the whole *point* of it
(you need any k blocks of n in order to decode a file).  That would
make things simpler, conceptually, I think.  This is of course based
on assumptions from FEC implementations that I have seen, where the
block size is constant... obviously, if your FEC implementation makes
the distinction, then I guess it makes sense (translation from
fishish: yeah, we need to support check blocks for certain
algorithms, but they don't make sense for onion's).
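
To illustrate what I mean, here's a rough sketch of a decoder that
treats all n blocks uniformly.  This is a hypothetical interface I
made up for illustration, not the actual onionnetworks API:

    // Hypothetical k-of-n decoder - names are invented for
    // illustration, NOT the onionnetworks API.
    public interface BlockDecoder {
        /**
         * Reconstructs the original k blocks from ANY k of the n
         * encoded blocks.  All the decoder needs is each block's
         * index within the segment; it never cares whether a block
         * was one of the original k "data" blocks or one of the
         * n-k "check" blocks produced by the encoder.
         *
         * blocks[i] is a block of the segment's fixed block size,
         * and indices[i] is its position (0..n-1) in the encoding.
         * Returns the k original blocks, in order.
         */
        byte[][] decode(byte[][] blocks, int[] indices);
    }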

Anyhow, the other point I wished to make is that, from looking at
your information, it seems like it would be far more convenient still
to just call the onionnetworks library directly - okay, yeah, I see
the usefulness of this for providing access to people who don't
have/don't want to have bindings to this, but it just seems like an
unnecessary layer of abstraction to me.  But perhaps I am on crack.

> III. Changes to SplitFile metadata format.
> 
> 0) Deprecate the BlockSize field, since check blocks are not necessarily the
> same size as data blocks and blocks may be different sizes across segments.

I strongly disagree with this - if we want to support this case, it
is better to have a separate set of metadata for each segment and
specify the block/check block size in each one.  This information is
very useful to have for reasons of memory allocation and the like.
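
Something along these lines, maybe - the field names here are purely
my guess at what per-segment metadata could look like, not a proposal
for the actual format:

    // Rough sketch of per-segment metadata.  Field names are
    // hypothetical; the point is keeping sizes per segment
    // instead of one global BlockSize.
    public class SegmentHeader {
        long offset;           // where this segment starts in the file
        int  blockCount;       // k: number of data blocks
        int  checkBlockCount;  // n - k: number of check blocks
        int  blockSize;        // size of each data block
        int  checkBlockSize;   // size of each check block

        // A downloader can preallocate exactly what it needs:
        long dataBytesNeeded() {
            return (long) blockCount * blockSize;
        }
    }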

All of that being said, however, this all looks okay to me on my
initial reading.

        - fish

p.s. I included the following in the original draft of this email,
but considered it off-topic and hence separated it out from the main
email.  However, I include it here because it is interesting and
semi-relevant:

> For a given maximum block size, some FEC algorithms can only
> practically handle files up to a certain maximum size.  The design
> uses segmentation to handle this case.  Large files are divided into
> smaller segments and FEC is only done on a per segment basis.  This
> compromise provides at least limited redundancy for large files.

Should we perhaps be looking for a library which doesn't suffer from
these problems as much as onion's library, however?  The thing is,
the whole usefulness of FEC is for big files, you know... I know that
we have to deal with reality instead of would-be-nice's, but it is
something to think about.
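
Just to make the cost of segmentation concrete, some back-of-the-
envelope numbers (the block size, k, and the n <= 256 codec limit
are all assumptions on my part - 256 is typical of 8-bit
Reed-Solomon codes, but I haven't checked what onion's actual limit
is):

    // Back-of-the-envelope: how many segments a large file needs.
    // All parameters here are assumed, not taken from any spec.
    public class SegmentMath {
        public static void main(String[] args) {
            int blockSize = 256 * 1024;  // 256 KiB blocks (assumed)
            int k = 128;                 // data blocks per segment
            long segmentBytes = (long) k * blockSize;  // 32 MiB

            long fileSize = 1L << 30;    // a 1 GiB file
            long segments =
                (fileSize + segmentBytes - 1) / segmentBytes;

            // Redundancy only helps *within* a segment: lose more
            // than n-k blocks of any one segment and the file is
            // gone, however intact the other segments are.
            System.out.println("segments needed: " + segments); // 32
        }
    }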

The other problem is that, as you stripe like this, the amount of
redundancy is, of course, reduced significantly - but you already
knew that.  However, people writing bad algorithms for downloading
files (fetching blocks 1, 2, 3, 4... in order) could make this
problem even worse.  (As a side note to this, I have been wondering
if an AWT-based freenet download manager would be a useful thing to
code/have... any thoughts on this?  Heh, of course, this would
require me to learn how to communicate with freenet from java, but
it can't be that hard, can it? :-p)
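
For example, a downloader that shuffles its request order instead of
going 1, 2, 3, 4... leaves every segment partially recoverable if the
download is interrupted, rather than leaving the last segments empty.
A quick sketch (purely illustrative, not based on any real client
code):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Sketch: request blocks in random order across all segments
    // rather than sequentially.  Purely illustrative.
    public class ShuffledFetcher {
        // Returns {segment, block} pairs in a random order.
        public static List<int[]> requestOrder(int segments, int n) {
            List<int[]> order = new ArrayList<int[]>();
            for (int s = 0; s < segments; s++)
                for (int b = 0; b < n; b++)
                    order.add(new int[] { s, b });
            Collections.shuffle(order);
            return order;
        }
    }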

Anyhow, I'm sure you knew all of this... just restating it for my own
benefit, don't mind me :).  I'll look into the alternative libraries
myself over the next few days... there's nothing to stop us from
having two encoders, given that the facilities are there - let the
best codec win :-p.

