From: Matthew Toseland <[EMAIL PROTECTED]>
Date: Thu, 28 Nov 2002 15:35:39 +0000


On Sun, Nov 24, 2002 at 01:41:45PM -0600, Edgar Friendly wrote:

Ian Clarke <[EMAIL PROTECTED]> writes:


On Sun, Nov 24, 2002 at 12:43:34PM +0100, Anonymous wrote:

If you want to check the FEC status, you can do it from a new menu in the
web interface, which shows the usual FEC download table.

Hopefully we can come up with a more attractive/intuitive interface than the one we have right now.



On this subject: one feature which was discussed but never implemented was that, when downloading a FEC file, the node would reconstruct any chunks that DNFed (came back Data Not Found) and reinsert them. Of course, this should occur in the background, after the reconstructed file has been sent to the user.
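A minimal sketch of what that healing pass might look like, with made-up names (FECCodec, BackgroundInserter are stand-ins; the node's real FEC and insert APIs will differ):

    class SegmentHealer {
        interface FECCodec {
            byte[][] decode(byte[][] blocks);         // null entries = DNFed; needs >= k non-null
            byte[] encodeBlock(byte[][] data, int i); // regenerate block i from the decoded data
        }
        interface BackgroundInserter {
            void queue(int blockIndex, byte[] data);  // low-priority reinsert
        }

        // After the decoded file has been handed to the user, rebuild and
        // reinsert every block that DNFed during the fetch.
        static void heal(FECCodec codec, BackgroundInserter inserter, byte[][] blocks) {
            byte[][] data = codec.decode(blocks);
            for (int i = 0; i < blocks.length; i++)
                if (blocks[i] == null)                // this block DNFed
                    inserter.queue(i, codec.encodeBlock(data, i));
        }
    }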


This should occur *with* the user's consent, making failed-block
reinsertion part of the interface. Doing it automatically leads to
problems like people inserting files with hideously large redundancy
and only inserting a few of the pieces.

Not if we hardcode the redundancy.

This would dramatically improve the reliability of FEC encoded files.

Ian.


Aren't FEC encoded files already supposed to be dramatically better
than regular splitfiles? Why don't we just make normal splitfiles
work better by improving Freenet instead of fixing the symptom?

Because if there are 300 blocks in a splitfile, the odds of any one of them
falling out have to be kept ridiculously low to guarantee
that the file can be retrieved. Do the math. Or read Oskar's version in
the devl archives.
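To spell the math out (my own back-of-the-envelope, assuming each block is retrievable independently with probability p; a plain splitfile needs all n blocks):

    P(\text{file}) = p^{n}

    0.99^{300} \approx 0.049
    P(\text{file}) \ge 0.9 \;\Rightarrow\; p \ge 0.9^{1/300} \approx 0.99965

So even 99% per-block reliability retrieves a 300-block file only about 5% of the time, and 90% file reliability demands roughly 99.97% per block. FEC needs only some k of the n blocks, which is what relaxes this requirement.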

If it is not done already,
one thing that could be done is to make a first request pass over the entire set of
blocks in a segment at the lowest HTL, then increment the HTL and request the
missing pieces (maybe working backwards through that list), and so on; a sketch
follows after the list below.

This has a threefold effect:
1) It will at least *try* all of the blocks before incrementing the HTL to try
harder on the first ones. If you manage to get enough on the first pass, then
great!
2) The minimal number of hops is used, rather than
hammering away at blocks that are not immediately accessible to your node
due to routing blockades (a busy bunch of nodes in that keyspace, a keyspace
your node is especially unspecialized in, etc.).
3) Successes later in the pass will help train your local node for the
second pass, decreasing the overall hops needed for the entire FEC
request. What leads me to believe this happens is the incremental increase
in the number of blocks downloaded by requests that immediately
follow failed ones. Another explanation might be blocks being
transferred *really* slowly, with the requests timing out before the data
can get routed back.
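In code, the pass-based strategy might look something like this (sketch only; MIN_HTL, MAX_HTL, and requestBlock are my stand-ins, not the node's actual API):

    import java.util.*;

    // Try every missing block once at the lowest HTL, then raise the HTL
    // and retry only what is still missing, until k blocks are in hand.
    class PassFetcher {
        static final int MIN_HTL = 5, MAX_HTL = 25;  // assumed bounds

        // Stand-in for the node's real request machinery.
        static boolean requestBlock(int index, int htl) { return false; }

        // n blocks in the segment; any k suffice to FEC-decode.
        static Set<Integer> fetch(int n, int k) {
            Set<Integer> missing = new TreeSet<Integer>();
            for (int i = 0; i < n; i++) missing.add(i);
            for (int htl = MIN_HTL; htl <= MAX_HTL && missing.size() > n - k; htl++)
                for (Iterator<Integer> it = missing.iterator(); it.hasNext();)
                    if (requestBlock(it.next(), htl))
                        it.remove();                 // got it; never re-request
            return missing;                          // what is still outstanding
        }
    }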

I think that random selection of blocks to download may be good in theory
but bad in practice: retried attempts at downloading a FEC file will
not make the most efficient use of previously retrieved blocks,
since the next attempt will not be looking for the same set of blocks.
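For illustration (a hypothetical helper, not existing code), one way to keep retries aligned without giving up load-spreading is to seed the "random" order from the splitfile key:

    import java.util.*;

    // Derive the block order from the splitfile key, so a retried download
    // walks the blocks in the same order and anything fetched on an earlier
    // attempt is found in the local store and skipped.
    class BlockOrder {
        static List<Integer> order(String splitfileKey, int n) {
            List<Integer> order = new ArrayList<Integer>(n);
            for (int i = 0; i < n; i++) order.add(i);
            Collections.shuffle(order, new Random(splitfileKey.hashCode()));
            return order;  // stable per key, but still varies across files
        }
    }

A seeded shuffle keeps the load-spreading benefit of random order while staying stable across retries, so earlier partial progress is reused.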

Mike


_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/devl
