Freenet uses a lot of bandwidth.  Some of it might be avoidable.
Somewhere I saw an argument that content must pass through each node
along the chain so that every node can verify that the content matches
the hash.  But there are other ways of verifying this.

A node which has the file (and knows it has it, because the CHK comes
out right) can calculate f(file, random long), giving say a 128-bit
result, for lots of different random longs, and save the pairs.  This
can be done during inserts by nodes which decide not to save the whole
file.
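As a rough illustration, f could be a keyed hash of the file, truncated
to 128 bits.  This is only a sketch of the idea; the function name,
nonce size, and choice of HMAC-SHA256 are my assumptions, not anything
Freenet actually implements.

```python
import hmac
import hashlib
import os

def make_proof(file_bytes: bytes, nonce: bytes) -> bytes:
    # f(file, random long): HMAC-SHA256 keyed by the nonce,
    # truncated to a 128-bit (16-byte) result.
    return hmac.new(nonce, file_bytes, hashlib.sha256).digest()[:16]

def precompute_proofs(file_bytes: bytes, count: int = 8):
    # While the file is still on hand (e.g. during an insert the node
    # decides not to store), save a batch of (nonce, answer) pairs.
    pairs = []
    for _ in range(count):
        nonce = os.urandom(8)  # the "random long"
        pairs.append((nonce, make_proof(file_bytes, nonce)))
    return pairs
```

Only a node that can recompute f over the actual file bytes can
reproduce the saved answers, which is what makes the pairs useful as
possession challenges later.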

Deleting a file from the datastore goes in two stages.  The second
stage is to delete all traces of the file when it hasn't been requested
and space is tight.  But first, you replace the whole file with a bunch
of these (random long, result) pairs, so that if you get a new request
for the file, you can insert two or three of the random longs into the
request before passing it on.  Then, when a node down the line actually
has the file, it can prove it to you by sending the answers back up,
without having to send the whole file past every node on the chain.

Now there are no doubt other obstacles to avoiding transmission of the
data through every node, but inability to check the honesty of
downstream nodes is not one of them.

-- Ed Huff

_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl