--- "Edward J. Huff" <[EMAIL PROTECTED]> wrote: > Freenet uses a lot of bandwidth. Some of it might be avoidable. > Somewhere I saw an argument that content must pass through each node > along the chain so that they can all verify that the content matches the > hash. But there are other ways of verifying this.
Yeah, every link checks that the SSK or CHK matches the data. If that
costs too much CPU you could probably do it probabilistically, but the
neat thing about having everyone check is that if some jerk is trying
to send crap data, you'll bust him (or his friends) on the first hop
and blacklist him.

> A node which has the file (and knows it has it because the CHK comes
> out right) can calculate f(file, random long), giving say a 128-bit
> result, for lots of different random longs, and save them. This can
> be done during inserts by nodes which decide not to save the whole
> file.

Sure: use a keyed hash, or just append some salt (your long) to the
data and hash it (SHA-1). P2P nets can use this to build trust, by
forcing a potential adversary to actually store the data.

> Deleting files from the datastore goes in two stages. The second
> stage is to delete all traces of the file when it hasn't been
> requested and space is tight. But first, you replace the whole file
> with a bunch of these results, so that if you get a new request for
> the file, you can insert two or three of the random longs into the
> request before passing it on. Then when a node down the line actually
> has the file, it can prove it to you by sending the answers back up
> without having to send the whole file past every node on the chain.

Right, or I could just let a bad guy burn his up-bandwidth sending
junk, and watch him get disconnected from his neighbors for doing it.
Note that your solution doesn't prevent him from passing the test when
he does have the data while still sending junk back.

> Now there are no doubt other obstacles to avoiding transmission of
> the data through every node, but inability to check the honesty of
> downstream nodes is not one of them.

Right. Here are the reasons why data is routed back the way it was
requested instead of directly:

1) The main obstacle is requester anonymity. If you connect back to me
   directly when you give me data, you'll need my IP, which
   compromises me.
2) The other problem is that since TCP/IP has no protection against
   flooding attacks, if a good node were to connect directly back to
   the sender (or give his IP back in a reply, which we do now :-( ),
   that node could get pummeled by a SYN flood or another DoS attack.
   This is why hopping through a couple of intermediate nodes is good.

3) Routing data through the overlay network makes for nice
   replication. Personally I think I should only have to upload a
   piece of data to a neighbor once a week, tops, at least if it's in
   our specialization, but right now pcaching makes it likely I could
   have to do it a few times.

Using tricks like this does have merit in other systems; I'm just not
sure it's useful for Freenet.

Chris

__________________________________________________________________
Sent from Yahoo! Mail - http://mail.yahoo.de
Logos and ringtones for your phone at http://sms.yahoo.de
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
