On Wed, 2003-10-15 at 08:18, Some Guy wrote:
> --- "Edward J. Huff" <[EMAIL PROTECTED]> wrote:
> > Freenet uses a lot of bandwidth. Some of it might be avoidable.
> > Somewhere I saw an argument that content must pass through each node
> > along the chain so that they can all verify that the content matches
> > the hash. But there are other ways of verifying this.
> > [...]
> Your solution doesn't prevent him from passing the
> test if he's got the data and still sending junk back.

Thank you. That is what I wanted to know.

So, what I need to save when I'm deleting the data is hashes of short
segments of the file. When I get a request for the actual data, I send
back the list of hashes (and I suppose also forward the request). That
way, if the original requester trusts me, he can verify parts of the
data before receiving all of it, and not waste time downloading junk.

He also makes the supplier answer these hash questions first. If the
answers agree with what I said, the supplier must have the data. And if
the data doesn't check out when it comes down, the supplier is caught
in a lie early in the transfer.

> > Now there are no doubt other obstacles to avoiding transmission of
> > the data through every node, but inability to check the honesty of
> > downstream nodes is not one of them.
>
> Right, here are reasons why data is routed back the way it was
> requested instead of directly:
>
> 1) The main obstacle is requester anonymity. If you connect back to me
> when you give me data you'll need my IP, which compromises me.

Freenet gives plausible deniability, not complete anonymity. The
supplier (who is trying to catch users of the contraband he supplies)
can't be sure that I didn't relay the data to some other node, since
the connections are encrypted, so he can't prove I am the one who
requested it. But for this to work, I have to sometimes relay data.
That's fine as long as my bandwidth isn't saturated, but when it is, it
would be nice to let the original requester fetch the data directly,
and still find out whether the supplier actually had it.
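The segment-hash scheme above can be sketched in a few lines. This is a
minimal illustration, not Freenet's actual wire format: the segment size,
the use of SHA-256, and the function names are all assumptions of mine.
The point is that a node which has deleted the data keeps only the hash
list, and a requester can both challenge the supplier on those hashes up
front and verify each segment as it streams in, catching junk early:

```python
import hashlib

SEGMENT_SIZE = 4096  # assumed segment size; the post does not fix one


def segment_hashes(data: bytes) -> list[bytes]:
    """Hashes of short segments -- all a node keeps after deleting the data.

    These also serve as the challenge questions: a supplier who really
    holds the data can reproduce any of them on demand."""
    return [hashlib.sha256(data[i:i + SEGMENT_SIZE]).digest()
            for i in range(0, len(data), SEGMENT_SIZE)]


def verify_stream(segments, expected_hashes):
    """Check segments as they arrive; return the index of the first lie,
    or None if the whole transfer checks out."""
    for i, (seg, want) in enumerate(zip(segments, expected_hashes)):
        if hashlib.sha256(seg).digest() != want:
            return i  # supplier caught early in the transfer
    return None


# The deleting node stores only this list:
original = b"some perfectly mundane test data " * 500
hashes = segment_hashes(original)

# Honest supplier: every segment checks out.
segments = [original[i:i + SEGMENT_SIZE]
            for i in range(0, len(original), SEGMENT_SIZE)]
assert verify_stream(segments, hashes) is None

# Dishonest supplier sending junk is caught at the first bad segment,
# before the requester wastes bandwidth on the rest.
junk = segments[:1] + [b"junk" * 1024] + segments[2:]
assert verify_stream(junk, hashes) == 1
```

Note that, as the reply above points out, this only proves the supplier
*has* the data if he answers the hash questions before the transfer; the
streaming check just limits how much junk gets downloaded before the lie
is detected.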
> 2) The other problem is that since TCP/IP has no protection against
> flooding attacks, if a good node were to connect directly back to the
> sender (or give his IP back in a reply, which we do now :-(), that
> node could get pummeled by a SYN flood or other DoS attack. This is
> why hopping through a couple intermediate nodes is good.

Right, but you can't keep the IP addresses of freenet nodes secret. I
know some thousand of them just from running one node for two months.

> 3) Routing data through the overlay network makes for nice
> replication.
>
> Personally I think I should only have to upload a piece of data to a
> neighbor once a week tops, at least if it's in our specialization, but
> right now pcaching makes it likely I could have to do it a few times.
>
> Using tricks like this does have merit in other systems; I'm not sure
> it's useful for freenet.

I'm just trying to figure out how to meet the goals of freenet without
necessarily keeping all of the current design decisions.

For instance, suppose we have a very anonymous mixnet with source
routing. I pick a random list of nodes and pass nested envelopes.
Somewhere in the middle, the decrypted instructions say to issue a
request for a CHK via the routing network. The reply, which says where
to find the CHK, comes back to that node, which sends it along the rest
of the source-routed path until it eventually reaches me. I can then
issue a direct request for that CHK. Since the supplier of the CHK has
no idea what is in it, I am anonymous. (There are many omitted
details.)

Also, suppose we have a completely non-anonymous distributed backup
system. BTW, I would pay money and provide 10x bytes of storage to join
a system where I could back up x bytes and be sure of getting them back
if I lose my disks. I suspect lots of other people would too.
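The nested-envelope idea can be sketched as layered encryption: each hop
peels one layer, learns only the next hop, and forwards the rest. To
keep the sketch self-contained I use a toy XOR keystream in place of
real crypto (a real mixnet would use public-key onion encryption, and
the node names and keys here are made up):

```python
import hashlib
import json


def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256 keystream; symmetric, so the same call both
    encrypts and decrypts.  A placeholder only -- NOT real crypto."""
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], pad))
    return bytes(out)


def wrap(route, instruction: bytes) -> bytes:
    """Build nested envelopes for a source-routed path.

    Each layer names only the next hop; the innermost layer carries the
    instruction (e.g. 'issue a request for this CHK')."""
    env, nxt = instruction, None
    for name, key in reversed(route):
        layer = json.dumps({"next": nxt, "body": env.hex()}).encode()
        env, nxt = toy_cipher(key, layer), name
    return env


def peel(key: bytes, env: bytes):
    """What one hop does: decrypt its layer, learn the next hop only."""
    layer = json.loads(toy_cipher(key, env).decode())
    return layer["next"], bytes.fromhex(layer["body"])


# A random list of nodes picked by the requester:
keys = {"A": b"key-A", "B": b"key-B", "C": b"key-C"}
route = [(n, keys[n]) for n in ("A", "B", "C")]
env = wrap(route, b"request CHK via the routing network")

# Walk the path: each hop sees only its own layer.
hop, msg, path = "A", env, []
while hop is not None:
    path.append(hop)
    nxt, msg = peel(keys[hop], msg)
    hop = nxt

assert path == ["A", "B", "C"]
assert msg == b"request CHK via the routing network"
```

Only the middle node that executes the instruction ever sees the CHK
request, and it has no idea who originated the envelope, which is the
anonymity property argued for above.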
So here is a valid reason for sending lots of encrypted data around,
storing it on my disk without knowing what it is, and constantly
proving to others that I still have it. Everybody in the network has
put up 10x space in exchange for distributed redundant backup of x
space, and they are continually sending updates around.

I want to run freenet on the back of these two services: a highly
anonymous mixnet and, on the same network, constant traffic in large
encrypted files which are really just the network users' mundane files
being backed up remotely. I suspect this can be done, and furthermore
that it can achieve actual anonymity rather than just plausible
deniability. But the devil is in the details.

-- 
Ed Huff

_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
