On Sunday 03 August 2003 11:03 am, Gabriel K wrote:
> I'm not sure I follow you here. Sure, ONE node has to upload EVERY time a
> node that doesn't have the file requests it in a regular network. And in an
> anonymous network, the least required is to proxy via one node, so that
> means two nodes need to upload that file. In Freenet you have caching of
> files... so what? Still the files need to be uploaded by some node. Please
> explain the difference you see here. How is the net upload greater? If you
> have something like BitTorrent, for instance, it is pretty much optimal
> with respect to net upload, right?
>
> I have only read how Freenet works at a theoretical level, so HTL 0 means
> nothing to me, sorry :) I'm guessing it's some kind of TTL for the request?
> So your request dies after 1 hop? Well, I'm talking about a protocol that
> can perform a search on all nodes, but STILL allow the data transfer to use
> only one or a few proxies. As for the number of proxies (considering they
> could be compromised), it's simply trading security vs. bandwidth overhead.
> I'm guessing something like 2 or 3 proxies is optimal for large data.
>
> > > be OK that the overhead of finding the data is large (if it's 500% more
> > > than shortest/least hops/bandwidth it's ok), if that optimises the data
> > > transfer! Think big! :) The amount of data transferred is so much bigger
> > > than the amount of control messages! Optimise data transfers!
> >
> > But what about small data like freesites? Latency is so much more
> > important than bandwidth there.
>
> Yes, that is true, but if you have used DirectConnect (maybe with a 10 Mbit
> line), most files downloaded are big. Or at least they are big together:
> files from the same source. These files would take the same path, so they
> can be seen as one big file. And I think that anything over 3 MB could be
> considered big compared to the size of control messages.
> Well, if I were in a network where practically no one is downloading except
> me, and each node has the same bandwidth as me, then sure, I could have 20
> proxies in between without losing speed. Latency would be greater, but once
> the transfer has started it will be pretty much as fast as downloading
> directly from the source.
>
> Now, if each node is trying to max out its bandwidth, then the number of
> proxies noticeably affects the bandwidth. Let's say every transferred file
> passes through every node. That would pretty much mean that on average each
> node has 1/N of its bandwidth left for its own downloads, and the rest is
> used for proxying, assuming a lot of activity in the net.
>
> I think it's good to assume lots of activity in the network.
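The quoted 1/N argument can be checked with a tiny back-of-the-envelope model. This is a hypothetical sketch for illustration only, not taken from any Freenet code: it just restates the claim that if every transfer is relayed through every node, a node spends almost all of its bandwidth proxying for others.

```python
# Toy model of the quoted claim: N nodes, each with the same symmetric
# bandwidth, all saturating their links. If every transferred file passes
# through every node, each node's link carries N flows (its own plus
# N - 1 relayed ones), leaving roughly 1/N of its bandwidth for itself.

def usable_fraction(n_nodes: int) -> float:
    """Fraction of a node's bandwidth left for its own transfers."""
    return 1.0 / n_nodes

for n in (2, 5, 20):
    print(f"{n} nodes: {usable_fraction(n):.0%} of bandwidth usable")
```

With 20 equally busy nodes in the path, only about 5% of each node's bandwidth is left for its own downloads, which is why the number of proxies matters so much once the network is active.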
You don't really seem to have a very good understanding of how Freenet works. I'm going to put up a document in a few days that hopefully should make it clearer; right now Freenet's documentation is very lacking.

First, there is no distinction between searching for and requesting a file. If you make a request for a file, you don't find out who has it and then connect to them through proxies; you just get the file. (You don't know or care where it came from.)

So having to upload to Freenet is a BIG BIG plus. It means that instead of people having to connect to your computer to get the file, they get it from the network. Anything bigger than a meg is broken up into 256k chunks, each of which is routed and stored separately. So when you download a large file, you can easily run 50 different threads, each downloading a segment from a different computer. The total bandwidth available to all users downloading the file is therefore never limited by the number of users that already have it (although more copies would end up stored around the network). So Freenet would give much, much better throughput than something like BitTorrent with anonymizing proxies. It is also not vulnerable to any one host being down, overloaded, etc.

However, if you are talking about transmitting data point to point, rather than making it available to everyone, then Freenet may not be for you. If your download traffic is going to be less than the cost of uploading all the data, you might as well just run an FTP server or something. Alternately, you don't always have to insert the data into Freenet: Frost employs a mechanism to insert only the metadata; then, when a user wants the file, it is inserted on demand.

To address your last point about reducing the number of proxies: this might be done soon. Someone just brought it up on [EMAIL PROTECTED] It is not good to do as a general practice, because then the intermediate nodes can't improve their routing tables, and the network is more vulnerable to attack.
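The parallel chunk retrieval described above can be sketched in a few lines. This is an illustrative toy, not real Freenet client code: `fetch_chunk`, the key names, and the chunk count are stand-ins, and a real client would route each request by its key rather than return dummy data. The point it shows is that because each 256k chunk is stored independently, downloads scale with the network rather than with the number of hosts holding the whole file.

```python
# Hypothetical sketch of downloading a splitfile: each 256 KiB chunk is
# stored under its own key, so many chunks can be fetched concurrently
# from many different nodes.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 256 * 1024  # 256k segments, per the description above

def fetch_chunk(key: str) -> bytes:
    # Placeholder: a real client would route a request for this key and
    # return the chunk's data from whichever node answers.
    return b"\x00" * CHUNK_SIZE

def fetch_file(chunk_keys: list[str], threads: int = 50) -> bytes:
    # Chunks are routed and stored separately, so requests can run in
    # parallel against different nodes; map() preserves chunk order.
    with ThreadPoolExecutor(max_workers=threads) as pool:
        parts = pool.map(fetch_chunk, chunk_keys)
    return b"".join(parts)

data = fetch_file([f"CHK@chunk-{i}" for i in range(4)])
print(len(data))  # 4 chunks of 256 KiB each
```

Because no single uploader serves the whole file, aggregate throughput is limited by how widely the chunks are cached, not by the original inserter's link.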
However, a good time for them to cut themselves out of the return path is when they are overloaded and would slow the transfer. So that will probably be done soon.

_______________________________________________
Tech mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/tech
