Since time immemorial, Freenet has generally averaged 3KB/sec ... it still does.
[20:27:51] <nextgens> CHK Request RTT=30.145s delay=1.940s bw=16890B/sec
[20:28:04] <nextgens> CHKs 14.122% 737,150
[20:20:53] <toad_> # CHK Request RTT=19.045s delay=1.727s bw=18973B/sec
[20:25:54] <toad_> in actual fact, 14.396% of CHK requests succeed, making the well-known freenet trademark 3KB/sec average download speed :|

(Multiplying the speed estimate from the AIMD by the percentage of requests that succeed... the arithmetic is sketched below.)

Two well connected nodes: mine has 30KB/sec upstream, nextgens has 80KB/sec upstream. Nextgens' node averages 2400 bytes per second, mine averages 2700 bytes per second. These are very similar because the load management code estimates the capacity of the whole network, not of your immediate neighbourhood.

Token passing could perhaps fix this, without necessarily increasing the proportion of requests that are locally originated: local requests would be balanced as if they came from just another node, more or less as they are now, but without the AIMD, with load being indicated directly by the network. We should also get better routing, especially for "bulk" requests that are prepared to be queued for a little while in exchange for a good route.

Would traffic be more bursty and therefore less secure? Burstiness is simply a function of what proportion of your capacity is used for your own requests, so it is relatively easy (modulo bootstrapping issues) to control. If this capacity was unused previously, then you can use it without offending your peers (on opennet, a node that doesn't relay requests will quickly lose all its good connections; on darknet, load management could eventually have the same effect). So the user-controllable variable is what proportion of *unused* capacity is used for starting your own requests (see the second sketch below). The higher the bandwidth used for your requests relative to the average node's bandwidth, the greater the proportion of the requests a few hops away that will be yours, and the lower your security.

Most traffic, of course, is polling for keys that don't exist. Passive requests would reduce the cost of such polling, but I have no idea what the throughput gains would be. On the other hand, bloom filters would be a clear gain (a toy sketch is included below), provided we can actually use them on opennet as well as darknet:

- Significantly reduce the average number of hops for a successful request for popular or moderately popular data.
- Significantly improve the odds of a request for unpopular data succeeding.
- Can be exempt from AIMDs. (Responses to offered keys via ULPRs can perhaps be, too.)

With tunnels as well, you get:

- Much better throughput and latency with current security.
- Much better security with hopefully similar throughput and latency.
- Better data retrievability with either mode.
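To spell out the arithmetic behind the 3KB/sec figure: the effective rate is simply the AIMD's bandwidth estimate multiplied by the CHK success rate. A minimal sketch using the numbers from the log above (the class and method names are mine, not anything in the Freenet source):

public class EffectiveThroughput {

    // Effective payload rate = AIMD bandwidth estimate * fraction of CHK requests that succeed.
    static double effectiveBytesPerSecond(double aimdBytesPerSecond, double successFraction) {
        return aimdBytesPerSecond * successFraction;
    }

    public static void main(String[] args) {
        // Figures taken from the IRC log quoted above.
        System.out.printf("toad_:    %.0f bytes/sec%n", effectiveBytesPerSecond(18973, 0.14396)); // ~2731
        System.out.printf("nextgens: %.0f bytes/sec%n", effectiveBytesPerSecond(16890, 0.14122)); // ~2385
    }
}

That gives roughly 2700 bytes/sec for my node and 2400 bytes/sec for nextgens', i.e. the familiar ~3KB/sec regardless of how fat the node's own pipe is.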
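Second sketch: the "proportion of unused capacity" knob could look something like the following. This is only an illustration under my own naming, not actual load-management code; the point is that local requests get a user-chosen fraction of whatever upstream capacity is left after relaying, so relayed traffic is never squeezed out and the node still looks like just another node to its peers.

public class LocalRequestBudget {

    // Bandwidth (bytes/sec) this node may spend on starting its own requests:
    // a user-controlled fraction of the capacity left over after relaying
    // other nodes' traffic.
    static double localRequestBytesPerSecond(double upstreamCapacityBytesPerSecond,
                                             double relayedBytesPerSecond,
                                             double userFractionOfUnused) {
        double unused = Math.max(0.0, upstreamCapacityBytesPerSecond - relayedBytesPerSecond);
        return unused * userFractionOfUnused;
    }

    public static void main(String[] args) {
        // Example: 30KB/sec upstream, 20KB/sec currently relayed, user allows 50%
        // of the spare capacity for locally originated requests -> 5KB/sec.
        System.out.println(localRequestBytesPerSecond(30 * 1024, 20 * 1024, 0.5));
    }
}

The security trade-off described above falls out of that one fraction: the more of your spare capacity you spend on your own requests relative to the average node, the larger the share of nearby traffic that is yours.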
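Finally, to make the bloom filter point concrete, a toy per-peer key filter (illustrative only; not the datastore filter Freenet would actually ship): a hit means the peer probably has the block, a miss means it definitely does not, which is what cuts the hop count for popular data and improves the odds for unpopular data.

import java.util.Arrays;
import java.util.BitSet;

// Toy Bloom filter over key hashes; illustrative only.
public class PeerKeyFilter {

    private final BitSet bits;
    private final int size;
    private final int hashes;

    PeerKeyFilter(int sizeInBits, int hashFunctions) {
        this.bits = new BitSet(sizeInBits);
        this.size = sizeInBits;
        this.hashes = hashFunctions;
    }

    // Derive the i-th bit index from the key's routing hash (double hashing).
    private int index(byte[] routingKey, int i) {
        int h1 = Arrays.hashCode(routingKey);
        int h2 = (h1 >>> 16) | 1; // force an odd second hash
        return Math.floorMod(h1 + i * h2, size);
    }

    // Called when the peer advertises (or is known to store) this key.
    void add(byte[] routingKey) {
        for (int i = 0; i < hashes; i++)
            bits.set(index(routingKey, i));
    }

    // False positives are possible; a negative answer is definite.
    boolean mightHave(byte[] routingKey) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(index(routingKey, i)))
                return false;
        return true;
    }
}

A node could then prefer the connected peer whose filter reports a probable hit, falling back to plain location-based routing on a miss; and because the filter check is purely local, that is presumably what allows such requests to be exempted from the AIMDs as listed above.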