On Wednesday 08 August 2007 17:08, Matthew Toseland wrote:
> On Wednesday 08 August 2007 17:06, Matthew Toseland wrote:
> > > > ----- Anonymous at o9_0DTuZniSf_+oDmRsonByWxsI ----- 2007.05.14 - 16:46:24GMT -----
> > > >
> > > > While token passing would indeed smooth the traffic out, it feels excessive:
> > > >
> > > > - it adds extra traffic;
> > > > - it creates additional traffic patterns, which considerably simplify attacks (such as those aiming to reliably prove that a particular request originates from the attacked node) against a node whose connections are all monitored (by the ISP), some of them fast but compromised (compromised peers);
> > > > - it requires pulling a multidimensional set of heuristics, out of thin air, on whom to send new tokens, and those heuristics will tend to disagree for different connection types.
> > > >
> > > > The method of delaying network reads (that's important - and AFAIK the only major thing still missing to get shaping rolling smoothly) should work similarly well (it might even be better): just treat the metric 'the current peer round-trip time is lower than the [peer] average round-trip time' as equivalent to 'the peer gave us a few tokens', and enjoy bandwidth/crypto(CPU)-free virtual token passing which obeys both hardware/ISP traffic-shaping limits and software-configured limits - whichever is stricter.
> > > >
> > > > So I currently discourage implementing explicit token passing, in favor of lower, equally tasty fruit.
> > >
> > > ----- mrogers at UU62+3E1vKT1k+7fR0Gx7ZN2IB0 ----- 2007.05.17 - 21:40:27GMT -----
> > >
> > > > - it adds extra traffic
> > >
> > > Um, right. "Here are n tokens" takes about 6 bytes: two for the message type, two for the message size, and two for the number of tokens (we're never going to hand out more than 65535 tokens in one go).
> > > It uses less traffic than "Can I send you a request?" "Yes" "Here's the request", and it avoids a round trip. It also uses less traffic than "Can I send you a request?" "No", because if you don't have a token, you don't need to ask!
> > >
> > > > - it creates additional traffic patterns, which considerably simplify attacks (such as those aiming to reliably prove that a particular request originates from the attacked node) against a node whose connections are all monitored (by the ISP), some of them fast but compromised (compromised peers).
> > >
> > > Please explain how handing my peer some tokens reveals anything about traffic patterns that wasn't already visible to traffic analysis. If they can see the requests and results going back and forth, who cares if they can also see the tokens?
> > >
> > > > - it requires pulling a multidimensional set of heuristics, out of thin air, on whom to send new tokens, and those heuristics will tend to disagree for different connection types.
> > >
> > > No magical heuristics are needed - we hand out tokens as long as we're not overloaded (measured by total queueing delay, including the bandwidth limiter). That alone should be enough to outperform the current system, because we'll avoid wasting traffic on rejected searches. Then we can start thinking about clever token allocation policies to enforce fairness when the network's busy, without imposing unnecessary limits when the network's idle, etc. But token passing doesn't depend on any such policy - it's just a lower-bandwidth alternative to pre-emptive rejection.
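To make the 6-byte figure above concrete, here is a minimal sketch of such a token-grant message. The wire layout (big-endian, two bytes each for type, size, and count) follows mrogers' description, but the message-type code is hypothetical - the real FNP message numbering is not specified in this thread.

```python
import struct

# Hypothetical message-type code for "Here are n tokens"; the real FNP
# code is not given in the thread - this value is for illustration only.
MSG_TOKENS = 0x0042
MAX_TOKENS = 0xFFFF  # "never going to hand out more than 65535 in one go"

def encode_token_grant(n_tokens: int) -> bytes:
    """Pack a token grant into the 6-byte form described in the thread:
    2 bytes message type, 2 bytes payload size, 2 bytes token count."""
    if not 0 < n_tokens <= MAX_TOKENS:
        raise ValueError("token count must fit in two bytes")
    payload = struct.pack(">H", n_tokens)
    return struct.pack(">HH", MSG_TOKENS, len(payload)) + payload

def decode_token_grant(msg: bytes) -> int:
    """Unpack a token grant, checking type and size fields."""
    msg_type, size = struct.unpack(">HH", msg[:4])
    assert msg_type == MSG_TOKENS and size == 2
    return struct.unpack(">H", msg[4:])[0]

grant = encode_token_grant(500)
assert len(grant) == 6               # matches the 6-byte estimate
assert decode_token_grant(grant) == 500
```

Compare this single 6-byte grant with the "Can I send you a request?" / "Yes" exchange it replaces: the grant is both smaller and saves a round trip, which is the core of mrogers' argument.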
> > ----- Anonymous at o9_0DTuZniSf_+oDmRsonByWxsI ----- 2007.05.25 - 11:57:46GMT -----
> >
> > As far as I can see, the tokens should be transferred in a timely enough manner to keep the burst problem moderate; so the majority of them will not be coalesced, frequently resulting in the following overhead on top of the 6 bytes:
> >
> > - up to 100 bytes of random padding;
> > - 50+ bytes of FNP headers;
> > - 8 bytes of UDP header;
> > - 20+ bytes of IP header.
> >
> > Those 150-200 byte packets, besides being numerous enough to be noticeable, unavoidably create additional traffic patterns that could be used to estimate a node's activity with its other peers (even if those other peers use some local connection like Bluetooth/WiFi/LAN, which is much more expensive to monitor remotely or centrally).
>
> ----- cptn_insano at _g2YxqIynCrs2bcwLGQkr+0b544 ----- 2007.05.25 - 13:52:15GMT -----
>
> Could the tokens be mixed in with real data transfers?
> Are we merging multiple inserts/requests to make full packets?
> Could SSK transfers be padded with CHK data?
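Tallying the overhead figures Anonymous quotes for an uncoalesced token grant (the padding and FNP header sizes are the poster's estimates, not measured values):

```python
# Per-packet cost of a 6-byte token grant sent on its own, using the
# figures from the post above. Padding and FNP header sizes are the
# poster's estimates; IP header assumes IPv4 with no options.
payload = 6
padding_max = 100     # "up to 100 bytes of random padding"
fnp_header = 50       # "50+ bytes of FNP headers"
udp_header = 8
ip_header = 20        # "20+ bytes" (IPv4 minimum)

low = payload + fnp_header + udp_header + ip_header           # no padding
high = payload + padding_max + fnp_header + udp_header + ip_header

print(low, high)   # 84 184
```

With typical padding and the "+" on the FNP and IP headers, this lands in the 150-200 byte range the post cites - a roughly 25-30x blow-up over the 6-byte payload.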
----- Anonymous at o9_0DTuZniSf_+oDmRsonByWxsI ----- 2007.05.25 - 20:48:05GMT -----

> Could the tokens be mixed in with real data transfers?

Theoretically yes; but to avoid extraneous delays the coalesce period must be kept quite short, so as not to limit the effective transfer speed too much. Thus coalescing will work fine for fast links that do not really need traffic shaping, and will frequently fail for slower links, making them even slower.

> Are we merging multiple inserts/requests to make full packets?

Yes, but again only fast links seriously benefit from that, due to the very short (currently 100ms) coalesce period. In other words, 100ms corresponds to roughly a 128kbaud connection PER PEER - so if you have 8 connected peers, you need at least a 1mbaud internet connection for coalescing to succeed most of the time - which is far too much to assume is available globally.

> Could SSK transfers be padded with CHK data?

SSK transfers are extremely infrequent (since the absolute majority of SSK requests currently result in DNF). So this does not make a noticeable difference.
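The 128kbaud-per-peer figure can be sanity-checked with a back-of-envelope calculation. The assumed packet size (~1280 bytes) and the ~10-baud-per-byte line-rate factor are my assumptions to reproduce the poster's number, not values stated in the thread:

```python
# Sanity check of "100ms coalesce period -> ~128kbaud per peer".
# packet_bytes and baud_per_byte are assumptions chosen to reproduce
# the poster's figure, not values from the thread.
coalesce_period_s = 0.1   # current coalesce window (100ms)
packet_bytes = 1280       # assumed "full" packet size
baud_per_byte = 10        # rough line rate per payload byte

bytes_per_s = packet_bytes / coalesce_period_s   # data needed per peer
baud = bytes_per_s * baud_per_byte

print(baud)        # 128000.0 -> ~128 kbaud per peer
print(baud * 8)    # 1024000.0 -> ~1 Mbaud for 8 peers
```

That is, to fill a packet within every 100ms window you must be pushing roughly 12.8 kB/s to that peer, so with 8 peers the whole uplink must sustain about 1 Mbaud - the poster's point being that this is far above what can be assumed globally.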