On Sunday 18 June 2006 10:54, Michael Rogers wrote:
> Ed Tomlinson wrote:
> > We probably want to size a peer's token bucket relative to the amount
> > of the keyspace the peer is the best path for (resizing after all
> > location swaps).
>
> The token buckets are for incoming traffic, so we'd have to know how
> much of each peer's keyspace we were responsible for. I'm not sure if
> that's a good idea from an anonymity perspective - but on the other hand
> we can probably work it out passively anyway, by seeing which keys the
> peer requests from us.
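
To make the idea concrete, here is roughly the kind of bucket I have in
mind - only a sketch, the class and method names are made up, and the
keyspace fraction would come from wherever we decide to measure it
(passively, as you suggest, or otherwise):

// Hypothetical sketch: a per-peer token bucket whose refill rate is
// scaled by the fraction of the keyspace that peer is our best route for.
public class KeyspaceTokenBucket {

    private final double capacity;  // maximum tokens (burst size)
    private double tokens;          // current token count
    private double tokensPerSecond; // refill rate for this peer
    private long lastRefill;        // nanoTime of the last refill

    public KeyspaceTokenBucket(double capacity, double totalTokensPerSecond,
                               double keyspaceFraction) {
        this.capacity = capacity;
        this.tokens = capacity;
        this.tokensPerSecond = totalTokensPerSecond * keyspaceFraction;
        this.lastRefill = System.nanoTime();
    }

    // Call after a location swap changes how much keyspace the peer covers.
    public synchronized void resize(double totalTokensPerSecond,
                                    double keyspaceFraction) {
        refill();
        tokensPerSecond = totalTokensPerSecond * keyspaceFraction;
    }

    // Returns true if a request from this peer may be accepted, consuming
    // one token; false means the peer has used up its share for now.
    public synchronized boolean tryAcceptRequest() {
        refill();
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    private void refill() {
        long now = System.nanoTime();
        double elapsed = (now - lastRefill) / 1e9;
        tokens = Math.min(capacity, tokens + elapsed * tokensPerSecond);
        lastRefill = now;
    }
}

Resizing keeps whatever tokens have already accumulated, so a swap does
not suddenly hand a peer a full burst; that seemed the safer choice, but
I am open to arguments either way.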
> There's also a trust issue - assuming I've made the mistake of
> connecting to a malicious peer who's trying to flood the network, fair
> sharing will restrict the amount of my resources (and my peers'
> resources) the malicious peer can use. On the other hand if I allow the
> peer to influence the size of its share (eg by claiming that I'm
> responsible for a large part of its keyspace and therefore should
> accept a large number of requests), the attacker can get access to more
> resources.

Could not a node appear to be trying to flood the network when all it is
doing is forwarding many requests to its optimal node? We need to cope
with this usage pattern...

> > Hopefully we can find a metric that allows the node to self adjust.
>
> Hopefully - not sure what the feedback signals should be.
>
> > Bandwidth control (packet level) does look at all packets.
>
> Really? As far as I can tell, BlockTransmitter only throttles CHK
> replies.

I was almost certain I saw code that uses an AIMD on packets in general
to limit total bandwidth.

> > It makes sense not to throttle replies to in-transit messages.
>
> It makes sense not to reject them, but I'm not sure ignoring the
> (user-specified) bandwidth limit is a good idea. Replies could be
> queued rather than rejected if the bandwidth limiter's token bucket is
> empty.
>
> > Maybe throttling swap requests would be a good thing?
>
> Maybe - not sure how this would impact the swapping algorithm. If fewer
> swaps lead to less efficient routing, it could exacerbate the load
> problems.
>
> > We now use an ethernet-like algorithm to control backoff. My patch
> > changes this to (G)AIMD, which might be better. If we use a token
> > scheme, both the existing scheme and/or my (G)AIMD scheme can
> > probably be scrapped.
>
> OK, I'm not very happy about chasing a moving target but on the other
> hand I understand that people might not want to wait until the end of
> the SoC project to see improvements in the backoff situation. It's up
> to Matthew really.

This is already understood. There does need to be a good case for making
any changes to this area of the code.

> > Do you see any value in implementing the metrics mentioned previously?
> >
> > Node percent of time backed off
> > Peer percent of time backed off
> > Node percent of time routed to a non-optimal node due to backoff
>
> Yup, it would definitely be useful to have this information if we're
> planning to modify the backoff code before the SoC work is finished.

OK. I will create another svn copy here and add the metrics to the
existing code (a rough sketch of what I have in mind is in the PS below).
Once this is done we will at least know how well/badly the current code
is working. At that point it might be interesting to have a few nodes
try my changes IF my numbers show it's an improvement.

Thanks
Ed
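
PS - for the metrics, this is the sort of thing I plan to add - a rough
sketch only, the names are invented and it is not wired into anything
yet. One instance per peer plus one node-wide instance would cover the
first two numbers; the third comes from the routing-decision counter:

// Hypothetical sketch of the backoff metrics discussed above.
public class BackoffMetrics {

    private final long observationStart = System.nanoTime();
    private long backedOffNanos;    // total time spent backed off
    private long backoffStart = -1; // -1 means not currently backed off

    private long requestsRouted;
    private long routedNonOptimal;  // optimal peer skipped due to backoff

    public synchronized void onBackoffStart() {
        if (backoffStart == -1) backoffStart = System.nanoTime();
    }

    public synchronized void onBackoffEnd() {
        if (backoffStart != -1) {
            backedOffNanos += System.nanoTime() - backoffStart;
            backoffStart = -1;
        }
    }

    public synchronized void onRequestRouted(boolean optimalWasBackedOff) {
        requestsRouted++;
        if (optimalWasBackedOff) routedNonOptimal++;
    }

    // Percent of elapsed time spent backed off, including any backoff
    // that is still in progress.
    public synchronized double percentTimeBackedOff() {
        long now = System.nanoTime();
        long backedOff = backedOffNanos
            + (backoffStart == -1 ? 0 : now - backoffStart);
        return 100.0 * backedOff / (now - observationStart);
    }

    // Percent of requests routed to a non-optimal node because the
    // optimal one was backed off.
    public synchronized double percentRoutedNonOptimal() {
        if (requestsRouted == 0) return 0.0;
        return 100.0 * routedNonOptimal / requestsRouted;
    }
}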
