On Tue, Apr 07, 2009 at 02:34:50AM +0100, Matthew Toseland wrote:
...
> 
> Lots more things that can be simulated (but I'm not saying you must implement 
> all of these!):
...
> - Various proposed new load management schemes such as token passing. The 
> basic principle of token passing is that we tell our peers when we have the 
> capacity for new requests, and then accept some of their requests; when we 
> have accepted the requests, they are queued until we can either serve them 
> from the datastore or from the results of another request, or we can forward 
> them to one of our peers that says it can accept some requests. As I have 
> mentioned there are many details that can be tweaked...
> - Whether queueing requests (probably only bulk requests) is useful in 
> general.
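
Just to make the moving parts of the token-passing description above concrete, here is a very rough sketch in Java; every class, method name, capacity figure and grant policy is invented for illustration and does not correspond to anything in Fred:

    import java.util.ArrayDeque;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Queue;

    // Sketch only: a node grants "tokens" to peers when it has spare capacity,
    // accepts requests covered by those tokens, and queues accepted requests
    // until it can serve them from the store or forward them to a peer that
    // has granted *us* tokens.
    class TokenPassingSketch {
        static final int CAPACITY = 20; // total requests we are willing to handle (invented)

        static class Request {
            final String key; final String fromPeer;
            Request(String key, String fromPeer) { this.key = key; this.fromPeer = fromPeer; }
        }

        final Map<String, Integer> tokensGrantedTo = new HashMap<>(); // tokens we gave each peer
        final Map<String, Integer> tokensHeldFrom = new HashMap<>();  // tokens each peer gave us
        final Queue<Request> queued = new ArrayDeque<>();
        int running = 0;

        // Periodically tell peers how many new requests we can take from them.
        void advertiseCapacity(Iterable<String> peers) {
            int spare = Math.max(0, CAPACITY - running - queued.size());
            for (String peer : peers) {
                tokensGrantedTo.merge(peer, spare / 4, Integer::sum); // arbitrary split policy
                // ...send "you may send me N more requests" to this peer here...
            }
        }

        // Accept an incoming request only if the sender still holds a token we granted.
        boolean onIncomingRequest(Request req) {
            int tokens = tokensGrantedTo.getOrDefault(req.fromPeer, 0);
            if (tokens <= 0) return false; // peer has used up its grant: reject
            tokensGrantedTo.put(req.fromPeer, tokens - 1);
            queued.add(req); // hold until we can serve it locally or forward it
            return true;
        }

        // Drain the queue: serve from the datastore, or forward to a peer that granted us tokens.
        void drainQueue() {
            Request req;
            while ((req = queued.poll()) != null) {
                if (datastoreHas(req.key)) { running++; continue; } // serve locally
                String next = closestPeerWithTokens(req.key);
                if (next == null) { queued.add(req); break; } // nobody can take it yet: keep waiting
                tokensHeldFrom.merge(next, -1, Integer::sum);
                running++; // ...forward the request to 'next' here...
            }
        }

        boolean datastoreHas(String key) { return false; }        // placeholder
        String closestPeerWithTokens(String key) { return null; } // placeholder: closest peer that gave us a token
    }

As noted in the original mail, the interesting part is the policy details (how tokens are split, how long queued requests wait, and so on), which this sketch deliberately leaves arbitrary.
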
...
> 
> I don't expect you to implement everything I've mentioned above! Getting a 
> good simulation of the current model working is essential, particularly as it 
> relates to load management. I'm not sure exactly how far load-balancing-sims 
> went, mailing list traffic suggests phase 7 implements most of the current 
> model, but I think preemptive rejection is missing, and IMHO that's a 
> critical component of the current architecture. One thing to note is that 
> vive has tried some interesting routing changes out without copying the code 
> first, so you might need to turn them off. Simulating token passing would 
> then be the next big thing, although as I mentioned there are a great many 
> variations, most of which are in the mailing list archives; we would need to 
> dig through the archives to find them.
> 
...
> David Cabanillas wrote:
> > The proposals to extend the simulations are as follows:
> > 
> >    - To extend the simulation phases to a larger scale.
> >    - To compare simulations using and not using backoff.
> 
> See here for mrogers' results from phase7 (these don't include preemptive 
> rejection):
> http://archives.freenetproject.org/message/20061122.001144.52dbb09d.en.html
> 
> >    - To apply peer-peer token buckets to track fairness between accepting
> >    requests from different peers,

This is actually a fairly important detail. Without it, any queueing-based
scheme, or at least any queueing-based scheme that matches requests to nodes
based on key closeness, can be DoS'ed simply by sending lots of requests for
keys close to the known specialisations of the target's peers (which can be
picked up by snooping on swapping, or from FOAF announcements). We need to
ensure that incoming requests are balanced between nodes, not just outgoing
requests (we already prevent any single peer from getting more than 30% of
our outgoing requests). Such balance does not require token passing, although
it will be most interesting with some sort of queueing-based system.
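
To make the kind of balance I mean concrete, here is a minimal sketch of
per-peer token buckets on the accepting side; the class name, refill rate and
bucket size are all made up for illustration and are not Fred's actual code:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch only: one token bucket per peer, refilled at an equal rate, so that
    // no single peer can monopolise our accepted incoming requests the way a
    // flood of requests keyed near our peers' specialisations otherwise could.
    class PerPeerAcceptanceBuckets {
        static final double TOKENS_PER_SECOND = 2.0; // equal refill rate per peer (invented)
        static final double BUCKET_CAPACITY = 10.0;  // burst allowance per peer (invented)

        static class Bucket {
            double tokens = BUCKET_CAPACITY;
            long lastRefillNanos = System.nanoTime();
        }

        final Map<String, Bucket> buckets = new HashMap<>();

        // Accept the incoming request only if the sending peer's bucket has a token left.
        synchronized boolean shouldAccept(String peer) {
            Bucket b = buckets.computeIfAbsent(peer, p -> new Bucket());
            long now = System.nanoTime();
            double elapsedSeconds = (now - b.lastRefillNanos) / 1e9;
            b.tokens = Math.min(BUCKET_CAPACITY, b.tokens + elapsedSeconds * TOKENS_PER_SECOND);
            b.lastRefillNanos = now;
            if (b.tokens < 1.0) return false; // this peer has used its fair share: reject or queue
            b.tokens -= 1.0;
            return true;
        }
    }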

It is essential that we propagate load back to its originator, so that
flooding is damped out at the source. Currently AIMDs serve this function;
on a token-passing network, ensuring fairness should be sufficient, although
arguably going a bit further and rewarding non-flooding nodes at the expense
of flooding nodes would help more.
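
For comparison, this is roughly the shape of the AIMD damping at the
originator, sketched with invented names and constants (the real logic in
Fred is of course more involved):

    // Sketch only: the originator keeps an allowed request rate, nudging it up
    // additively while requests succeed and cutting it multiplicatively when the
    // network pushes back (e.g. a RejectedOverload propagated back to us), so a
    // flooding node quickly throttles itself.
    class AimdRequestLimiter {
        private double allowedRequestsPerSecond = 1.0; // starting rate (invented)
        private static final double INCREASE = 0.1;    // additive increase per accepted request
        private static final double DECREASE = 0.5;    // multiplicative decrease on rejection
        private static final double MIN_RATE = 0.1;
        private static final double MAX_RATE = 100.0;

        synchronized void onRequestAccepted() {
            allowedRequestsPerSecond = Math.min(MAX_RATE, allowedRequestsPerSecond + INCREASE);
        }

        synchronized void onRequestRejected() {
            allowedRequestsPerSecond = Math.max(MIN_RATE, allowedRequestsPerSecond * DECREASE);
        }

        synchronized double currentRate() {
            return allowedRequestsPerSecond;
        }
    }
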
-- 
The theory that the earth is round has been repeatedly debunked. Therefore it 
must be false.
