Matthew Toseland wrote:
> And we retry after RejectedOverload, unless it's a timeout.

OK, last question: is it the original source that retries, or does each node in the chain retry until the next node accepts the request?

> So why do you need to keep the current insecure behaviour which requires
> the original sender to behave differently from every other node? A good
> token passing scheme ought not to need this, surely?

I'd prefer to do without it, but we might need it if backpressure doesn't work as well as we hope.

> Calculating rates adds extra complications relating to
> estimating in advance how many requests will be routed to each peer;
> this is bad IMHO.

OK, let's use the one-out-one-in approach.

> The metaphor is the flow of an incompressible fluid: one molecule goes
> forward on a particular pipe, which allows us to let one in on the
> incoming pipe.

Nice. :-)
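To make the one-out-one-in rule concrete, here's a minimal sketch (illustrative Python, not actual Freenet code; all names are made up): a node holds a pool of tokens, spends one to accept each incoming request, and gets one back when a forwarded request completes on an outgoing pipe.

```python
# Hypothetical sketch of the one-out-one-in ("incompressible fluid") rule.
# A node may accept a new incoming request only while it holds a token;
# each completed outgoing request returns a token, letting one more in.

class Node:
    def __init__(self, initial_tokens):
        # Tokens represent remaining capacity to accept incoming requests.
        self.tokens = initial_tokens
        self.in_flight = 0

    def try_accept(self):
        """Accept an incoming request iff a token is available."""
        if self.tokens > 0:
            self.tokens -= 1
            self.in_flight += 1
            return True
        return False  # out of tokens: this is where RejectedOverload happens

    def on_request_completed(self):
        """One request went out and completed: let one back in."""
        self.in_flight -= 1
        self.tokens += 1
```

With a pool of one token, a second request is rejected until the first completes, which is exactly the one-molecule-out, one-molecule-in behaviour.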

> 1) Queues or no queues?
>
> Queues produce extra latency. However I don't see how we can, without
> gross misrouting, propagate token-for-token otherwise.

I think we might be able to do it by storing tokens rather than storing requests: if each node has a reasonably large pool of tokens, it should be able to accommodate a burst of traffic without rejecting any requests. Instead of the queues getting full, the buckets get empty.
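A toy simulation of that idea (pool and burst sizes are made-up numbers, purely for illustration): a burst no larger than the token pool is absorbed with zero rejections, and only the overflow beyond the pool gets turned away.

```python
# Illustrative simulation of "the buckets get empty instead of the queues
# getting full": a pool of tokens absorbs a burst without queueing it.

POOL_SIZE = 8  # assumed pool size, chosen for the example

def simulate_burst(burst_len, pool=POOL_SIZE):
    """Feed burst_len back-to-back requests into a token pool."""
    accepted = rejected = 0
    for _ in range(burst_len):
        if pool > 0:
            pool -= 1          # spend a token instead of queueing the request
            accepted += 1
        else:
            rejected += 1      # pool empty: RejectedOverload territory
    return accepted, rejected, pool

print(simulate_burst(5))   # burst within the pool: (5, 0, 3)
print(simulate_burst(12))  # burst exceeds the pool: (8, 4, 0)
```

The latency cost moves from queued requests to the time it takes the pool to refill, which the node controls by how fast it hands tokens back out.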

> In order to achieve near perfect routing, I think we will have to
> propagate exactly one token incoming for every incoming request, from
> the node to which we have routed the request.

But once we've propagated a token to someone, we can't control which key they request using that token, can we?

> - Not every request can be routed to every node. This complicates
>   calculations of rates if we go that way, and it also complicates token
>   distribution if we just allocate every token which comes in to a
>   potential request sender.

OK, so the number of tokens we give out should be based on the number we actually use, not the number we're given. That way, if we're given a lot of unusable tokens (e.g. from a fast peer that's responsible for a small region of the keyspace), we won't accept more requests than we can actually forward.
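The accounting for that could look something like the following sketch (again hypothetical, with invented names): count tokens received and tokens actually spent forwarding, and advertise upstream only what was demonstrably consumed.

```python
# Hedged sketch: grant upstream tokens based on tokens we *used*
# (requests actually forwarded), not tokens we were *given*. A fast peer
# covering a small keyspace region may hand us tokens we can never spend.

class TokenAccounting:
    def __init__(self):
        self.received = 0   # tokens given to us by downstream peers
        self.used = 0       # tokens we spent forwarding requests

    def on_token_received(self):
        self.received += 1

    def on_request_forwarded(self):
        self.used += 1

    def tokens_to_grant(self):
        # Unusable tokens (received but never spent) don't inflate
        # the capacity we advertise to upstream peers.
        return self.used

acc = TokenAccounting()
for _ in range(10):
    acc.on_token_received()   # fast peer showers us with tokens...
for _ in range(3):
    acc.on_request_forwarded()  # ...but only 3 requests fit its keyspace
print(acc.tokens_to_grant())  # 3, not 10
```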

> We have to queue, and match an outgoing token to a queued request - i.e.
> an outgoing token to an incoming token.

I think maybe we should match an outgoing token to a completed request - not all incoming tokens are usable.

As for misrouting, I'll make my suggestion in a separate thread. :-)

Cheers,
Michael
_______________________________________________
Devl mailing list
[email protected]
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
