One more tweak:
When we get a request for a key that is in the failure table, consider
which node we would route it to now. If that node is better (i.e.
closer to the key's location) than the node we originally routed to,
as recorded in the failure table, then let the request through.
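
A minimal sketch of that check, in Java since that's what the node is
written in. All the names here (FailureTableEntry, selectRouteFor,
Location.distance and friends) are stand-ins, not the actual API:

    // Let a blocked request through only if routing has improved since
    // the failure table entry was made. Names are illustrative.
    boolean routingHasImproved(Key key, FailureTableEntry entry) {
        PeerNode candidate = selectRouteFor(key); // node we would pick now
        PeerNode previous = entry.routedTo();     // node we picked originally
        if (previous == null) return false;       // no record: keep blocking
        double target = key.toNormalizedDouble();
        // "Better" = strictly closer to the key's location.
        return Location.distance(candidate.getLocation(), target)
             < Location.distance(previous.getLocation(), target);
    }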

Combine this with request coalescing (already implemented) to avoid floods 
when things change.

On Friday 25 May 2007 19:02, Matthew Toseland wrote:
> On Friday 25 May 2007 18:57, Matthew Toseland wrote:
> > Most of the below comes from [EMAIL PROTECTED] on
> > Frost. I have made a few changes. It should be fairly easy to implement.
> >
> > If we get a request for a key, and it DNFs:
> > - If the Subscribe flag is not set on the request, do nothing, otherwise:
> > - Add the key to our failure table for N minutes.
> > - Add the node, with its boot ID, to our list of nodes interested in that
> > key, which is a structure connected to the key in the failure table.
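> >
> > A sketch of the entry that implies (illustrative names; Key and
> > PeerNode stand in for the real classes, and "N minutes" becomes a
> > lifetime in millis):
> >
> >     import java.util.HashMap;
> >     import java.util.Map;
> >
> >     // One failure table entry per recently-failed key.
> >     class FailureTableEntry {
> >         final Key key;
> >         final long expiryTime; // creation time + N minutes
> >         // Peers interested in this key, each mapped to the boot ID
> >         // it reported when it subscribed.
> >         final Map<PeerNode, Long> subscribers = new HashMap<PeerNode, Long>();
> >
> >         FailureTableEntry(Key key, long lifetimeMillis) {
> >             this.key = key;
> >             this.expiryTime = System.currentTimeMillis() + lifetimeMillis;
> >         }
> >
> >         boolean hasExpired() {
> >             return System.currentTimeMillis() > expiryTime;
> >         }
> >     }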
> >
> > If we get a request for a key in the failure table, we return a fatal
> > error other than DNF: KeyInFailureTable. This works much the same way as
> > a DNF, except that it doesn't get added to nodes' failure tables.
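> >
> > The check on an incoming request might look like this (made-up
> > names; I'm assuming the requester is also added to the subscriber
> > list, as on a DNF):
> >
> >     // failureTable here is a node-level Map<Key, FailureTableEntry>.
> >     void handleRequest(Key key, PeerNode source, long sourceBootID,
> >             boolean subscribe) {
> >         FailureTableEntry entry = failureTable.get(key);
> >         if (entry != null && !entry.hasExpired()) {
> >             if (subscribe)
> >                 entry.subscribers.put(source, Long.valueOf(sourceBootID));
> >             // Fatal error, but NOT a DNF: it must not create failure
> >             // table entries on other nodes.
> >             sendKeyInFailureTable(source, key);
> >             return;
> >         }
> >         routeRequestNormally(key, source);
> >     }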
> >
> > If we receive the key, we send it (FNPSubscribeData[SSK]) to all the
> > peers who are in the list of nodes for that key in the failure table,
> > cache it, and delete the failure table entry. We don't send them all at
> > once: we keep a queue of keys we currently want to send.
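> >
> > Roughly (made-up names again; sends go onto the existing queue
> > rather than all firing at once):
> >
> >     // When the key finally arrives, feed it to everyone still waiting.
> >     void onKeyReceived(Key key, KeyBlock data) {
> >         cacheLocally(key, data);
> >         FailureTableEntry entry = failureTable.remove(key);
> >         if (entry == null) return;
> >         for (Map.Entry<PeerNode, Long> sub : entry.subscribers.entrySet()) {
> >             PeerNode peer = sub.getKey();
> >             // Skip peers that disconnected or restarted (boot ID
> >             // changed) since they subscribed.
> >             if (peer.isConnected() && peer.getBootID() == sub.getValue())
> >                 sendQueue.add(new QueuedSend(peer, key, data)); // FNPSubscribeData[SSK]
> >         }
> >     }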
> >
> > Note that the subscription lasts only as long as the failure table entry,
> > and that it is inherently unreliable. Applications are still expected to
> > poll for the key, but when it is inserted it will be propagated quickly
> > to all subscribers (hopefully), and the act of polling will not use
> > many resources.
> >
> > We do NOT make any attempt to tell our clients that the key is no longer
> > subscribed, because e.g. we lost the connection to the upstream node. The
> > reason for this is that if we do automatic resubscription on the client
> > node, it may make the client node detectable. If instead we do it on
> > any subscribed node, then while it *is* much cheaper to do it closer
> > to the target, we run into all the complexity of full-blown passive
> > requests or pub/sub. That's not something we want to deal with right
> > now, although I think it is *possible*. Also, we never reroute because
> > of location changes, even though an ideal passive-requests system,
> > where the client just subscribes and waits, would have to.
>
> We should probably still track who we routed to, so we can let requests
> through if that node is no longer connected or has restarted.
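>
> Something like this, reusing the boot ID stored with the entry
> (illustrative names):
>
>     // Stop honouring a failure table entry once the node we routed
>     // to is unreachable or has restarted.
>     boolean routedToNodeGone(FailureTableEntry entry) {
>         PeerNode routedTo = entry.routedTo();
>         return routedTo == null
>             || !routedTo.isConnected()
>             || routedTo.getBootID() != entry.routedToBootID();
>     }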
>
> > The user interface is trivial: An application can either poll for the key
> > directly, or it can tell the node to poll indefinitely for it by
> > submitting a request with MaxRetries=-1. How do we determine whether the
> > subscribe flag should be enabled? If MaxRetries=-1, then it should.
> > However, some apps may want to control which requests are pending at
> > any given time, so we should probably have an explicit Subscribe flag.
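> >
> > In code, roughly (made-up names):
> >
> >     // An explicit Subscribe flag wins; otherwise MaxRetries=-1
> >     // (poll indefinitely) implies subscription.
> >     boolean shouldSubscribe(ClientRequest req) {
> >         if (req.hasExplicitSubscribeFlag())
> >             return req.getSubscribeFlag();
> >         return req.getMaxRetries() == -1;
> >     }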
> >
> > Unresolved questions:
> > - What if backoff caused misrouting the first time? We could end up
> > subscribed to the wrong chain, unable to get out of the rut because
> > of the failure table. We will have a chance to reroute after the timer
> > expires; is that enough? We could allow multiple failures before
> > blocking the key, or occasionally let a request through (it won't get
> > far unless the routing has changed); see the sketch after this list.
> > - Should we ever store the key (rather than cache it)? If so, how do we
> > determine whether to store it?
> > - Any negative impacts on load management? Do we want to include the
> > transfers which happen later on in the cost of a request somehow?
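> >
> > For the rut problem above, the occasional let-through could be as
> > simple as this (the one-in-twenty ratio is picked out of the air):
> >
> >     import java.util.Random;
> >
> >     // Let roughly one blocked request in twenty through, so a node
> >     // stuck on a stale route gets a chance to escape the rut.
> >     static final int LET_THROUGH_ONE_IN = 20;
> >     static final Random RANDOM = new Random();
> >
> >     static boolean occasionallyLetThrough() {
> >         return RANDOM.nextInt(LET_THROUGH_ONE_IN) == 0;
> >     }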
> >
> > Implementation difficulties:
> > - We should have a global map of all keys that clients are interested in,
> > and check this whenever we get a key. (This is probably a good idea
> > anyway).
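> >
> > A sketch of that global map (illustrative names; ClientCallback is
> > whatever the client layer registers with us):
> >
> >     import java.util.List;
> >     import java.util.concurrent.ConcurrentHashMap;
> >     import java.util.concurrent.CopyOnWriteArrayList;
> >
> >     // Global map of keys clients are waiting for, checked whenever
> >     // any key arrives at this node.
> >     class ClientInterestMap {
> >         private final ConcurrentHashMap<Key, List<ClientCallback>> waiters =
> >             new ConcurrentHashMap<Key, List<ClientCallback>>();
> >
> >         void register(Key key, ClientCallback cb) {
> >             List<ClientCallback> list = waiters.get(key);
> >             if (list == null) {
> >                 list = new CopyOnWriteArrayList<ClientCallback>();
> >                 List<ClientCallback> prev = waiters.putIfAbsent(key, list);
> >                 if (prev != null) list = prev;
> >             }
> >             list.add(cb);
> >         }
> >
> >         void onKeyArrived(Key key, KeyBlock data) {
> >             List<ClientCallback> callbacks = waiters.remove(key);
> >             if (callbacks == null) return;
> >             for (ClientCallback cb : callbacks) cb.onFound(key, data);
> >         }
> >     }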
> >
> > Benefits:
> > - Significantly reduced load caused by polling.
> > - Much simpler than any reliable subscription scheme.
> > - Minimal API changes needed.
> > - Apps can continue their polling behaviour; in fact they can expand
> > it, e.g. to poll outboxes, with minimal impact on the network.
> >
> > Security:
> > - Subscription and failure table data is kept exclusively in RAM. It
> > lasts for a very limited amount of time. The subscription is tied to the
> > node's boot ID, so if the node is restarted, we don't send the data it
> > subscribed to on to it. Thus, the privacy loss through these slightly
> > longer term arrangements is minimal: If an attacker seizes the node,
> > hopefully it will be restarted, and then they will not know anything
> > about your prior subscriptions.
> > - In 0.5, failure tables were fairly catastrophic: they were
> > self-reinforcing, causing many keys to be blocked all across the
> > network. We can completely avoid this by having a separate failure
> > mode which does not trigger failure table entries itself, and by this
> > subscription mechanism.
