On Sun, 2003-11-16 at 21:40, Ken Corson wrote:
> Edward J. Huff wrote:
> > chained request gets QRed.  The node which first receives the QR
> > does not try again with a different node in its routing table, as
> > it would if it got a DNF.  Instead, it passes the QR back, and
> > all of the preceding links pass it back to the original requester.
> 
> I'm very confused by this. I was under the impression that a
> QR meant "DON'T back down the chain, just try another path" and
> that DNF meant "send a failure all the way back down the chain,
> as the HTL has been exhausted."
> 
> In fact, that's how I described it to someone on the list just
> a few days ago. No one corrected my mistake, if I got it wrong.
> I believe Edward has a longer-term, firmer grasp of Freenet's
> operations than myself, but someone please confirm this one way
> or the other. Apologies in advance for consuming devl traffic
> on this... (if I am the only one who is educated by it)

Thanks for the confidence, but I don't know everything.  I took
the guy you corrected as being right because I had gotten behind
on this list and thought that no one had corrected _his_ mistake.
Since Toad and Ian replied to your message without correcting it,
you may well be right.  But I have seen other exchanges which make
much more sense if QRs do go all the way back.

> 
> > Thus, a QR is a disaster.  It would be better if the request had
> 
> Many QRs are a disaster. Many defined as "enough to consume a
> significant percentage of total available bandwidth." A few are
> probably okay, as a means of sharing information about load - like
> 'that node is/was too busy at the point in time it rejected'
>    It is valuable for implementing backoff.

Assuming that the immediate predecessor can retry, one QR is
not a disaster.  But if HTL was 25, you are the 24th node, and
the QR goes all the way back, then 22 nodes accepted the query,
routed it, and will process the QR, all for nothing.  That is
especially wasteful if you, the 24th node, know exactly where
else to send the request and that next node has the data.  Can
someone who knows for sure comment?
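
To pin down what I'm asking, here is a rough sketch of the two
readings.  The names (Reply, retryWithAnotherNode, passBack) are
made up for illustration; this is not the real Fred routing code,
just the shape of the question:

    // Sketch only -- invented names, not the actual node code.
    class RoutingSketch {
        enum Reply { DATA_FOUND, DATA_NOT_FOUND, QUERY_REJECTED }

        Reply onReply(Reply r) {
            switch (r) {
            case DATA_FOUND:
                return passBack(r);     // success flows back to the requester
            case DATA_NOT_FOUND:
                // As I understand DNF, this hop simply tries another node.
                return retryWithAnotherNode();
            case QUERY_REJECTED:
                // Ken's reading:  return retryWithAnotherNode();  (retry locally)
                // My reading:     pass the QR all the way back, as below.
                return passBack(r);
            default:
                return r;
            }
        }

        Reply passBack(Reply r) { return r; }                         // stub
        Reply retryWithAnotherNode() { return Reply.DATA_NOT_FOUND; } // stub
    }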

> 
> > <edt> we are now doing backoff - does exponential make sense?
> > <edt> I suggest that a randomized linear backoff based on the
> > pDNF(node)*random()*constant would work better.
> > <edt> when 'constant' is determined by a feedback loop
> > <edt> designed to minimize the overall pDNF
> 
> be careful, this approximates a "rate negotiation" :) which I
> favor, but others may not... ahhh, forget I applied that nasty
> label to it. But notice it deals specifically with the rate of
> queries between a single Pair of nodes, rather than saying
> "I'm busy" to whomever happens to query me next ...

Well, I might not agree with the pDNF(node) factor, but I think
the backoff should be linear and random.  (The amount to back off
increases each time the retry still gets QR, but not exponentially).

As to "between a single pair of nodes", AFAIK the request-receiving
node does not use a table indexed by requester ID to decide whether
to QR or not.  It is the requester who uses a table (his routing
table) to remember whether he is backed off from any given node.  So
there isn't any negotiation going on, just an attempt to substantially
reduce the number of QRs by ignoring the node for a while.
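
For concreteness, here is a sketch of what I mean, kept per node in
the requester's routing table.  pDNF and backoffConstantMillis are
just placeholders for whatever estimate and tuning constant we would
actually use:

    // Sketch of randomized linear backoff; all names are invented.
    class BackoffEntry {
        int consecutiveQRs = 0;    // QRs from this node since the last success
        long backedOffUntil = 0;   // wall-clock millis; 0 means not backed off

        void onQueryRejected(double pDNF, double backoffConstantMillis) {
            consecutiveQRs++;
            // Grows linearly with the failure count and is randomized,
            // instead of doubling on every QR.
            long interval = (long) (consecutiveQRs * pDNF
                                    * Math.random() * backoffConstantMillis);
            backedOffUntil = System.currentTimeMillis() + interval;
        }

        void onSuccess() {
            consecutiveQRs = 0;
            backedOffUntil = 0;
        }

        boolean skipForNow() {
            return System.currentTimeMillis() < backedOffUntil;
        }
    }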

> 
>  > <edt> designed to minimize the overall pDNF
> pDNF for the _requestor_, signifying a Pair relationship
> 
> > <edt> zab_ why does an exponent make sense given the patterns 
> > of QR we see?
> > <edt> I do not think exponential backoff makes that much sense...
> > backoff yes.  exponential no.
> 
> Exponential makes sense (as in ethernet) when we are addressing
> a contention between multiple writers. Linear makes sense when
> we are trying to optimize the rate between 2 entities. The
> 'exponent' was supposed to be different for each sender, to
> reduce contention. It was rather perverted here by using it
> repeatedly for a single sender. I am NOT criticizing anyone,
> that's just how it got done. (so I agree w/edt above)
> 

As I understand it, exponential backoff makes sense when you
really don't have any idea how long the backoff should be.
You start small, but let it increase rapidly if you get
repeated failures.  In this case, maybe we know that the 
backoff interval should not exceed a few minutes.
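
For comparison, the capped exponential version would look something
like this; the one-second base and five-minute cap are only
illustrative numbers:

    // Capped exponential backoff, for comparison; the constants are illustrative.
    class ExponentialBackoff {
        static final long BASE_MILLIS = 1000;           // start small: one second
        static final long MAX_MILLIS = 5 * 60 * 1000;   // never exceed a few minutes

        int consecutiveQRs = 0;

        long nextIntervalMillis() {
            // Doubles on each consecutive failure: 1s, 2s, 4s, 8s, ...
            long interval = BASE_MILLIS << Math.min(consecutiveQRs, 20);
            consecutiveQRs++;
            return Math.min(interval, MAX_MILLIS);
        }

        void reset() { consecutiveQRs = 0; }
    }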

My alternative idea, having the request receiver close
connections once he has enough backlog to be sure of keeping
his uplink saturated for several minutes, seems to have a lot
of advantages.  The requester sees that the connection is
closed, so it won't route there and won't get a QR.  The
request receiver probably has too many open connections
anyway, and can keep his bandwidth saturated without needing
to reopen the connection anytime soon.  When he does need
more requests, he can reopen the connection.
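
A very rough sketch of that policy; the names and the several-minute
target are made up, and real code would obviously have to measure the
backlog and bandwidth somehow:

    // Rough sketch of "close connections once backlogged"; all names invented.
    class LoadShedSketch {
        static final long TARGET_BACKLOG_MILLIS = 5 * 60 * 1000; // "several minutes"

        long queuedBytes;         // bytes of transfers already accepted
        long uplinkBytesPerSec;   // measured outbound bandwidth

        long estimatedBacklogMillis() {
            if (uplinkBytesPerSec <= 0) return 0;
            return (queuedBytes * 1000) / uplinkBytesPerSec;
        }

        // Called when deciding whether to keep accepting requests: once the
        // backlog will keep the uplink busy for the target period, stop
        // accepting (i.e. close incoming connections) instead of sending QRs.
        boolean keepAcceptingRequests() {
            return estimatedBacklogMillis() < TARGET_BACKLOG_MILLIS;
        }
    }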

So, does a QR go all the way back to the original requester
before retry, or not?

-- Ed Huff
