On October 04, 2003 11:19 am, Toad wrote:
> On Sat, Oct 04, 2003 at 11:12:07AM -0400, Ed Tomlinson wrote:
> > On October 04, 2003 10:21 am, Toad wrote:
> > > On Fri, Oct 03, 2003 at 11:40:25PM +0100, Toad wrote:
> > > > On Fri, Oct 03, 2003 at 12:40:03PM -0700, Ian Clarke wrote:
> > > > > Toad wrote:
> > > > > >On Sun, Sep 28, 2003 at 12:21:05AM -0500, Brandon Low wrote:
> > > > > >>Also, it is possible that inserts are __slow__ due (indirectly)
> > > > > >> to the asynchronizing of trailer sends.  I've tried to explain my
> > > > > >> view of this to toad, but let's see if mentioning it here gets me
> > > > > >> anywhere: With trailers async, there is no local limit on how
> > > > > >> many trailers we can start; that is, we will keep queueing up
> > > > > >> trailers till we are blue in the face, causing tremendous
> > > > > >> slowness in the actual transfer of said trailers.  IT IS MY
> > > > > >> __HUMBLE__ opinion that this behavior is less desirable than
> > > > > >> having threads block on trailer sends, forcing the node to limit
> > > > > >> itself on how many trailers it can start.
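
To make Brandon's point concrete, here is a rough sketch of the blocking
alternative in Java.  The class and the cap are made up for illustration,
not actual node code; the point is just that a sender blocks instead of
queueing without bound.

    // Illustrative only: cap the number of trailer sends in flight,
    // blocking the caller until a slot frees up.
    public class TrailerSendLimiter {
        private final int maxConcurrent;   // hypothetical cap on trailers
        private int active = 0;

        public TrailerSendLimiter(int maxConcurrent) {
            this.maxConcurrent = maxConcurrent;
        }

        // Block until we are allowed to start another trailer send.
        public synchronized void startSend() throws InterruptedException {
            while (active >= maxConcurrent)
                wait();
            active++;
        }

        // Called when a trailer send completes (or fails).
        public synchronized void finishSend() {
            active--;
            notify();
        }
    }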
> > > > >
> > > > > I agree with Brandon.
> > > > >
> > > > > Would you rather go to a restaurant where they just kept accepting
> > > > > more customers, slowing the service afforded to their existing
> > > > > customers, or would you prefer that a restaurant did its best to
> > > > > serve its existing customers, and didn't accept new customers if
> > > > > existing customers would suffer?
> > > > >
> > > > > I would prefer the latter, and would point out that the latter
> > > > > restaurant may well end up serving as many if not more customers
> > > > > overall than the former.
> > > >
> > > > The solution, as I hammered out with edt recently, is to make the send
> > > > queue length accurate (by only including data we actually have in
> > > > cache ready to send, this defeats some DoS attacks and makes the
> > > > whole thing much more accurate when dealing with stream-through
> > > > requests), and to reject all queries when the send queue exceeds some
> > > > factor multiplied by the bandwidth limit. He said he'd have a go at
> > > > implementing it; I am still working on PeerHandler. Do we have your
> > > > approval? My main concern was that a simple limit on the number of
> > > > trailers moving was going to be a major problem, but the above scheme
> > > > should work.
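
Roughly, that scheme boils down to something like the following (names and
bookkeeping are illustrative only, not actual node code): count only bytes
that are already in cache, and reject queries once the total exceeds some
factor times the output bandwidth limit.

    // Illustrative only, not actual node code.
    public class SendQueueGate {
        private final double factor;            // multiple of the limit allowed to sit queued
        private final long bwLimitBytesPerSec;  // configured output bandwidth limit
        private long queuedReadyBytes = 0;      // bytes queued AND actually in cache

        public SendQueueGate(double factor, long bwLimitBytesPerSec) {
            this.factor = factor;
            this.bwLimitBytesPerSec = bwLimitBytesPerSec;
        }

        // Call when data that is actually in cache is queued for sending.
        public synchronized void queuedFromCache(long bytes) {
            queuedReadyBytes += bytes;
        }

        // Call when queued data has been written out to the network.
        public synchronized void sent(long bytes) {
            queuedReadyBytes -= bytes;
        }

        // True if incoming queries should be rejected right now.
        public synchronized boolean shouldRejectQueries() {
            return queuedReadyBytes > factor * bwLimitBytesPerSec;
        }
    }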
> > >
> > > There is a major problem with this.
> > >
> > > We have no control over how fast data is sent out from the send queue
> > > if we limit the send queue - so we could end up using very low
> > > bandwidth and still rejecting all incoming requests, for no good
> > > reason.
> >
> > No.  This does not have to be the case.
> >
> > > The right solution is exactly what we have now - if the output
> > > bandwidth usage is over some fraction of the limit, reject all queries.
> >
> > What we have now works as designed.  It lets us start far more work than
> > we have the resources to do.
> >
> > > What we have now is apparently not working; this is most likely due to
> > > some problem with the exact ratios, bugs in the bandwidth limiter or
> > > similar problems. You could try increasing lowLevelBWLimitMultiplier
> > > (default currently 1.2, used to be 1.5, maybe it is too low), disabling
> > > the hard limiter completely (doLowLevelOutputLimiting=false), or
> > > decreasing the outLimitCutoff (currently 0.9, try 0.75). This should
> > > produce similar results to what you have with your other limiting
> > > schemes - in theory - modulo bugs.
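
For reference, those three experiments would look something like this in
the node's config file, tried one at a time (parameter names and values as
given above):

    # raise the hard limiter's headroom back to the old default
    lowLevelBWLimitMultiplier=1.5
    # or disable the hard limiter completely
    doLowLevelOutputLimiting=false
    # or lower the threshold at which queries start being rejected
    outLimitCutoff=0.75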
> >
> > What I have done is set up a feedback loop that finds the smallest
> > sendQueueSize such that we are using our bandwidth.  If we do not use the
> > bandwidth, it increases the window and the bandwidth usage goes up.  The
> > converse is also true.
>
> You are using all sorts of voodoo that you have not explained to me.

It's NOT voodoo - it's just that I have not explained it.
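
The idea is simple enough to sketch.  Roughly, in simplified Java (the
thresholds and step sizes here are picked for illustration; the real
numbers are in my tree):

    // Once per sample period, compare measured output bandwidth to the
    // limit and grow or shrink the allowed send queue size, so it settles
    // at the smallest window that still keeps the link busy.
    public class SendQueueWindow {
        private long windowBytes;                 // current allowed send queue size
        private final long minWindow, maxWindow;

        public SendQueueWindow(long initial, long min, long max) {
            windowBytes = initial;
            minWindow = min;
            maxWindow = max;
        }

        public synchronized void adjust(long usedBytesPerSec, long limitBytesPerSec) {
            if (usedBytesPerSec < 0.9 * limitBytesPerSec) {
                // Not using the bandwidth: open the window so more sends start.
                windowBytes = Math.min(maxWindow, windowBytes + windowBytes / 4);
            } else {
                // Link is (nearly) full: tighten the window back down.
                windowBytes = Math.max(minWindow, windowBytes - windowBytes / 8);
            }
        }

        public synchronized long getWindow() {
            return windowBytes;
        }
    }

The window is then used as the send-queue limit discussed above: incoming
queries are rejected whenever the queue of data ready to send exceeds it,
so the node only takes on as much work as it is actually pushing out.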

> Please explain it. And what is wrong with my suggestion anyway? If we
> are using all our available resources, we won't accept any more
> requests. It's just that the current implementation has a few problems -
> you yourself said that it was sending 8kB/sec of a 10kB/sec limit. The
> rejection code only kicks in at 9kB/sec (90%), and the hard limit is
> 120% - but the hard limiter may be behaving too conservatively,
> effectively limiting output to 80% or so, with the result that the node
> never actually hits the QueryReject limit. What is wrong with this
> theory? It fits the facts.
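
Putting numbers on that theory: with a 10kB/sec limit the QueryReject
threshold is 0.9 * 10 = 9kB/sec and the nominal hard limit is
1.2 * 10 = 12kB/sec; but if the hard limiter is in practice clamping
output around 8kB/sec, the node never reaches the 9kB/sec point at which
it would start rejecting queries.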

You may be right.  Once cvs works at some level again I will try it.
I made the mistake of backing out all my changes and updating from cvs.
It does not even build; when I fix that (getIdleTime > idleTime in OCM)
fproxy stalls...  Unstable is, well, unstable right now.

Ed
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
