On Sunday 16 November 2003 11:34 am, Ken Corson wrote:
>    I created a fresh install of 6338 with new seednodes.ref and
> new default freenet.conf , plus empty datastore. After 10.5 hours
> of running time :
>
> load on the main page generally stays below 50%
>
> localQueryTraffic
> 11/16/03 2:00:00 AM EST       16      16      1.0
> 11/16/03 3:00:00 AM EST       522     466     0.89272030651341
> 11/16/03 4:00:00 AM EST       1120    927     0.8276785714285714
> 11/16/03 5:00:00 AM EST       385     364     0.9454545454545454
> 11/16/03 6:00:00 AM EST       160     160     1.0
> 11/16/03 7:00:00 AM EST       167     167     1.0
> 11/16/03 8:00:00 AM EST       6       6       1.0
> 11/16/03 9:00:00 AM EST       8       8       1.0
> 11/16/03 10:00:00 AM EST      34      34      1.0
> 11/16/03 11:00:00 AM EST      33      33      1.0
>
> 25 of 50 nodes backed off, max "Backed Off Until" 12163 sec
> most of them are backed off for more than an hour! It sure seems
> we need an upper limit on the backoff period (as others have
> said), somewhere around 10-15 minutes, at which point we stop
> increasing it further... (at or below the average period
> of the "old" {accept all, then solid-QR} cycle)
>
> It would appear that backoff is 'too strong', but certainly
> working. I have seen _NO_ incoming connections since startup.
> Hmmm... why?! This is hopefully a fluke.
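The capped backoff proposed above could look roughly like this. A minimal Python sketch; the names and constants (1 s initial period, doubling, 15-minute ceiling) are illustrative assumptions, not taken from the Freenet codebase:

```python
# Hypothetical sketch of a capped backoff: exponential growth per
# consecutive QueryRejected, but never past a fixed ceiling, so a
# node cannot be backed off for hours (cf. the 12163 s seen above).

INITIAL_BACKOFF_S = 1.0      # first backoff after a QueryRejected (assumed)
BACKOFF_MULTIPLIER = 2.0     # exponential growth factor (assumed)
MAX_BACKOFF_S = 15 * 60.0    # proposed upper limit: 15 minutes

def next_backoff(current_backoff_s: float) -> float:
    """Return the next backoff period, capped at MAX_BACKOFF_S."""
    if current_backoff_s <= 0:
        return INITIAL_BACKOFF_S
    return min(current_backoff_s * BACKOFF_MULTIPLIER, MAX_BACKOFF_S)

# Without the cap, 20 doublings from 1 s would exceed 5 days' worth of
# backoff; with the cap, the period levels off at 900 s.
b = 0.0
for _ in range(20):
    b = next_backoff(b)
print(b)  # 900.0
```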
>
> 83 (0/83/512)
> 7 (4/3)
> data waiting 2,759 KiB
> data moved  70 MiB
>
> (  WARNING: long-windedness follows :o  )
>
> I think most of us understand this: currently it only takes
> a few (old, or mean) nodes that do not back off to beat out the
> others. I strongly believe we are going to need to
> control/allocate bandwidth to individual requestors. Ian had
> suggested a fixed 'backoff' period per QR, but specifying
> an inter-request period is another option at our disposal.
> If the requestee says '400 ms' between queries, this would
> specify a desired (average, maximum, whatever) -rate- of 2.5
> queries per second from that specific requestor. We don't need
> to enforce compliance with the rate (just yet), but we sorely
> DO need to implement smoother query rates. QRs followed by
> requestor backoff varies the rate per requestor too much, and
> hurts NGR globally.
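The inter-request-period idea quoted above turns an advertised spacing into an implied per-requestor rate (1000 ms / 400 ms = 2.5 queries per second). A minimal sketch, assuming a hypothetical per-requestor spacing tracker; none of these names come from the actual Freenet code, and (as the mail says) compliance is only checked here, not enforced:

```python
# Hypothetical per-requestor spacing tracker: the responding node
# advertises a minimum gap between queries, which implies a maximum
# query rate from that specific requestor.

class RequestSpacing:
    def __init__(self, inter_request_ms: float):
        self.period_s = inter_request_ms / 1000.0
        self.last_accept = None  # timestamp of last in-spec query

    def implied_rate_qps(self) -> float:
        """Advertised spacing -> implied max queries per second."""
        return 1.0 / self.period_s

    def on_query(self, now: float) -> bool:
        """True if this query respects the advertised spacing."""
        ok = self.last_accept is None or now - self.last_accept >= self.period_s
        if ok:
            self.last_accept = now
        return ok

spacing = RequestSpacing(400)          # '400 ms' as in the example above
print(spacing.implied_rate_qps())      # 2.5
print(spacing.on_query(0.0))           # True
print(spacing.on_query(0.1))           # False: only 100 ms since last query
print(spacing.on_query(0.5))           # True: 500 ms >= 400 ms
```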

Yes, we do need to enforce it. Besides, enforcing it is easy: instead of 
rejecting all incoming requests with a probability of %overload, reject them 
with a probability of %overloadCausedByYOU.

I.e., not (Current - MaxQPH) / MaxQPH, but 
(YourQueries / Current) * (Current - MaxQPH) / MaxQPH.
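Written out as a small sketch (variable names follow the mail; returning zero when not overloaded, and clamping at 1, are my assumptions, not part of the original formula):

```python
# Reject each incoming request not with the global overload probability
# (Current - MaxQPH) / MaxQPH, but with that probability scaled by the
# sender's share of the current load, YourQueries / Current.

def reject_probability(your_queries: float, current: float, max_qph: float) -> float:
    if current <= max_qph:
        return 0.0  # not overloaded: reject nothing (assumed behaviour)
    overload = (current - max_qph) / max_qph
    your_share = your_queries / current
    return min(1.0, your_share * overload)

# A node at 1500 queries/hour against a 1000 QPH limit is 50% overloaded.
# A sender contributing 600 of those queries faces p = 0.4 * 0.5 = 0.2;
# one contributing only 30 faces p = 0.02 * 0.5 = 0.01.
print(reject_probability(600, 1500, 1000))  # 0.2
print(reject_probability(30, 1500, 1000))   # 0.01
```

Heavy requestors thus absorb most of the rejections, while light requestors are barely affected.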

>    Philosophically, a QR is a push-back against "you request
> too much," suggesting that when QRing is widespread, there are
> too many requests for the network to handle. It is an attempt
> to force a rate limit all the way back to the original
> requestors' nodes. There are all sorts of queueing issues
> awaiting us on the road ahead...
>    I think I presented this idea before, mainly as a form of
> attack resistance, and that may have put some people off.
> Clearly it would also give us much finer-grained load balancing,
> at least where the rates of queries and rejects are concerned.
>
> If this is not obvious to some people yet...
>
> query handling = ROUTING
>
>    query handling determines (future) load, thus
> query handling = LOAD BALANCING
>
> It is good that we are focusing more on queries lately! The
> current implementation is still not really up to par, but it is
> rapidly improving. I am not speaking as an impatient user.
>
> Ken
>
>
> _______________________________________________
> Devl mailing list
> [EMAIL PROTECTED]
> http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
