On Tuesday 02 July 2002 04:04, Oskar wrote:
> On Mon, Jul 01, 2002 at 11:26:59PM -0400, Gianni Johansson wrote:
> > On Monday 01 July 2002 20:21, you wrote:
> ...
> > > >    I have read in one post that there is a limit of 60 requests per
> > > > minute. That is almost 100 times slower than the node could handle, in
> > > > my estimation.
> >
> > I hope that this is true.  But I am somewhat dubious.  It's not sexy to
> > talk about limitations, but they must be factored into the design of the
> > system or it won't work.
> >
> > I have always suspected that the bounding factor limiting how many
> > requests a node can usefully handle will be the number of healthy noderefs
> > it can maintain.  At least for modern systems with cable-modem class
> > connectivity.
>
> I don't know what you mean. The neighbors are also likely to be "modern
> systems with cable-modem class connectivity".

I meant that I think the number of requests my node can usefully handle is 
bounded by the number of requests my node's noderefs can handle, not by 
bandwidth, CPU, or RAM.

I don't understand how it can be useful for my node to answer inbound 
requests by generating more outbound requests than it could handle (on 
average).  How could a network of such nodes ever not be overloaded?
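To put numbers on that worry, here is a toy calculation (not node code; the rates are made up for illustration):

```python
# Toy model: a node can usefully serve CAPACITY requests per minute.
# If it accepts inbound requests faster than that and forwards the
# excess, the backlog pushed onto its peers grows without bound.
CAPACITY = 60          # requests/min a node can serve (hypothetical)
inbound_rate = 100     # requests/min arriving (hypothetical)

backlog = 0.0
for minute in range(10):
    backlog += inbound_rate - CAPACITY  # excess work pushed onto peers
print(backlog)  # grows linearly: 400.0 after 10 minutes
```

In a network where every node behaves this way, there is nowhere for the excess to drain to, which is the point above.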


>
> <>
>
> > >I have to say that I agree with Pascal regarding the value
> > > of this limit - rejecting a request actually increases the total amount
> > > of work the network has to do compared to serving it (the previous node
> > > has to go back and route again, sending the request to its next peer
> > > with the same HTL as you would have given it.) I don't see how nodes
> > > could possibly become better citizens by working below capacity.
>
> You didn't respond to this. When a node serves a request and forwards
> it, it does a little bit of work for the network, decreasing the total
> amount of work the rest of the network has to do by one hop. When a node
> rejects a request it does no work at all, and throws all the work back
> to the rest of the network.
Isn't the HTL decremented as a result of the QueryRejected (QR)?
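For what it's worth, here is the accounting behind that argument as I read it -- a toy model (not node code), assuming a rejection does *not* decrement the HTL and the previous node simply re-routes at the same HTL:

```python
# Toy accounting of the rejection-cost argument, under the assumption
# that a QueryRejected leaves the HTL unchanged.
def hops_if_all_serve(htl):
    # every node on the path does one unit of forwarding work
    return htl

def hops_with_one_rejection(htl):
    # one wasted hop to the rejecting node, then the full route anyway,
    # because the previous node retries with the same HTL
    return 1 + htl

print(hops_if_all_serve(10))        # 10
print(hops_with_one_rejection(10))  # 11
```

If the HTL *is* decremented on a QR, the retried route is one hop shorter and the totals come out even, which is why the question above matters.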

>Rejecting requests is node egoism, and is
> thus only justified when a node needs to be egoistic because it is
> overloaded. If a node suspects that the rest of the network is
> overloaded the best thing it can do is serve as many requests as
> possible!
>
> > > What nodes need to do to be good citizens is to monitor the amount of
> > > requests they generate locally compared to the amount they are able to
> > > serve - but as was noted, the current code doesn't do that at all.
> >
> > Agreed, but how do you figure out "the amount they are able to serve"?
>
> Ideally because the network is load balanced and the node gets exactly
> the amount of work it can serve...

Could you make your best guess as to how the load balancing should work? I 
remember you wanted to do some simulations, but even a decent guess might be 
better than what we have now -- i.e. nothing.
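As a straw man for that guess (my invention, not anything in the current code), a node could refuse to originate a local request unless it has served at least that many requests for its peers:

```python
# Straw-man "good citizen" policy (hypothetical; not the actual
# Freenet load-balancing code): only originate a local request while
# the node stays in credit with the rest of the network.
class LocalThrottle:
    def __init__(self):
        self.served = 0    # requests this node has served for peers
        self.started = 0   # requests it has originated locally

    def record_served(self):
        self.served += 1

    def may_start_request(self):
        if self.started < self.served:
            self.started += 1
            return True
        return False   # over budget: queue or drop the local request

t = LocalThrottle()
print(t.may_start_request())   # False: no credit earned yet
t.record_served()
print(t.may_start_request())   # True: one served, so one allowed
```

A real policy would presumably use rates over a sliding window rather than raw counters, but the shape of the feedback is the same.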

The infrastructure for gathering the global load stats is there, but we never 
actually turned on the selective reference resetting.

--gj

-- 
Freesite
(0.4) freenet:SSK@npfV5XQijFkF6sXZvuO0o~kG4wEPAgM/homepage//

_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/devl
