On Mon, Jul 01, 2002 at 11:26:59PM -0400, Gianni Johansson wrote:
> On Monday 01 July 2002 20:21, you wrote:
> > I have implemented a fix for this, along the lines that were discussed
> > many weeks ago (and optional representation of the NodeReference that
> > does not include the identity, which can be read from the session
> > layer). However, when I commit this it means a protocol upgrade that
> > breaks backwards compatibility (*), so I'm holding back a few hours
> > pending objections.
>
> Do it, I don't object.

Ok. Consider it committed. The code is untested, but I figure it works. I
also updated the protocol specification to reflect the changes - I don't
want to be the only guy ignoring it!!

> > >    I have read in one post that there is a limit of 60 requests per
> > > minute. That is almost 100 times slower than a node could handle, by my
> > > estimation.
>
> I hope that this is true.  But I am somewhat dubious.  It's not sexy to talk
> about limitations but they must be factored into the design of the system
> or it won't work.
> 
> I have always suspected that the bounding factor limiting how many requests a 
> node can usefully handle will be the number of healthy node refs it can 
> maintain.  At least for modern systems with cable-modem class connectivity.

I don't know what you mean. The neighbors are also likely to be "modern
systems with cable-modem class connectivity".

> >I have to say that I agree with Pascal regarding the value
> > of this limit - rejecting a request actually increases the total amount
> > of work the network has to do compared to serving it (the previous node
> > has to go back and route again, sending the request to its next peer
> > with the same HTL as you would have given it.) I don't see how nodes
> > could possibly become better citizens by working below capacity.

You didn't respond to this. When a node serves a request and forwards
it, it does a little bit of work for the network, decreasing the total
amount of work the rest of the network has to do by one hop. When a node
rejects a request it does no work at all, and throws all the work back
to the rest of the network. Rejecting requests is node egoism, and is
thus only justified when a node needs to be egoistic because it is
overloaded. If a node suspects that the rest of the network is
overloaded the best thing it can do is serve as many requests as
possible!
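
To put rough numbers on it: if rejection forces the previous node to re-route with the same HTL, each successful hop costs a geometric number of routing attempts. A back-of-envelope sketch (my own illustrative model, not code from the tree):

```java
// Illustrative model, not Freenet code: if every node rejects an incoming
// request with probability p, the previous node must re-route with the SAME
// HTL, so each successful hop costs on average 1/(1-p) routing attempts.
public class RejectionCost {
    // Expected total routing attempts for a request with the given HTL.
    static double expectedAttempts(int htl, double rejectProb) {
        return htl / (1.0 - rejectProb);
    }

    public static void main(String[] args) {
        for (double p : new double[] {0.0, 0.1, 0.25, 0.5}) {
            System.out.printf("p=%.2f -> %.1f attempts for HTL 10%n",
                              p, expectedAttempts(10, p));
        }
    }
}
```

Even a modest 25% rejection rate inflates the network-wide work for an HTL-10 request from 10 attempts to about 13, which is the point: the rejected work doesn't vanish, it lands on someone else.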

> > What nodes need to do to be good citizens, is to monitor the amount of
> > requests they generate locally compared to the amount they are able to
> > serve - but as was noted the current code doesn't do that at all.
> >
> 
> Agreed, but how do you figure out "the amount they are able to serve"?

Ideally because the network is load balanced and the node gets exactly
the amount of work it can serve...
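
In the meantime, the "good citizen" behaviour I described could be approximated locally: cap the rate of locally originated requests as a fraction of the requests the node actually serves for others. A sketch of what I mean (names and numbers are mine, nothing like this exists in the current code):

```java
// Hypothetical sketch, not current Freenet code: throttle locally
// generated requests so they never exceed a fixed fraction of the
// remote requests this node has successfully served.
public class LocalRequestGovernor {
    private long served = 0;       // remote requests served for the network
    private long generated = 0;    // requests originated locally
    private final double maxRatio; // e.g. 0.5: at most 1 local per 2 served

    public LocalRequestGovernor(double maxRatio) {
        this.maxRatio = maxRatio;
    }

    // Called whenever this node serves a remote request.
    public synchronized void onServed() {
        served++;
    }

    // Returns true if the node may originate another local request now;
    // otherwise the caller should queue or delay it.
    public synchronized boolean mayGenerate() {
        if (generated < (long) (served * maxRatio)) {
            generated++;
            return true;
        }
        return false;
    }
}
```

With a ratio of 0.5, a node that has served nothing may originate nothing, which is exactly the asymmetry we want: you earn the right to load the network by serving it.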

-- 

Oskar Sandberg
[EMAIL PROTECTED]

_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/devl