Ian Clarke wrote:

> Toad wrote:
>> Currently, even with the recently integrated probabilistic rejection,
>> the situation is as follows: we start off with no load; we accept some
>> queries; eventually we use up our outbound bandwidth and, due to either
>> messageSendTimeRequest or the output bandwidth limit, we reject queries
>> until our currently transferring requests have been fulfilled.
> 
> Your solutions all seem to be addressing the wrong end of this problem:
> they accept that this situation will happen and try to make NGR deal
> with it. The real question is: why are nodes getting themselves into a
> situation where they have to QR solidly for long stretches? The QRing
> should ramp up gradually as load builds, so that the situation
> described above never occurs.

Because nodes learn that we're overloaded far more slowly than we _get_
overloaded. Probabilistic QR should help somewhat, but I think the best
thing we've got is Ian's idea (which is not far off from Martin's): let
nodes know what we can handle, and then give them reasons not to try to
send us (much) more than that. No, it's not entirely leech-proof, but
it's still a good bit better than what we've got now. What we have here
is a failure to communicate. ;)

--hobbs

