"Trevor Smith" <[EMAIL PROTECTED]> writes:

> I think the real issue is the thread per TCP connection; if we were in C I'd
> complain about the lack of using select when waiting on the TCP connections
> to avoid this and have us only give the connection a thread when the
> connection was actively receiving/sending data (eg large trailing fields); I
> don't know if Java has an equally efficient means of doing this that we just
> aren't using, or if Java itself is impaired in this sort of efficiency
> method (unfortunately I don't really know enough Java/have enough
> experience yet to know which it is)
> 
For a long time we've been stuck with blocking IO only, even though
the Java language as a whole has supported non-blocking IO for some
time.  We do our best to make sure that Freenet can be run without
any dependence on non-free software, which means keeping
compatibility with Kaffe, which hasn't supported non-blocking IO
until just recently[1].  Needless to say, the networking layer will
be getting an overhaul shortly to remedy this problem.
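For what it's worth, java.nio (introduced in Java 1.4) is the rough
equivalent of C's select(2): one selector thread can watch many
connections instead of parking one blocked thread per TCP connection.
A minimal sketch (class and variable names are mine, not Freenet's):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class NioSketch {
    public static void main(String[] args) throws IOException {
        // A single Selector multiplexes readiness events for many channels.
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0)); // ephemeral port
        server.configureBlocking(false);       // required before register()
        server.register(selector, SelectionKey.OP_ACCEPT);

        // selectNow() returns immediately; select() would block until a
        // registered channel is ready, like select(2) in C.
        int ready = selector.selectNow();
        System.out.println("channels ready: " + ready);

        server.close();
        selector.close();
    }
}
```

A real server would loop over selector.select() and hand a worker
thread only to connections that are actually ready to read or write.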

> I believe that there *is* a difference between local and forwarded requests;
> the node that forwarded the request can go off and use a different node if
> the local node cannot handle the request; the local request is stuck with
> the local node, hence local requests can push a node over 100%
> capacity
> 
Remote nodes *shouldn't* have to go to a different node.  That said,
it probably wouldn't hurt to limit local requests so that people
running frost/FMB don't flood the network quite as badly as they
currently do.
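Capping concurrent local requests could be as simple as a counter
that refuses rather than queues, so frost/FMB-style clients get
immediate backpressure.  A hedged sketch, not Freenet's actual code
(all names here are hypothetical):

```java
// Rejects local requests outright once a concurrency cap is reached,
// instead of letting them pile up and starve forwarded traffic.
public class LocalRequestLimiter {
    private final int max;
    private int active = 0;

    public LocalRequestLimiter(int max) {
        this.max = max;
    }

    // true -> caller may start the request; false -> refuse it now.
    public synchronized boolean tryStart() {
        if (active >= max) return false;
        active++;
        return true;
    }

    public synchronized void finish() {
        active--;
    }

    public static void main(String[] args) {
        LocalRequestLimiter limiter = new LocalRequestLimiter(2);
        System.out.println(limiter.tryStart()); // true
        System.out.println(limiter.tryStart()); // true
        System.out.println(limiter.tryStart()); // false: over the cap
        limiter.finish();
        System.out.println(limiter.tryStart()); // true: a slot freed up
    }
}
```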

> It is on a slow machine, so my numbers don't compare to yours, but in a
> *day* before changing the numbers I would handle ~1.8k of ~64k requests;
> after making the change it was handling 18k of ~64k
> 
I'm surprised, given the following.

> Note that the thread count was adjusted upwards to counter the effect of the
> ratio going downwards - I suspect the increased capacity you are seeing is
> from the thread count not being lowered to counter the effect of the ratio
> going up and your machine wasn't configured to max capacity to start.
> 
The increased capacity (in the 700 vs. 1400 handled) is merely a
convenient side effect of the change; it is by no means the reason
I'm promoting it.  What my change does is decrease the chances of a
node becoming saturated with QRejs and handling 0 requests as a
result.  I'm interested to see how many connections your node
refuses, because that's the only effective way (at the moment) for a
node to get others to send it fewer requests.
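The accept-vs-QRej decision being discussed amounts to a threshold on
thread usage.  A hedged sketch of the idea only (the real Freenet
logic differs; class, field, and parameter names are my assumptions):

```java
// Decides whether to handle an incoming query or send a QueryRejected,
// based on what fraction of the worker threads are already in use.
public class LoadLimiter {
    private final int maxThreads;
    private final double rejectRatio; // start QRej'ing above this fraction

    public LoadLimiter(int maxThreads, double rejectRatio) {
        this.maxThreads = maxThreads;
        this.rejectRatio = rejectRatio;
    }

    // true -> handle the request; false -> send a QueryRejected.
    public boolean accept(int threadsInUse) {
        return threadsInUse < maxThreads * rejectRatio;
    }

    public static void main(String[] args) {
        LoadLimiter limiter = new LoadLimiter(120, 0.75);
        System.out.println(limiter.accept(80));  // true: 80 < 90
        System.out.println(limiter.accept(100)); // false: saturated, QRej
    }
}
```

Raising the thread count or the ratio moves the QRej threshold up,
which is why the two knobs can mask each other in benchmarks like the
ones quoted above.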

> I have compiled the node with the ratios shifted more in the direction of
> your way, and lowered the thread count so that the machine doesn't get
> into cannot-create-process situations - but I don't expect it to be fair
> anymore, as I have gone off and added an accepted-requests histogram and a
> key-located-for-a-request histogram to my node recently; that has probably
> created additional load which will disrupt my other numbers, so they are
> probably not comparable (I will create diffs against the latest CVS
> version when I get a moment -- is this list an appropriate place to mail
> diffs to? [I don't have CVS access, and I think it would be better to have
> my changes vetted by someone prior to applying them to CVS in any case])
> 
> Trevor
> 
This is a reasonable place for diffs (with full explanation of what
they do).  We can always use more people working on the codebase.

Thelema
-- 
E-mail: [EMAIL PROTECTED]                        Raabu and Piisu
GPG 1024D/36352AAB fpr:756D F615 B4F3 BFFC 02C7  84B7 D8D7 6ECE 3635 2AAB

_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/devl
