Re: [freenet-dev] Node performance
Ian Clarke wrote:
> Great, but should we be deprecating any of the old stuff? I would
> rather not keep broken code lying around if we can help it.

There is nothing broken left in... except that I was unhappy that one
conf value could override another. I would rather force the user to
comment out the overridden value, so that they explicitly confirm that
they understand what they are doing. In Node.java there is a variable
breakExistingConfFiles=false which you can change to true if you want
the stricter behaviour.

Also, if you specify an average limit, you get hourly traffic reports
in the log. This might be useful even if you set the average limits to
the same as the perSecond limits.

--
Christopher William Turner, http://www.cycom.co.uk/ Java development since 1996
http://club.cycom.co.uk/tms.htm Terminology Management software
http://club.cycom.co.uk/wt.htm Wind Turbine blade design software
___
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/devl
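[Editor's sketch, not project code: the override-versus-error behaviour described above, with a hypothetical effectiveInputLimit helper. Only the breakExistingConfFiles flag name comes from the post; everything else is assumed.]

```java
// Sketch of the stricter conf handling: when breakExistingConfFiles is
// true, a value that would silently override another is treated as an
// error, forcing the user to comment one of them out.
public class ConfCheck {
    static final boolean breakExistingConfFiles = false; // flag from Node.java

    // inputBandwidthLimit (if nonzero) overrides bandwidthLimit.
    static long effectiveInputLimit(long bandwidthLimit, long inputBandwidthLimit) {
        if (inputBandwidthLimit != 0) {
            if (breakExistingConfFiles && bandwidthLimit != 0) {
                throw new IllegalArgumentException(
                        "inputBandwidthLimit overrides bandwidthLimit; "
                        + "comment one out to confirm you understand");
            }
            return inputBandwidthLimit;
        }
        return bandwidthLimit;
    }

    public static void main(String[] args) {
        // The more specific limit wins when both are set (lenient mode).
        System.out.println(effectiveInputLimit(5000, 1000));
    }
}
```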
Re: [freenet-dev] Node performance
On Tuesday 02 July 2002 04:04, Oskar wrote:
> On Mon, Jul 01, 2002 at 11:26:59PM -0400, Gianni Johansson wrote:
> > On Monday 01 July 2002 20:21, you wrote:
> > ...
> > > I have read in one post that there is a limit of 60 requests per
> > > minute. It is almost 100 times slower than the node could handle,
> > > in my estimation.
> >
> > I hope that this is true, but I am somewhat dubious. It's not sexy
> > to talk about limitations, but they must be factored into the design
> > of the system or it won't work.
> >
> > I have always suspected that the bounding factor limiting how many
> > requests a node can usefully handle will be the number of healthy
> > node refs it can maintain, at least for modern systems with
> > cable-modem class connectivity.
>
> I don't know what you mean. The neighbors are also likely to be
> modern systems with cable-modem class connectivity.

I meant that I think the number of requests my node can usefully handle
is bounded by the number of requests my node's noderefs can handle, not
by bandwidth, CPU or RAM. I don't understand how it can be useful for
my node to answer inbound requests by generating more outbound requests
than it could handle (on average). How could a network of such nodes
ever not be overloaded?

> > I have to say that I agree with Pascal regarding the value of this
> > limit - rejecting a request actually increases the total amount of
> > work the network has to do compared to serving it (the previous node
> > has to go back and route again, sending the request to its next peer
> > with the same HTL as you would have given it). I don't see how nodes
> > could possibly become better citizens by working below capacity.
>
> You didn't respond to this. When a node serves a request and forwards
> it, it does a little bit of work for the network, decreasing the total
> amount of work the rest of the network has to do by one hop. When a
> node rejects a request it does no work at all, and throws all the work
> back to the rest of the network.

Isn't the HTL decremented as a result of the QR?
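[Editor's sketch: Oskar's argument, put numerically. If a QueryRejected returns the request to the previous node at the same HTL, each rejection adds a routing attempt without consuming any HTL, so a toy model of the total routing work might look like this. This is an illustration of the argument, not Freenet code.]

```java
// Toy model: expected number of routing attempts needed to exhaust a
// request's HTL when a fraction of nodes reject. Serving a request
// consumes one HTL per hop; a rejection consumes none (per Oskar's
// description), so each unit of HTL progress costs 1/(1 - rejectRate)
// attempts on average.
public class RejectCost {
    static double expectedRoutingSteps(int htl, double rejectRate) {
        return htl / (1.0 - rejectRate);
    }

    public static void main(String[] args) {
        System.out.println(expectedRoutingSteps(10, 0.0)); // nobody rejects
        System.out.println(expectedRoutingSteps(10, 0.5)); // half reject: double the work
    }
}
```

Under this model, rejections strictly increase the network's total routing work, which is the point being argued; if HTL were decremented on a QR (Gianni's question), the picture would change.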
> > > Rejecting requests is node egoism, and is thus only justified when
> > > a node needs to be egoistic because it is overloaded. If a node
> > > suspects that the rest of the network is overloaded, the best thing
> > > it can do is serve as many requests as possible! What nodes need to
> > > do to be good citizens is to monitor the amount of requests they
> > > generate locally compared to the amount they are able to serve -
> > > but as was noted, the current code doesn't do that at all.
> >
> > Agreed, but how do you figure out the amount they are able to serve?
>
> Ideally because the network is load balanced and the node gets exactly
> the amount of work it can serve...

Could you make your best guess as to how the load balancing should
work? I remember you wanted to do some simulations, but even a decent
guess might be better than what we have now -- i.e. nothing.

The infrastructure for gathering the global load stats is there, but we
never actually turned on the selective reference resetting.

--gj
--
Freesite (0.4) freenet:SSK@npfV5XQijFkF6sXZvuO0o~kG4wEPAgM/homepage//
Re: [freenet-dev] Node performance
My average-bandwidth-throttled node settles down to track its defined
long-term weekly traffic limits exactly. Any nodes in contact with it
will be slowed to match it (if they don't lose patience and time out).
They don't get rejected.

Currently ServerSockets get created with a fixed backlog of 50. This
means 50 clients could have false hopes of communicating soon. It might
be better to use a backlog of 1, so that clients don't connect to a
busy node at all.
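[Editor's sketch: the backlog suggestion above maps directly onto the standard java.net.ServerSocket constructor, whose second argument is the backlog. A minimal illustration, not the node's actual listener code:]

```java
import java.net.ServerSocket;

public class BacklogDemo {
    public static void main(String[] args) throws Exception {
        // With backlog 1, at most one pending connection waits in the
        // kernel queue; further attempts are refused immediately, so a
        // client can fail over to another node instead of sitting in a
        // queue of up to 50 that a busy node may never service.
        ServerSocket listener = new ServerSocket(0, 1); // port 0 = any free port
        System.out.println("listening on port " + listener.getLocalPort());
        listener.close();
    }
}
```

Note that the backlog is only a hint to the OS; some platforms round it up, so a backlog of 1 reduces but does not strictly bound the number of waiting clients.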
Re: [freenet-dev] Node performance
On Tue, Jul 02, 2002 at 08:34:53PM +0100, Christopher William Turner wrote:
> My average-bandwidth-throttled node settles down to track its defined
> long-term weekly traffic limits exactly. Any nodes in contact with it
> will be slowed to match it (if they don't lose patience and time out).
> They don't get rejected.

Very nice. Do our overall per-second limits still not work? Last I saw,
they set each connection's limit to 10% of the overall limit, but since
we can have many connections, this is unreliable - and also, does
whatever there is now work with your changes?

> Currently ServerSockets get created with a fixed backlog of 50. This
> means 50 clients could have false hopes of communicating soon. It
> might be better to use a backlog of 1, so that clients don't connect
> to a busy node at all.
Re: [freenet-dev] Node performance
Matthew Toseland wrote:
> Very nice. Do our overall per-second limits still not work? Last I
> saw, they set each connection's limit to 10% of the overall limit, but
> since we can have many connections, this is unreliable - and also,
> does whatever there is now work with your changes?

Yes. The per-second limits still work as before. They were not
per-connection; they were correctly global. Now they even work on input
(broken before). You can have perSecond and average limits. My conf
file has:

# If nonzero, specifies an independent limit for incoming data only.
# (overrides bandwidthLimit if nonzero)
inputBandwidthLimit=1
averageInputBandwidthLimit=1000

# If nonzero, specifies an independent limit for outgoing data only.
# (overrides bandwidthLimit if nonzero)
outputBandwidthLimit=1
averageOutputBandwidthLimit=1000
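[Editor's sketch: one common way to implement a per-second byte limit like the one discussed above is a token bucket. This is an illustration of the technique, not the actual Freenet bandwidth throttler; all names are assumptions.]

```java
// Token-bucket throttle: tokens (bytes) refill continuously at
// bytesPerSecond, capped at one second's worth of credit, so short
// bursts are allowed but the sustained rate cannot exceed the limit.
public class Throttle {
    private final long bytesPerSecond;
    private long available;     // current token balance, in bytes
    private long lastRefillMs;  // last time the bucket was topped up

    public Throttle(long bytesPerSecond, long nowMs) {
        this.bytesPerSecond = bytesPerSecond;
        this.available = bytesPerSecond; // start with one second of credit
        this.lastRefillMs = nowMs;
    }

    // Returns how many of the requested bytes may be sent right now;
    // the caller sends that many and retries later for the remainder.
    public long acquire(long wanted, long nowMs) {
        long elapsed = nowMs - lastRefillMs;
        available = Math.min(bytesPerSecond,
                             available + elapsed * bytesPerSecond / 1000);
        lastRefillMs = nowMs;
        long granted = Math.min(wanted, available);
        available -= granted;
        return granted;
    }
}
```

A long-term average limit can be layered on top by running a second, slower bucket over the same traffic and granting only what both buckets allow.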
Re: [freenet-dev] Node performance
On Tue, Jul 02, 2002 at 09:27:03PM +0100, Christopher William Turner wrote:
> Yes. The per-second limits still work as before. They were not
> per-connection; they were correctly global.

All committed.

> Now they even work on input (broken before).

Very nice. I won't use the average limit, but the max limit is useful.
Re: [freenet-dev] Node performance
On Mon, 1 Jul 2002, Roman Bednarek wrote:
> On 26 Jun 2002, Edgar Friendly wrote:
> > Roman Bednarek [EMAIL PROTECTED] writes:
> > > Hi. Recently I was working with the Tomcat servlet engine; my
> > > servlet was generating GIFs on the fly. It was able to process
> > > about 30-50 requests/second on a standard PC (500 MHz). Taking
> > > that into account, I guess Freenet should handle over 100
> > > requests/second, because most requests (when the data is not
> > > found) are routed to other hosts. The traffic cannot be that big
> > > (I think), so Freenet should handle it.
> >
> > Actually, at the moment requests _are_ that big; between 1 and 2K
> > per request (probably more on the 2K side). It may be that various
> > nodes' transfer limits are reducing their capacity to less than what
> > they need to be.
>
> Maybe such big requests are a serious problem for Freenet? I want to
> add request size logging to my node. Could you advise me where to put
> the log to catch all incoming and outgoing requests?
>
> I have read in one post that there is a limit of 60 requests per
> minute. It is almost 100 times slower than the node could handle, in
> my estimation.

Yes, but be careful when fiddling with it. I set maximumThreads=0 and
tried to adjust node performance with maxConnectionsPerMinute, and
while NodeStatus showed that the node handled hundreds of requests
simultaneously, FProxy performance was sluggish and eventually (after a
few hours) it stopped answering FProxy requests completely. A look at
the ticker revealed that the node tried to handle every request sent to
it, building up a huge queue of requests but failing to handle any of
them within a reasonable time limit. It didn't even have time to poll
and aggregate diagnostic data.

--
Mika Hirvonen [EMAIL PROTECTED]
http://nightwatch.mine.nu/
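[Editor's sketch: the failure mode described above - accepting every request into an ever-growing queue until nothing completes in time - is what a bounded queue with an explicit rejection policy avoids. This uses java.util.concurrent, which postdates the code under discussion, so it is purely illustrative.]

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    // Build a pool that rejects new work once queueCap requests are
    // already waiting, instead of queueing without bound.
    static ThreadPoolExecutor newBoundedPool(int threads, int queueCap) {
        return new ThreadPoolExecutor(
                threads, threads, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(queueCap),
                new ThreadPoolExecutor.AbortPolicy()); // throw when full
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newBoundedPool(2, 10);
        int accepted = 0, rejected = 0;
        for (int i = 0; i < 100; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(50); } catch (InterruptedException e) {}
                });
                accepted++;
            } catch (RejectedExecutionException e) {
                rejected++; // load shed immediately rather than queued forever
            }
        }
        System.out.println("accepted=" + accepted + " rejected=" + rejected);
        pool.shutdownNow();
    }
}
```

The rejected requests correspond to QueryRejected in Freenet terms: the sender finds out at once that the node is busy, instead of waiting on a queue that will never drain.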
Re: [freenet-dev] Node performance
On Mon, Jul 01, 2002 at 11:35:34AM +0200, Roman Bednarek wrote:
> Maybe such big requests are a serious problem for Freenet? I want to
> add request size logging to my node. Could you advise me where to put
> the log to catch all incoming and outgoing requests?

I have implemented a fix for this, along the lines that were discussed
many weeks ago (an optional representation of the NodeReference that
does not include the identity, which can instead be read from the
session layer). However, when I commit this it means a protocol upgrade
that breaks backwards compatibility (*), so I'm holding back a few
hours pending objections.

> I have read in one post that there is a limit of 60 requests per
> minute. It is almost 100 times slower than the node could handle, in
> my estimation.

I am also planning to increase GJ's hard outgoing limit by 5 times with
this patch.

I have to say that I agree with Pascal regarding the value of this
limit - rejecting a request actually increases the total amount of work
the network has to do compared to serving it (the previous node has to
go back and route again, sending the request to its next peer with the
same HTL as you would have given it). I don't see how nodes could
possibly become better citizens by working below capacity. What nodes
need to do to be good citizens is to monitor the amount of requests
they generate locally compared to the amount they are able to serve -
but as was noted, the current code doesn't do that at all.

(*) Before people start whining about bad design, I would like to note
that it is in fact possible to implement this in a backwards-compatible
way, by not using the terse NodeReference format when talking to nodes
whose current reference indicates they use the old protocol - but I
would REALLY not like to get into that quagmire before 1.0...

--
Oskar Sandberg
[EMAIL PROTECTED]
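[Editor's sketch: the backwards-compatible option in the footnote - choosing the reference format per peer based on its advertised protocol version - might look like the following. Every name here (the interface, the version threshold, the sizes) is an assumption for illustration, not actual Freenet code.]

```java
public class RefEncoding {
    static final int TERSE_REF_VERSION = 2; // assumed protocol threshold

    interface NodeReference {
        byte[] encodeTerse(); // identity omitted; read from the session layer
        byte[] encodeFull();  // old format, 1-2K including the identity
    }

    // Fall back to the full format for peers still on the old protocol,
    // so the change need not break backwards compatibility.
    static byte[] encode(NodeReference ref, int peerProtocolVersion) {
        return peerProtocolVersion >= TERSE_REF_VERSION
                ? ref.encodeTerse()
                : ref.encodeFull();
    }
}
```

The quagmire Oskar alludes to is that both encoders (and both decoders) must then be maintained until every deployed node has upgraded.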
Re: [freenet-dev] Node performance
Roman Bednarek [EMAIL PROTECTED] writes:
> > > Hi. Recently I was working with the Tomcat servlet engine; my
> > > servlet was generating GIFs on the fly. It was able to process
> > > about 30-50 requests/second on a standard PC (500 MHz). Taking
> > > that into account, I guess Freenet should handle over 100
> > > requests/second, because most requests (when the data is not
> > > found) are routed to other hosts. The traffic cannot be that big
> > > (I think), so Freenet should handle it.
> >
> > Actually, at the moment requests _are_ that big; between 1 and 2K
> > per request (probably more on the 2K side). It may be that various
> > nodes' transfer limits are reducing their capacity to less than what
> > they need to be.
>
> Tomcat uses synchronized IO, so it is not the main problem. I am
> trying to find where the node spends time processing a request, but so
> far I have not found anything. Is there anything which can help
> benchmark different parts of the Node code?
>
> Roman

The only thing I tried that would benchmark Java made fred run so slow
that it was unusable. (By the way, you probably mean synchronous, not
synchronized.)

Thelema
--
E-mail: [EMAIL PROTECTED]    Raabu and Piisu
GPG 1024D/36352AAB fpr: 756D F615 B4F3 BFFC 02C7 84B7 D8D7 6ECE 3635 2AAB