Just updated from CVS and I am seeing the same thing.  Looks like
someone added code to open a connection to every node in the routing
table on startup; each connection attempt uses its own thread, and the
OCM ignores the thread limit.  From the listing below I would guess you
have about 750 nodes in your routing table.

The best solution would be to make the opening of connections
non-blocking.  The negotiation itself would still need a thread, but
there is no reason to allocate one as soon as the SYN is sent.  I don't
know if this is possible in java.nio.  A quick fix would be to make the
OCM obey the thread limit, but that would starve the node of threads;
I'm not sure whether the starvation would be temporary or whether the
connections retry automatically.  Your description makes it sound like
your node had been up for a while before you noticed the problem.
Maybe a setting should be added to artificially limit the number of
pending connections?
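For what it's worth, java.nio does appear to support this: SocketChannel
can issue a connect in non-blocking mode and a single Selector thread can
then watch for OP_CONNECT readiness across many pending sockets.  Below
is a minimal sketch (not Freenet code; the local listener and class name
are just there to make it self-contained) of how the OCM could fire off
the SYN without tying up a thread:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class Main {
    public static void main(String[] args) throws IOException {
        // Local listener so the sketch is self-contained.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));

        Selector selector = Selector.open();
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);

        // connect() returns immediately; the SYN goes out but no
        // thread sits blocked waiting for the handshake to finish.
        boolean done = channel.connect(server.getLocalAddress());

        if (!done) {
            // One selector thread can watch hundreds of pending
            // connects instead of one thread per ConnectionJob.
            channel.register(selector, SelectionKey.OP_CONNECT);
            while (!done && selector.select(5000) > 0) {
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isConnectable()) {
                        done = channel.finishConnect();
                    }
                }
                selector.selectedKeys().clear();
            }
        }
        // Only here, once the TCP handshake has completed, would a
        // thread be handed the channel for link negotiation.
        System.out.println("connected=" + channel.isConnected());
        channel.close();
        server.close();
        selector.close();
    }
}
```

So a thread would only be allocated once finishConnect() succeeds and the
negotiation can actually begin.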

-Pascal


Gordan wrote:
> Just noticed a possible issue. I've got maximum threads set to 512. I just 
> checked my node and found that the bandwidth consumption on it has ceased and 
> it is rejecting all requests because it is currently running 1200+ threads!
> 
> The routing time still looked good at 30-70ms, but I don't know how much that 
> could be trusted under the circumstances.
> 
> All I could think of to figure out what is going wrong is the Class/Threads 
> list on the Environment page. Here is what it said:
> 
> Class                                                 Threads used
>    Wake announcement procedure if there is no traffic.              1
>    freenet.Message: DataNotFound                                    2
>    freenet.Message: DataRequest                                    30
>    freenet.Message: InsertRequest                                   2
>    freenet.Message: QueryRejected                                  11
>    freenet.Message: QueryRestarted                                  5
>    freenet.OpenConnectionManager$ConnectionJob                    739
>    freenet.client.InternalClient$ClientMessageVector               25
>    freenet.interfaces.LocalNIOInterface$ConnectionShell           109
>    freenet.interfaces.PublicNIOInterface$ConnectionShell          186
>    freenet.node.states.data.DataSent                                4
>    freenet.node.states.data.DataStateInitiator                      2
>    freenet.node.states.request.NoInsert                             1
>    freenet.node.states.request.RequestInitiator                    93
> 
> The big problem seems to be in far too many 
> freenet.OpenConnectionManager$ConnectionJob instances being spawned.
> 
> Can anybody offer any insights? Has anyone seen similar behaviour?
> 
> Gordan
_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl