> I had this exact same problem. After I switched to threadFactory=Y and
> maximumThreads=300, I have had no troubles. Try this and see if it works
> for you.

:-( I don't have the memory to burn on thousands of threads (unless Y is significantly better than Q).

Here is a snapshot of 6213 collapsing: <snip>

Now, when we go find out why it died (from env):

Class                                                            Threads used
Checkpoint: Connection opener                                    52
freenet.interfaces.LocalNIOInterface$ConnectionShell             2
freenet.interfaces.PublicNIOInterface$ConnectionShell            5
freenet.node.states.data.DataStateInitiator                      1
freenet.node.states.data.TrailerWriteCallbackMessage:true:true   1

And the effect it has (from general):
Pooled threads running jobs: 60 (133.3%)
Reason for refusing connections: activeThreads (60) >= maximumThreads (45)
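The arithmetic behind those two lines is simple: 60 running jobs against a cap of 45 is 133.3%, and the node refuses new connections once the active count reaches the maximum. A minimal sketch of that check (the class and method names here are mine, not Freenet's):

```java
// Sketch of the admission check implied by the log lines above.
// The field names activeThreads/maximumThreads come from the log;
// the class itself is hypothetical, not Freenet's ThreadManager.
public class LoadCheck {
    // Load as a percentage of the configured thread cap.
    static double loadPercent(int activeThreads, int maximumThreads) {
        return 100.0 * activeThreads / maximumThreads;
    }

    // Mirrors "activeThreads(60) >= maximumThreads (45)".
    static boolean refuseConnection(int activeThreads, int maximumThreads) {
        return activeThreads >= maximumThreads;
    }

    public static void main(String[] args) {
        System.out.println(loadPercent(60, 45) + "%");        // 133.3...%
        System.out.println(refuseConnection(60, 45));          // true
    }
}
```

Note that once load is over 100%, every incoming connection is refused, which is exactly the shutdown behaviour described below.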
:-( By running itself out of threads, my node effectively shuts down.
Toad, I wonder why this is not the default for everyone. Would it hurt some users?
I get the feeling that chewing up threads on operations that block indefinitely is a bad idea. Either connections are not timing out, or we are trying to contact a class of nodes that cannot be contacted (firewalled or NATed ones), or we are timing out too slowly, or... Using threads to serve content from my store is more important than opening a random connection.
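One way to stop an open from tying up its thread indefinitely is to put an explicit timeout on the connect. This sketch uses plain java.net.Socket rather than Freenet's actual connection code, and the 10-second figure is my assumption, not a value from the node:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Hedged sketch: bound how long a "Connection opener" thread can block.
// Plain java.net.Socket stands in for Freenet's real connection path.
public class BoundedOpen {
    static final int CONNECT_TIMEOUT_MS = 10_000; // assumption, not Freenet's value

    static boolean tryOpen(String host, int port) {
        try (Socket s = new Socket()) {
            // connect(addr, timeout) fails fast instead of parking the
            // thread forever on an unreachable (firewalled/NATed) peer.
            s.connect(new InetSocketAddress(host, port), CONNECT_TIMEOUT_MS);
            return true;
        } catch (SocketTimeoutException e) {
            return false; // candidate for a "cannot be contacted" list
        } catch (IOException e) {
            return false; // refused, unroutable, etc.
        }
    }

    public static void main(String[] args) {
        // Nothing normally listens on port 1, so this should fail quickly.
        System.out.println(tryOpen("127.0.0.1", 1));
    }
}
```

Distinguishing the timeout case from an outright refusal is what would let the node learn which peers are simply unreachable and stop retrying them.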
The cute thing is that my node looks like it is trying to announce to get things moving again but, at 133% load, that just isn't gonna happen. Looking again just now, we are up to 144% load.
I'm sure it can recover, but it would be better to handle the situation gracefully:

- Leave 2 threads to serve content from the local datastore, and accept those queries that can be served out of the local store; then at least we can still saturate the upstream serving content when this sort of thing happens.
- Consider killing stalled opens.
- Figure out whether there is a class of nodes that just cannot be opened, and find ways to avoid trying to open them.
- And last, NIOize opens.
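The "leave 2 threads for the local store" idea amounts to an admission policy with reserved slots: connection opens may not use the last few threads, but store-served requests may. A hypothetical sketch (none of these names are from Freenet's code):

```java
// Hypothetical admission policy for the reserved-threads idea above.
public class ReservedPool {
    private final int maximumThreads;
    private final int reservedForStore; // e.g. the 2 threads suggested above
    private int activeThreads = 0;

    ReservedPool(int maximumThreads, int reservedForStore) {
        this.maximumThreads = maximumThreads;
        this.reservedForStore = reservedForStore;
    }

    // Connection opens may not eat into the reserved slots...
    synchronized boolean admitConnectionOpen() {
        if (activeThreads >= maximumThreads - reservedForStore) return false;
        activeThreads++;
        return true;
    }

    // ...but requests served out of the local datastore may use them,
    // so the node keeps serving content even when opens have piled up.
    synchronized boolean admitStoreRequest() {
        if (activeThreads >= maximumThreads) return false;
        activeThreads++;
        return true;
    }

    synchronized void jobFinished() {
        activeThreads--;
    }
}
```

With this shape, a pileup of stalled opens saturates only the unreserved slots; store requests still get through until the whole pool is genuinely full.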
Another strategy would be to figure out how much stack a thread actually needs and trim the threads down so they don't chew up memory as fast; that might allow me to allocate more of them.
-Martin
_______________________________________________ Devl mailing list [EMAIL PROTECTED] http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
