I don't think that will be the case in all situations.  

River's thread pool requires that a new thread be created whenever no thread is 
waiting, even if doing so triggers an OutOfMemoryError (OOME). Tasks aren't 
allowed to build up in the queue, because queuing creates surging. The result 
is an excess of threads created during load fluctuations, which happen a lot, 
and most of the time a lot of idle threads polling for tasks from an empty 
queue, which causes contention.
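For illustration only (this is not River's actual ThreadPool, and the class name here is made up), the "spawn a new thread if none are waiting, never queue" policy can be approximated with the standard java.util.concurrent classes: a SynchronousQueue has no capacity, so a task is either handed directly to an idle worker or forces the pool to grow.

```java
import java.util.concurrent.*;

public class GrowOnDemandDemo {
    public static void main(String[] args) throws Exception {
        // SynchronousQueue: handoff succeeds only if a worker is already
        // polling; otherwise the executor spawns a new thread (unbounded).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE,       // no core threads, unbounded growth
                60L, TimeUnit.SECONDS,      // idle workers die after 60s
                new SynchronousQueue<>());

        CountDownLatch hold = new CountDownLatch(1);
        int burst = 20;
        for (int i = 0; i < burst; i++) {
            pool.execute(() -> {
                try { hold.await(); } catch (InterruptedException e) { }
            });
        }
        // Every task in the burst got its own thread, since none were idle.
        System.out.println("threads created: " + pool.getPoolSize());
        hold.countDown();
        pool.shutdown();
    }
}
```

Under a load spike each submission in the burst spawns a fresh thread, which is the fluctuation-driven thread growth described above.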

So it's apparent that pooling threads in a suboptimal thread pool design is a 
bad idea.
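The alternative Peter tried, as I read it, is thread-per-task with no shared queue at all; a minimal sketch (names are illustrative, not from the River codebase):

```java
public class ThreadPerTaskDemo {
    public static void main(String[] args) throws Exception {
        Runnable task = () ->
                System.out.println("ran on " + Thread.currentThread().getName());

        // No pool and no shared queue to contend on: spawn the thread, run
        // the task, and let the terminated thread be reclaimed by the JVM.
        Thread t = new Thread(task, "one-shot-worker");
        t.start();
        t.join();
    }
}
```

There is no queue for idle workers to poll, so the contention hot spot disappears; the trade-off is per-task thread creation cost, which on modern JVMs is evidently cheaper than the contention it replaces.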

Regards,

Peter.


Sent from my Samsung device.
---- Original message ----
From: Patricia Shanahan <p...@acm.org>
Sent: 02/12/2015 01:02:56 am
To: dev@river.apache.org
Subject: Re: Trunk merge and thread pools

Thanks for getting that done. 

There is a more general message in your thread pool observation. Any  
time we do something a certain way for performance, we should be  
prepared for the possibility that the opposite choice will become better  
in the future. 

In general, there has been a trend for object creation and GC to get  
more efficient, so that object pooling that improved performance on  
early Java versions is a disoptimization now. Still, I'm surprised that  
it extends to thread creation. 

On 12/1/2015 5:04 AM, Peter wrote: 
> Completed local merge of trunk into qa-refactor-namespace. Ran qa and 
> regression test suites. 
> 
> Performed some profiling on outrigger stress tests, found blocking 
> queue in org.apache.river.thread.ThreadPool to be a significant hot 
> spot.  Tested all blocking queue implementations, no improvement. 
> 
> Wild idea: don't pool threads, create new, let used threads be gc'd 
> after completion. 
> 
> Test result: hot spot eliminated, fewer threads required. 
> 
> I didn't expect that result. 
> 
> The hot spot is now serialization. 
> 
> Going to test jeri mux for graceful degradation under load when an OOME 
> occurs next. 
> 
> Regards, 
> 
> Peter. 
> 
> Sent from my Samsung device. 
> 
