Philipp Shergalis created IGNITE-22959:
------------------------------------------

             Summary: Fix incorrect ThreadPoolExecutor usages 
                 Key: IGNITE-22959
                 URL: https://issues.apache.org/jira/browse/IGNITE-22959
             Project: Ignite
          Issue Type: Improvement
            Reporter: Philipp Shergalis
            Assignee: Philipp Shergalis


ThreadPoolExecutors with unbounded queues and without allowCoreThreadTimeOut(true) 
actually behave as fixed thread pools, ignoring the maximumPoolSize and keepAliveTime 
properties. All usages of the ThreadPoolExecutor constructor should be examined and 
fixed:

1) If this behaviour is expected, replace the constructor with 
Executors.newFixedThreadPool() to avoid confusion

2) If the pool is expected to resize from corePoolSize up to maximumPoolSize, a 
SynchronousQueue can be used - but it will reject tasks once the maximum number of 
threads has already been created. We can use Integer.MAX_VALUE as maximumPoolSize or 
introduce a custom queue to avoid this 

3) If the pool is expected to resize from 0 to maximumPoolSize, 
allowCoreThreadTimeOut(true) should be used
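The three options above can be sketched as follows. This is an illustrative sketch, not code from the Ignite codebase; the class and method names (PoolExamples, fixedPool, elasticPool, shrinkingPool) and the pool sizes are made up for the example:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolExamples {
    // 1) Fixed-size behaviour made explicit: newFixedThreadPool is equivalent
    //    to corePoolSize == maximumPoolSize over an unbounded queue.
    static ThreadPoolExecutor fixedPool(int threads) {
        return (ThreadPoolExecutor) Executors.newFixedThreadPool(threads);
    }

    // 2) Resizes from corePoolSize up to maximumPoolSize: a SynchronousQueue
    //    holds no tasks, so a new thread is started instead of queueing;
    //    Integer.MAX_VALUE as maximumPoolSize avoids RejectedExecutionException
    //    when the pool is saturated.
    static ThreadPoolExecutor elasticPool(int core) {
        return new ThreadPoolExecutor(core, Integer.MAX_VALUE,
                60, TimeUnit.SECONDS, new SynchronousQueue<>());
    }

    // 3) Resizes from 0 to maximumPoolSize: with an unbounded queue only
    //    corePoolSize threads are ever created, so core == max, and
    //    allowCoreThreadTimeOut(true) lets idle core threads terminate.
    static ThreadPoolExecutor shrinkingPool(int max) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(max, max,
                60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }
}
{code}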

 

Example of incorrect usage:
{code:java}
tableIoExecutor = new ThreadPoolExecutor(
        Math.min(cpus * 3, 25),
        Integer.MAX_VALUE,
        100, MILLISECONDS,
        new LinkedBlockingQueue<>(),
        IgniteThreadFactory.create(nodeName, "tableManager-io", LOG, STORAGE_READ, STORAGE_WRITE));{code}
From the ThreadPoolExecutor documentation:
{quote}Using an unbounded queue (for example a LinkedBlockingQueue without a predefined 
capacity) will cause new tasks to wait in the queue when all corePoolSize threads are 
busy. Thus, no more than corePoolSize threads will ever be created. (And the value of 
the maximumPoolSize therefore doesn't have any effect.)

By default, the keep-alive policy applies only when there are more than 
corePoolSize threads, but method allowCoreThreadTimeOut(boolean) can be used 
to apply this time-out policy to core threads as well, so long as the 
keepAliveTime value is non-zero.{quote}
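A corrected variant of the example above, assuming the intent was a pool that can shrink when idle (option 3). Executors.defaultThreadFactory() stands in for the IgniteThreadFactory call so the sketch is self-contained; the class name TableIoPool is hypothetical:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import static java.util.concurrent.TimeUnit.MILLISECONDS;

public class TableIoPool {
    static ThreadPoolExecutor create(int cpus) {
        int size = Math.min(cpus * 3, 25);
        // With an unbounded LinkedBlockingQueue only corePoolSize threads are
        // ever created, so core == max makes the actual size explicit, and
        // allowCoreThreadTimeOut(true) lets idle threads exit after 100 ms.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                size, size,
                100, MILLISECONDS,
                new LinkedBlockingQueue<>(),
                Executors.defaultThreadFactory()); // IgniteThreadFactory.create(...) in the real code
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }
}
{code}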



--
This message was sent by Atlassian Jira
(v8.20.10#820010)