Message deserializer pool will never grow beyond a single thread.
-----------------------------------------------------------------

                 Key: CASSANDRA-1358
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1358
             Project: Cassandra
          Issue Type: Improvement
          Components: Core
    Affects Versions: 0.6.3
         Environment: All.
            Reporter: Mike Malone
            Priority: Minor


The message deserialization process can become a bottleneck that prevents 
efficient resource utilization because the executor that manages the 
deserialization process will never grow beyond a single thread. The message 
deserializer executor is instantiated in the MessagingService constructor as a 
JMXEnabledThreadPoolExecutor, which extends 
java.util.concurrent.ThreadPoolExecutor. The thread pool is instantiated with a 
corePoolSize of 1 and a maximumPoolSize of 
Runtime.getRuntime().availableProcessors(). However, according to the 
ThreadPoolExecutor documentation, "using an unbounded queue (for example a 
LinkedBlockingQueue without a predefined capacity) will cause new tasks to be 
queued in cases where all corePoolSize threads are busy. Thus, no more than 
corePoolSize threads will ever be created. (And the value of the 
maximumPoolSize therefore doesn't have any effect.)"

The message deserializer pool uses a LinkedBlockingQueue, so there will never 
be more than one deserialization thread. This became a problem in our 
production cluster when the MESSAGE-DESERIALIZER-POOL began to back up on a 
node that was only lightly loaded. We increased the core pool size to 4 and the 
situation improved, but the deserializer pool was still backing up while the 
machine was not fully utilized (less than 100% CPU utilization). This leads me 
to think that the deserializer thread is blocking on some sort of I/O, which 
seems like it shouldn't happen.
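
For reference, a minimal sketch of the kind of workaround we applied, i.e. raising corePoolSize so that more than one deserialization thread can run (illustrative only, not a patch against MessagingService):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedSizeDeserializerPoolSketch {
    public static void main(String[] args) {
        int threads = Runtime.getRuntime().availableProcessors();
        // Setting corePoolSize equal to maximumPoolSize makes the pool effectively
        // fixed-size, so the unbounded queue no longer caps it at a single thread.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                threads, threads,
                60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        // Optional (Java 6+): allow idle core threads to time out so the pool
        // can shrink back down once the deserialization backlog clears.
        pool.allowCoreThreadTimeOut(true);
        System.out.println("usable threads: " + pool.getCorePoolSize());
        pool.shutdown();
    }
}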
