franz1981 commented on pull request #3572:
URL: https://github.com/apache/activemq-artemis/pull/3572#issuecomment-841788187


   @clebertsuconic @michaelandrepearce @gtully 
   One important note related to the default thread pool(s) configuration of the broker.
   Just for fun, I've played a bit with the thread configuration in order to reduce the number of context switches on master, and this is the improvement I got:
   ```
   **************
   RUN 1        EndToEnd Throughput: 51031 ops/sec
   **************
   EndToEnd SERVICE-TIME Latencies distribution in MICROSECONDS
   mean                346.02
   min                  92.67
   50.00%              321.54
   90.00%              444.42
   99.00%              626.69
   99.90%             4014.08
   99.99%            14942.21
   max               22020.10
   count              1600000
   ```
   This is the same test as https://github.com/apache/activemq-artemis/pull/3572#issuecomment-841677475, where I was getting `37770 ops/sec`.
   
   I've just reduced the acceptor thread pool size to 1/2 of the available cores (instead of 3x), while leaving the rest to the `thread-pool-max-size` (instead of leaving it at `30`).
   
   e.g. on a 16-core machine -> 8 threads for Netty + 8 threads for the Artemis generic thread pool.
   Given that the core protocol performs very few operations on the Netty threads, I've decided to split the CPU capacity in two.
   The boost is substantial, hence I think we should rethink the default parameters we set for the broker, wdyt?
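   To make the split concrete, here is a minimal `broker.xml` sketch, assuming a 16-core machine; the acceptor name, port, and the use of the `remotingThreads` acceptor parameter to size the Netty thread pool are illustrative, and the real values would be derived from the detected core count:
   ```xml
   <!-- Sketch only: values assume 16 available cores, split in half -->
   <core xmlns="urn:activemq:core">
      <!-- Artemis generic thread pool: half the cores (instead of the fixed default of 30) -->
      <thread-pool-max-size>8</thread-pool-max-size>

      <acceptors>
         <!-- Netty acceptor threads: half the cores (instead of 3x the cores) -->
         <acceptor name="artemis">tcp://0.0.0.0:61616?remotingThreads=8</acceptor>
      </acceptors>
   </core>
   ```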
   If you agree, I'm going to file a new issue.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
