mosermw commented on PR #8584:
URL: https://github.com/apache/nifi/pull/8584#issuecomment-2423400073

   Thanks for looking at this @joewitt.  I was going to ask for eyes after the 
2.0.0 push was over, so I appreciate you finding this now.
   
   I probably spent >60 hours testing this, and my notes are on my work 
computer, so I'm recalling from memory.  My setup was ActiveMQ and NiFi on 
separate EC2 instances, with PublishJMS pushing messages as fast as it could 
and ConsumeJMS reading messages in various scenarios.  Under ideal conditions 
(100B message size, nothing else running in NiFi), batch size 1 versus 25 did 
**160k versus 350k** messages per 5 minutes, so roughly double.  Under more 
real-world conditions (5KB message size, NiFi busy doing lots of other work), 
batch size 1 versus 25 did **30k versus 300k**, a much larger gap.  A very 
large batch size like 10,000 didn't perform much differently than 25 in my 
environment.
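   To illustrate the shape of those numbers: if each batch pays one fixed 
broker round trip (commit/acknowledge) plus cheap per-message work, batching 
amortizes the round trip, and returns diminish once the round trip is a small 
fraction of batch time.  This is a toy model, not NiFi or JMS code; the 
`ROUND_TRIP_MS` and `PER_MSG_MS` constants are invented for illustration, not 
measured values.

   ```java
   // Toy throughput model for batched JMS consumption (illustrative only).
   // Assumption: each batch costs one fixed round trip to the broker plus a
   // small per-message cost. Both constants below are made-up numbers.
   public class BatchModel {
       static final double ROUND_TRIP_MS = 1.0;  // assumed per-batch commit round trip
       static final double PER_MSG_MS = 0.2;     // assumed per-message handling cost

       // Messages processed in a 5-minute window at a given batch size.
       static long messagesPerFiveMinutes(int batchSize) {
           double batchMs = ROUND_TRIP_MS + batchSize * PER_MSG_MS;
           double batchesPerWindow = (5 * 60 * 1000) / batchMs;
           return (long) (batchesPerWindow * batchSize);
       }

       public static void main(String[] args) {
           System.out.printf("batch=1: %d, batch=25: %d, batch=10000: %d%n",
                   messagesPerFiveMinutes(1),
                   messagesPerFiveMinutes(25),
                   messagesPerFiveMinutes(10_000));
           // batch=25 already amortizes most of the round trip, so pushing
           // the batch size to 10,000 adds comparatively little -- matching
           // the diminishing returns seen in the tests above.
       }
   }
   ```

   Under these assumed costs the model shows a large jump from batch 1 to 25 
and only a modest further gain at 10,000, which is the same qualitative 
pattern as the measured results.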
   
   I even put a speedbump latency generator between ActiveMQ and NiFi, but 
the results were predictable.  PublishJMS did seem more affected by latency 
than ConsumeJMS, though I didn't dig deeper there.

