Github user roshannaik commented on the issue:

    https://github.com/apache/storm/pull/2241
  
    Some points addressing previous comments by @HeartSaVioR and @revans2:
    
    **Throughput limiting:** That only makes sense if you are measuring 
throughput against CPU or other resource usage. Latency measurements do not 
need it, and it is a sin to rate-limit when you are trying to measure peak 
throughput.
    
    **TVL topology:** 
    - Given its rate-limiting nature, it definitely does not have the right 
name. Its use of very high thread counts and rate-limited spouts appears 
tuned to work within the limitations of the current messaging system and to 
hit its old sweet spot. That deserves a question. Harsha's measurements 
(which are more sensible in terms of executor counts) show that the current 
messaging system was brought to its knees very quickly once the rate 
limiting went away.
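
    To make the rate-limiting point concrete, here is a minimal sketch of the kind of fixed-rate throttle a rate-limiting benchmark spout typically wraps around its emits. The class name and numbers are hypothetical illustrations, not taken from the TVL topology or Storm's code:

```java
// Illustrative only: a fixed-rate throttle of the kind a rate-limiting
// benchmark spout might call before each emit. Not the TVL/Storm code.
public class SimpleThrottle {
    private final long periodNanos;          // target gap between emits
    private long nextEmit = System.nanoTime();

    public SimpleThrottle(int emitsPerSecond) {
        this.periodNanos = 1_000_000_000L / emitsPerSecond;
    }

    /** Blocks until the next emit slot, capping the caller's emit rate. */
    public void acquire() {
        long now = System.nanoTime();
        if (now < nextEmit) {
            long sleepMs = (nextEmit - now) / 1_000_000L;
            try {
                if (sleepMs > 0) Thread.sleep(sleepMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        // Advance the schedule by one period regardless of sleep accuracy,
        // so the long-run rate stays at emitsPerSecond.
        nextEmit = Math.max(nextEmit, now) + periodNanos;
    }

    public static void main(String[] args) {
        SimpleThrottle t = new SimpleThrottle(1000); // cap at 1000 emits/sec
        long start = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            t.acquire();                      // would wrap each tuple emit
        }
        System.out.println("elapsedMs=" + (System.nanoTime() - start) / 1_000_000L);
    }
}
```

    The point of the sketch: while such a throttle is active, measured "throughput" reflects the configured cap, not what the messaging system can actually sustain.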
    
    
    @revans2 
    The drop you are seeing with increased splitter counts is indicative of 
increased CPU contention, even when not enough data is flowing through an 
executor (the issue you initially brought up... of high CPU usage for idle 
topologies). In the old system, an executor seems to spend more time 
sleeping when there is insufficient data flow, so there is less CPU 
contention and adding redundant/idle executors does not affect it as much. 
That is why you see the throughput plateau.
    
    Lowering the CPU contention in idle mode is something I plan to address... 
and I think I have already left some TODOs for myself in the code to keep me 
honest.
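
    One common way to cut idle-executor CPU is a progressive-backoff wait strategy: spin briefly, then yield, then sleep once the idle streak grows. The sketch below is a generic illustration of that pattern under assumed thresholds, not Storm's actual wait-strategy implementation:

```java
// Illustrative progressive-backoff idle strategy: cheap spinning while a
// lull is short (low latency), escalating to yield and then sleep as the
// idle streak grows (low CPU). Thresholds are made-up examples.
public class ProgressiveBackoff {
    private int idleStreak = 0;

    public int idleStreak() { return idleStreak; }

    /** Call once per empty poll of the executor's queue. */
    public void idle() {
        idleStreak++;
        if (idleStreak < 100) {
            // spin: do nothing, retry immediately (lowest latency)
        } else if (idleStreak < 1000) {
            Thread.yield();            // give up the core briefly
        } else {
            try {
                Thread.sleep(1);       // long lull: burn almost no CPU
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    /** Call when work arrives, so the next lull starts cheap again. */
    public void reset() { idleStreak = 0; }
}
```

    The trade-off is latency vs. idle CPU: longer sleeps make an idle executor nearly free but add wake-up delay when data resumes flowing.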


