Github user wangli1426 commented on the pull request:

    https://github.com/apache/storm/pull/753#issuecomment-150751146
  
    @d2r ,
    Thank you very much for your prompt response. However, I cannot quite understand what you mean by
    > If the previous worker's throughput stats had declined sharply before the worker had died, then weighting the current worker's throughput stats still would be inaccurate, but in a different way.
    
    I would appreciate it a lot if you could provide a concrete example.
    
    I couldn't agree with you more that Storm needs a History Server to keep historical information. Otherwise, executors are responsible for maintaining their own stats, which makes them stateful. Is there any plan for the history server?
    
    By the way, adding the throughput metric is only my first step. My ultimate goal is to add ***normalized*** throughput, which leverages queueing theory to provide a comparable performance metric, similar to but more accurate than the ```capacity``` metric currently available in Storm. With normalized throughput, one can easily identify the performance bottleneck of a running topology by finding the executor with the minimal normalized throughput; a rough sketch of that idea follows below. With this capability, we could develop a runtime scheduling algorithm that makes better resource allocation decisions. What do you think?
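    
    To make the bottleneck-detection idea a bit more concrete, here is a rough Java sketch. The stats fields and the ratio used for ```normalizedThroughput()``` below are only my guesses for illustration (processed rate relative to arrival rate); the actual definition would come from the queueing-theory analysis, and none of these names exist in Storm today.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Rough sketch only: field names and the normalized-throughput formula
// are assumptions for illustration, not the metric actually proposed here.
public class BottleneckFinder {

    static class ExecutorStats {
        final String executorId;
        final long tuplesArrived;   // tuples received during the window
        final long tuplesExecuted;  // tuples fully processed during the window

        ExecutorStats(String executorId, long tuplesArrived, long tuplesExecuted) {
            this.executorId = executorId;
            this.tuplesArrived = tuplesArrived;
            this.tuplesExecuted = tuplesExecuted;
        }

        // One possible reading of "normalized throughput": processed rate
        // relative to arrival rate. An overloaded executor falls behind its
        // input, so the ratio drops below 1.
        double normalizedThroughput() {
            return tuplesArrived == 0 ? 1.0 : (double) tuplesExecuted / tuplesArrived;
        }
    }

    // The bottleneck candidate is the executor with the minimal
    // normalized throughput, as suggested above.
    static ExecutorStats findBottleneck(List<ExecutorStats> stats) {
        return stats.stream()
                .min(Comparator.comparingDouble(ExecutorStats::normalizedThroughput))
                .orElse(null);
    }

    public static void main(String[] args) {
        List<ExecutorStats> stats = Arrays.asList(
                new ExecutorStats("split-sentence[3]", 10000, 10000),
                new ExecutorStats("word-count[5]",     12000,  9500));
        System.out.println("bottleneck candidate: " + findBottleneck(stats).executorId);
    }
}
```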

