[ https://issues.apache.org/jira/browse/STORM-1700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268883#comment-15268883 ]

ASF GitHub Bot commented on STORM-1700:
---------------------------------------

Github user HeartSaVioR commented on a diff in the pull request:

    https://github.com/apache/storm/pull/1324#discussion_r61901102
  
    --- Diff: storm-core/src/jvm/org/apache/storm/metric/MetricsConsumerBolt.java ---
    @@ -47,17 +68,71 @@ public void prepare(Map stormConf, TopologyContext context, OutputCollector coll
             }
             _metricsConsumer.prepare(stormConf, _registrationArgument, context, collector);
             _collector = collector;
    +        _taskExecuteThread = new Thread(new MetricsHandlerRunnable());
    +        _taskExecuteThread.setDaemon(true);
    +        _taskExecuteThread.start();
         }
         
         @Override
         public void execute(Tuple input) {
    -        _metricsConsumer.handleDataPoints((IMetricsConsumer.TaskInfo)input.getValue(0), (Collection)input.getValue(1));
    +        // remove older tasks if task queue exceeds the max size
    +        if (_taskQueue.size() > _maxRetainMetricTuples) {
    +            while (_taskQueue.size() - 1 > _maxRetainMetricTuples) {
    --- End diff ---
    
    I guess it might drop one more item than necessary, but that's not a big deal anyway. Ideally, the task queue should never exceed the max retain metric tuples in the first place.
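
As a rough illustration of the drop-oldest behavior discussed above, here is a minimal, self-contained sketch assuming a BlockingQueue-backed task queue. DropOldestQueueSketch, MAX_RETAIN, and offerDroppingOldest are hypothetical stand-ins for the bolt, _maxRetainMetricTuples, and the enqueue path in execute(); this is not the PR's actual code, and the exact boundary condition may differ.

    import java.util.Queue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class DropOldestQueueSketch {
        // Hypothetical cap standing in for _maxRetainMetricTuples in the PR.
        private static final int MAX_RETAIN = 3;

        private final Queue<String> taskQueue = new LinkedBlockingQueue<>();

        // Before enqueueing, discard the oldest pending entries until the queue
        // is back under the cap. In the real bolt a separate consumer thread
        // drains this queue; here everything runs single-threaded for clarity.
        public void offerDroppingOldest(String task) {
            while (taskQueue.size() >= MAX_RETAIN) {
                taskQueue.poll();   // drop the oldest pending metrics tuple
            }
            taskQueue.offer(task);
        }

        public static void main(String[] args) {
            DropOldestQueueSketch sketch = new DropOldestQueueSketch();
            for (int i = 1; i <= 5; i++) {
                sketch.offerDroppingOldest("tuple-" + i);
            }
            // With MAX_RETAIN = 3, only the three newest tuples remain.
            System.out.println(sketch.taskQueue);   // [tuple-3, tuple-4, tuple-5]
        }
    }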


> Introduce 'whitelist' / 'blacklist' option to MetricsConsumer
> -------------------------------------------------------------
>
>                 Key: STORM-1700
>                 URL: https://issues.apache.org/jira/browse/STORM-1700
>             Project: Apache Storm
>          Issue Type: Sub-task
>          Components: storm-core
>    Affects Versions: 1.0.0, 2.0.0
>            Reporter: Jungtaek Lim
>            Assignee: Jungtaek Lim
>
> Storm provides various metrics by default, and so do some external modules 
> (e.g. storm-kafka).
> When we register a MetricsConsumer, it has to handle all of these metrics. If 
> the MetricsConsumer cannot keep up, the only way to catch up is to increase 
> its parallelism, which seems limited. Furthermore, some users don't care 
> about certain metrics, and unintended metrics will just fill up external 
> storage.
> Though a MetricsConsumer can filter metrics by name itself, it would be 
> better to support filtering on the Storm side. That would reduce redundant 
> work across the Storm community.
> It would be great if we provided such filter options.
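
As a rough sketch of the requested option, the following illustrates name-based whitelist/blacklist filtering of data points. FilterByNameSketch, accepts, and the example metric names are hypothetical and only show the intended behavior; the actual configuration format and matching rules (e.g. regex support) are up to the implementation.

    import java.util.Arrays;
    import java.util.Collection;
    import java.util.HashSet;
    import java.util.Set;

    public class FilterByNameSketch {
        private final Set<String> whitelist;
        private final Set<String> blacklist;

        public FilterByNameSketch(Collection<String> whitelist, Collection<String> blacklist) {
            this.whitelist = new HashSet<>(whitelist);
            this.blacklist = new HashSet<>(blacklist);
        }

        // A data point passes if it is not blacklisted and, when a whitelist is
        // configured, its name appears on the whitelist.
        public boolean accepts(String dataPointName) {
            if (blacklist.contains(dataPointName)) {
                return false;
            }
            return whitelist.isEmpty() || whitelist.contains(dataPointName);
        }

        public static void main(String[] args) {
            FilterByNameSketch filter = new FilterByNameSketch(
                    Arrays.asList("__execute-latency", "__ack-count"),
                    Arrays.asList("__sendqueue"));
            System.out.println(filter.accepts("__ack-count"));   // true: whitelisted
            System.out.println(filter.accepts("__sendqueue"));   // false: blacklisted
            System.out.println(filter.accepts("__emit-count"));  // false: not on whitelist
        }
    }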



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
