[jira] [Comment Edited] (STORM-2153) New Metrics Reporting API

2017-06-20 Thread Srishty Agrawal (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056219#comment-16056219
 ] 

Srishty Agrawal edited comment on STORM-2153 at 6/20/17 11:25 PM:
--

We are planning to upgrade our Storm version from {{v0.9.6}} to {{v1.1+}} and 
were wondering whether there is any chance of the new metrics framework being 
backported to {{v1.1.x}} in the future?


was (Author: srishtyagraw...@gmail.com):
We are planning to upgrade the Storm version from {{v0.9.6}} to {{v1.1.0}} and 
were wondering if there are chances of new metrics framework being backported 
to {{v1.1.x}} in the future?



[jira] [Created] (STORM-2560) Storm-Kafka on CDH 5.11 with kerberos security enabled.

2017-06-20 Thread Niraj Parmar (JIRA)
Niraj Parmar created STORM-2560:
---

 Summary: Storm-Kafka on CDH 5.11 with kerberos security enabled.
 Key: STORM-2560
 URL: https://issues.apache.org/jira/browse/STORM-2560
 Project: Apache Storm
  Issue Type: Question
  Components: storm-kafka
Affects Versions: 1.1.0
Reporter: Niraj Parmar


Hi,
 
I have installed Apache Storm 1.1.0 manually on a CDH 5.11 cluster. The cluster 
is secured with Kerberos.
I have a sample Storm topology that ingests data from a Kafka topic and inserts 
it into an HDFS directory in real time, so it uses storm-kafka as well as 
storm-hdfs.
When I run the topology, the kafka-spout gives the following error:

{code}
2017-06-18 22:29:31.297 o.a.z.ClientCnxn Thread-14-kafka-spout-executor[5 5]-SendThread(localhost:2181) [INFO] Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)

2017-06-18 22:29:31.571 k.c.SimpleConsumer Thread-14-kafka-spout-executor[5 5] [INFO] Reconnect due to error:
java.nio.channels.ClosedChannelException: null
	at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[stormjar.jar:?]
	at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:85) [stormjar.jar:?]
	at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) [stormjar.jar:?]
	at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) [stormjar.jar:?]
	at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) [stormjar.jar:?]
	at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:75) [stormjar.jar:?]
	at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:65) [stormjar.jar:?]
	at org.apache.storm.kafka.PartitionManager.<init>(PartitionManager.java:94) [stormjar.jar:?]
	at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) [stormjar.jar:?]
	at org.apache.storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) [stormjar.jar:?]
	at org.apache.storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:129) [stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__4976$fn__4991$fn__5022.invoke(executor.clj:644) [storm-core-1.1.0.jar:1.1.0]
	at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484) [storm-core-1.1.0.jar:1.1.0]
{code}
 
Kafka version: 2.1.1-1.2.1.1.p0.18
 
There is no storm-kafka*.jar present in "/usr/local/storm".
But even so, this sample was working fine before the cluster was kerberized.
 
 
I have tried the same example on Hortonworks, and after adding the line below to 
set the security protocol, the topology runs fine:
*spoutConfig.securityProtocol = "SASL_PLAINTEXT";*
After adding that line in the Cloudera case, it gives a "Symbol not found" error.
 
Please let me know if you need any other information.
Thanks in advance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (STORM-2564) We should provide a template for storm-cluster-auth.yaml

2017-06-20 Thread liuzhaokun (JIRA)
liuzhaokun created STORM-2564:
-

 Summary: We should provide a template for storm-cluster-auth.yaml
 Key: STORM-2564
 URL: https://issues.apache.org/jira/browse/STORM-2564
 Project: Apache Storm
  Issue Type: Bug
Reporter: liuzhaokun
Assignee: liuzhaokun


The configuration named {{storm.zookeeper.auth.payload}} is supposed to be 
placed in "storm-cluster-auth.yaml", but no such file ships with Storm. So I 
think we should provide a template for storm-cluster-auth.yaml.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (STORM-2563) Remove the workaround to handle missing UGI.loginUserFromSubject

2017-06-20 Thread Arun Mahadevan (JIRA)
Arun Mahadevan created STORM-2563:
-

 Summary: Remove the workaround to handle missing 
UGI.loginUserFromSubject
 Key: STORM-2563
 URL: https://issues.apache.org/jira/browse/STORM-2563
 Project: Apache Storm
  Issue Type: Bug
Reporter: Arun Mahadevan
Assignee: Arun Mahadevan


https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/AutoTGT.java#L225
The "userCons.setAccessible(true)" invokes constructor of a package private 
class bypassing the Java access control checks and raising red flags in our 
internal security scans.

The "loginUserFromSubject(Subject subject)" has been added to UGI 
(https://issues.apache.org/jira/browse/HADOOP-10164) and available since Hadoop 
version 2.3 released over three years ago 
(http://hadoop.apache.org/releases.html).

 
I think the workaround is no longer required since the case will not happen 
when using hadoop-common versions >= 2.3
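For context, the pattern the scanner flags can be shown with a self-contained sketch. {{Hidden}} below is a hypothetical stand-in for the package-private class AutoTGT constructs reflectively; the point is that {{setAccessible(true)}} suppresses the normal Java access check on a non-public constructor.

```java
import java.lang.reflect.Constructor;

// Illustration of the flagged pattern; "Hidden" is a hypothetical stand-in
// for a package-private class constructed via reflection, as in AutoTGT.
public class AccessibleDemo {
    static class Hidden {
        private Hidden() {}              // not normally constructible from outside
    }

    static Object constructReflectively() throws Exception {
        Constructor<Hidden> ctor = Hidden.class.getDeclaredConstructor();
        ctor.setAccessible(true);        // suppresses the access check
        return ctor.newInstance();       // succeeds despite the private ctor
    }

    public static void main(String[] args) throws Exception {
        System.out.println(constructReflectively() != null); // prints "true"
    }
}
```

Using the public {{UserGroupInformation.loginUserFromSubject}} API avoids this reflective access entirely.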



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (STORM-2449) Iterator of Redis State may return same key multiple time, with different values

2017-06-20 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim resolved STORM-2449.
-
   Resolution: Fixed
Fix Version/s: 1.1.1
   2.0.0

Merged into master and 1.x branch.

> Iterator of Redis State may return same key multiple time, with different 
> values
> 
>
> Key: STORM-2449
> URL: https://issues.apache.org/jira/browse/STORM-2449
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-redis
>Affects Versions: 2.0.0, 1.1.0
>Reporter: Jungtaek Lim
>Assignee: Jungtaek Lim
> Fix For: 2.0.0, 1.1.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Redis state iterator iterates pending prepare -> pending commit -> external 
> storage (state) sequentially. While iterating, all of them are subject to 
> change, so it can produce inconsistent results.
> While we can't provide fully consistent results (since the states change 
> continuously), the iterator should at least return each key only once.
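The single-emission behavior described above can be sketched as follows. This is a simplified, self-contained model, not storm-redis's actual iterator; the class and method names are illustrative, and it prefers the earliest layer a key appears in.

```java
import java.util.*;

// Simplified model: chain three key/value layers (pending prepare, pending
// commit, external state) and emit each key at most once, taking the value
// from the earliest layer that contains it.
public class DedupIterator {
    public static List<Map.Entry<String, String>> iterate(
            Map<String, String> pendingPrepare,
            Map<String, String> pendingCommit,
            Map<String, String> external) {
        Set<String> seen = new HashSet<>();
        List<Map.Entry<String, String>> out = new ArrayList<>();
        for (Map<String, String> layer :
                Arrays.asList(pendingPrepare, pendingCommit, external)) {
            for (Map.Entry<String, String> e : layer.entrySet()) {
                if (seen.add(e.getKey())) {   // emit each key only once
                    out.add(e);
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> prepare = new HashMap<>();
        prepare.put("k1", "v1");
        Map<String, String> commit = new HashMap<>();
        commit.put("k1", "stale");            // shadowed by the prepare layer
        commit.put("k2", "v2");
        System.out.println(iterate(prepare, commit, new HashMap<>()).size()); // 2
    }
}
```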



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (STORM-2557) A bug in DisruptorQueue causing severe underestimation of queue arrival rates

2017-06-20 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim resolved STORM-2557.
-
   Resolution: Fixed
 Assignee: tangkailin
Fix Version/s: 1.1.1
   2.0.0

Thanks [~wendyshusband], I merged into master and 1.x branch.

> A bug in DisruptorQueue causing severe underestimation of queue arrival rates
> -
>
> Key: STORM-2557
> URL: https://issues.apache.org/jira/browse/STORM-2557
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: tangkailin
>Assignee: tangkailin
> Fix For: 2.0.0, 1.1.1
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Recently, we have been tuning the performance of our topology and deploying 
> theoretical performance models that rely heavily on the queue arrival-rate 
> metrics. We found a bug in DisruptorQueue that leads to severe 
> underestimation of the queue arrival rates. After further investigation, we 
> found that in the current implementation of DisruptorQueue, the arrival 
> rates are actually measured as the number of batches of tuples rather than 
> the actual number of tuples, resulting in significant underestimation of 
> the arrival rates.
> To be more specific, in the DisruptorQueue.publishDirectSingle() and 
> DisruptorQueue.publishDirect() functions, objects containing tuples are 
> published to the buffer and the metrics are notified by calling 
> _metric.notifyArrivals(1). This works fine when the object is simply a 
> wrapper of a single tuple. However, the object can also be an instance of 
> ArrayList or HashMap. In such cases, we should get the actual number of 
> tuples in the object and notify the metrics with the right value.
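A fix along these lines could be sketched like this. It is illustrative only: the class and method names are hypothetical, not Storm's actual DisruptorQueue code, and it simply counts the entries in a batch object before notifying the metrics.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Map;

// Hypothetical sketch: derive the real tuple count from the object being
// published, instead of always counting a batch as a single arrival.
public class ArrivalCount {
    static int tupleCount(Object obj) {
        if (obj instanceof Collection) {
            return ((Collection<?>) obj).size();   // e.g. an ArrayList batch
        }
        if (obj instanceof Map) {
            return ((Map<?, ?>) obj).size();       // e.g. a HashMap batch
        }
        return 1;                                  // single-tuple wrapper
    }

    public static void main(String[] args) {
        // A three-tuple batch should count as 3 arrivals, not 1.
        System.out.println(tupleCount(Arrays.asList("t1", "t2", "t3"))); // prints "3"
        System.out.println(tupleCount("single-tuple"));                  // prints "1"
    }
}
```

The queue would then call something like {{_metric.notifyArrivals(tupleCount(obj))}} instead of {{_metric.notifyArrivals(1)}}.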



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (STORM-2512) Change KafkaSpoutConfig in storm-kafka-client to make it work with flux

2017-06-20 Thread Priyank Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Priyank Shah resolved STORM-2512.
-
Resolution: Fixed

> Change KafkaSpoutConfig in storm-kafka-client to make it work with flux
> ---
>
> Key: STORM-2512
> URL: https://issues.apache.org/jira/browse/STORM-2512
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Reporter: Priyank Shah
>Assignee: Priyank Shah
> Fix For: 2.0.0, 1.x
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (STORM-2562) Use stronger key size for blow fish key generator and get rid of stack trace

2017-06-20 Thread Priyank Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056290#comment-16056290
 ] 

Priyank Shah commented on STORM-2562:
-

PR for master https://github.com/apache/storm/pull/2167
PR for 1.x https://github.com/apache/storm/pull/2168

> Use stronger key size for blow fish key generator and get rid of stack trace
> 
>
> Key: STORM-2562
> URL: https://issues.apache.org/jira/browse/STORM-2562
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Reporter: Priyank Shah
>Assignee: Priyank Shah
> Fix For: 2.0.0, 1.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (STORM-2561) Netty Client is closed but the Worker is already using that Client.

2017-06-20 Thread ryan.jin (JIRA)
ryan.jin created STORM-2561:
---

 Summary: Netty Client is closed but the Worker is already using 
that Client.
 Key: STORM-2561
 URL: https://issues.apache.org/jira/browse/STORM-2561
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Affects Versions: 0.10.1
Reporter: ryan.jin
 Attachments: ClientClosed.txt

The Worker's Netty Client has been closed, and the field "closing" in 
backtype.storm.messaging.netty.Client has been updated to "true".
{code:java}
@Override
public void close() {
    if (!closing) {
        LOG.info("closing Netty Client {}", dstAddressPrefixedName);
        context.removeClient(dstAddress.getHostName(), dstAddress.getPort());
        // Set closing to true to prevent any further reconnection attempts.
        closing = true;
        waitForPendingMessagesToBeSent();
        closeChannel();
    }
}
{code}

But the worker was still using that Netty Client. Because the field 'closing' 
is true, the Client will never reconnect to the target.

{code:java}
public void run(Timeout timeout) throws Exception {
    if (reconnectingAllowed()) {
        // ...
    } else {
        close();
        throw new RuntimeException("Giving up to scheduleConnect to " +
                dstAddressPrefixedName + " after " +
                connectionAttempts + " failed attempts. " +
                messagesLost.get() + " messages were lost");
    }
}
{code}

{code:java}
private boolean reconnectingAllowed() {
    return !closing;
}
{code}
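Taken together, these excerpts describe a one-way latch: once close() flips the flag, reconnection is permanently disallowed. A minimal self-contained model (an illustrative class, not Storm's actual Client):

```java
// Minimal model of the one-way "closing" latch: once tripped, it never
// resets, so reconnection stays disallowed for the life of the object.
public class ClosingLatch {
    private volatile boolean closing = false;

    public void close() {
        closing = true;                  // never reset anywhere
    }

    public boolean reconnectingAllowed() {
        return !closing;
    }

    public static void main(String[] args) {
        ClosingLatch latch = new ClosingLatch();
        System.out.println(latch.reconnectingAllowed()); // prints "true"
        latch.close();
        System.out.println(latch.reconnectingAllowed()); // prints "false"
    }
}
```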

So, how can I find out why the Worker closed the Netty Client while the Client 
was still in use?

The logs were uploaded to Attachments with the command "cat 
stt-jstorm-spout-4-28-1497837924-worker-6709.log | grep '10.24.41.10:6710' > 
/tmp/ClientClosed.txt".

Please let me know if there are any other useful logs I can provide.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (STORM-2153) New Metrics Reporting API

2017-06-20 Thread Srishty Agrawal (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056219#comment-16056219
 ] 

Srishty Agrawal commented on STORM-2153:


We are planning to upgrade our Storm version from {{v0.9.6}} to {{v1.1.0}} and 
were wondering whether there is any chance of the new metrics framework being 
backported to {{v1.1.0}} in the future?

> New Metrics Reporting API
> -
>
> Key: STORM-2153
> URL: https://issues.apache.org/jira/browse/STORM-2153
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
>
> This is a proposal to provide a new metrics reporting API based on [Coda 
> Hale's metrics library | http://metrics.dropwizard.io/3.1.0/] (AKA 
> Dropwizard/Yammer metrics).
> h2. Background
> In a [discussion on the dev@ mailing list | 
> http://mail-archives.apache.org/mod_mbox/storm-dev/201610.mbox/%3ccagx0urh85nfh0pbph11pmc1oof6htycjcxsxgwp2nnofukq...@mail.gmail.com%3e]
>   a number of community and PMC members recommended replacing Storm’s metrics 
> system with a new API as opposed to enhancing the existing metrics system. 
> Some of the objections to the existing metrics API include:
> # Metrics are reported as an untyped Java object, making it very difficult to 
> reason about how to report it (e.g. is it a gauge, a counter, etc.?)
> # It is difficult to determine if metrics coming into the consumer are 
> pre-aggregated or not.
> # Storm’s metrics collection occurs through a specialized bolt, which in 
> addition to potentially affecting system performance, complicates certain 
> types of aggregation when the parallelism of that bolt is greater than one.
> In the discussion on the developer mailing list, there is growing consensus 
> for replacing Storm’s metrics API with a new API based on Coda Hale’s metrics 
> library. This approach has the following benefits:
> # Coda Hale’s metrics library is very stable, performant, well thought out, 
> and widely adopted among open source projects (e.g. Kafka).
> # The metrics library provides many existing metric types: Meters, Gauges, 
> Counters, Histograms, and more.
> # The library has a pluggable “reporter” API for publishing metrics to 
> various systems, with existing implementations for: JMX, console, CSV, SLF4J, 
> Graphite, Ganglia.
> # Reporters are straightforward to implement, and can be reused by any 
> project that uses the metrics library (i.e. would have broader application 
> outside of Storm)
> As noted earlier, the metrics library supports pluggable reporters for 
> sending metrics data to other systems, and implementing a reporter is fairly 
> straightforward (an example reporter implementation can be found here). For 
> example if someone develops a reporter based on Coda Hale’s metrics, it could 
> not only be used for pushing Storm metrics, but also for any system that used 
> the metrics library, such as Kafka.
> h2. Scope of Effort
> The effort to implement a new metrics API for Storm can be broken down into 
> the following development areas:
> # Implement API for Storm's internal worker metrics: latencies, queue sizes, 
> capacity, etc.
> # Implement API for user defined, topology-specific metrics (exposed via the 
> {{org.apache.storm.task.TopologyContext}} class)
> # Implement API for storm daemons: nimbus, supervisor, etc.
> h2. Relationship to Existing Metrics
> This would be a new API that would not affect the existing metrics API. Upon 
> completion, the old metrics API would presumably be deprecated, but kept in 
> place for backward compatibility.
> Internally the current metrics API uses Storm bolts for the reporting 
> mechanism. The proposed metrics API would not depend on any of Storm's 
> messaging capabilities and instead use the [metrics library's built-in 
> reporter mechanism | 
> http://metrics.dropwizard.io/3.1.0/manual/core/#man-core-reporters]. This 
> would allow users to use existing {{Reporter}} implementations which are not 
> Storm-specific, and would simplify the process of collecting metrics. 
> Compared to Storm's {{IMetricCollector}} interface, implementing a reporter 
> for the metrics library is much more straightforward (an example can be found 
> [here | 
> https://github.com/dropwizard/metrics/blob/3.2-development/metrics-core/src/main/java/com/codahale/metrics/ConsoleReporter.java]).
> The new metrics capability would not use or affect the ZooKeeper-based 
> metrics used by Storm UI.
> h2. Relationship to JStorm Metrics
> [TBD]
> h2. Target Branches
> [TBD]
> h2. Performance Implications
> [TBD]
> h2. Metrics Namespaces
> [TBD]
> h2. Metrics Collected
> *Worker*
> || Namespace || Metric Type || Description ||
> *Nimbus*
> || Namespace || Metric Type || Description ||
> *Supervisor*
> || Namespace || Metric Type || Description ||
> h2. User-Defined Metrics
>