[
https://issues.apache.org/jira/browse/EAGLE-513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449104#comment-15449104
]
Senthilkumar commented on EAGLE-513:
------------------------------------
Added code in the hadoop_jmx_kafka.py file.
Sample Kafka messages:
{"timestamp": 1472563981729, "metric": "hadoop.namenode.nodeusage.median", "component": "namenode", "site": "apollo", "value": 99.989999999999995, "host": "namenode"}
{"timestamp": 1472563981729, "metric": "hadoop.namenode.nodeusage.min", "component": "namenode", "site": "apollo", "value": 1.2, "host": "namenode"}
{"timestamp": 1472563981729, "metric": "hadoop.namenode.nodeusage.stddev", "component": "namenode", "site": "apollo", "value": 19.920000000000002, "host": "namenode"}
{"timestamp": 1472563981729, "metric": "hadoop.namenode.nodeusage.max", "component": "namenode", "site": "apollo", "value": 100.04000000000001, "host": "namenode"}
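A minimal sketch of how such messages could be produced (the function name and input shape are assumptions for illustration, not taken from the actual Eagle collector): given per-DataNode DFS usage percentages, compute the median/min/max/stddev and build one JSON message per statistic in the shape shown above.

```python
# Hypothetical sketch (names assumed): build nodeusage metric messages
# matching the sample Kafka output above.
import json
import time
from statistics import median, stdev

def nodeusage_metrics(usages, site="apollo", host="namenode"):
    """Return Kafka-ready metric dicts for DataNode usage statistics."""
    ts = int(time.time() * 1000)  # epoch milliseconds, as in the samples
    stats = {
        "median": median(usages),
        "min": min(usages),
        "max": max(usages),
        "stddev": stdev(usages),
    }
    return [
        {
            "timestamp": ts,
            "metric": "hadoop.namenode.nodeusage.%s" % name,
            "component": "namenode",
            "site": site,
            "value": value,
            "host": host,
        }
        for name, value in stats.items()
    ]

# Each dict would then be serialized and sent to Kafka; here we just print.
for msg in nodeusage_metrics([1.2, 99.99, 100.04, 60.0]):
    print(json.dumps(msg))
```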
> Add DataNodes usages Metrics in JMX Collector
> ---------------------------------------------
>
> Key: EAGLE-513
> URL: https://issues.apache.org/jira/browse/EAGLE-513
> Project: Eagle
> Issue Type: Improvement
> Affects Versions: v0.5.0
> Reporter: Senthilkumar
> Assignee: Senthilkumar
>
> Instead of a capacity check, it is better to track the "DataNodes usages"
> median value: before the cluster reaches 90% DFS utilisation, we will
> already see more job failures caused by DataNode issues.
> So we should clear/balance the cluster effectively by tracking the DataNode
> median; alerting at a 95% median would be a good start.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)