[ https://issues.apache.org/jira/browse/HDDS-919?focusedWorklogId=206064&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-206064 ]
ASF GitHub Bot logged work on HDDS-919:
---------------------------------------

                Author: ASF GitHub Bot
            Created on: 28/Feb/19 21:20
            Start Date: 28/Feb/19 21:20
    Worklog Time Spent: 10m
      Work Description: bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-468442678

   Hi @elek
   While planning to commit this, I noticed test failures. Also, in MiniOzoneClusterImpl, configureHddsDatanodes() needs to set this port address to 0: when multiple datanodes start on localhost, starting the HttpServer will otherwise fail. I think this patch needs some more work; see the error below.
   ```
   2019-02-28 20:08:24,593 INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1641)) - Starting Web-server for hddsDatanode at: http://0.0.0.0:9882
   2019-02-28 20:08:24,594 ERROR ozone.HddsDatanodeService (HddsDatanodeService.java:start(189)) - HttpServer failed to start.
   java.io.FileNotFoundException: webapps/hddsDatanode not found in CLASSPATH
           at org.apache.hadoop.http.HttpServer2.getWebAppsPath(HttpServer2.java:1070)
           at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:536)
           at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:119)
           at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:433)
           at org.apache.hadoop.hdds.server.BaseHttpServer.<init>(BaseHttpServer.java:90)
           at org.apache.hadoop.ozone.HddsDatanodeHttpServer.<init>(HddsDatanodeHttpServer.java:34)
           at org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:186)
           at org.apache.hadoop.ozone.MiniOzoneClusterImpl.lambda$startHddsDatanodes$2(MiniOzoneClusterImpl.java:367)
           at java.util.ArrayList.forEach(ArrayList.java:1257)
           at org.apache.hadoop.ozone.MiniOzoneClusterImpl.startHddsDatanodes(MiniOzoneClusterImpl.java:367)
           at org.apache.hadoop.ozone.om.TestScmChillMode.init(TestScmChillMode.java:99)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
           at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
           at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
           at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
           at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
           at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
            Worklog Id: (was: 206064)
            Time Spent: 3.5h  (was: 3h 20m)

> Enable prometheus endpoints for Ozone datanodes
> -----------------------------------------------
>
>                 Key: HDDS-919
>                 URL: https://issues.apache.org/jira/browse/HDDS-919
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>              Labels: pull-request-available
>            Time Spent: 3.5h
>    Remaining Estimate: 0h
>
> HDDS-846 provides a new metrics endpoint which publishes the available Hadoop
> metrics in a Prometheus-friendly format via a new servlet.
> Unfortunately it is enabled only on the SCM/OM side. It would be great to
> enable it in the Ozone/HDDS datanodes as well, on the web server of the HDDS
> REST endpoint.
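Aside from the missing `webapps/hddsDatanode` resource, the comment above points out that a fixed HTTP port (9882 here) cannot be shared by several datanodes on one host, which is why `configureHddsDatanodes()` should set the port to 0 so the OS assigns a free ephemeral port to each instance. The following is a minimal, hypothetical stdlib-only sketch (not actual Ozone code) illustrating why port 0 avoids the collision:

```java
import java.io.IOException;
import java.net.BindException;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws IOException {
        // Two servers bound to the same fixed port collide: the second bind
        // throws BindException, analogous to a second datanode on port 9882.
        try (ServerSocket first = new ServerSocket(9882)) {
            try (ServerSocket second = new ServerSocket(9882)) {
                System.out.println("unexpected: both binds succeeded");
            } catch (BindException expected) {
                System.out.println("fixed port collides: " + expected.getMessage());
            }
        }

        // Binding to port 0 lets the OS pick a free ephemeral port for each
        // server, so any number of instances can coexist on one host.
        try (ServerSocket a = new ServerSocket(0);
             ServerSocket b = new ServerSocket(0)) {
            System.out.println("distinct ephemeral ports: "
                    + (a.getLocalPort() != b.getLocalPort())); // prints "true"
        }
    }
}
```

In a MiniOzoneCluster-style test setup, the same idea applies at the configuration level: set the datanode HTTP address to port 0 before starting each instance, then read back the actual bound port afterwards if the test needs it.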
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org