saintstack commented on a change in pull request #1814:
URL: https://github.com/apache/hbase/pull/1814#discussion_r432978537
##########
File path: hbase-common/src/main/resources/hbase-default.xml
##########
@@ -1727,6 +1727,15 @@ possible configurations would overwhelm and obscure the important.
ThreadPool.
</description>
</property>
+ <property>
+ <name>hbase.http.enable.prometheus.servlets</name>
+ <value>false</value>
+ <description>
+ Enable prometheus servlets /prom and /prom2 for prometheus based monitoring.
+ /prom is based on new HBase metrics API and all metrics are not exported for now.
+ /prom2 is based on the old hadoop2 metrics API and has all the metrics.
Review comment:
Is 'new metrics' our use of 'hadoop metrics', a move we made years ago?
Which released versions do metrics the old way? Thanks.
##########
File path: hbase-common/src/main/resources/hbase-default.xml
##########
@@ -1727,6 +1727,15 @@ possible configurations would overwhelm and obscure the important.
ThreadPool.
</description>
</property>
+ <property>
+ <name>hbase.http.enable.prometheus.servlets</name>
+ <value>false</value>
+ <description>
+ Enable prometheus servlets /prom and /prom2 for prometheus based monitoring.
+ /prom is based on new HBase metrics API and all metrics are not exported for now.
+ /prom2 is based on the old hadoop2 metrics API and has all the metrics.
Review comment:
Why not /prom and /prom-old? Why have the old one at all (maybe you want to
backport this)? With /prom and /prom2, users who don't read closely will think
they need to read from /prom2, since 2 is a later number than no number.
Also, can you say more on old vs. new metrics? I'm not clear. It would be good
to clean up the description in here too so it is clear to the casual reader.
Thanks for adding the flag.
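For reference, a minimal sketch of how the flag would be switched on in a
cluster's hbase-site.xml, assuming only the property name and default value
shown in the diff above (the /prom and /prom2 paths come from this patch and
are not in any released version):

  <property>
    <!-- Defaults to false per the patch above; set to true to opt in. -->
    <name>hbase.http.enable.prometheus.servlets</name>
    <value>true</value>
  </property>

With the flag on, a prometheus scraper would be pointed at the /prom (or
/prom2) path on the HBase info server; the host and port depend on the
deployment, so the scrape configuration is left out here.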