We created our own MetricContext for reading these metrics. Basically, your metrics context gets called every X seconds based on your hadoop-metrics.properties, so you can add whatever else you want in there. We were also concerned with HDFS usage, and while we couldn't get the context to pull that in specifically, we did use the Java File API to get the current used disk space for our various mounted drives. This has worked reasonably well, though it does not mirror HDFS usage exactly, and it is per-server rather than per-table.
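A minimal sketch of the Java File API approach described above. The mount paths here are placeholders; substitute your own DataNode data directories. Note this measures whole-filesystem usage, which is why it only approximates HDFS usage:

```java
import java.io.File;

public class DiskUsageProbe {

    /** Bytes currently used on the filesystem containing the given path. */
    static long usedBytes(File mount) {
        // getTotalSpace()/getUsableSpace() query the underlying filesystem,
        // so this reports whole-disk usage, not HDFS usage specifically.
        return mount.getTotalSpace() - mount.getUsableSpace();
    }

    public static void main(String[] args) {
        // Hypothetical mount points; replace with your actual data directories.
        String[] mounts = {"/"};
        for (String path : mounts) {
            File mount = new File(path);
            System.out.println(path + ": " + usedBytes(mount) + " bytes used");
        }
    }
}
```

In a custom metrics context you would call something like `usedBytes(...)` from the periodic update hook and push the value out alongside the other metrics.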
You can take a look at the GangliaContext for an example; in fact, our MetricContext extends GangliaContext, so we can still report to Ganglia while also reporting to our own status system. Just put it in a jar, put the jar on the classpath, and reference it in your hadoop-metrics.properties.

On Mon, May 7, 2012 at 9:37 AM, Doug Meil <[email protected]> wrote:

> You're right, it's not currently a metric.
>
> But there is an entry for the disk usage here...
>
> http://hbase.apache.org/book.html#trouble.namenode
>
>
> On 5/6/12 10:41 PM, "Otis Gospodnetic" <[email protected]> wrote:
>
> >Hello,
> >
> >Does HBase know how much space it is occupying on HDFS?
> >I looked at these two:
> >
> >http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/metrics/RegionServerMetrics.html
> >
> >http://hbase.apache.org/book/hbase_metrics.html
> >
> >But I couldn't find any mentions of such a metric.
> >
> >Is this just a matter of exposing this metric? Or...?
> >
> >Thanks,
> >Otis
> >----
> >Performance Monitoring for Solr / ElasticSearch / HBase -
> >http://sematext.com/spm
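For reference, wiring a custom context in via hadoop-metrics.properties looks something like the following sketch. The class name `com.example.StatusMetricsContext` is hypothetical; use whatever you named your GangliaContext subclass, and make sure its jar is on the daemon classpath:

```properties
# Hypothetical custom context class that extends GangliaContext.
hbase.class=com.example.StatusMetricsContext
# Polling period in seconds -- how often the context is asked to emit records.
hbase.period=10
```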
