Hey Joe,

Hadoop and HBase are pretty monitoring-tool agnostic. They expose a
number of metrics via JMX and a REST interface, which you can tie
into the monitoring tool of your choice. You can enable collection for
the REST service by editing
$HADOOP_HOME/conf/hadoop-metrics.properties and setting the *.class
properties, e.g.

# Configuration of the "dfs" context for /metrics
dfs.class=org.apache.hadoop.metrics.spi.NoEmitMetricsContext

# Configuration of the "mapred" context for /metrics
mapred.class=org.apache.hadoop.metrics.spi.NoEmitMetricsContext

That would configure both HDFS and MapReduce to make the metrics
available without writing them anywhere. There is also a
GangliaContext for integrating directly with Ganglia. Similar settings
exist for HBase.
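
For example, a Ganglia setup might look like the following (the
"ganglia-host" server name is a placeholder for your own gmond host;
8649 is the default gmond port, and the period is in seconds):

# Configuration of the "dfs" context for Ganglia
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
dfs.period=10
dfs.servers=ganglia-host:8649

# Configuration of the "mapred" context for Ganglia
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
mapred.period=10
mapred.servers=ganglia-host:8649

After restarting the daemons, the metrics should start showing up in
the Ganglia web UI alongside your host-level stats.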

-Joey

On Mon, Jul 25, 2011 at 8:09 AM, Joseph Coleman
<[email protected]> wrote:
> Greetings,
>
> I am relatively new to Hadoop, but we now have a 10-node cluster up and 
> running (just DFS for now) and will be expanding this rapidly as well as adding 
> HBase. I am looking to find out what people are using for monitoring Hadoop 
> currently. I want to be notified of a node failure, performance statistics, a 
> failed drive or service, etc. I was thinking of using Opsview and tying in 
> Ganglia. Thanks in advance.
>
> Joe
>
>



-- 
Joseph Echeverria
Cloudera, Inc.
443.305.9434
