Hi Steven,

Check out the links I sent:

http://wiki.apache.org/hadoop/GangliaMetrics
http://hadoop.apache.org/hbase/docs/current/metrics.html

And there is also the home page of Ganglia: http://ganglia.sourceforge.net/

- Andy

> From: steven zhuang <steven.zhuang.1...@gmail.com>
> Subject: Re: get the impact hbase brings to HDFS, datanode log exploded after
> we started HBase.
> To: hbase-user@hadoop.apache.org, apurt...@apache.org
> Date: Thursday, April 8, 2010, 6:49 PM
>
> thanks Andrew,
>
> On Fri, Apr 9, 2010 at 2:30 AM, Andrew Purtell <apurt...@apache.org> wrote:
>
> > My suggestions:
> >
> > Don't run below INFO logging level for performance reasons once you
> > have a cluster up and running.
>
> we are running the cluster at INFO level.
>
> > Instead of using DN logs, export HBase and HDFS metrics via Ganglia.
> >
> > http://wiki.apache.org/hadoop/GangliaMetrics
> > http://hadoop.apache.org/hbase/docs/current/metrics.html
>
> the Ganglia HBase metrics sounds good. Where can I get more information
> about it?
> thanks.
>
> > - Andy
> >
> > > On Thu, Apr 8, 2010 at 2:51 AM, steven zhuang
> > > <steven.zhuang.1...@gmail.com> wrote:
> > > ...
> > > At present, my idea is calculating the data IO quantity of both
> > > HDFS and HBase for a given day, and with the result we can have a
> > > rough estimate of the situation.
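
P.S. For reference, the Ganglia export described on the wiki page is driven by hadoop-metrics.properties in the conf directory (the dfs.* lines in Hadoop's conf, the hbase.* lines in HBase's). A minimal sketch — "ganglia-host" and port 8649 here are placeholders for your own gmond, not values from this thread:

```properties
# hadoop-metrics.properties — sketch only; adjust host/port to your gmond

# DataNode/NameNode (HDFS) metrics -> Ganglia
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
dfs.period=10
dfs.servers=ganglia-host:8649

# HBase metrics -> Ganglia (goes in $HBASE_HOME/conf)
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
hbase.period=10
hbase.servers=ganglia-host:8649
```

After restarting the daemons, the per-context metrics (bytes read/written, block reports, compactions, etc.) should show up in the Ganglia web UI, which gives you the HDFS-vs-HBase IO picture without grepping DataNode logs.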