You will have to wait for Chukwa 0.5 or use trunk + HBase to solve this issue. Chukwa 0.4 was designed to work only on Red Hat EL 5.0 and 5.1. Chukwa trunk has made things a lot more generic and easier to use. It is not yet feature complete from the end user's perspective, but we are working on it.
regards, Eric

On Sat, Nov 20, 2010 at 5:20 PM, ZJL <[email protected]> wrote:
> Thanks for Eric's reply, but I am puzzled on a couple of points.
> 1. ClientTrace: I copied hadoop-log4j.properties to HADOOP_HOME/conf and
> renamed it to log4j.properties when I set up Chukwa. After your
> instructions, I checked my deployment against admin.html again, but no error
> was found.
> 2. You said "For cluster_disk and disk, metrics are not working because the df
> output cannot be parsed on your system." How can I solve this problem?
>
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On
> Behalf Of Eric Yang
> Sent: November 21, 2010, 3:18 AM
> To: [email protected]
> Subject: Re: some fields cannot be scraped
>
> ClientTrace needs to be streamed over by modifying the log4j.properties
> file to have:
>
> # ClientTrace (Shuffle bytes)
> log4j.appender.MR_CLIENTTRACE=org.apache.hadoop.chukwa.inputtools.log4j.ChukwaDailyRollingFileAppender
> log4j.appender.MR_CLIENTTRACE.File=${hadoop.log.dir}/mr_clienttrace.log
> log4j.appender.MR_CLIENTTRACE.recordType=ClientTrace
> log4j.appender.MR_CLIENTTRACE.chukwaClientHostname=localhost
> log4j.appender.MR_CLIENTTRACE.chukwaClientPortNum=9093
> log4j.appender.MR_CLIENTTRACE.DatePattern=.yyyy-MM-dd
> log4j.appender.MR_CLIENTTRACE.layout=org.apache.log4j.PatternLayout
> log4j.appender.MR_CLIENTTRACE.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
> log4j.logger.org.apache.hadoop.mapred.TaskTracker.clienttrace=INFO,MR_CLIENTTRACE
> log4j.additivity.org.apache.hadoop.mapred.TaskTracker.clienttrace=false
>
> This is documented in the Chukwa administration guide:
>
> http://incubator.apache.org/chukwa/docs/r0.4.0/admin.html
>
> Verify you have completed the instructions under "Configuring Hadoop for monitoring".
>
> Hodjob, node_activity, hod_job_digest, and hod_machines were for
> supporting the legacy Hadoop On Demand. They should not be used.
> For cluster_disk and disk, metrics are not working because the df
> output cannot be parsed on your system.
>
> For mapreduce_fsm, mr_job, and mr_task, you need to have ClientTrace
> working first.
>
> user_job_summary requires configuring the JobInstrumentation class in
> Hadoop to use Chukwa's job instrumentation class. However, this was
> experimental code; I don't recommend using it, hence it is not
> documented.
>
> Hope this helps.
>
> regards,
> Eric
>
> On Sat, Nov 20, 2010 at 4:23 AM, ZJL <[email protected]> wrote:
>> Hi all:
>>
>> I have been running Chukwa for one month, but I found that some fields
>> have no data, and I don't know why.
>>
>> The fields are:
>>
>> ClientTrace-*-*, Hodjob-*-*, cluster_disk-*-*, dfs_fsnamesystem-*-*,
>> dfs_namenode-*-*, disk-*-*, filesystem_fsm-*-*, hdfs_usage,
>> hod_job_digest-*-*, hod_machine-*-*, mapreduce_fsm-*-*, mr_job,
>> mr_task-*-*, node_activity-*-*, user_job_summary-*-*,
>> user_util-*-*, util-*-*
>>
>> All other fields have data except those above.
>>
>> If anybody knows the cause, could you tell me? Thank you.
>>
>> BR zhu, junliang
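The thread never shows the df output that fails to parse. A common cause is df wrapping long device names onto a second line, which breaks a whitespace-split parser. The sketch below is not Chukwa's actual parser; it only checks whether your df output follows the standard six-column POSIX layout that a simple parser would expect:

```shell
# Sketch only, not Chukwa's parser. POSIX `df -P` guarantees one line per
# filesystem with six fields: Filesystem, 1024-blocks, Used, Available,
# Capacity, Mounted on.
df -P -k | awk 'NR > 1 { print $6, $5 }'

# If this prints anything, some rows do not split into six fields and a
# naive whitespace-split parser would fail on them:
df -P -k | awk 'NR > 1 && NF != 6'
```

If the second command does print rows, compare `df -k` (which may wrap lines) against `df -P -k` on your system; the `-P` flag exists precisely to make the output machine-parseable.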

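For the ClientTrace appender in the quoted config, log records are streamed to a local Chukwa agent on chukwaClientPortNum=9093. A quick way to confirm something is actually listening on that port (a generic Linux check, not a Chukwa command; assumes iproute2's `ss` is installed):

```shell
# Assumption: chukwaClientPortNum=9093, as in the log4j config above.
# Reports whether any process is listening on the agent port.
ss -ltn 2>/dev/null | grep -q ':9093 ' \
  && echo "agent port 9093 is listening" \
  || echo "nothing listening on 9093"
```

If nothing is listening, the appender's records are silently dropped, which would also leave the ClientTrace-derived fields (mapreduce_fsm, mr_job, mr_task) empty.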