We'd better make this modification on the Hadoop side, for example:

{
      final MetricsRegistry registry = new MetricsRegistry("shuffleInput");
      final MetricMutableCounterLong inputBytes =
          registry.newCounter("shuffle_input_bytes", "", 0L);
      final MetricMutableCounterInt failedFetches =
          registry.newCounter("shuffle_failed_fetches", "", 0);
      final MetricMutableCounterInt successFetches =
          registry.newCounter("shuffle_success_fetches", "", 0);
      private volatile int threadsBusy = 0;

      @SuppressWarnings("deprecation")
      ShuffleClientInstrumentation(JobConf conf) {
        registry.setContext("mapred")   // <<< fix point
                .tag("user", "User name", conf.getUser())
                .tag("jobName", "Job name", conf.getJobName())
                .tag("jobId", "Job ID", ReduceTask.this.getJobID().toString())
                .tag("taskId", "Task ID", getTaskID().toString())
                .tag("sessionId", "Session ID", conf.getSessionId());
      }
}
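To illustrate why the context matters: the metrics output directory name mentioned below is composed from the context and the record name, so leaving the context at "default" changes the directory. The following is a self-contained sketch of that naming convention only (the `dirName` helper is hypothetical, not Hadoop code):

```java
// Sketch (not Hadoop code): shows how the metrics context determines
// the "Hadoop_<context>_<record>" directory name discussed in this thread.
public class ContextNamingSketch {
    // Hypothetical helper mirroring the naming convention.
    static String dirName(String context, String recordName) {
        return "Hadoop_" + context + "_" + recordName;
    }

    public static void main(String[] args) {
        // Without setContext, the registry falls back to "default":
        System.out.println(dirName("default", "shuffleInput"));
        // After the proposed registry.setContext("mapred"):
        System.out.println(dirName("mapred", "shuffleInput"));
    }
}
```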

Thank you && Best Regards,
Grace (Huang Jie)


-----Original Message-----
From: Ariel Rabkin [mailto:[email protected]] 
Sent: Tuesday, July 17, 2012 10:33 AM
To: [email protected]
Subject: Re: ShuffleInput is not under the "mapred" context since Hadoop 1.0 metrics2

Aha.  Do you have a proposed fix for the problem?

--Ari

On Mon, Jul 16, 2012 at 10:16 PM, Huang, Jie <[email protected]> wrote:
> Hi all,
>
> While trying to upgrade to Hadoop 1.0.x, we found that the shuffleInput metric
> (for each reduce task) is not under the “mapred” context, per the following
> implementation in ReduceTask.java. Currently, the shuffleInput metric is under
> the “default” context, since the context is not set explicitly.
>
> Consequently, we cannot have “Hadoop_mapred_shuffleInput” directory under 
> repos/*/ folder.




-- 
Ari Rabkin [email protected]
Princeton Computer Science Department
