I am not sure whether you want to go through the APIs, but if so, you can use the following API from Counters.java:
/**
 * Find a counter given the group and the name.
 * @param group the name of the group
 * @param name the internal name of the counter
 * @return the counter for that name
 */
public synchronized Counter findCounter(String group, String name) { ... }

For an example of how to use it, look at FileSystemStatisticUpdater in
Task.java. There are two more such APIs you can find in Counters.java. A
short sketch of how this fits together follows below the quoted thread.

Thanks,
Raj

On Wed, Apr 25, 2012 at 12:01 PM, Devaraj k <devara...@huawei.com> wrote:
> Hi Qu,
>
> You can access the HDFS read/write bytes at the task or job level
> using the counters below:
>
> FileSystemCounters : HDFS_BYTES_READ
>                      HDFS_BYTES_WRITTEN
>
> These can be accessed through the UI or the API.
>
> Thanks
> Devaraj
>
> ________________________________________
> From: George Datskos [george.dats...@jp.fujitsu.com]
> Sent: Wednesday, April 25, 2012 6:36 AM
> To: mapreduce-user@hadoop.apache.org
> Subject: Re: How to get the HDFS I/O information
>
> Qu,
>
> Every job has a history file that is, by default, stored under
> $HADOOP_LOG_DIR/history. These "job history" files list the amount of
> HDFS read/write (and lots of other things) for every task.
>
> On 2012/04/25 7:25, Qu Chen wrote:
> > Let me add, I'd like to do this periodically to gather some
> > performance profile information.
> >
> > On Tue, Apr 24, 2012 at 5:47 PM, Qu Chen <chenqu...@gmail.com
> > <mailto:chenqu...@gmail.com>> wrote:
> >
> >     I am trying to gather the info regarding the amount of HDFS
> >     read/write for each task in a given map-reduce job. How can I
> >     do that?
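Since the original question was about gathering this programmatically,
here is a minimal sketch of pulling the job-level HDFS counters for a
job, written against the Hadoop 1.x "mapred" client API. The class name
is arbitrary and the job ID comes in as a placeholder argument:

    import org.apache.hadoop.mapred.Counters;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.RunningJob;

    public class JobHdfsIoSketch {
      public static void main(String[] args) throws Exception {
        // Connects to the JobTracker named in the local Hadoop config.
        JobClient client = new JobClient(new JobConf());

        // Look up the job by ID, e.g. "job_201204251201_0001" (placeholder).
        // May be null if the job has been retired from the JobTracker.
        RunningJob job = client.getJob(JobID.forName(args[0]));
        Counters counters = job.getCounters();

        // The same findCounter(group, name) API quoted above; the group
        // and counter names match what the JobTracker UI displays.
        long hdfsRead = counters
            .findCounter("FileSystemCounters", "HDFS_BYTES_READ").getValue();
        long hdfsWritten = counters
            .findCounter("FileSystemCounters", "HDFS_BYTES_WRITTEN").getValue();

        System.out.println("HDFS_BYTES_READ    = " + hdfsRead);
        System.out.println("HDFS_BYTES_WRITTEN = " + hdfsWritten);
      }
    }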
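And since the question asked for the numbers per task: the same counters
are also available per task through the task reports. Again only a sketch
against the 1.x mapred API (error handling omitted, job ID a placeholder):

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.TaskReport;

    public class TaskHdfsIoSketch {
      public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        JobID id = JobID.forName(args[0]);

        // One report per map task; getReduceTaskReports(id) works the
        // same way for the reduce side.
        for (TaskReport report : client.getMapTaskReports(id)) {
          long hdfsRead = report.getCounters()
              .findCounter("FileSystemCounters", "HDFS_BYTES_READ").getValue();
          System.out.println(report.getTaskID() + "  HDFS_BYTES_READ=" + hdfsRead);
        }
      }
    }

Running something like this periodically, as you mentioned, should give
you the profile data without having to parse the job history files.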