Hi Qu,

     You can get the HDFS read/write byte counts at the task or job
level from the following counters:

FileSystemCounters :  HDFS_BYTES_READ
                      HDFS_BYTES_WRITTEN

(FILE_BYTES_READ and FILE_BYTES_WRITTEN in the same group track local
file system I/O, not HDFS.)

These can be accessed through the JobTracker web UI or programmatically
via the API; a sketch of the latter follows.
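Here is a minimal sketch against the old org.apache.hadoop.mapred
client API (Hadoop 1.x era); the job id is just a placeholder, so
substitute your own:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class HdfsIoCounters {
  public static void main(String[] args) throws Exception {
    // Pass the job id on the command line, e.g. job_201204250001_0001
    JobClient client = new JobClient(new JobConf(new Configuration()));
    RunningJob job = client.getJob(JobID.forName(args[0]));
    if (job == null) {
      System.err.println("Job not found: " + args[0]);
      return;
    }

    // Job-level totals from the FileSystemCounters group.
    Counters counters = job.getCounters();
    long read = counters.findCounter(
        "FileSystemCounters", "HDFS_BYTES_READ").getValue();
    long written = counters.findCounter(
        "FileSystemCounters", "HDFS_BYTES_WRITTEN").getValue();

    System.out.println("HDFS bytes read:    " + read);
    System.out.println("HDFS bytes written: " + written);
  }
}

For per-task numbers, JobClient#getMapTaskReports(JobID) and
JobClient#getReduceTaskReports(JobID) return TaskReport objects whose
getCounters() carry the same counter group.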



Thanks
Devaraj

________________________________________
From: George Datskos [george.dats...@jp.fujitsu.com]
Sent: Wednesday, April 25, 2012 6:36 AM
To: mapreduce-user@hadoop.apache.org
Subject: Re: How to get the HDFS I/O information

Qu,

Every job has a history file that is, by default, stored under
$HADOOP_LOG_DIR/history.  These "job history" files record the HDFS
bytes read/written (along with many other counters) for every task.
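If it helps, the sketch below (plain java.io, nothing Hadoop-specific)
locates a job's history file under that directory and prints the lines
that carry the HDFS counters. The directory layout and the job id
format are assumptions, so adjust them for your cluster:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;

public class HistoryGrep {
  public static void main(String[] args) throws Exception {
    // Assumes the default layout: $HADOOP_LOG_DIR/history
    File historyDir = new File(System.getenv("HADOOP_LOG_DIR"), "history");
    String jobId = args[0];  // e.g. job_201204250001_0001

    for (File f : historyDir.listFiles()) {
      if (!f.getName().contains(jobId)) {
        continue;
      }
      BufferedReader in = new BufferedReader(new FileReader(f));
      try {
        String line;
        while ((line = in.readLine()) != null) {
          // The counter names appear verbatim in the history file.
          if (line.contains("HDFS_BYTES_READ")
              || line.contains("HDFS_BYTES_WRITTEN")) {
            System.out.println(line);
          }
        }
      } finally {
        in.close();
      }
    }
  }
}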

On 2012/04/25 7:25, Qu Chen wrote:
> Let me add, I'd like to do this periodically to gather some
> performance profile information.
>
> On Tue, Apr 24, 2012 at 5:47 PM, Qu Chen <chenqu...@gmail.com
> <mailto:chenqu...@gmail.com>> wrote:
>
>     I am trying to gather information on the amount of HDFS
>     read/write done by each task in a given MapReduce job. How can I do that?
>
>
