Qu,

Every job has a history file that is stored, by default, under $HADOOP_LOG_DIR/history. These "job history" files record the HDFS bytes read and written (along with many other counters) for every task.
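In 2012-era Hadoop these history files are plain text, with each line consisting of a record type followed by KEY="value" attribute pairs (you can also render them with `hadoop job -history <output-dir>`). As a rough illustration, a sketch of a line parser follows; the exact field names and the COUNTERS encoding vary by Hadoop version, and the sample line below is simplified and hypothetical, not copied from a real file.

```python
import re

# Attribute pairs in the old job-history format look like KEY="value",
# where the value may contain backslash-escaped characters.
ATTR_RE = re.compile(r'(\w+)="((?:[^"\\]|\\.)*)"')

def parse_history_line(line):
    """Split one job-history line into (record_type, {key: value})."""
    record_type, _, rest = line.partition(" ")
    attrs = dict(ATTR_RE.findall(rest))
    return record_type, attrs

# Hypothetical, simplified sample line (real COUNTERS values are
# more heavily encoded than this):
sample = ('Task TASKID="task_201204251000_0001_m_000000" TASK_TYPE="MAP" '
          'COUNTERS="HDFS_BYTES_READ:1048576,HDFS_BYTES_WRITTEN:0"')
rtype, attrs = parse_history_line(sample)
```

Filtering the parsed records by TASK_TYPE and pulling the HDFS byte counters out of the COUNTERS field would then give the per-task read/write figures; run periodically over the history directory, that covers the profiling use case described below.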

On 2012/04/25 7:25, Qu Chen wrote:
Let me add, I'd like to do this periodically to gather some performance profile information.

On Tue, Apr 24, 2012 at 5:47 PM, Qu Chen <chenqu...@gmail.com> wrote:

    I am trying to gather the info regarding the amount of HDFS
    read/write for each task in a given map-reduce job. How can I do that?




