Gang,

In the log/history directory, two files are created for each job: an
XML file that records the job configuration, and a log file whose
entries describe the individual map and reduce tasks of that job
(which nodes they ran on, duration, input size, etc.).

Hadoop creates a single log/history directory and stores the files for
all executed jobs there.
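
If you want to script the extraction of the node names, something like
the sketch below could work. It is only a minimal example: it assumes
the plain-text history format where each line is an event name followed
by KEY="VALUE" pairs, and the HOSTNAME field name is an assumption, so
check which keys your Hadoop version actually writes before relying on it.

  #!/usr/bin/env python
  # Minimal sketch: list the nodes mentioned in a job history file.
  # Assumes each line is an event type followed by KEY="VALUE" pairs
  # (e.g. MapAttempt ... HOSTNAME="..."). The HOSTNAME key is an
  # assumption; inspect your own history file for the exact field names.
  import re
  import sys

  PAIR = re.compile(r'(\w+)="([^"]*)"')

  def nodes_from_history(path):
      nodes = set()
      with open(path) as f:
          for line in f:
              fields = dict(PAIR.findall(line))
              host = fields.get("HOSTNAME")
              if host:
                  # The host is sometimes recorded as /rack/host;
                  # keep only the trailing host component.
                  nodes.add(host.split("/")[-1])
      return nodes

  if __name__ == "__main__":
      for node in sorted(nodes_from_history(sys.argv[1])):
          print(node)

Run it against the history file under the job's output directory and it
should print the distinct hosts the tasks ran on.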

Abhishek

On Tue, Mar 30, 2010 at 8:50 PM, Gang Luo <[email protected]> wrote:
> Hi all,
> I find there is a directory "_log/history/..." under the output directory of
> a mapreduce job. Is the file in that directory a log file? Is the information
> there sufficient to allow me to figure out what nodes the job runs on?
> Besides, not every job has such a directory. Are there settings
> controlling this? Or are there other ways to get the nodes my job runs on?
>
> Thanks,
> -Gang
>