Thanks, Abhishek, but I notice that some of my job outputs have no such _log directory. Actually, I run a script which launches 100+ jobs, and I didn't find the log for any of their outputs. Any ideas?
Thanks,
-Gang

----- Original Message -----
From: abhishek sharma <[email protected]>
To: [email protected]
Sent: Wednesday, 2010/3/31, 1:15:48 PM
Subject: Re: log

Gang,

In the log/history directory, two files are created for each job: one XML file that records the configuration, and another file that holds the log entries. These entries contain all the information about the individual map and reduce tasks belonging to a job: which nodes they ran on, duration, input size, etc. Hadoop creates a single log/history directory, and the files for all executed jobs are stored there.

Abhishek

On Tue, Mar 30, 2010 at 8:50 PM, Gang Luo <[email protected]> wrote:
> Hi all,
> I find there is a directory "_log/history/..." under the output directory of
> a mapreduce job. Is the file in that directory a log file? Is the information
> there sufficient to allow me to figure out which nodes the job ran on?
> Besides, not every job has such a directory. Are there settings
> controlling this? Or is there another way to get the nodes my job runs on?
>
> Thanks,
> -Gang
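For what it's worth, since the history file Abhishek describes is a plain-text file of space-separated KEY="value" records, the node names can be pulled out with a small script. The sketch below is a minimal example, assuming the pre-YARN (0.20-era) history format where task attempts carry a `TRACKER_NAME="..."` attribute; the exact attribute name and sample lines are assumptions, so check one of your own history files first.

```python
import re

def nodes_from_history(text):
    """Return the distinct tracker/node names mentioned in a Hadoop
    job history file (old KEY="value" line format, assumed here).

    Each MapAttempt/ReduceAttempt line is assumed to include an
    attribute like TRACKER_NAME="tracker_node01".
    """
    nodes = set()
    for line in text.splitlines():
        # Grab the quoted value of the TRACKER_NAME attribute, if present.
        m = re.search(r'TRACKER_NAME="([^"]+)"', line)
        if m:
            nodes.add(m.group(1))
    return sorted(nodes)
```

You would run this over the second (non-XML) file in `_log/history/` for a given job; the set of tracker names is the set of nodes the job's tasks ran on.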
