I was able to overcome the permission exception in the log by creating an HDFS tmp folder (hadoop fs -mkdir /tmp) and opening it up to the world (hadoop fs -chmod a+rwx /tmp). That got rid of the exception, but I am still unable to connect to port 50030 to see M/R status. More ideas?

Even though the exception was missing from the logs of one server in the cluster, I looked on another server and found essentially the same permission problem:

2013-04-26 13:34:56,462 FATAL org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer: Error starting JobHistoryServer
org.apache.hadoop.yarn.YarnException: Error creating done directory: [hdfs://devubuntu05:9000/tmp/hadoop-yarn/staging/history/done]
        at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.init(HistoryFileManager.java:424)
        at org.apache.hadoop.mapreduce.v2.hs.JobHistory.init(JobHistory.java:87)
        at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
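One way to clear this particular error, sketched here on the same pattern as the /tmp fix above, is to pre-create the "done" directory named in the log and open it up. The directory path comes straight from the log message; the namenode URI (devubuntu05:9000) is whatever fs.defaultFS points at on your cluster, and world-writable permissions are a quick diagnostic step, not a production setting. Note that the -p flag to -mkdir may not exist on very old Hadoop releases.

```shell
# Pre-create the JobHistoryServer "done" directory named in the log above,
# including parent directories (-p), using the path from the error message.
hadoop fs -mkdir -p /tmp/hadoop-yarn/staging/history/done

# Open the whole history staging tree up, mirroring the earlier /tmp fix.
# For production you would instead chown it to the history server's user.
hadoop fs -chmod -R a+rwx /tmp/hadoop-yarn/staging/history
```

After this, restart the JobHistoryServer and check its log for the same FATAL line.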

. . . . .

On Fri, Apr 26, 2013 at 10:37 AM, Rishi Yadav wrote:

Do you see "retired jobs" on the job tracker page? There is also a "job tracker history" link at the bottom of the page.

Something like this: http://nn.zettabyte.com:50030/jobtracker.jsp
Thanks and Regards,
Rishi Yadav



On Fri, Apr 26, 2013 at 7:36 AM, [email protected] wrote:
When I submit a simple "Hello World" M/R job like WordCount, it takes less than 5 seconds. The texts show numerous methods for monitoring M/R jobs while they are running, but I have yet to see any that show statistics about a job after it has completed. Obviously, simple jobs that finish quickly don't leave time to fire up a web page or monitoring tool to watch the job progress through the JobTracker and TaskTracker, or to see which node it ran on. Any suggestions on how I could see this kind of data *after* a job has completed?
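For pulling statistics after the fact from the command line, a sketch follows, assuming a Hadoop 1.x-style setup (matching the port-50030 JobTracker mentioned elsewhere in this thread); the job ID and output path below are placeholders, not values from this cluster:

```shell
# List all jobs the JobTracker knows about, including completed ones.
hadoop job -list all

# Print state and counters for a finished job by its ID
# (the job ID here is a placeholder).
hadoop job -status job_201304261036_0001

# Dump the full history (tasks, attempts, timings) from the job's
# output directory, where Hadoop 1.x writes a _logs/history folder
# (the output path here is a placeholder).
hadoop job -history /user/hadoop/wordcount-output
```

The -history output in particular survives after the job ends, so it works for jobs that finish too quickly to watch live.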
