I ran into this problem also. From your logs, it looks like you haven't set mapred.system.dir to a fixed value: http://wiki.apache.org/hadoop/FAQ#14.
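For example, you can pin it to a fixed HDFS path in hadoop-site.xml on the submit machine and all nodes (the /hadoop/mapred/system path below is just an example; pick any shared location that is not under the per-user hadoop.tmp.dir):

```
<property>
  <name>mapred.system.dir</name>
  <value>/hadoop/mapred/system</value>
</property>
```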
The impact is that your job control files are written from your submit machine into HDFS at /tmp/hadoop-user2/mapred/system, while your datanodes are looking for the info at /tmp/hadoop-user1/mapred/system.

Norbert

On Fri, May 30, 2008 at 1:46 PM, Rui Shi <[EMAIL PROTECTED]> wrote:
> Hi Ted,
>
> The one I am using is 0.16.3.
>
> Thanks,
>
> Rui
>
>
> ----- Original Message ----
> From: Ted Dunning <[EMAIL PROTECTED]>
> To: core-user@hadoop.apache.org
> Sent: Friday, May 30, 2008 8:42:44 AM
> Subject: Re: Error when running job as a different user
>
> What version of hadoop are you running?
>
> On Fri, May 30, 2008 at 3:44 AM, Steve Loughran <[EMAIL PROTECTED]> wrote:
>
>> Rui Shi wrote:
>>
>>> Hi,
>>>
>>> After I start the cluster as user1, I submit a job as a different user
>>> (user2) and get the following error. It seems that the job submitter still
>>> tries to act as user1 and looks for job.xml under /tmp/hadoop-user1, which
>>> does not exist. Is anything wrong here?
>>>
>>> Exception in thread "main" org.apache.hadoop.ipc.RemoteException:
>>> java.io.IOException:
>>> /tmp/hadoop-user1/mapred/system/job_200805292307_0001/job.xml: No such file
>>> or directory
>>
>> I've seen this error when the client's Hadoop XML configuration files weren't
>> right and the client was looking in the wrong place for job status files.
>> Check your XML.
>
> --
> ted