Hi all,

We just moved to the 0.14.0 distribution of hadoop. Until now, we were
running the 0.10.1 one.

Important point: the client submitting jobs is on a totally different
machine from the master and the slaves, and it also runs as a
different user.

The main problem is the parameter 'hadoop.tmp.dir', whose default value
is '/tmp/hadoop-${user.name}', which means it is based on the user name.
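For reference, this is how the property is declared in hadoop-default.xml (sketched from memory of the 0.14 defaults, so the description text may differ slightly in your copy):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>
```

The `${user.name}` variable is expanded on whichever machine reads the configuration, which is exactly why the client and the jobtracker end up with different paths.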

Step 1: The client (user A) submits a job using the JobClient
class, so the job jar and job files are uploaded to the DFS into the
directory /tmp/hadoop-A/mapred/system/job-id

Step 2: The server (jobtracker, user B) receives the job submission
and tries to read the job files from the directory
/tmp/hadoop-B/mapred/system/job-id

You see my problem?

This didn't happen before because, when submitting a job, the job client
used to send the full path of the job. Now, only the job id is
submitted and appended to 'hadoop.tmp.dir'.

Of course, I can set 'hadoop.tmp.dir' to the same value on both the
server side and the client side, but that is not what I want. My
question is: why can a client change the parameters used by a server?

Do you have any suggestions to solve my issue?
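In case it helps the discussion: one workaround I am considering is to stop relying on 'hadoop.tmp.dir' for the job system directory altogether, by setting 'mapred.system.dir' (which defaults to ${hadoop.tmp.dir}/mapred/system) to an absolute path in hadoop-site.xml on both the client and the jobtracker. A sketch, assuming a shared DFS path of /hadoop/mapred/system (the path itself is just an example):

```xml
<property>
  <name>mapred.system.dir</name>
  <value>/hadoop/mapred/system</value>
  <description>Absolute DFS path for job files, independent of
  the submitting user's name.</description>
</property>
```

With this, both users A and B would resolve the same directory regardless of ${user.name}, but I would still prefer the server's configuration to be authoritative.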

Thanks for any help.

Cheers,
Thomas.
