Hello,

On a cluster where I run Hadoop, it seems that the temp directory created by 
Hadoop (in our case, /tmp/hadoop/) gets its permissions set to "drwxrwxr-x" 
and is owned by the first user who runs a job after the Hadoop services are 
started. This causes file permission problems when other users try to run jobs.
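
I assume this location comes from the hadoop.tmp.dir property; if so, the 
relevant part of our hadoop-site.xml would look something like the following 
(a sketch of the shared-path setting I believe we have, not a copy of our 
actual file):

     <property>
       <name>hadoop.tmp.dir</name>
       <value>/tmp/hadoop</value>
     </property>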

For example, user1:user1 starts Hadoop using ./start-all.sh. Then user2:user2 
runs a Hadoop job. Temp directories (/tmp/hadoop/) are now created on all nodes 
in the cluster, owned by user2 with permissions "drwxrwxr-x". Now user3:user3 
tries to run a job and gets the following exception:

java.io.IOException: Permission denied
     at java.io.UnixFileSystem.createFileExclusively(Native Method)
     at java.io.File.checkAndCreate(File.java:1704)
     at java.io.File.createTempFile(File.java:1793)
     at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
     at org.apache.hadoop.mapred.JobShell.run(JobShell.java:194)
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
     at org.apache.hadoop.mapred.JobShell.main(JobShell.java:220)
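
For what it's worth, after user2's job the directory on each node ends up 
looking like this (an illustrative listing; link count, size, and date 
omitted):

     $ ls -ld /tmp/hadoop
     drwxrwxr-x user2 user2 /tmp/hadoop

Since the mode is only group-writable and user3 is presumably not in user2's 
group, user3 cannot create files under it.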

Why does this happen, and how can we fix it? Our current stopgap measure is 
to run jobs only as the user that started Hadoop. That is, in our example, after 
user1 starts Hadoop, user1 runs the jobs. Everything seems to work fine then.
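
Would pointing hadoop.tmp.dir at a per-user path, the way hadoop-default.xml 
does with ${user.name}, be the right fix? Something like:

     <property>
       <name>hadoop.tmp.dir</name>
       <value>/tmp/hadoop-${user.name}</value>
     </property>

Or should we instead open up the shared directory on every node, e.g. 
chmod 1777 /tmp/hadoop (world-writable with the sticky bit, like /tmp itself)? 
Both are guesses on my part.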

Thanks,
Joman Chu
