Hi Joman,

The temp directory we are talking about here is a directory in the local file system (i.e. Unix, in your case). There is a config property, hadoop.tmp.dir (see hadoop-default.xml), which specifies the path of the temp directory. Before you start the cluster, you should set this property and chmod the temp directory so that all users have permission to create files under it.
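For example, a minimal sketch (the path /data/hadoop-tmp is just an illustration, and mode 1777 -- world-writable with the sticky bit, like /tmp itself -- is one reasonable choice). In conf/hadoop-site.xml:

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/data/hadoop-tmp</value>
    </property>

Then, on every node, before starting the daemons:

    mkdir -p /data/hadoop-tmp
    chmod 1777 /data/hadoop-tmp   # sticky bit: anyone may create files, only owners may delete them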
Hope it helps.

Nicholas Sze

----- Original Message ----
> From: Joman Chu <[EMAIL PROTECTED]>
> To: [email protected]
> Sent: Wednesday, July 9, 2008 4:15:39 AM
> Subject: Re: File permissions issue
>
> So we can fix this issue by putting all three users in a common group? We did
> that after we encountered the issue, but we still got the errors. Note that we
> had not restarted Hadoop, so the permissions were still as described earlier.
> Should we have restarted Hadoop after the grouping?
>
> On Wed, July 9, 2008 2:05 am, heyongqiang said:
> > Because in your permission set, the "other" role cannot write the temp
> > directory, and user3 is not in the same group as user2.
> >
> > heyongqiang 2008-07-09
> >
> > From: Joman Chu
> > Sent: 2008-07-09 13:06:51
> > To: [email protected]
> > Cc:
> > Subject: File permissions issue
> >
> > Hello,
> >
> > On a cluster where I run Hadoop, it seems that the temp directory created
> > by Hadoop (in our case, /tmp/hadoop/) gets its permissions set to
> > "drwxrwxr-x", owned by the first person that runs a job after the Hadoop
> > services are started. This causes file permission problems as we try to
> > run jobs.
> >
> > For example, user1:user1 starts Hadoop using ./start-all.sh. Then
> > user2:user2 runs a Hadoop job. Temp directories (/tmp/hadoop/) are now
> > created on all nodes in the cluster, owned by user2 with permissions
> > "drwxrwxr-x". Now user3:user3 tries to run a job and gets the following
> > exception:
> >
> > java.io.IOException: Permission denied
> >     at java.io.UnixFileSystem.createFileExclusively(Native Method)
> >     at java.io.File.checkAndCreate(File.java:1704)
> >     at java.io.File.createTempFile(File.java:1793)
> >     at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
> >     at org.apache.hadoop.mapred.JobShell.run(JobShell.java:194)
> >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> >     at org.apache.hadoop.mapred.JobShell.main(JobShell.java:220)
> >
> > Why does this happen, and how can we fix it? Our current stopgap
> > measure is to run jobs as the user that started Hadoop. That is, in our
> > example, after user1 starts Hadoop, user1 runs a job. Everything seems to
> > work fine then.
> >
> > Thanks,
> > Joman Chu
>
> --
> Joman Chu
> AIM: ARcanUSNUMquam
> IRC: irc.liquid-silver.net
