[ https://issues.apache.org/jira/browse/HADOOP-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12556030#action_12556030 ]
Doug Cutting commented on HADOOP-2025:
--------------------------------------

> the client side may not have permission to create the working dir.

Then that would throw an exception. Or we could explicitly check for that, as
you suggest, if you think that would yield a more user-friendly exception.

> This would require admins to religiously create home dirs (and trash-bins)
> for all their users.

Home dirs, yes, but if the home dir exists, can't a user create his own trash
there on demand?

> have the job framework check that the working dir exists before starting the
> job/task.

Do mapred jobs require that the working dir exist? They require that the input
exists and is readable, and that the output location is writable. Now that we
have permissions these checks could be improved: FileInputFormat#validateInput()
could check readability, and OutputFormatBase#checkOutputSpecs() could check
that the parent of the output directory is writable. That's probably a separate
issue, no?

> Instantiating a FileSystem object should guarantee the existence of the working directory
> -----------------------------------------------------------------------------------------
>
>                 Key: HADOOP-2025
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2025
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 0.14.1
>            Reporter: Sameer Paranjpye
>            Assignee: Chris Douglas
>             Fix For: 0.16.0
>
>         Attachments: 2025-1.patch, 2025.patch
>
>
> Issues like HADOOP-1891 and HADOOP-1916 illustrate the need for this behavior.
>
> In HADOOP-1916 the problem is that the default working directory for a user
> on HDFS, '/user/<username>', does not exist. This results in the command
> 'hadoop dfs -copyFromLocal foo .' creating a *file* called '/user/<username>'
> and copying the contents of the file 'foo' into it.
>
> HADOOP-1891 is basically the same problem. The problem that Olga observed was
> that copying a file to '.' on HDFS when her 'home directory' did not exist
> resulted in the creation of a file at the path of her home directory. The
> problem is incorrectly filed as a bug in the Path class. The behavior of Path
> is correct; as Doug points out, it is perfectly reasonable for Path(".") to
> convert to an empty path. When this empty path is resolved in HDFS or any
> other filesystem, the resolution to '/user/<username>' is also correct (at
> least for HDFS). The problem IMO is that the existence of the working
> directory is not guaranteed.
>
> When I log in to a machine my default working directory is '/home/sameerp',
> and filesystem operations that I execute with relative paths all work
> correctly because this directory exists. My home directory lives on a filer;
> in the event of it being unmountable, the default working directory I get is
> '/', which is also guaranteed to exist.
>
> In the context of Hadoop, instantiating a FileSystem object is the analogue
> of logging in and should result in a working directory whose existence has
> been validated. In the case of HDFS this should be '/user/<username>', or '/'
> if the directory does not exist.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
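A minimal sketch of the fallback behaviour proposed in the description, using
only the public FileSystem API (FileSystem.get, getHomeDirectory, exists,
getFileStatus, setWorkingDirectory); the wrapper class and method names are
hypothetical illustrations, not part of the attached patches:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    /**
     * Hypothetical helper illustrating the proposed behaviour: after
     * obtaining a FileSystem, use the user's home directory
     * ('/user/<username>' on HDFS) as the working directory only if it
     * actually exists as a directory, and fall back to '/' otherwise, so
     * relative paths never resolve against a nonexistent directory.
     */
    public class ValidatedWorkingDir {

      public static FileSystem get(Configuration conf) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path home = fs.getHomeDirectory();            // e.g. /user/<username>
        if (fs.exists(home) && fs.getFileStatus(home).isDir()) {
          fs.setWorkingDirectory(home);               // normal case
        } else {
          fs.setWorkingDirectory(new Path("/"));      // '/' always exists
        }
        return fs;
      }
    }

A corresponding check on the parent of the output directory, as suggested above
for OutputFormatBase#checkOutputSpecs(), would follow the same pattern but is
left to the separate issue mentioned in the comment.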