[ https://issues.apache.org/jira/browse/HADOOP-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12544008 ]
Doug Cutting commented on HADOOP-2025:
--------------------------------------

> I left it public so the Trash would default to /user/$user instead of the cwd [ ... ]

We should uniformly resolve relative paths to the connected directory, no?

> it also leaves open the possibility of applications (like FsShell) creating the default
> working directory if it doesn't exist or permitting operations like "dfs -mkdirs ."

Let's revisit that then. We should err on the conservative side when it comes to adding new public methods. And if we later want to auto-create working directories, we can do that in getDefaultWorkingDir() itself, if we want to keep that feature FileSystem-specific, or in FileSystem#get(), if we want to make it universal. So I don't see that making getDefaultWorkingDir() public is required to implement this.

> Further, the Trash is initialized before the FileSystem [...] Things are similar in
> JobTracker initialization, where we encounter an infinite loop [...]

Can you please elaborate on this? What are the methods in the loop? Are there other ways to break it besides having folks use getDefaultWorkingDir() to resolve relative paths? And why are Trash and JobTracker any different from other uses of FileSystem that must resolve relative paths?

> Instantiating a FileSystem object should guarantee the existence of the working directory
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-2025
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2025
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 0.14.1
>            Reporter: Sameer Paranjpye
>            Assignee: Chris Douglas
>             Fix For: 0.16.0
>
>         Attachments: 2025-1.patch, 2025.patch
>
>
> Issues like HADOOP-1891 and HADOOP-1916 illustrate the need for this behavior.
>
> In HADOOP-1916 the problem is that the default working directory for a user on HDFS,
> '/user/<username>', does not exist. As a result, the command 'hadoop dfs -copyFromLocal foo .'
> creates a *file* called /user/<username> and copies the contents of the file 'foo' into it.
>
> HADOOP-1891 is basically the same problem. What Olga observed was that copying a file to '.'
> on HDFS when her 'home directory' did not exist resulted in the creation of a file whose path
> was her home directory. The problem is incorrectly filed as a bug in the Path class. The
> behavior of Path is correct; as Doug points out, it is perfectly reasonable for Path(".") to
> convert to an empty path. When this empty path is resolved in HDFS, or any other filesystem,
> the resolution to '/user/<username>' is also correct (at least for HDFS). The problem, IMO, is
> that the existence of the working directory is not guaranteed.
>
> When I log in to a machine, my default working directory is '/home/sameerp', and filesystem
> operations that I execute with relative paths all work correctly because this directory
> exists. My home directory lives on a filer; if it cannot be mounted, the default working
> directory I get is '/', which is also guaranteed to exist.
>
> In the context of Hadoop, instantiating a FileSystem object is the analogue of logging in and
> should result in a working directory whose existence has been validated. In the case of HDFS
> this should be '/user/<username>', or '/' if that directory does not exist.
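For readers following along, the last paragraph of the quoted description is the behavior under discussion: after a FileSystem is connected, relative paths should resolve against a working directory that is known to exist. Below is a minimal, illustrative sketch of such a check done on the client side, not the attached patch; the class and method names (WorkingDirCheck, getWithValidWorkingDir) and the /user/<username> default are assumptions made for the example, and only stock FileSystem calls (get, isDirectory, setWorkingDirectory) are used.

{code}
// Illustrative sketch only: ensure relative paths have a valid base after connecting.
// WorkingDirCheck and getWithValidWorkingDir are hypothetical names for this example.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WorkingDirCheck {
  public static FileSystem getWithValidWorkingDir(Configuration conf) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    // Assumes the HDFS convention of /user/<username> as the default working directory.
    Path home = new Path("/user/" + System.getProperty("user.name"));
    if (fs.isDirectory(home)) {
      fs.setWorkingDirectory(home);          // normal case: the home directory exists
    } else {
      fs.setWorkingDirectory(new Path("/")); // fall back to the root, which always exists
    }
    return fs;
  }
}
{code}

Whether a check like this should live in FileSystem#get(), in getDefaultWorkingDir(), or in callers such as FsShell and Trash is exactly the question being debated in the comment above.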