[ https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663199#comment-13663199 ]
Robert Joseph Evans commented on HADOOP-9438:
---------------------------------------------

I think the patch looks fine and I am +1 for it, but I would really like someone who is much more on the HDFS side to also take a look before it is checked in, especially because this is technically an incompatible change.

> LocalFileContext does not throw an exception on mkdir for an already existing directory
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-9438
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9438
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.0.3-alpha
>            Reporter: Robert Joseph Evans
>            Priority: Critical
>         Attachments: HADOOP-9438.20130501.1.patch, HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch
>
> According to
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
> mkdir should throw a FileAlreadyExistsException if the directory already exists. I tested this:
> {code}
> FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
> Path p = new Path("/tmp/bobby.12345");
> FsPermission cachePerms = new FsPermission((short) 0755);
> lfc.mkdir(p, cachePerms, false);
> lfc.mkdir(p, cachePerms, false);
> {code}
> and it never throws an exception.
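For comparison (not part of the original report), `java.nio.file` already exhibits the semantics the FileContext javadoc calls for: `Files.createDirectory` throws `FileAlreadyExistsException` when the target directory already exists. The sketch below is illustrative only (the class name `MkdirSemantics` is made up) and uses a temp directory instead of the report's `/tmp/bobby.12345`:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirSemantics {
    // Returns true if a second createDirectory on the same path throws
    // FileAlreadyExistsException -- the behavior the FileContext javadoc
    // specifies for mkdir on an already existing directory.
    static boolean secondMkdirThrows() throws IOException {
        Path p = Files.createTempDirectory("hadoop9438").resolve("sub");
        Files.createDirectory(p);      // first call: creates the directory
        try {
            Files.createDirectory(p);  // second call: directory already exists
            return false;
        } catch (FileAlreadyExistsException e) {
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("second createDirectory threw: " + secondMkdirThrows());
    }
}
```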