[
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649120#comment-13649120
]
Ivan Mitic commented on HADOOP-9438:
------------------------------------
I agree with Robert: changing the javadoc and the interface is the preferable
fix, for the reasons mentioned. The behavior of LocalFs and Hdfs is already
consistent in this scenario. It would be good to do a quick scan through the
Hadoop codebase for FileAlreadyExistsException to see whether any other
changes need to happen; I am not sure if there is anything else to worry about.
> LocalFileContext does not throw an exception on mkdir for already existing
> directory
> ------------------------------------------------------------------------------------
>
> Key: HADOOP-9438
> URL: https://issues.apache.org/jira/browse/HADOOP-9438
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.0.3-alpha
> Reporter: Robert Joseph Evans
> Priority: Critical
> Attachments: HADOOP-9438.20130501.1.patch, HADOOP-9438.patch,
> HADOOP-9438.patch
>
>
> According to
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
> mkdir should throw a FileAlreadyExistsException if the directory already exists.
> I tested this and
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileContext;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsPermission;
>
> FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
> Path p = new Path("/tmp/bobby.12345");
> FsPermission cachePerms = new FsPermission((short) 0755);
> lfc.mkdir(p, cachePerms, false);
> lfc.mkdir(p, cachePerms, false); // second mkdir on the existing directory
> {code}
> never throws an exception.
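For callers that do need strict fail-if-exists semantics once the javadoc is relaxed, a guard along these lines would work (a sketch only; the path and permissions are taken from the snippet above, and FileContext.util().exists() is used for the check):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
Path p = new Path("/tmp/bobby.12345");
// Explicit existence check, since mkdir itself will not reject an
// existing directory under the documented LocalFs/Hdfs behavior.
if (lfc.util().exists(p)) {
  throw new FileAlreadyExistsException(p.toString());
}
lfc.mkdir(p, new FsPermission((short) 0755), false);
{code}
Note that this check-then-create sequence is racy if multiple processes may create the same path, which is part of why treating mkdir on an existing directory as a no-op is a reasonable contract.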
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira