[
https://issues.apache.org/jira/browse/HADOOP-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12689684#action_12689684
]
Konstantin Shvachko commented on HADOOP-5551:
---------------------------------------------
+1 for {{FSNamesystem}} changes.
{{TestCreateFile}} could be simplified:
- you can assert right after creating the file, since the creation should not
succeed; listing the directory does not really add any value.
- if, by any chance, the file is created, you should close the output stream.
Something like this:
{code}
try {
  FSDataOutputStream out = fs.create(dir1, true);
  out.close();
  fail("Did not prevent directory from being overwritten.");
} catch (IOException ie) {
  if (!ie.getMessage().contains("already exists as a directory.")) {
    throw ie;
  }
}
{code}
- also, there are some indentation problems in the test code (indentation
should be 2 spaces)
- and please make sure that lines do not exceed 80 characters.
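For context, the fix amounts to a guard in the create path that rejects a
create call whose target path already exists as a directory. Here is a
minimal, self-contained sketch of that guard using {{java.nio}} against a
local filesystem; the class and method names are illustrative only and are
not the actual {{FSNamesystem.startFileInternal}} code from the patch:
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CreateGuard {
  // Refuse to create (or overwrite) a file at a path that is a directory,
  // instead of silently destroying the directory.
  static void createFile(Path path, boolean overwrite) throws IOException {
    if (Files.isDirectory(path)) {
      throw new IOException("failed to create file " + path
          + " on client: already exists as a directory.");
    }
    if (Files.exists(path) && !overwrite) {
      throw new IOException("file already exists: " + path);
    }
    Files.write(path, new byte[0]);  // create an empty file
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("dir1");
    try {
      createFile(dir, true);
      System.out.println("BUG: directory was overwritten");
    } catch (IOException ie) {
      if (!ie.getMessage().contains("already exists as a directory.")) {
        throw ie;
      }
      System.out.println("create rejected as expected");
    }
  }
}
{code}
The test sketched above then only needs to verify that this exception, and
not success, is the outcome of creating a file over an existing directory.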
> Namenode permits directory destruction on overwrite
> ---------------------------------------------------
>
> Key: HADOOP-5551
> URL: https://issues.apache.org/jira/browse/HADOOP-5551
> Project: Hadoop Core
> Issue Type: Bug
> Affects Versions: 0.19.1
> Reporter: Brian Bockelman
> Priority: Critical
> Fix For: 0.19.2, 0.20.0
>
> Attachments: HADOOP-5551-v2.patch
>
>
> The FSNamesystem's startFileInternal allows overwriting of directories. That
> is, if you have a directory named /foo/bar and you try to write a file named
> /foo/bar, the file is written and the directory disappears.
> This is most apparent for folks using libhdfs directly, as overwriting is
> always turned on. Therefore, if libhdfs applications do not check the
> existence of a directory first, then they will permit new files to destroy
> directories.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.