[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470457#comment-13470457
 ] 

Hadoop QA commented on HADOOP-8849:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12547999/HADOOP-8849-vs-trunk-3.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

                  org.apache.hadoop.ha.TestZKFailoverController

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1564//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1564//console

This message is automatically generated.
                
> FileUtil#fullyDelete should grant the target directories +rwx permissions 
> before trying to delete them
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-8849
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8849
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Ivan A. Veselovsky
>            Assignee: Ivan A. Veselovsky
>            Priority: Minor
>         Attachments: HADOOP-8849-vs-trunk-4.patch
>
>
> Two improvements are suggested for the implementation of the methods 
> org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
>  
> 1) We should grant +rwx permissions to the target directories before trying 
> to delete them.
> The mentioned methods fail to delete directories that lack read or execute 
> permissions.
> The actual problem appears when an hdfs-related test times out (with a short 
> timeout of tens of seconds) and the forked test process is killed: some 
> directories are left on disk that are not readable and/or executable. These 
> directories cannot be deleted with FileUtil#fullyDelete(), which prevents 
> subsequent tests from executing properly, so many of them fail.
> It is therefore recommended to grant read, write, and execute permissions to 
> the directories whose content is to be deleted.
> 2) Generic reliability improvement: we should not rely on the return value 
> of File#delete(); we should use File#exists() instead.
> FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
> this is not reliable because File#delete() returns true only if the file was 
> deleted as a result of that particular #delete() invocation. E.g., in the 
> following code
> if (f.exists()) { // 1
>   return f.delete(); // 2
> }
> if the file f is deleted by another thread or process between calls "1" and 
> "2", this fragment returns "false" even though the file f no longer exists 
> when the method returns.
> So it is better to write
> if (f.exists()) {
>   f.delete();
>   return !f.exists();
> }
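The two suggested improvements can be sketched together in a small standalone
class. This is not the actual HADOOP-8849 patch; FullyDeleteSketch and its
recursive fullyDelete method are hypothetical names used only to illustrate the
idea of restoring permissions before deletion and judging success by
File#exists() rather than by File#delete()'s return value.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class FullyDeleteSketch {

    // Recursively delete a file or directory, applying both improvements:
    // 1) restore +rwx on directories so their contents can be listed and
    //    removed even if a killed process left them unreadable;
    // 2) report success based on File#exists() after the attempt, not on
    //    File#delete()'s return value, which is false if another thread or
    //    process removed the file first.
    static boolean fullyDelete(File f) {
        if (!f.exists()) {
            return true; // already gone; treat as success
        }
        if (f.isDirectory()) {
            // Improvement 1: grant read/write/execute before descending.
            f.setReadable(true);
            f.setWritable(true);
            f.setExecutable(true);
            File[] children = f.listFiles();
            if (children != null) {
                for (File child : children) {
                    fullyDelete(child);
                }
            }
        }
        // Improvement 2: attempt deletion, then check existence.
        f.delete();
        return !f.exists();
    }

    public static void main(String[] args) throws IOException {
        File root = Files.createTempDirectory("fullyDeleteSketch").toFile();
        File sub = new File(root, "sub");
        sub.mkdir();
        new File(sub, "data.txt").createNewFile();
        // Simulate the failure mode from the issue: a directory left behind
        // without read/execute permissions.
        sub.setReadable(false);
        sub.setExecutable(false);
        System.out.println(fullyDelete(root)); // deletion succeeds
        System.out.println(root.exists());     // directory is gone
    }
}
```

A plain File#delete()-based loop would fail on the unreadable directory in the
example above, because listFiles() returns null for a directory without read
permission; restoring the permissions first is what makes the recursion work.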

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira