[ https://issues.apache.org/jira/browse/HDFS-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495981#comment-16495981 ]

Íñigo Goiri commented on HDFS-13631:
------------------------------------

Thanks [~bharatviswa] for the review.
Yes, I think we can leave the sleep business for the future.
The unit tests passed 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/24334/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/].
The test passed in 5 seconds, so it could use some optimization, but if we go 
down that path we should probably target {{testListOpenFiles}} instead, since it 
takes 27 seconds.
Anyway, let's open a separate JIRA to improve the whole test.

+1 on [^HDFS-13631.001.patch].
Committing.


> TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate 
> MiniDFSCluster path
> ------------------------------------------------------------------------------------------
>
>                 Key: HDFS-13631
>                 URL: https://issues.apache.org/jira/browse/HDFS-13631
>             Project: Hadoop HDFS
>          Issue Type: Test
>            Reporter: Anbang Hu
>            Assignee: Anbang Hu
>            Priority: Minor
>              Labels: Windows
>         Attachments: HDFS-13631.000.patch, HDFS-13631.001.patch
>
>
> [TestDFSAdmin#testCheckNumOfBlocksInReportCommand|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testCheckNumOfBlocksInReportCommand/]
>  fails with the error message:
> {color:#d04437}Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\name-0-1{color}
> The failure occurs because testCheckNumOfBlocksInReportCommand starts a new 
> MiniDFSCluster with the same base path as the one created in {{@Before}}.
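
For reference, the fix the issue title points at (giving the second cluster its own MiniDFSCluster base path) can be sketched roughly as below. This is an illustrative sketch, not the actual patch: the class name is made up, and {{GenericTestUtils.getRandomizedTempPath()}} is assumed to be available from Hadoop's test utilities.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.test.GenericTestUtils;

// Illustrative class name, not from the patch.
public class SeparateBaseDirSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Point this cluster at its own base directory so its storage
    // (e.g. the name-0-1 dirs) does not collide with the cluster
    // started in @Before, which uses the default base path.
    conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR,
        GenericTestUtils.getRandomizedTempPath());
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .build();
    try {
      cluster.waitActive();
      // ... run the dfsadmin -report checks against this cluster ...
    } finally {
      cluster.shutdown();
    }
  }
}
```

On Windows the default base path cannot be fully deleted while the first cluster still holds file handles on it, which is why isolating the second cluster's directory avoids the "Could not fully delete" error.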



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
