[
https://issues.apache.org/jira/browse/HDFS-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740821#comment-13740821
]
Hadoop QA commented on HDFS-5093:
---------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12597868/HDFS-5093.1.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/4832//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4832//console
This message is automatically generated.
> TestGlobPaths should re-use the MiniDFSCluster to avoid failure on Windows
> --------------------------------------------------------------------------
>
> Key: HDFS-5093
> URL: https://issues.apache.org/jira/browse/HDFS-5093
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 3.0.0, 2.1.1-beta
> Reporter: Chuan Liu
> Assignee: Chuan Liu
> Priority: Minor
> Attachments: HDFS-5093.1.patch, HDFS-5093.patch
>
>
> Some test cases in TestGlobPaths fail on Windows because they try to create a
> new MiniDFSCluster even though one was already created in {{setUp()}}. This
> fails on Windows because the new cluster tries to delete the old name node
> files that are still held open by the existing cluster -- on Windows, a
> process or thread cannot delete a file that another process or thread has
> open through the normal Java APIs.
> An example failure run looks like the following.
> {noformat}
> testGlobWithSymlinksOnFS(org.apache.hadoop.fs.TestGlobPaths) Time elapsed: 47 sec <<< ERROR!
> java.io.IOException: Could not fully delete E:\tr\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
> at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:759)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:644)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
> at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
> at org.apache.hadoop.fs.TestGlobPaths.testOnFileSystem(TestGlobPaths.java:805)
> at org.apache.hadoop.fs.TestGlobPaths.testGlobWithSymlinksOnFS(TestGlobPaths.java:889)
> ...
> {noformat}
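The re-use pattern the issue title describes can be sketched as follows. This is a minimal, self-contained illustration only: the hypothetical {{FakeCluster}} class stands in for the real {{MiniDFSCluster}} (which is built via {{new MiniDFSCluster.Builder(conf).build()}} and needs the hadoop-hdfs test jars to run), and the assertion simply demonstrates that every test shares the single instance created in {{setUp()}} instead of building its own.

```java
// Hypothetical stand-in for MiniDFSCluster, used only to make the
// sketch runnable without a Hadoop build. It counts constructions so
// we can verify that no test re-creates the cluster.
class FakeCluster {
    static int instancesCreated = 0;
    FakeCluster() { instancesCreated++; }
    void shutdown() { /* a real cluster would release its name node files here */ }
}

public class ClusterReuseSketch {
    // One cluster shared by all tests. Because the tests never build a
    // second cluster, nothing ever tries to delete name node files that
    // the running cluster still holds open (the Windows failure mode).
    static FakeCluster cluster;

    static void setUp() {
        cluster = new FakeCluster();
    }

    static void testA() { /* operate on the shared 'cluster' field */ }
    static void testB() { /* reuse it rather than calling new FakeCluster() */ }

    public static void main(String[] args) {
        setUp();
        testA();
        testB();
        if (FakeCluster.instancesCreated != 1) {
            throw new AssertionError("cluster was re-created: "
                    + FakeCluster.instancesCreated);
        }
        System.out.println("instances=" + FakeCluster.instancesCreated);
        cluster.shutdown();
    }
}
```

With the pre-patch structure, each failing test would effectively call the constructor again, raising the instance count above one and, on Windows, hitting the "Could not fully delete" IOException shown above.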
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira