[
https://issues.apache.org/jira/browse/HDFS-5100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth updated HDFS-5100:
--------------------------------
Status: Open (was: Patch Available)
> TestNamenodeRetryCache fails on Windows due to incorrect cleanup
> ----------------------------------------------------------------
>
> Key: HDFS-5100
> URL: https://issues.apache.org/jira/browse/HDFS-5100
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 3.0.0, 2.1.1-beta
> Reporter: Chuan Liu
> Assignee: Chuan Liu
> Priority: Minor
> Attachments: HDFS-5100-trunk.patch, HDFS-5100-trunk.patch
>
>
> The test case fails on Windows with the following exceptions.
> {noformat}
> java.io.IOException: Could not fully delete C:\hdc\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
> at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:759)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:644)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
> at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
> at org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits.setupCluster(TestInitializeSharedEdits.java:68)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> {noformat}
> The root cause is that {{cleanup()}} only tries to delete the root directory
> instead of shutting down the MiniDFSCluster. Every test case in this unit
> test creates a new MiniDFSCluster during the {{setup()}} step. Without
> shutting down the previous cluster, creation of the new cluster fails with
> the above exception because the old cluster's open file handles block
> deletion on Windows.
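> A minimal sketch of the kind of cleanup described above, assuming a JUnit
> {{@After}} method and a test field named {{cluster}} (these names are
> illustrative and not taken from the attached patch):
> {noformat}
> import java.io.File;
> import java.io.IOException;
> import org.apache.hadoop.fs.FileUtil;
> import org.apache.hadoop.hdfs.MiniDFSCluster;
> import org.junit.After;
>
> // Hypothetical cleanup for a test that keeps its MiniDFSCluster in "cluster".
> @After
> public void cleanup() throws IOException {
>   // Shut the cluster down first so it releases its file handles;
>   // on Windows, open handles block deletion of the name directories.
>   if (cluster != null) {
>     cluster.shutdown();
>     cluster = null;
>   }
>   // Only after shutdown can the base test data directory be fully deleted.
>   FileUtil.fullyDelete(new File(MiniDFSCluster.getBaseDirectory()));
> }
> {noformat}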
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira