[
https://issues.apache.org/jira/browse/HDFS-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Anbang Hu updated HDFS-13630:
-----------------------------
Description:
32 tests in
[TestDFSAdminWithHA|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdminWithHA/]
fail on Windows after testUpgradeCommand with the error message:
Could not format one or more JournalNodes. 1 exceptions thrown:
{color:#d04437}127.0.0.1:58098: Directory F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\journalnode-0\ns1 is in an inconsistent state: Can't format the storage directory because the current directory is not empty.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:600)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:683)
at org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:210)
at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:236)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:181)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:148)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1687)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682){color}
Restarting NN1 with the -upgrade option appears to keep the JournalNode directory from being released after testUpgradeCommand, so the later tests cannot format the JournalNodes.
{code:java}
// Start NN1 with -upgrade option
dfsCluster.getNameNodeInfos()[0].setStartOpt(
    HdfsServerConstants.StartupOption.UPGRADE);
dfsCluster.restartNameNode(0, true);
{code}
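A possible cleanup, sketched from the snippet above (the StartupOption.REGULAR reset and the extra restart are assumptions, not a verified fix): once the upgrade path has been exercised, put NN1 back into regular startup mode and restart it so the JournalNode storage directory is released before the next test tries to format it.
{code:java}
// Sketch only (assumption, not a verified fix): after exercising -upgrade,
// reset NN1's startup option to REGULAR and restart, so the JournalNode
// storage directory is no longer held for an upgrade and later tests can
// format it cleanly.
dfsCluster.getNameNodeInfos()[0].setStartOpt(
    HdfsServerConstants.StartupOption.REGULAR);
dfsCluster.restartNameNode(0, true);
{code}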
branch-2 does not have this issue because testUpgradeCommand does not exist in branch-2.
> testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows
> -------------------------------------------------------------------------
>
> Key: HDFS-13630
> URL: https://issues.apache.org/jira/browse/HDFS-13630
> Project: Hadoop HDFS
> Issue Type: Test
> Reporter: Anbang Hu
> Priority: Minor
>