[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lei (Eddy) Xu updated HDFS-13145:
---------------------------------
    Fix Version/s: (was: 3.0.2)
                   3.0.3

> SBN crash when transition to ANN with in-progress edit tailing enabled
> ----------------------------------------------------------------------
>
>                 Key: HDFS-13145
>                 URL: https://issues.apache.org/jira/browse/HDFS-13145
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ha, namenode
>    Affects Versions: 3.0.0
>            Reporter: Chao Sun
>            Assignee: Chao Sun
>            Priority: Major
>             Fix For: 3.1.0, 3.0.3
>
>         Attachments: HDFS-13145.000.patch, HDFS-13145.001.patch
>
>
> With in-progress edit log tailing enabled, {{QuorumOutputStream}} sends two
> batches to the JournalNodes (JNs): a normal edit batch, followed by a dummy
> batch whose only purpose is to update the committed transaction ID on the
> JNs.
> {code}
>   QuorumCall<AsyncLogger, Void> qcall = loggers.sendEdits(
>       segmentTxId, firstTxToFlush,
>       numReadyTxns, data);
>   loggers.waitForWriteQuorum(qcall, writeTimeoutMs, "sendEdits");
>
>   // Since we successfully wrote this batch, let the loggers know. Any future
>   // RPCs will thus let the loggers know of the most recent transaction, even
>   // if a logger has fallen behind.
>   loggers.setCommittedTxId(firstTxToFlush + numReadyTxns - 1);
>
>   // If we don't have this dummy send, committed TxId might be one-batch
>   // stale on the Journal Nodes
>   if (updateCommittedTxId) {
>     QuorumCall<AsyncLogger, Void> fakeCall = loggers.sendEdits(
>         segmentTxId, firstTxToFlush,
>         0, new byte[0]);
>     loggers.waitForWriteQuorum(fakeCall, writeTimeoutMs, "sendEdits");
>   }
> {code}
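> The dummy send above is what keeps the committed txid on the JNs current. As
> a minimal sketch with illustrative numbers (hypothetical values, not Hadoop
> code), the window it closes looks like this:
> {code}
> // Illustrative only: JN-side state between the two sendEdits calls.
> long firstTxToFlush = 101;
> int numReadyTxns = 10;
>
> // After the first sendEdits reaches a write quorum, the JNs durably hold
> // edits up to the end of this batch...
> long endTxIdOnJNs = firstTxToFlush + numReadyTxns - 1;   // 110
>
> // ...but the committed txid only travels piggybacked on later RPCs, so on
> // the JNs it still points at the end of the previous batch.
> long committedTxIdOnJNs = firstTxToFlush - 1;            // 100
>
> // The dummy sendEdits(segmentTxId, firstTxToFlush, 0, new byte[0]) exists
> // solely to carry the new committed txid (110) to the JNs. If the ANN
> // crashes before it completes, the JNs are left one batch stale:
> assert committedTxIdOnJNs < endTxIdOnJNs;
> {code}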
> After each batch, the writer waits for a write quorum of JNs before
> proceeding. However, if the ANN crashes between the two batches, the SBN
> will then crash while transitioning to ANN:
> {code}
> java.lang.IllegalStateException: Cannot start writing at txid 24312595802 when there is a stream available for read: ..
> at org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:329)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1196)
> at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1839)
> at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
> at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:64)
> at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1707)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1622)
> at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
> at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:851)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:794)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2490)
> 2018-02-13 00:43:20,728 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> {code}
> This happens because, without the dummy batch, the {{commitTxnId}} on the
> JNs lags behind the {{endTxId}}, which causes the check in {{openForWrite}}
> to fail:
> {code}
> List<EditLogInputStream> streams = new ArrayList<>();
> journalSet.selectInputStreams(streams, segmentTxId, true, false);
> if (!streams.isEmpty()) {
>   String error = String.format("Cannot start writing at txid %s " +
>       "when there is a stream available for read: %s",
>       segmentTxId, streams.get(0));
>   IOUtils.cleanupWithLogger(LOG,
>       streams.toArray(new EditLogInputStream[0]));
>   throw new IllegalStateException(error);
> }
> {code}
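> To connect the stale commit ID to this check, here is a hypothetical
> simplification (illustrative numbers chosen to match the trace above, not
> the real selectInputStreams logic):
> {code}
> // The SBN could only tail edits up to the stale committed txid, so after
> // failover it tries to start a new segment right after that point.
> long endTxIdOnJNs = 24312595802L;            // last edit the old ANN wrote
> long committedTxIdOnJNs = endTxIdOnJNs - 1;  // the dummy batch never arrived
> long segmentTxId = committedTxIdOnJNs + 1;   // == 24312595802
>
> // A JN segment still covers [segmentTxId, endTxIdOnJNs], so the
> // selectInputStreams call above returns a non-empty list and openForWrite
> // throws "Cannot start writing at txid 24312595802 ...".
> boolean streamAvailableForRead = endTxIdOnJNs >= segmentTxId;  // true
>
> // Had the dummy batch landed, committedTxIdOnJNs would equal endTxIdOnJNs,
> // the new segment would start at endTxIdOnJNs + 1, and no readable stream
> // would cover it.
> {code}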
> In our environment this can be reproduced fairly consistently, leaving the
> cluster with no running namenodes. Even though we are running a 2.8.2
> backport, I believe the same issue also exists in 3.0.x.




[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled

2018-03-27 Thread Arpit Agarwal (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal updated HDFS-13145:
---------------------------------
Fix Version/s: 3.1.0


[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled

2018-02-26 Thread Konstantin Shvachko (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HDFS-13145:
---------------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 3.0.2
           Status: Resolved  (was: Patch Available)

I just committed this to trunk, branch-3.1, and branch-3.0. Thank you, [~csun].


[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled

2018-02-23 Thread Chao Sun (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun updated HDFS-13145:

Affects Version/s: 3.0.0



[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled

2018-02-23 Thread Chao Sun (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun updated HDFS-13145:

Attachment: HDFS-13145.001.patch



[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled

2018-02-23 Thread Chao Sun (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun updated HDFS-13145:

Attachment: HDFS-13145.000.patch


[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled

2018-02-23 Thread Chao Sun (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun updated HDFS-13145:

Attachment: (was: HDFS-13145.000.patch)


[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled

2018-02-23 Thread Chao Sun (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun updated HDFS-13145:

Attachment: HDFS-13145.000.patch


[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled

2018-02-23 Thread Chao Sun (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun updated HDFS-13145:

Status: Patch Available  (was: Open)

Submit patch v0.
