[ https://issues.apache.org/jira/browse/HDFS-10659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418029#comment-15418029 ]

Jing Zhao commented on HDFS-10659:
----------------------------------

Thanks for updating the patch, [~hkoneru]. The new patch looks pretty good to 
me. Some minor comments about the code:
# We only need to add a new NamespaceInfo parameter to 
QJournalProtocol#startLogSegment; there is no need to keep the old API.
# The following code can be further simplified: we can create a single Builder, 
set the common attributes first, and add the optional field only when needed 
(see the sketch after this list).
{code}
if (namespaceInfo == null) {
  req = StartLogSegmentRequestProto.newBuilder()
      .setReqInfo(convert(reqInfo))
      .setTxid(txid).setLayoutVersion(layoutVersion)
      .build();
} else {
  req = StartLogSegmentRequestProto.newBuilder()
      .setReqInfo(convert(reqInfo))
      .setTxid(txid).setLayoutVersion(layoutVersion)
      .setNsInfo(PBHelper.convert(namespaceInfo))
      .build();
}
{code}
# There is no need to start datanodes in the unit test. Also the try-catch can 
be skipped.
# We can add some more unit tests. E.g., can we add a test where we repeat the 
same process for the 2nd and 3rd JNs? These would be the same steps as described 
in Amit's first comment.
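
For item 2, a minimal sketch of the suggested simplification; it reuses the names 
from the quoted snippet ({{req}}, {{reqInfo}}, {{txid}}, {{layoutVersion}}, 
{{namespaceInfo}}) and assumes the same surrounding method, so it is illustrative 
rather than the final patch code:
{code}
// Build the common part once, then add the optional NamespaceInfo only when present.
StartLogSegmentRequestProto.Builder builder = StartLogSegmentRequestProto.newBuilder()
    .setReqInfo(convert(reqInfo))
    .setTxid(txid)
    .setLayoutVersion(layoutVersion);
if (namespaceInfo != null) {
  builder.setNsInfo(PBHelper.convert(namespaceInfo));
}
req = builder.build();
{code}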

Following up on the last list item above: if we refresh multiple JNs, we are 
missing a piece, namely HDFS-4025. If we enable the auto re-format without 
HDFS-4025, we run the risk that an admin refreshes all 3 JNs' disks one by one 
and finally loses all the journal segments written before the first refresh. 
Therefore we may need to add a bootstrap stage for each JN with a fresh disk, 
during which the JN synchronizes past log segments.

> Namenode crashes after Journalnode re-installation in an HA cluster due to 
> missing paxos directory
> --------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10659
>                 URL: https://issues.apache.org/jira/browse/HDFS-10659
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: ha, journal-node
>    Affects Versions: 2.7.1
>            Reporter: Amit Anand
>            Assignee: Hanisha Koneru
>         Attachments: HDFS-10659.000.patch, HDFS-10659.001.patch
>
>
> In my environment I am seeing {{Namenodes}} crash after a majority of 
> {{Journalnodes}} are re-installed. We manage multiple clusters and do rolling 
> upgrades followed by rolling re-installs of each node, including master (NN, JN, 
> RM, ZK) nodes. When a journal node is re-installed or moved to a new 
> disk/host, instead of running the {{"initializeSharedEdits"}} command, I copy the 
> {{VERSION}} file from one of the other {{Journalnodes}}, and that allows my 
> {{NN}} to start writing data to the newly installed {{Journalnode}}.
> To achieve quorum for JN and recover unfinalized segments, the NN during startup 
> creates NNNN.tmp files under the {{"<disk>/jn/current/paxos"}} directory. In the 
> current implementation the "paxos" directory is only created during the 
> {{"initializeSharedEdits"}} command, and if a JN is re-installed the "paxos" 
> directory is not created upon JN startup or by the NN while writing NNNN.tmp 
> files, which causes the NN to crash with the following error message:
> {code}
> 192.168.100.16:8485: /disk/1/dfs/jn/Test-Laptop/current/paxos/64044.tmp (No such file or directory)
>         at java.io.FileOutputStream.open(Native Method)
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
>         at org.apache.hadoop.hdfs.util.AtomicFileOutputStream.<init>(AtomicFileOutputStream.java:58)
>         at org.apache.hadoop.hdfs.qjournal.server.Journal.persistPaxosData(Journal.java:971)
>         at org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:846)
>         at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:205)
>         at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.acceptRecovery(QJournalProtocolServerSideTranslatorPB.java:249)
>         at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25435)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
> {code}
> The current 
> [getPaxosFile|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java#L128-L130]
>  method simply returns a path to a file under the "paxos" directory without 
> verifying its existence. Since the "paxos" directory holds files that are 
> required for NN recovery and achieving JN quorum, my proposed solution is to 
> add a check to the "getPaxosFile" method and create the {{"paxos"}} directory if 
> it is missing.
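
The reporter's proposal maps to a small change in JNStorage. A minimal sketch, 
assuming the existing {{getPaxosDir()}} helper and that callers can tolerate the 
added IOException (both are assumptions, not the attached patch):
{code}
// Sketch: make sure the paxos directory exists before returning a path under it.
File getPaxosFile(long segmentTxId) throws IOException {
  File paxosDir = getPaxosDir();                    // <disk>/jn/<nsId>/current/paxos
  if (!paxosDir.exists() && !paxosDir.mkdirs()) {   // create it if missing
    throw new IOException("Could not create missing paxos dir: " + paxosDir);
  }
  return new File(paxosDir, String.valueOf(segmentTxId));
}
{code}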


