[
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xudong Cao updated HDFS-14646:
------------------------------
Description:
*Problem Description:*
In the multi-NameNode scenario, when an SNN uploads an FsImage, it puts the
image to all other NNs (whether the peer NN is an ANN or not). Even if the
peer NN immediately replies with an error (such as
TransferResult.NOT_ACTIVE_NAMENODE_FAILURE,
TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN does not
terminate the put process immediately; instead it puts the FsImage completely
to the peer NN and does not read the peer NN's reply until the put is
completed.
Depending on the version of Jetty, this behavior leads to different
consequences. I tested it under Hadoop 2.7.2 and the latest 3.3.0 snapshot.
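For illustration, here is a minimal client-side sketch of the pattern described
above (this is not the actual TransferFsImage code; the endpoint URL, local
image path, and buffer size are made up):
{code:java}
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class NaivePutSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical peer NN image-transfer endpoint, for illustration only.
    URL url = new URL("http://peer-nn:50070/imagetransfer");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setChunkedStreamingMode(64 * 1024);

    try (OutputStream out = conn.getOutputStream();
         InputStream image = Files.newInputStream(Paths.get("/path/to/fsimage"))) {
      byte[] buf = new byte[64 * 1024];
      int n;
      // The whole (possibly ~30 GB) image is streamed before the reply is read,
      // so an early error sent by the peer NN goes unnoticed here.
      while ((n = image.read(buf)) > 0) {
        out.write(buf, 0, n);
      }
    }
    // Only at this point does the sender see the peer NN's response.
    System.out.println("peer replied: " + conn.getResponseCode());
  }
}
{code}
With Jetty 6 on the peer side, the write loop above runs to completion even
after the error reply; with Jetty 9 it fails part-way with the IOException
shown below.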
*1. In Hadoop 2.7.2 (with Jetty 6.1.26)*
After the peer NN calls HttpServletResponse.sendError(), the underlying TCP
connection stays open, and the data the SNN sends is drained by the Jetty
framework itself on the peer NN. The SNN therefore keeps pointlessly streaming
the FsImage to the peer NN, wasting time and bandwidth. In a relatively large
HDFS cluster, the FsImage can easily reach about 30 GB, so this is a big waste.
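To make the server-side behavior concrete, here is a minimal, hypothetical
servlet sketch (not the real ImageServlet) that rejects a PUT up front; whether
the rest of the request body is then drained (Jetty 6) or the connection is
closed (Jetty 9) is up to the container:
{code:java}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative only: a servlet that refuses an image upload immediately.
public class RejectingPutServlet extends HttpServlet {
  @Override
  protected void doPut(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    // e.g. this NN is not active, or the txid in the request is stale
    resp.sendError(HttpServletResponse.SC_EXPECTATION_FAILED,
        "This NameNode is not ready to accept an fsimage");
    // Nothing here reads req.getInputStream(); what happens to the rest of
    // the (possibly huge) request body depends on the Jetty version.
  }
}
{code}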
*2. In the newest Hadoop-3.3.0-SNAPSHOT (with Jetty 9.3.27)*
After the peer NN calls HttpServletResponse.sendError(), the underlying TCP
connection is closed automatically, and the SNN then directly gets an "Error
writing request body to server" exception, as below:
{code:java}
2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: /tmp/hadoop-root/dfs/name/current/fsimage_0000000000003364240, fileSize: 9864721. Sent total: 524288 bytes. Size of last segment intended to send: 4096 bytes.
java.io.IOException: Error writing request body to server
        at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
        at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249)
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: /tmp/hadoop-root/dfs/name/current/fsimage_0000000000003364240, fileSize: 9864721. Sent total: 851968 bytes. Size of last segment intended to send: 4096 bytes.
java.io.IOException: Error writing request body to server
        at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
        at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
{code}
*Solution:*
A standby NameNode should not upload an fsimage to an inappropriate NameNode.
Before the local SNN puts an FsImage to the peer NN, it should check whether it
really needs to do the put at this time.
was:
*Problem Description:*
In the multi-NameNode scenario, when an SNN uploads an FsImage, it puts the
image to all other NNs (whether the peer NN is an ANN or not). Even if the
peer NN immediately replies with an error (such as
TransferResult.NOT_ACTIVE_NAMENODE_FAILURE,
TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN does not
terminate the put process immediately; instead it puts the FsImage completely
to the peer NN and does not read the peer NN's reply until the put is
completed.
In a relatively large HDFS cluster, the FsImage can easily reach about 30 GB.
In this case, such an invalid put brings two problems:
# It wastes time and bandwidth.
# Since the ImageServlet of the peer NN no longer receives the FsImage, the
socket Send-Q of the local SNN grows very large, and the ImageUpload thread
blocks on the socket write for a long time, eventually leaving the local
StandbyCheckpointer thread blocked, often for several hours.
*An example is as follows:*
In the figures below, the local NN 100.76.3.234 is an SNN, the peer NN
100.76.3.170 is another SNN, and 8080 is the NN HTTP port. When the local SNN
starts to put the FsImage, 100.76.3.170 immediately replies with a
NOT_ACTIVE_NAMENODE_FAILURE error. At this point the local SNN should terminate
the put immediately, but in fact it has to wait until the image has been
completely put to the peer NN before it can read the response.
1. Since the ImageServlet of the peer NN no longer receives the FsImage, the
socket Send-Q of the local SNN grows very large:
!largeSendQ.png!
2. Moreover, the local SNN's ImageUpload thread is blocked on the socket write
for a long time:
!blockedInWritingSocket.png!
3. Eventually, the StandbyCheckpointer thread of the local SNN waits for the
result of the ImageUpload thread, blocking in Future.get(), and the blocking
time can be as long as several hours:
!get1.png!
!get2.png!
*Solution:*
When the local SNN plans to put an FsImage to the peer NN, it should first
test whether it really needs to do the put at this time. The test process is
(a sketch follows this list):
# Establish an HTTP connection with the peer NN, send the put request, and
then immediately read the response (this is the key point). If the peer NN
replies with any of the following errors (TransferResult.AUTHENTICATION_FAILURE,
TransferResult.NOT_ACTIVE_NAMENODE_FAILURE,
TransferResult.OLD_TRANSACTION_ID_FAILURE), immediately terminate the put
process.
# If the peer NN is indeed the Active NameNode AND it is now in the appropriate
state to receive an image, it replies with an HTTP 410 response
(HttpServletResponse.SC_GONE, which maps to TransferResult.UNEXPECTED_FAILURE).
Only then does the local SNN really begin to put the image.
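As an illustration of this test, here is a minimal sketch built on plain
java.net.HttpURLConnection; the endpoint URL, timeouts, and the way the result
is acted on are assumptions for illustration, not the actual patch, which works
inside TransferFsImage/ImageServlet:
{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class UploadPreCheckSketch {

  /**
   * Probe the peer NN before streaming the image. Returns true only if the
   * peer signals it is ready to receive an fsimage (HTTP 410 / SC_GONE, per
   * the process described above); any other status aborts the upload.
   */
  static boolean peerWantsImage(URL putUrl) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) putUrl.openConnection();
    conn.setRequestMethod("PUT");
    conn.setConnectTimeout(60_000);
    conn.setReadTimeout(60_000);
    // No body is written: the point is to read the peer's verdict first.
    int code = conn.getResponseCode();
    conn.disconnect();
    // Authentication failure, not-active, old txid, etc. all mean "do not put".
    return code == 410;
  }

  public static void main(String[] args) throws Exception {
    // Hypothetical peer NN image-transfer endpoint, for illustration only.
    URL putUrl = new URL("http://peer-nn:50070/imagetransfer?putimage=1");
    if (peerWantsImage(putUrl)) {
      // Only now open a new connection and stream the real fsimage,
      // e.g. with the write loop sketched earlier in this issue.
      System.out.println("peer is ready, start the real upload");
    } else {
      System.out.println("peer rejected the probe, skip this upload");
    }
  }
}
{code}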
*Note:*
This problem needs to be reproduced in a large cluster (the FsImage in our
cluster is about 30 GB), so a unit test is difficult to write. In our cluster,
after the modification, the problem is solved and there is no longer a large
Send-Q backlog.
> Standby NameNode should not upload fsimage to an inappropriate NameNode.
> ------------------------------------------------------------------------
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.1.2
> Reporter: Xudong Cao
> Assignee: Xudong Cao
> Priority: Major
> Attachments: HDFS-14646.000.patch
>
>