sodonnel commented on a change in pull request #2728:
URL: https://github.com/apache/hadoop/pull/2728#discussion_r610540938
##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
##########
@@ -895,6 +895,8 @@ void waitForAckedSeqno(long seqno) throws IOException {
try (TraceScope ignored = dfsClient.getTracer().
newScope("waitForAckedSeqno")) {
LOG.debug("{} waiting for ack for: {}", this, seqno);
+ int dnodes = nodes != null ? nodes.length : 3;
+ int writeTimeout = dfsClient.getDatanodeWriteTimeout(dnodes);
Review comment:
This timeout is very long. For a 3-node pipeline it will be 8 minutes +
3 * 5 seconds (for the extension).
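For reference, a minimal standalone sketch of that arithmetic, assuming the
default dfs.datanode.socket.write.timeout of 8 minutes plus a 5-second
per-datanode extension (the real value comes from
DFSClient#getDatanodeWriteTimeout; the class and constant names below are
only illustrative):

```java
// Illustrative sketch only; the constants assume the default 8-minute
// dfs.datanode.socket.write.timeout plus a 5-second extension per
// datanode in the pipeline, as described above.
public class WriteTimeoutSketch {
  static final long BASE_WRITE_TIMEOUT_MS = 8L * 60 * 1000;  // 480,000 ms (assumed default)
  static final long PER_NODE_EXTENSION_MS = 5L * 1000;       // 5,000 ms per pipeline node (assumed)

  static long writeTimeoutMs(int pipelineNodes) {
    return BASE_WRITE_TIMEOUT_MS + PER_NODE_EXTENSION_MS * pipelineNodes;
  }

  public static void main(String[] args) {
    // 3-node pipeline: 480,000 + 3 * 5,000 = 495,000 ms (8 minutes 15 seconds)
    System.out.println(writeTimeoutMs(3) + " ms");
  }
}
```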
I'm not sure I have a better suggestion for the timeout.
One question: I believe we saw this problem in a hung Hive Server 2 (HS2)
process. Do we know how this problem causes the entire HS2 instance to hang?
I would have thought this issue would only block the closing of a single file
on HDFS, and that other files open within the same client could still progress
as normal?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]