virajjasani commented on a change in pull request #3756:
URL: https://github.com/apache/hadoop/pull/3756#discussion_r763869079



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
##########
@@ -2775,14 +2775,23 @@ public void waitActive(int nnIndex) throws IOException {
     DFSClient client = new DFSClient(addr, conf);
 
     // ensure all datanodes have registered and sent heartbeat to the namenode
-    while (shouldWait(client.datanodeReport(DatanodeReportType.LIVE), addr)) {
-      try {
+    int failedCount = 0;
+    try {
+      while (shouldWait(client.datanodeReport(DatanodeReportType.LIVE), addr)) {
         LOG.info("Waiting for cluster to become active");
         Thread.sleep(100);
-      } catch (InterruptedException e) {
       }
+    } catch (IOException e) {
+      failedCount++;
+      // Cached RPC connection to namenode, if any, is expected to fail once
+      if (failedCount > 1) {
+        LOG.warn("Tried waitActive() " + failedCount
+            + " time(s) and failed, giving up.  " + StringUtils
+            .stringifyException(e));
+        throw e;
+      }
+    } catch (InterruptedException e) {
     }

Review comment:
      How about re-throwing as an IOE, since the main thread getting 
interrupted is ultimately going to be problematic? e.g. `throw new IOException(e)`
   Even without this patch, catching `InterruptedException` was a no-op, but I 
believe re-throwing it as an IOException might help us analyze any hidden bug 
underneath. Thoughts?
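For illustration, a self-contained sketch of the suggested pattern (class and method names here are hypothetical, not part of `MiniDFSCluster`): restore the thread's interrupt flag and surface the interrupt as an `InterruptedIOException` (an `IOException` subclass) instead of swallowing it.

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class InterruptDemo {

  // Sketch: instead of a silent catch block, restore the interrupt flag and
  // re-throw as an IOException subclass so callers see the failure.
  static void sleepOrThrow() throws IOException {
    try {
      Thread.sleep(100);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // preserve the interrupt status
      throw new InterruptedIOException("Interrupted while waiting: " + e);
    }
  }

  public static void main(String[] args) throws IOException {
    sleepOrThrow();
    System.out.println("completed without interrupt");
  }
}
```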

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
##########
@@ -2815,23 +2824,11 @@ public Boolean get() {
    */
   public void waitActive() throws IOException {
     for (int index = 0; index < namenodes.size(); index++) {
-      int failedCount = 0;
       while (true) {

Review comment:
      After this change, the while loop is no longer required, because 
`waitActive(index)` takes care of internal retries when needed.

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
##########
@@ -2775,14 +2775,23 @@ public void waitActive(int nnIndex) throws IOException {
     DFSClient client = new DFSClient(addr, conf);
 
     // ensure all datanodes have registered and sent heartbeat to the namenode
-    while (shouldWait(client.datanodeReport(DatanodeReportType.LIVE), addr)) {
-      try {
+    int failedCount = 0;
+    try {
+      while (shouldWait(client.datanodeReport(DatanodeReportType.LIVE), addr)) {
         LOG.info("Waiting for cluster to become active");
         Thread.sleep(100);
-      } catch (InterruptedException e) {
       }
+    } catch (IOException e) {
+      failedCount++;
+      // Cached RPC connection to namenode, if any, is expected to fail once
+      if (failedCount > 1) {

Review comment:
      Now that the entire while loop is wrapped in the try-catch, `failedCount` 
will be `1` here (from `failedCount++`), and we then exit the try-catch, so 
`failedCount` is never incremented again. Further increments are possible only 
if the try-catch is kept inside the while loop. Based on the WARN log message, 
it seems that is what we should do, since we are trying to add retries?
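To illustrate the retry semantics being discussed, a self-contained sketch with the try-catch inside the loop so `failedCount` can keep accumulating. The flaky `shouldWait()` here is a hypothetical stand-in for `client.datanodeReport(...)` failing once over a cached RPC connection; all names are illustrative only.

```java
import java.io.IOException;

public class RetryDemo {
  static int calls = 0;

  // Simulated flaky check: throws on the first call (like a stale cached RPC
  // connection), then asks the caller to keep waiting until the third call.
  static boolean shouldWait() throws IOException {
    if (++calls == 1) {
      throw new IOException("stale cached connection");
    }
    return calls < 3;
  }

  public static void main(String[] args) throws Exception {
    int failedCount = 0;
    while (true) {
      try {
        if (!shouldWait()) {
          break; // cluster is active
        }
        Thread.sleep(10);
      } catch (IOException e) {
        failedCount++;
        // Cached RPC connection is expected to fail once; give up after that.
        if (failedCount > 1) {
          throw e;
        }
      }
    }
    System.out.println("active after " + failedCount + " failure(s)");
  }
}
```

With the try-catch inside the loop, the first IOException is tolerated and the check is re-attempted; a second failure is re-thrown.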

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMiniDFSCluster.java
##########
@@ -309,6 +309,14 @@ public void testSetUpFederatedCluster() throws Exception {
             DFSUtil.addKeySuffixes(
             DFS_NAMENODE_HTTP_ADDRESS_KEY, "ns1", "nn1")));
       }
+
+      // Shutdown namenodes individually.
+      cluster.shutdownNameNode(0);
+      cluster.shutdownNameNode(1);

Review comment:
       Shall we just do restarts rather than shutdown + restart?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
