[ 
https://issues.apache.org/jira/browse/HDFS-17030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17732283#comment-17732283
 ] 

ASF GitHub Bot commented on HDFS-17030:
---------------------------------------

xinglin commented on code in PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#discussion_r1228797424


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverReadProxyProvider.java:
##########
@@ -58,30 +74,53 @@
  * NameNode to communicate with.
  */
 public class TestObserverReadProxyProvider {
+  private final static int SLOW_RESPONSE_SLEEP_TIME = 5000; // 5 s
+  private final static int NAMENODE_HA_STATE_PROBE_TIMEOUT_SHORT = 2000; // 2s
+  private final static int NAMENODE_HA_STATE_PROBE_TIMEOUT_LONG = 25000; // 25s
 
   private static final LocatedBlock[] EMPTY_BLOCKS = new LocatedBlock[0];
   private String ns;
   private URI nnURI;
-  private Configuration conf;
 
   private ObserverReadProxyProvider<ClientProtocol> proxyProvider;
+  @Mock private Logger logger;
+
   private NameNodeAnswer[] namenodeAnswers;
   private String[] namenodeAddrs;
 
   @Before
   public void setup() throws Exception {
     ns = "testcluster";
     nnURI = URI.create("hdfs://" + ns);
-    conf = new Configuration();
-    conf.set(HdfsClientConfigKeys.DFS_NAMESERVICES, ns);
-    // Set observer probe retry period to 0. Required by the tests that
-    // transition observer back and forth
-    conf.setTimeDuration(
-        OBSERVER_PROBE_RETRY_PERIOD_KEY, 0, TimeUnit.MILLISECONDS);
-    conf.setBoolean(HdfsClientConfigKeys.Failover.RANDOM_ORDER, false);
+
+    MockitoAnnotations.initMocks(this);
+  }
+
+  /**
+   * Replace LOG in ObserverReadProxy with a mocked logger.
+   */
+  private void setupMockLoggerForProxyProvider()

Review Comment:
   this is much easier! Thanks. It had taken me a couple of days to figure out 
a way to verify the Logger with a mocked Logger.
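For reference, here is a minimal, self-contained sketch of the pattern being
discussed: the test swaps in a mocked org.slf4j.Logger and then uses Mockito to
verify that the expected message was logged. This is not the exact test from the
PR; the injection hook into ObserverReadProxyProvider and the message text are
assumptions for illustration only.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.slf4j.Logger;

public class MockedLoggerExample {
  public static void main(String[] args) {
    // Create a mocked SLF4J logger; Mockito records every call made to it.
    Logger logger = mock(Logger.class);

    // In the real test this call would happen inside ObserverReadProxyProvider
    // once the test has swapped its LOG field for the mock (hypothetical hook).
    logger.warn("Timed out waiting for HA state of namenode {}", "nn1");

    // Assert that the warning was emitted with the expected arguments.
    verify(logger).warn("Timed out waiting for HA state of namenode {}", "nn1");
  }
}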





> Limit wait time for getHAServiceState in ObserverReaderProxy
> ------------------------------------------------------------
>
>                 Key: HDFS-17030
>                 URL: https://issues.apache.org/jira/browse/HDFS-17030
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>    Affects Versions: 3.4.0
>            Reporter: Xing Lin
>            Assignee: Xing Lin
>            Priority: Minor
>              Labels: pull-request-available
>
> When namenode HA is enabled and a standby NN is not responsive, we have 
> observed that it can take a long time to serve a request, even though a 
> healthy observer or active NN is available. 
> Basically, when a standby is down, the RPC client will (re)try to create a 
> socket connection to that standby for _ipc.client.connect.timeout_ _* 
> ipc.client.connect.max.retries.on.timeouts_ before giving up. When we take a 
> heap dump at a standby, the NN still accepts the socket connection but it 
> won't send responses to these RPC requests, and we time out only after 
> _ipc.client.rpc-timeout.ms._ This adds significant latency. For clusters 
> at LinkedIn, we set _ipc.client.rpc-timeout.ms_ to 120 seconds, so a 
> request takes more than 2 minutes to complete when we take a heap dump at a 
> standby. This has been causing user job failures. 
> We could set _ipc.client.rpc-timeout.ms_ to a smaller value when sending 
> getHAServiceState requests in ObserverReadProxyProvider (for user RPC 
> requests, we would still use the original value from the config). However, 
> that would double the number of socket connections between clients and the 
> NN (which is a deal-breaker). 
> The proposal is to add a timeout on getHAServiceState() calls in 
> ObserverReadProxyProvider and only wait up to that timeout for an NN to 
> respond with its HA state. Once the timeout passes, we move on to probe the 
> next NN. 
>  
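As a rough illustration of the proposal (not the actual patch), the HA-state
probe could be submitted to an executor and awaited with a bounded timeout; on
expiry the provider gives up on that NN and probes the next one. The class and
member names below (HAStateProbeSketch, getHAServiceStateWithTimeout,
probeTimeoutMs) are made up for this sketch.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class HAStateProbeSketch {

  /** Stand-in for org.apache.hadoop.ha.HAServiceProtocol.HAServiceState. */
  enum HAServiceState { ACTIVE, STANDBY, OBSERVER, UNKNOWN }

  private final ExecutorService probeExecutor =
      Executors.newSingleThreadExecutor();

  /**
   * Run the HA-state probe on a separate thread and wait at most
   * probeTimeoutMs. On timeout (or any other failure) return UNKNOWN so the
   * caller can move on to the next namenode instead of blocking for the full
   * ipc.client.rpc-timeout.ms.
   */
  HAServiceState getHAServiceStateWithTimeout(Callable<HAServiceState> probe,
                                              long probeTimeoutMs) {
    Future<HAServiceState> future = probeExecutor.submit(probe);
    try {
      return future.get(probeTimeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      future.cancel(true); // stop waiting on the unresponsive namenode
      return HAServiceState.UNKNOWN;
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // preserve the interrupt status
      return HAServiceState.UNKNOWN;
    } catch (ExecutionException e) {
      return HAServiceState.UNKNOWN;
    }
  }
}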



