xyuanlu commented on code in PR #2409:
URL: https://github.com/apache/helix/pull/2409#discussion_r1144144574


##########
meta-client/src/main/java/org/apache/helix/metaclient/impl/zk/ZkMetaClient.java:
##########
@@ -394,4 +414,69 @@ public byte[] serialize(T data, String path) {
   public T deserialize(byte[] bytes, String path) {
     return _zkClient.deserialize(bytes, path);
   }
+
+  /**
+   * A cleanup method called when the connection state changes or the MetaClient is closing.
+   * @param cancel Whether to cancel the reconnect monitor thread.
+   * @param close Whether to close the ZkClient.
+   */
+  private void cleanUpAndClose(boolean cancel, boolean close) {
+    _zkClientConnectionMutex.lock();
+
+    if (close && !_zkClient.isClosed()) {
+      _zkClient.close();
+      LOG.info("ZkClient is closed");
+    }
+
+    if (cancel && _reconnectMonitorFuture != null) {
+      _reconnectMonitorFuture.cancel(true);
+      LOG.info("ZkClient reconnect monitor thread is canceled");
+    }
+
+    _zkClientConnectionMutex.unlock();
+  }
+
+  // Schedule a monitor to track ZkClient auto reconnect when Disconnected
+  // Cancel the monitor thread when connected.
+  @Override
+  public void handleStateChanged(Watcher.Event.KeeperState state) throws Exception {
+    if (state == Watcher.Event.KeeperState.Disconnected) {
+      // Disconnected: start a new reconnect monitor
+      _zkClientConnectionMutex.lockInterruptibly();

Review Comment:
   I would prefer to use one mutex: introducing two locks and acquiring them one after another carries a higher risk of deadlock. I am open to discussion :D
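   The single-mutex pattern could be sketched as below. This is only an illustration, not the PR's actual code: the field names mirror `_zkClient` / `_reconnectMonitorFuture` from the hunk, but the boolean "client" flag and the sleeping monitor task are hypothetical stand-ins for the real ZkClient and reconnect logic. One `ReentrantLock` guards both pieces of state, and `try`/`finally` guarantees the unlock on every exit path (including a throw from `close()` or `cancel()`):

   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.Future;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.locks.ReentrantLock;

   // Sketch of single-mutex cleanup; the client flag and monitor task are
   // hypothetical stand-ins for ZkClient and the reconnect monitor.
   public class SingleMutexCleanup {
     private final ReentrantLock connectionMutex = new ReentrantLock();
     private final ExecutorService scheduler = Executors.newSingleThreadExecutor();
     private volatile boolean clientClosed = false;
     private Future<?> reconnectMonitorFuture;

     // Stand-in for scheduling the reconnect monitor on Disconnected.
     public void startMonitor() {
       connectionMutex.lock();
       try {
         reconnectMonitorFuture = scheduler.submit(() -> {
           try {
             TimeUnit.HOURS.sleep(1); // placeholder for reconnect polling
           } catch (InterruptedException ignored) {
             // cancel(true) interrupts us; just exit
           }
         });
       } finally {
         connectionMutex.unlock();
       }
     }

     // One lock guards both the client shutdown and the monitor future;
     // try/finally guarantees the unlock even if a step throws.
     public void cleanUpAndClose(boolean cancel, boolean close) {
       connectionMutex.lock();
       try {
         if (close && !clientClosed) {
           clientClosed = true; // stand-in for _zkClient.close()
         }
         if (cancel && reconnectMonitorFuture != null) {
           reconnectMonitorFuture.cancel(true);
           reconnectMonitorFuture = null;
         }
       } finally {
         connectionMutex.unlock();
       }
     }

     public boolean isClosed() { return clientClosed; }
     public boolean monitorCancelled() { return reconnectMonitorFuture == null; }

     public static void main(String[] args) {
       SingleMutexCleanup c = new SingleMutexCleanup();
       c.startMonitor();
       c.cleanUpAndClose(true, true);
       System.out.println(c.isClosed() && c.monitorCancelled()); // prints true
       c.scheduler.shutdownNow();
     }
   }
   ```

   With a single lock there is no acquisition ordering to get wrong, so the deadlock scenario the comment describes (thread A holds lock 1 and waits on lock 2 while thread B holds lock 2 and waits on lock 1) cannot arise.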



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
