brfrn169 commented on a change in pull request #1954: HDFS-15217 Add more information to longest write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954#discussion_r407152458
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
 ##########
 @@ -159,10 +160,14 @@ public void readLockInterruptibly() throws InterruptedException {
   }
 
   public void readUnlock() {
-    readUnlock(OP_NAME_OTHER);
+    readUnlock(OP_NAME_OTHER, null);
   }
 
   public void readUnlock(String opName) {
+    readUnlock(opName, null);
+  }
+
+  public void readUnlock(String opName, Supplier<String> lockReportInfoSupplier) {
 
 Review comment:
   I use this Supplier in the following places:
   
https://github.com/brfrn169/hadoop/blob/HDFS-15217/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java#L189-L190
   
https://github.com/brfrn169/hadoop/blob/HDFS-15217/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java#L298-L299
   
   The reason I use a Supplier is that we don't always print the lock report; we print it only when the lock hold time exceeds the threshold (**dfs.namenode.write-lock-reporting-threshold-ms** or **dfs.namenode.read-lock-reporting-threshold-ms**). With **lockReportInfoSupplier**, we can build the additional information lazily, so the cost is only paid when the report is actually emitted.
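   As a rough illustration of that lazy-evaluation idea (not the actual FSNamesystemLock code), here is a minimal standalone sketch. The class name, the threshold constant, the `lockHeldIntervalMs` parameter, the `getBlocks` op name, and the log output are all simplified, hypothetical stand-ins for the real fields and configuration keys:

```java
import java.util.function.Supplier;

// Simplified sketch: a Supplier lets us skip building the report info
// entirely when the lock was not held long enough to be reported.
public class LazyLockReportSketch {

  // Hypothetical stand-in for dfs.namenode.read-lock-reporting-threshold-ms.
  private static final long READ_LOCK_REPORTING_THRESHOLD_MS = 5000;

  public void readUnlock(String opName, Supplier<String> lockReportInfoSupplier,
      long lockHeldIntervalMs) {
    // ... the actual lock release would happen here ...

    if (lockHeldIntervalMs >= READ_LOCK_REPORTING_THRESHOLD_MS) {
      // Only now is the (potentially expensive) report info built.
      String lockReportInfo =
          lockReportInfoSupplier != null ? lockReportInfoSupplier.get() : "";
      System.out.println("Read lock held for " + lockHeldIntervalMs
          + " ms by " + opName + ": " + lockReportInfo);
    }
    // Below the threshold, lockReportInfoSupplier.get() is never called,
    // so no string building/formatting cost is paid on the common path.
  }

  public static void main(String[] args) {
    LazyLockReportSketch lock = new LazyLockReportSketch();
    // The caller passes a lambda; its body runs only if the threshold is exceeded.
    lock.readUnlock("getBlocks",
        () -> "src=/some/path, dst=/other/path",  // hypothetical extra info
        6000);
  }
}
```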
