Hexiaoqiao commented on a change in pull request #4016:
URL: https://github.com/apache/hadoop/pull/4016#discussion_r816446862



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
##########
@@ -1378,7 +1378,7 @@ public long monotonicNow() {
     private final BlockingQueue<Runnable> queue;
 
     CommandProcessingThread(BPServiceActor actor) {
-      super("Command processor");
+      setName("Command processor-" + getId());

Review comment:
      > I think the issue here is that, in the case of MiniDfsCluster, all 
datanodes log in one place. That means if we spin up a MiniDfsCluster with 9 
datanodes, all Command processor threads will log to the same place and we 
can't distinguish which thread belongs to which datanode: DN1 through DN9 will 
all have the same name. If we add the namenode address, the thread names will 
still be identical, right? All datanodes in a single MiniDfsCluster are 
connected to the same set of namenodes, right?
   > 
   > Maybe adding the DN address would be a good idea?
   
Thanks @madrob @ayushtkn for your discussion and information. From my side, 
I am more concerned with how to tell which namespace these threads serve when 
setting up a Federation production cluster. That can be helpful for digging 
into issues when something unexpected happens. I totally agree that adding 
`dn.getDisplayName()` or `getId()` or something similar can also improve the 
readability of the MiniDFSCluster logs; IMO we should consider the production 
environment as well. FYI. Of course this is not a FATAL issue, and my comment 
is not a blocker. Please feel free to go ahead if possible.
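
   For illustration only, here is a minimal, self-contained sketch of the 
naming idea, combining `getId()` with an owner tag such as 
`dn.getDisplayName()` or the NameNode address. This is not the actual 
`BPServiceActor` code; `CommandProcessorSketch` and `ownerTag` are 
hypothetical names:

```java
// Minimal sketch, not Hadoop code: CommandProcessorSketch and ownerTag are
// illustrative names. The idea: getId() alone disambiguates threads within
// one JVM (the MiniDFSCluster case), while an owner tag also records which
// DataNode/namespace the thread serves (the Federation case).
public class CommandProcessorSketch extends Thread {

  CommandProcessorSketch(String ownerTag) {
    setName("Command processor-" + getId() + " [" + ownerTag + "]");
    setDaemon(true);
  }

  @Override
  public void run() {
    // Any log line emitted from this thread now carries both pieces of
    // context: the unique thread id and the owner it serves.
    System.out.println("running as: " + getName());
  }

  public static void main(String[] args) throws InterruptedException {
    // "127.0.0.1:9866" stands in for something like dn.getDisplayName().
    Thread t1 = new CommandProcessorSketch("127.0.0.1:9866");
    Thread t2 = new CommandProcessorSketch("127.0.0.1:9867");
    t1.start();
    t2.start();
    t1.join();
    t2.join();
  }
}
```

   With a tag like that, one name covers both the MiniDFSCluster readability 
concern and the Federation namespace concern.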




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


