ZanderXu commented on PR #4945:
URL: https://github.com/apache/hadoop/pull/4945#issuecomment-1262143802

   ```
   try (AutoCloseDataSetLock l = lockManager.readLock(LockLevel.VOLUME, bpid,
       fsVolume.getStorageID())) {
     fsVolume.addBlockPool(bpid, this.conf, this.timer);
     fsVolume.getVolumeMap(bpid, tempVolumeMap, ramDiskReplicaTracker);
   } catch (IOException e) {
     LOG.warn("Caught exception when adding " + fsVolume +
         ". Will throw later.", e);
     exceptions.add(e);
   }
   ```
   Changing the code as above? Hmm... holding the BP read lock for that long
   would have a significant impact on operations that need to acquire the BP
   write lock, such as invalidate, recoverAppend, and createTemporary.
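
   To make the contention concern concrete, here is a minimal standalone
   sketch. It uses a plain ReentrantReadWriteLock (not the actual
   DataSetLockManager; all names here are made up) to show how one
   long-held read lock stalls every writer:

   ```
   import java.util.concurrent.locks.ReentrantReadWriteLock;

   public class ReadLockStall {
     private static final ReentrantReadWriteLock bpLock =
         new ReentrantReadWriteLock();

     public static void main(String[] args) throws InterruptedException {
       // Reader: stands in for holding the BP read lock across the whole
       // addBlockPool/getVolumeMap scan, which is slow and disk-bound.
       Thread slowReader = new Thread(() -> {
         bpLock.readLock().lock();
         try {
           Thread.sleep(5000); // stand-in for the long volume scan
         } catch (InterruptedException ignored) {
         } finally {
           bpLock.readLock().unlock();
         }
       });

       // Writer: stands in for invalidate/recoverAppend/createTemporary,
       // which need the BP write lock and are blocked for the full scan.
       Thread writer = new Thread(() -> {
         long start = System.nanoTime();
         bpLock.writeLock().lock();
         try {
           System.out.println("writer waited "
               + (System.nanoTime() - start) / 1_000_000 + " ms");
         } finally {
           bpLock.writeLock().unlock();
         }
       });

       slowReader.start();
       Thread.sleep(100); // let the reader take the lock first
       writer.start();
       slowReader.join();
       writer.join();
     }
   }
   ```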
   
   The current logic relies on IOException to handle the conflict case, and I
   think that is fine. Besides, there was no lock here at all before
   HDFS-15382, which suggests this is acceptable. If we ever find a concrete
   conflict case, we can fix it with an IOException.
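
   For contrast, here is a rough sketch of the pattern the current code
   follows (the VolumeTask type is hypothetical, not the real FsDatasetImpl
   API): run the slow per-volume work without holding the BP lock, and let
   any concurrent conflict surface as an IOException that is collected and
   rethrown later, like the exceptions.add(e) in the snippet above:

   ```
   import java.io.IOException;
   import java.util.ArrayList;
   import java.util.List;

   public class CollectAndRethrow {
     // Hypothetical stand-in for the per-volume work; in the real code
     // this would be fsVolume.addBlockPool(...) / fsVolume.getVolumeMap(...).
     interface VolumeTask {
       void run() throws IOException;
     }

     static void addVolumes(List<VolumeTask> tasks) throws IOException {
       List<IOException> exceptions = new ArrayList<>();
       for (VolumeTask task : tasks) {
         try {
           // No long-held BP lock here: if a concurrent invalidate or
           // similar operation races with this task, the task fails with
           // an IOException instead of blocking all writers.
           task.run();
         } catch (IOException e) {
           // Same shape as the quoted snippet: remember it, throw later.
           exceptions.add(e);
         }
       }
       if (!exceptions.isEmpty()) {
         throw exceptions.get(0); // "Will throw later."
       }
     }
   }
   ```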
   

