ZanderXu commented on PR #4945:
URL: https://github.com/apache/hadoop/pull/4945#issuecomment-1261833655

   @MingXiangLi @Hexiaoqiao Thanks for the warm discussion.
   > 1. When fsVolume.getVolumeMap() scans the blocks from disk to add their metadata, it may add new block metadata while another thread is adding a block.
   
   At this point the volume has not yet been added to the FsVolumeList, which means no other thread can add a new block into this volume. So this case should not exist, right?
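The ordering can be sketched as follows. The class and method names below are hypothetical stand-ins, not the real Hadoop types; the sketch only illustrates why a scan of an unpublished volume cannot race with writers:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-ins for FsVolumeList/FsVolumeImpl, illustrating that
// a volume which has not yet been published to the volume list is invisible
// to writers, so scanning it without the dataset lock cannot race with a
// concurrent block addition.
class VolumeListSketch {
  static class Volume {
    final Map<Long, String> onDiskBlocks = new HashMap<>();

    // Scan the blocks on disk; safe without the dataset lock while the
    // volume is still unpublished.
    Map<Long, String> getVolumeMap() {
      return new HashMap<>(onDiskBlocks);
    }
  }

  // Stand-in for FsVolumeList: writers only see volumes added here.
  final List<Volume> volumes = new ArrayList<>();

  Volume chooseVolumeForWrite() {
    return volumes.isEmpty() ? null : volumes.get(0);
  }

  // addVolume: scan first, publish (activateVolume) afterwards.
  Map<Long, String> addVolume(Volume v) {
    Map<Long, String> scanned = v.getVolumeMap(); // step 1: off-lock scan
    volumes.add(v);                               // step 2: publish
    return scanned;
  }

  public static void main(String[] args) {
    VolumeListSketch ds = new VolumeListSketch();
    Volume v = new Volume();
    v.onDiskBlocks.put(1001L, "blk_1001");
    // Before addVolume publishes v, a writer cannot reach it.
    System.out.println(ds.chooseVolumeForWrite() == null);
    System.out.println(ds.addVolume(v).size());
    System.out.println(ds.chooseVolumeForWrite() == v);
  }
}
```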
   
   > 2. How do we handle the conflict when a removeBlockPool operation occurs at the same time?
   
   removeBlockPool only removes the blocks from the in-memory replicaMap; it does not delete blocks on disk. So a removeBlockPool operation will not affect the disk scan. Let's walk through the possible interleavings:
   
   Case 1: This is fine.
   1. fsVolume.addBlockPool();
   2. volumeMap.cleanUpBlockPool(bpid);
   3. volumes.removeBlockPool(bpid, blocksPerVolume); this removes the blockPoolSlice
   4. fsVolume.getVolumeMap(); this throws an IOException, because the blockPoolSlice is null.
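Case 1's failure mode can be sketched like this (hypothetical names, not the real FsVolumeImpl code): once removeBlockPool has dropped the blockPoolSlice, a later getVolumeMap for that bpid fails fast instead of resurrecting stale state.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of Case 1: a missing blockPoolSlice means the block
// pool was removed, so the scan aborts with an IOException rather than
// re-adding replicas for a removed pool.
class BlockPoolRemovalSketch {
  private final Map<String, Object> bpSlices = new HashMap<>(); // bpid -> slice

  void addBlockPool(String bpid) {
    bpSlices.put(bpid, new Object());
  }

  void removeBlockPool(String bpid) {
    bpSlices.remove(bpid);
  }

  void getVolumeMap(String bpid) throws IOException {
    if (bpSlices.get(bpid) == null) {
      throw new IOException("block pool " + bpid + " is not found");
    }
    // ... scan the on-disk blocks of this block pool ...
  }

  public static void main(String[] args) {
    BlockPoolRemovalSketch v = new BlockPoolRemovalSketch();
    v.addBlockPool("BP-1");    // 1. fsVolume.addBlockPool()
    v.removeBlockPool("BP-1"); // 3. removeBlockPool drops the slice
    try {
      v.getVolumeMap("BP-1");  // 4. the scan fails fast
    } catch (IOException e) {
      System.out.println("caught: " + e.getMessage());
    }
  }
}
```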
   
   Case 2:
   1. addVolume gets the NamespaceInfo
   2. volumeMap.cleanUpBlockPool(bpid);
   3. volumes.removeBlockPool(bpid, blocksPerVolume); this removes the blockPoolSlice
   4. fsVolume.addBlockPool();
   5. fsVolume.getVolumeMap(); this succeeds
   6. activateVolume(); this adds the removed bpid back into the replicaMap. That is probably not expected, but it is a separate problem and should be fixed in another PR.
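The surprising step 6 can be sketched as follows (hypothetical names again): activateVolume merges the scanned replicas back into the global replicaMap without re-checking whether the bpid still exists, so a removed pool reappears in memory.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of Case 2: cleanUpBlockPool removes the bpid from
// the in-memory replica map, but a concurrent addVolume that has already
// scanned the disk merges its result back in during activateVolume, so
// the removed bpid shows up again.
class ActivateVolumeSketch {
  final Map<String, Set<Long>> replicaMap = new HashMap<>(); // bpid -> block ids

  void cleanUpBlockPool(String bpid) {
    replicaMap.remove(bpid);
  }

  // Merge the volume's scanned replicas into the global map; note there
  // is no check that bpid is still a live block pool.
  void activateVolume(String bpid, Set<Long> scanned) {
    replicaMap.computeIfAbsent(bpid, k -> new HashSet<>()).addAll(scanned);
  }

  public static void main(String[] args) {
    ActivateVolumeSketch ds = new ActivateVolumeSketch();
    ds.replicaMap.put("BP-1", new HashSet<>(Set.of(1L)));
    ds.cleanUpBlockPool("BP-1");        // 2. removeBlockPool cleans the map
    Set<Long> scanned = Set.of(1001L);  // 5. getVolumeMap result
    ds.activateVolume("BP-1", scanned); // 6. the removed bpid comes back
    System.out.println(ds.replicaMap.containsKey("BP-1"));
  }
}
```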
   
   In my view, holding the lock while scanning the disk for a long time is not acceptable. If there are any conflicts here, we can identify them and fix them with a separate solution.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

