Hexiaoqiao commented on PR #4945:
URL: https://github.com/apache/hadoop/pull/4945#issuecomment-1273259888
I totally agree that we should not hold the lock during any IO operation,
especially a scan of the whole disk; that would be a disaster even for a
volume refresh. (Of course this does not apply while the DataNode instance
is restarting.)
Back to this case. I think the point is that we should hold the block pool
lock (probably the write lock here) only while getting/setting the
`BlockPoolSlice`, rather than one coarse-grained lock.
So should we split the following segment, hold the lock only for the
`BlockPoolSlice` update, and leave the other logic without any lock? I think
any conflicts or other exceptions would be acceptable, since they would
affect only the one volume that is being added.
```java
try (AutoCloseDataSetLock l =
    lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) {
  fsVolume.addBlockPool(bpid, this.conf, this.timer);
  fsVolume.getVolumeMap(bpid, tempVolumeMap, ramDiskReplicaTracker);
}
```
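To make the idea concrete, here is a minimal, self-contained sketch of the split. It uses a plain `ReentrantReadWriteLock` and illustrative names (`scanVolume`, `volumeMap`) rather than Hadoop's `DataSetLockManager` and `FsDatasetImpl` APIs; the only point it demonstrates is doing the slow scan outside the lock and taking the write lock just to publish the result:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical stand-in for the addBlockPool flow: the expensive disk scan
// runs WITHOUT the block-pool lock, and the write lock guards only the short
// map update. None of these names are real Hadoop APIs.
public class AddBlockPoolSketch {
  private final ReentrantReadWriteLock bpLock = new ReentrantReadWriteLock();
  private final Map<String, Map<Long, String>> volumeMap = new HashMap<>();

  // Step 1: scan the volume outside any lock (the slow IO part, faked here).
  Map<Long, String> scanVolume(String bpid) {
    Map<Long, String> replicas = new HashMap<>();
    replicas.put(1001L, "blk_1001 on " + bpid); // pretend on-disk scan result
    return replicas;
  }

  // Step 2: take the write lock only to publish the scan result.
  void addBlockPool(String bpid) {
    Map<Long, String> tempVolumeMap = scanVolume(bpid); // no lock held here
    bpLock.writeLock().lock();
    try {
      volumeMap.put(bpid, tempVolumeMap); // short critical section
    } finally {
      bpLock.writeLock().unlock();
    }
  }

  Map<Long, String> get(String bpid) {
    bpLock.readLock().lock();
    try {
      return volumeMap.get(bpid);
    } finally {
      bpLock.readLock().unlock();
    }
  }

  public static void main(String[] args) {
    AddBlockPoolSketch ds = new AddBlockPoolSketch();
    ds.addBlockPool("BP-1");
    System.out.println(ds.get("BP-1").get(1001L));
  }
}
```

If the scan fails, nothing has been published and the other block pools are untouched, which is why a failure would be confined to the single volume being added.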
WDYT? cc @MingXiangLi @ZanderXu
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]