tomscut commented on PR #4945:
URL: https://github.com/apache/hadoop/pull/4945#issuecomment-1295157989
> ```
> final FsVolumeImpl fsVolume =
>     createFsVolume(sd.getStorageUuid(), sd, location);
> // no need to add lock
> final ReplicaMap tempVolumeMap = new ReplicaMap();
> ArrayList<IOException> exceptions = Lists.newArrayList();
>
> for (final NamespaceInfo nsInfo : nsInfos) {
>   String bpid = nsInfo.getBlockPoolID();
>   try (AutoCloseDataSetLock l =
>       lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) {
>     fsVolume.addBlockPool(bpid, this.conf, this.timer);
>     fsVolume.getVolumeMap(bpid, tempVolumeMap, ramDiskReplicaTracker);
>   } catch (IOException e) {
>     LOG.warn("Caught exception when adding " + fsVolume +
>         ". Will throw later.", e);
>     exceptions.add(e);
>   }
> }
> ```
>
> The `fsVolume` here is a local temporary variable that has not yet been added to `volumes`, and the add/remove block pool operations only operate on the volumes already in `volumes`, so there is no conflict. Therefore the lock for `BlockPoolSlice` is not needed here.
>
> @Hexiaoqiao Sir, could you check it again?
I agree with @ZanderXu here. +1 from my side.
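
The argument above ("a local temporary needs no lock; only publication does") can be sketched as a minimal, self-contained analogue. The class and field names below (`Volume`, `volumes`, `publishLock`) are illustrative only, not the actual Hadoop `FsVolumeImpl`/`FsVolumeList` API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the "build privately, publish under lock" pattern:
// a volume is fully populated while it is still thread-local,
// and a lock is taken only when it becomes visible to other threads.
public class VolumePublishSketch {

  static class Volume {
    // Per-block-pool replica lists, analogous to a per-volume replica map.
    final Map<String, List<String>> replicasByBlockPool = new HashMap<>();

    void addBlockPool(String bpid) {
      replicasByBlockPool.put(bpid, new ArrayList<>());
    }
  }

  private final List<Volume> volumes = new ArrayList<>();
  private final ReentrantReadWriteLock publishLock = new ReentrantReadWriteLock();

  Volume prepareVolume(List<String> bpids) {
    // The new volume is a local temporary: no other thread can reach it,
    // so populating its block pools requires no synchronization.
    Volume v = new Volume();
    for (String bpid : bpids) {
      v.addBlockPool(bpid);
    }
    return v;
  }

  void publish(Volume v) {
    // Only this step, which makes the volume reachable via `volumes`,
    // needs to be done under the write lock.
    publishLock.writeLock().lock();
    try {
      volumes.add(v);
    } finally {
      publishLock.writeLock().unlock();
    }
  }

  int volumeCount() {
    publishLock.readLock().lock();
    try {
      return volumes.size();
    } finally {
      publishLock.readLock().unlock();
    }
  }
}
```

Under this reading, dropping the `BLOCK_POOl` lock around the temporary-map population is safe because no concurrent reader can observe the volume until the final publish step.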
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]