[ https://issues.apache.org/jira/browse/HDFS-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17625787#comment-17625787 ]

ASF GitHub Bot commented on HDFS-16785:
---------------------------------------

tomscut commented on PR #4945:
URL: https://github.com/apache/hadoop/pull/4945#issuecomment-1295157989

   > ```
   >   final FsVolumeImpl fsVolume =
   >         createFsVolume(sd.getStorageUuid(), sd, location);
   >     // no need to add lock
   >     final ReplicaMap tempVolumeMap = new ReplicaMap();
   >     ArrayList<IOException> exceptions = Lists.newArrayList();
   > 
   >     for (final NamespaceInfo nsInfo : nsInfos) {
   >       String bpid = nsInfo.getBlockPoolID();
   >       try (AutoCloseDataSetLock l =
   >           lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) {
   >         fsVolume.addBlockPool(bpid, this.conf, this.timer);
   >         fsVolume.getVolumeMap(bpid, tempVolumeMap, ramDiskReplicaTracker);
   >       } catch (IOException e) {
   >         LOG.warn("Caught exception when adding " + fsVolume +
   >             ". Will throw later.", e);
   >         exceptions.add(e);
   >       }
   >     }
   > ```
   > 
   > The `fsVolume` here is a local temporary variable that has not yet been 
added to `volumes`, and the add/remove BP operations only use the volumes in 
`volumes`, so there is no conflict. So the lock for `BlockPoolSlice` is not 
needed here.
   > 
   > @Hexiaoqiao Sir, could you check it again?
   
   I agree with @ZanderXu here. +1 from my side.
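
   The pattern being discussed can be sketched outside of Hadoop with 
simplified, hypothetical types (the class, map, and lock names below are 
stand-ins for illustration, not the actual FsVolumeImpl/ReplicaMap API): the 
expensive disk scan fills a map that is reachable only from the current 
thread, so no lock is needed, and the shared lock is taken only for the brief 
publish step.

   ```java
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;
   import java.util.concurrent.locks.ReentrantReadWriteLock;

   // Hypothetical sketch, not the Hadoop implementation: shows why a scan
   // into a thread-local temporary map needs no block-pool lock, while the
   // final publish into shared state does.
   public class AddVolumeSketch {
       // Shared state guarded by the lock (stand-in for the shared replica
       // map); package-private only so the sketch is easy to inspect.
       final Map<String, List<Long>> sharedReplicaMap = new HashMap<>();
       private final ReentrantReadWriteLock bpLock = new ReentrantReadWriteLock();

       // Stand-in for the slow on-disk scan of one block pool.
       private List<Long> scanBlockPool(String bpid) {
           return List.of(1L, 2L, 3L); // placeholder block IDs
       }

       public void addVolume(List<String> bpids) {
           // Phase 1: no lock held. tempMap is a local variable, reachable
           // only from this thread, so concurrent add/remove-BP operations
           // (which only see the shared map) cannot conflict with it.
           Map<String, List<Long>> tempMap = new HashMap<>();
           for (String bpid : bpids) {
               tempMap.put(bpid, scanBlockPool(bpid));
           }

           // Phase 2: take the write lock only for the cheap publish step.
           bpLock.writeLock().lock();
           try {
               sharedReplicaMap.putAll(tempMap);
           } finally {
               bpLock.writeLock().unlock();
           }
       }

       public static void main(String[] args) {
           AddVolumeSketch s = new AddVolumeSketch();
           s.addVolume(List.of("BP-1", "BP-2"));
           System.out.println(s.sharedReplicaMap.size()); // 2
       }
   }
   ```

   With this split, the write lock is held only for the map merge rather 
than for the whole disk scan, which is the improvement the issue asks for.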




> DataNode hold BP write lock to scan disk
> ----------------------------------------
>
>                 Key: HDFS-16785
>                 URL: https://issues.apache.org/jira/browse/HDFS-16785
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Major
>              Labels: pull-request-available
>
> When patching the fine-grained locking of the DataNode, I found that 
> `addVolume` holds the write lock of the BP to scan the new volume to get 
> its blocks. If we try to add a full volume that was fixed offline before, 
> it will hold the write lock for a long time.
> The related code is as follows:
> {code:java}
> for (final NamespaceInfo nsInfo : nsInfos) {
>   String bpid = nsInfo.getBlockPoolID();
>   try (AutoCloseDataSetLock l = lockManager.writeLock(LockLevel.BLOCK_POOl, 
> bpid)) {
>     fsVolume.addBlockPool(bpid, this.conf, this.timer);
>     fsVolume.getVolumeMap(bpid, tempVolumeMap, ramDiskReplicaTracker);
>   } catch (IOException e) {
>     LOG.warn("Caught exception when adding " + fsVolume +
>         ". Will throw later.", e);
>     exceptions.add(e);
>   }
> } {code}
> And I noticed that this lock was added by HDFS-15382, which means this 
> logic was not under the lock before.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
