[ https://issues.apache.org/jira/browse/HDFS-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17907766#comment-17907766 ]

ASF GitHub Bot commented on HDFS-16757:
---------------------------------------

tomscut commented on code in PR #6926:
URL: https://github.com/apache/hadoop/pull/6926#discussion_r1895316782


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -3863,5 +3864,27 @@ public void setLastDirScannerFinishTime(long time) {
   public long getPendingAsyncDeletions() {
     return asyncDiskService.countPendingDeletions();
   }
+
+  @Override
+  public void hardLinkOneBlock(ExtendedBlock srcBlock, ExtendedBlock dstBlock) throws IOException {
+    BlockLocalPathInfo blpi = getBlockLocalPathInfo(srcBlock);
+    FsVolumeImpl v = getVolume(srcBlock);
+
+    try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.VOLUME, dstBlock.getBlockPoolId(),
+        v.getStorageID())) {
+      File src = new File(blpi.getBlockPath());
+      File srcMeta = new File(blpi.getMetaPath());
+      BlockPoolSlice dstBPS = v.getBlockPoolSlice(dstBlock.getBlockPoolId());
+      File dstBlockFile = dstBPS.hardLinkOneBlock(src, srcMeta, dstBlock.getLocalBlock());
+
+      ReplicaInfo replicaInfo =
+          new LocalReplicaInPipeline(dstBlock.getBlockId(), dstBlock.getGenerationStamp(), v,
+              dstBlockFile.getParentFile(), dstBlock.getLocalBlock().getNumBytes());
+      dstBlockFile = dstBPS.addFinalizedBlock(dstBlock.getLocalBlock(), replicaInfo);
+      replicaInfo = new FinalizedReplica(dstBlock.getLocalBlock(), getVolume(srcBlock),
+          dstBlockFile.getParentFile());
+      volumeMap.add(dstBlock.getBlockPoolId(), replicaInfo);

Review Comment:
   > The dst volume write lock protects the atomicity of the hard-link file creation and volumeMap.add(). Ideally we would take the src volume read lock plus the dst volume write lock to protect both the src and dst file operations.
   > 
   > Likewise, in DataXceiver, writeBlock(), readBlock() and transferBlock() only take the lock around block metadata operations.
   > 
   > So I think this is enough for now. @Hexiaoqiao @KeeProMise
   
   Based on the offline discussion, I think this change is acceptable for now. However, we need to restrict the scenario: for example, a directory that is being updated cannot use fastcopy.
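The lock ordering discussed above can be sketched with plain JDK read/write locks. This is a hypothetical illustration only (VolumeLockSketch and its fields are not Hadoop classes): a read lock on the source volume lets concurrent readers of the source block proceed, while a write lock on the destination volume keeps the hard-link creation and the volumeMap update atomic with respect to other writers.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the proposed lock ordering: src volume read lock,
// then dst volume write lock. Names here are illustrative, not HDFS APIs.
public class VolumeLockSketch {
    private final ReentrantReadWriteLock srcVolumeLock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock dstVolumeLock = new ReentrantReadWriteLock();

    public String hardLinkOneBlock(String srcBlock, String dstBlock) {
        srcVolumeLock.readLock().lock();       // readers of src may proceed concurrently
        try {
            dstVolumeLock.writeLock().lock();  // dst mutation is exclusive
            try {
                // the hard-link file creation and volumeMap.add() would happen here
                return srcBlock + "->" + dstBlock;
            } finally {
                dstVolumeLock.writeLock().unlock();
            }
        } finally {
            srcVolumeLock.readLock().unlock();
        }
    }
}
```

Acquiring the read lock before the write lock in a fixed order also avoids deadlock if two such copies run concurrently in opposite directions.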





> Add a new method copyBlockCrossNamespace to DataNode
> ----------------------------------------------------
>
>                 Key: HDFS-16757
>                 URL: https://issues.apache.org/jira/browse/HDFS-16757
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: ZanderXu
>            Assignee: liuguanghua
>            Priority: Minor
>              Labels: pull-request-available
>
> Add a new method copyBlockCrossNamespace in DataTransferProtocol on the 
> DataNode side.
> This method copies a source block from one namespace to a target block in 
> a different namespace. If the target DN is the same as the current DN, 
> the block is copied via a hard link. If the target DN is different from 
> the current DN, the block is copied via transferBlock.
> This method will take the following parameters:
>  * ExtendedBlock sourceBlock
>  * Token<BlockTokenIdentifier> sourceBlockToken
>  * ExtendedBlock targetBlock
>  * Token<BlockTokenIdentifier> targetBlockToken
>  * DatanodeInfo targetDN
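
The same-DataNode path described above can be illustrated with the JDK's standard hard-link API. This is a minimal, self-contained sketch (HardLinkDemo and the block file names are made up, not HDFS code): when source and target block files live on the same filesystem, a hard link makes the "copy" instantaneous because no data bytes are duplicated.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: hard-linking a block file instead of copying its bytes,
// as in the same-DataNode case of copyBlockCrossNamespace.
public class HardLinkDemo {
    public static long linkAndSize(Path src, Path dst) throws Exception {
        Files.createLink(dst, src);  // dst now shares src's inode; no data copied
        return Files.size(dst);      // same content is visible through dst
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("blk");
        Path src = dir.resolve("blk_1001");          // hypothetical source block file
        Files.write(src, new byte[]{1, 2, 3, 4});
        long n = linkAndSize(src, dir.resolve("blk_2001"));
        System.out.println(n); // prints 4
    }
}
```

Note that Files.createLink fails with an exception if src and dst are on different filesystems, which is one reason the different-DN case must fall back to an actual byte transfer.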



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
