[
https://issues.apache.org/jira/browse/HDFS-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17802587#comment-17802587
]
Shilun Fan commented on HDFS-17125:
-----------------------------------
Bulk update: moved all 3.4.0 non-blocker issues; please move this back if it is a
blocker. Retargeted to 3.5.0.
> Method checkAndUpdate should also resolve duplicate replicas when
> memBlockInfo.metadataExists() returns false
> ------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-17125
> URL: https://issues.apache.org/jira/browse/HDFS-17125
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 3.4.0
> Reporter: farmmamba
> Assignee: farmmamba
> Priority: Major
>
> In the method FsDatasetImpl#checkAndUpdate there is the following code snippet:
> {code:java}
>       if (memBlockInfo.blockDataExists()) {
>         if (memBlockInfo.getBlockURI().compareTo(diskFile.toURI()) != 0) {
>           if (diskMetaFileExists) {
>             if (memBlockInfo.metadataExists()) {
>               // We have two sets of block+meta files. Decide which one to
>               // keep.
>               ReplicaInfo diskBlockInfo =
>                   new ReplicaBuilder(ReplicaState.FINALIZED)
>                       .setBlockId(blockId)
>                       .setLength(diskFile.length())
>                       .setGenerationStamp(diskGS)
>                       .setFsVolume(vol)
>                       .setDirectoryToUse(diskFile.getParentFile())
>                       .build();
>               ((FsVolumeImpl) vol).resolveDuplicateReplicas(bpid,
>                   memBlockInfo, diskBlockInfo, volumeMap);
>             }
>           } else {
>             // .....
>           }
>           if (!fileIoProvider.delete(vol, diskFile)) {
>             LOG.warn("Failed to delete " + diskFile);
>           }
>         }
>       }
>     } {code}
> The code calls resolveDuplicateReplicas only when memBlockInfo.metadataExists()
> returns true; when it returns false, the on-disk copy is deleted without any
> comparison of the two replicas. Should similar resolution logic be added for
> the false case as well?
--
This message was sent by Atlassian Jira
(v8.20.10#820010)