farmmamba created HDFS-17125:
--------------------------------

             Summary: Method checkAndUpdate should also resolve duplicate replicas when memBlockInfo.metadataExists() returns false
                 Key: HDFS-17125
                 URL: https://issues.apache.org/jira/browse/HDFS-17125
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: datanode
    Affects Versions: 3.4.0
            Reporter: farmmamba
            Assignee: farmmamba
In method FsDatasetImpl#checkAndUpdate, there is the following code snippet:

{code:java}
if (memBlockInfo.blockDataExists()) {
  if (memBlockInfo.getBlockURI().compareTo(diskFile.toURI()) != 0) {
    if (diskMetaFileExists) {
      if (memBlockInfo.metadataExists()) {
        // We have two sets of block+meta files. Decide which one to
        // keep.
        ReplicaInfo diskBlockInfo = new ReplicaBuilder(ReplicaState.FINALIZED)
            .setBlockId(blockId)
            .setLength(diskFile.length())
            .setGenerationStamp(diskGS)
            .setFsVolume(vol)
            .setDirectoryToUse(diskFile.getParentFile())
            .build();
        ((FsVolumeImpl) vol).resolveDuplicateReplicas(bpid, memBlockInfo,
            diskBlockInfo, volumeMap);
      }
    } else {
      // .....
    }
    if (!fileIoProvider.delete(vol, diskFile)) {
      LOG.warn("Failed to delete " + diskFile);
    }
  }
}
{code}

It only calls resolveDuplicateReplicas when memBlockInfo.metadataExists() returns true. Do we also need to handle the case where memBlockInfo.metadataExists() returns false, given that diskMetaFileExists is already known to be true in this branch?
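Purely to illustrate the question (this is a rough sketch, not a tested or committed change), the false branch could mirror the existing handling, reusing the variables already in scope in checkAndUpdate (blockId, diskGS, vol, bpid, volumeMap, diskFile):

{code:java}
if (diskMetaFileExists) {
  if (memBlockInfo.metadataExists()) {
    // Existing handling: two complete block+meta pairs, as quoted above.
  } else {
    // Sketch only: the on-disk copy has both block and meta files while
    // the in-memory replica is missing its meta file. One option is to
    // build a ReplicaInfo for the on-disk copy and reuse the same
    // duplicate-resolution logic; whether that is the right policy here
    // is exactly what this issue is asking.
    ReplicaInfo diskBlockInfo = new ReplicaBuilder(ReplicaState.FINALIZED)
        .setBlockId(blockId)
        .setLength(diskFile.length())
        .setGenerationStamp(diskGS)
        .setFsVolume(vol)
        .setDirectoryToUse(diskFile.getParentFile())
        .build();
    ((FsVolumeImpl) vol).resolveDuplicateReplicas(bpid, memBlockInfo,
        diskBlockInfo, volumeMap);
  }
}
{code}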