[
https://issues.apache.org/jira/browse/HDFS-17342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808861#comment-17808861
]
ASF GitHub Bot commented on HDFS-17342:
---------------------------------------
haiyang1987 commented on code in PR #6464:
URL: https://github.com/apache/hadoop/pull/6464#discussion_r1460160197
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##########
@@ -2011,4 +2011,83 @@ public void tesInvalidateMissingBlock() throws Exception
{
cluster.shutdown();
}
}
+
+ @Test
+ public void testCheckFilesWhenInvalidateMissingBlock() throws Exception {
+ long blockSize = 1024;
+ int heartbeatInterval = 1;
+ HdfsConfiguration conf = new HdfsConfiguration();
+ conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, heartbeatInterval);
+ conf.setLong(DFS_BLOCK_SIZE_KEY, blockSize);
+ MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).
+ numDataNodes(1).build();
+ DataNodeFaultInjector oldDnInjector = DataNodeFaultInjector.get();
+ try {
+ cluster.waitActive();
+ BlockReaderTestUtil util = new BlockReaderTestUtil(cluster, new
+ HdfsConfiguration(conf));
+ Path path = new Path("/testFile");
+ util.writeFile(path, 1);
+ String bpid = cluster.getNameNode().getNamesystem().getBlockPoolId();
+ DataNode dn = cluster.getDataNodes().get(0);
+ FsDatasetImpl dnFSDataset = (FsDatasetImpl) dn.getFSDataset();
+ List<ReplicaInfo> replicaInfos = dnFSDataset.getFinalizedBlocks(bpid);
+ assertEquals(1, replicaInfos.size());
+ DFSTestUtil.readFile(cluster.getFileSystem(), path);
+ LocatedBlock blk = util.getFileBlocks(path, 512).get(0);
+ ExtendedBlock block = blk.getBlock();
+
+ // Append to the block with an incremented generation stamp.
+ long newGS = block.getGenerationStamp() + 1;
+ dnFSDataset.append(block, newGS, 1024);
+ block.setGenerationStamp(newGS);
+
+ DataNodeFaultInjector injector = new DataNodeFaultInjector() {
+ @Override
+ public void delayGetMetaDataInputStream() {
+ try {
+ Thread.sleep(8000);
+ } catch (InterruptedException e) {
+ // Ignore exception.
+ }
+ }
+ };
+ // Delay getMetaDataInputStream.
+ DataNodeFaultInjector.set(injector);
+
+ ExecutorService executorService = Executors.newFixedThreadPool(2);
+ try {
+ Future<?> blockReaderFuture = executorService.submit(() -> {
+ try {
+ // Submit tasks for reading block.
+ BlockReaderTestUtil.getBlockReader(cluster.getFileSystem(), blk,
+ 0, 512);
Review Comment:
Thanks @smarthanwang for your review.
`java.io.FileNotFoundException` will not be thrown here, because the
exception is caught and written to the exception stack while the
`DataXceiver` processes the request.
If we want to assert on `FileNotFoundException`, we may need to construct
the `BlockSender` directly.
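As an aside, the rename race this test targets is easy to demonstrate outside HDFS: if a file is moved between the moment its path is resolved and the moment the stream is opened, the open fails with FileNotFoundException even though the data still exists. A minimal, self-contained sketch (plain java.io/java.nio, no HDFS types; the class and file names are illustrative only):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;

public class MoveRaceDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for the RBW data file the reader thread resolves.
        File rbw = File.createTempFile("blk_", ".rbw");
        File finalized = new File(rbw.getParent(), rbw.getName() + ".finalized");

        // The reader resolves the path first...
        File resolved = rbw;

        // ...then the writer moves the file (analogue of finalizing the block).
        Files.move(rbw.toPath(), finalized.toPath());

        // The reader's open now fails, although the block data is intact.
        try (FileInputStream in = new FileInputStream(resolved)) {
            System.out.println("opened");
        } catch (FileNotFoundException e) {
            System.out.println("FileNotFoundException, but data exists: "
                + finalized.exists());
        }
        finalized.delete();
    }
}
```

This is why treating the FileNotFoundException as corruption, without re-checking the disk, can invalidate a healthy replica.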
> Fix DataNode may invalidate normal block causing missing block
> --------------------------------------------------------------
>
> Key: HDFS-17342
> URL: https://issues.apache.org/jira/browse/HDFS-17342
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Reporter: Haiyang Hu
> Assignee: Haiyang Hu
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When users read an appended file, occasional exceptions may occur, such as
> org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: xxx.
> This can happen if one thread is reading the block while the writer thread is
> finalizing it simultaneously.
> *Root cause:*
> # The reader thread obtains an RBW replica from the volume map, such as
> blk_xxx_xxx[RBW], whose data file is at /XXX/rbw/blk_xxx.
> # Simultaneously, the writer thread finalizes this block, moving it from the
> RBW directory to the FINALIZED directory: the data file is moved from
> /XXX/rbw/blk_xxx to /XXX/finalized/blk_xxx.
> # The reader thread attempts to open the data input stream but encounters a
> FileNotFoundException, because the data file /XXX/rbw/blk_xxx or meta file
> /XXX/rbw/blk_xxx_xxx no longer exists at this moment.
> # The reader thread treats this block as corrupt, removes the replica from
> the volume map, and the DataNode reports the deleted block to the NameNode.
> # The NameNode removes this replica for the block.
> # If the file's replication factor is 1, this causes a missing block until
> the DataNode runs the DirectoryScanner again.
> As described above, the FileNotFoundException encountered by the reader
> thread is expected, because the file has been moved.
> So we need to add a double check to the invalidateMissingBlock logic that
> verifies whether the data file or meta file exists, to avoid such cases.
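The double check described above can be sketched, stripped of HDFS types, as follows. This is an illustration of the idea only, not the actual FsDatasetImpl.invalidateMissingBlock code; the helper name and file handling are assumptions:

```java
import java.io.File;

public class InvalidateCheckSketch {
    /**
     * Returns true only when the replica should really be invalidated,
     * i.e. neither the data file nor the meta file exists on disk.
     * If either file is still present, the earlier FileNotFoundException
     * was most likely a transient rename race (RBW -> finalized),
     * not corruption, so the replica must be kept.
     */
    static boolean shouldInvalidate(File dataFile, File metaFile) {
        if (dataFile != null && dataFile.exists()) {
            return false; // data file present: keep the replica
        }
        if (metaFile != null && metaFile.exists()) {
            return false; // meta file present: keep the replica
        }
        return true; // both genuinely gone: safe to invalidate and report
    }

    public static void main(String[] args) throws Exception {
        File data = File.createTempFile("blk_", ".data");
        File meta = File.createTempFile("blk_", ".meta");
        // Files exist on disk -> do not invalidate.
        System.out.println(shouldInvalidate(data, meta));
        data.delete();
        meta.delete();
        // Both files really gone -> invalidation is justified.
        System.out.println(shouldInvalidate(data, meta));
    }
}
```

The key design point is that the existence check happens after the read failure, so a replica whose files were merely moved by finalization is never reported to the NameNode as deleted.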
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]