[
https://issues.apache.org/jira/browse/HDFS-16899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18033074#comment-18033074
]
ASF GitHub Bot commented on HDFS-16899:
---------------------------------------
github-actions[bot] closed pull request #5414: HDFS-16899. Fix
TestAddOverReplicatedStripedBlocks#testProcessOverReplicatedAndCorruptStripedBlock
failed
URL: https://github.com/apache/hadoop/pull/5414
> Fix
> TestAddOverReplicatedStripedBlocks#testProcessOverReplicatedAndCorruptStripedBlock
> failed
> ---------------------------------------------------------------------------------------------
>
> Key: HDFS-16899
> URL: https://issues.apache.org/jira/browse/HDFS-16899
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Haiyang Hu
> Assignee: Haiyang Hu
> Priority: Major
> Labels: pull-request-available
>
> TestAddOverReplicatedStripedBlocks#testProcessOverReplicatedAndCorruptStripedBlock
> occasionally fails.
> The failing line is:
> {code:java}
> // verify that all internal blocks exists except b0
> // the redundant internal blocks will not be deleted before the corrupted
> // block gets reconstructed. but since we set
> // DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY to 0, the reconstruction will
> // not happen
> lbs = cluster.getNameNodeRpc().getBlockLocations(filePath.toString(), 0, fileLen);
> bg = (LocatedStripedBlock) (lbs.get(0));
> assertEquals(groupSize + 1, bg.getBlockIndices().length); // occasionally fails here
> {code}
> Under normal conditions the returned block group should contain 10 internal
> blocks: 8 live and 2 redundant internal blocks.
> However, the over-replication processing occasionally triggers an invalidate
> that removes a redundant internal block before the check runs,
> so the number of internal blocks actually returned does not match the expectation.
> The test therefore needs to ensure the redundant internal blocks are not
> deleted before the assertion, as sketched below.
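> A minimal sketch of one way to keep the redundant internal blocks from being
> invalidated during the check. This is only an illustration under an assumed
> test setup (the conf and groupSize names), not the patch attached to this issue:
> {code:java}
> // Illustrative test configuration only; not the HDFS-16899 patch.
> Configuration conf = new Configuration();
> // The test already disables reconstruction via this key.
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY, 0);
> // Assumption: a long redundancy-monitor interval keeps the excess
> // internal blocks from being invalidated before the assertion runs.
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 300);
> MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
>     .numDataNodes(groupSize + 2).build();
> cluster.waitActive();
> {code}
> Whether tuning the redundancy interval is the right fix is an open question;
> the intent here is only to show where the deletion of redundant internal
> blocks could be held off in the test.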