[ https://issues.apache.org/jira/browse/HDFS-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18035320#comment-18035320 ]

TangLin edited comment on HDFS-17850 at 11/4/25 12:26 PM:
----------------------------------------------------------

@Override  // FsDatasetSpi
  public ReplicaHandler append(ExtendedBlock b,
      long newGS, long expectedBlockLen) throws IOException {
    try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.DIR,
        b.getBlockPoolId(), getStorageUuidForLock(b),
        datasetSubLockStrategy.blockIdToSubLock(b.getBlockId()))) {
      // If the block was successfully finalized because all packets
      // were successfully processed at the Datanode but the ack for
      // some of the packets were not received by the client. The client
      // re-opens the connection and retries sending those packets.
      // The other reason is that an "append" is occurring to this block.

      // check the validity of the parameter
      if (newGS < b.getGenerationStamp()) {
        throw new IOException("The new generation stamp " + newGS +
            " should be greater than the replica " + b + "'s generation stamp");
      }

==============================================

I think we could tighten this check to also reject an equal generation stamp:

==============================================

if (newGS <= b.getGenerationStamp()) {
  throw new IOException("The new generation stamp " + newGS +
      " should be greater than the replica " + b + "'s generation stamp");
}
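The proposed condition can be exercised in isolation. Below is a minimal, self-contained sketch of the tightened validation; the `validateNewGenerationStamp` helper and `GenerationStampCheck` class are hypothetical names introduced only for illustration and are not part of FsDatasetImpl:

```java
import java.io.IOException;

public class GenerationStampCheck {
  // Hypothetical standalone version of the proposed check: reject a new
  // generation stamp that is not strictly greater than the replica's
  // current one, so an append that failed to bump the GS is refused at
  // the DataNode instead of later surfacing as a corrupted replica.
  public static void validateNewGenerationStamp(long newGS, long currentGS)
      throws IOException {
    if (newGS <= currentGS) {
      throw new IOException("The new generation stamp " + newGS
          + " should be greater than the replica's generation stamp "
          + currentGS);
    }
  }

  public static void main(String[] args) {
    try {
      // Strictly greater: accepted, as before the change.
      validateNewGenerationStamp(1001L, 1000L);
      System.out.println("newGS=1001 accepted");
    } catch (IOException e) {
      System.out.println("unexpected: " + e.getMessage());
    }
    try {
      // Equal GS: previously passed the "<" check, now rejected.
      validateNewGenerationStamp(1000L, 1000L);
    } catch (IOException e) {
      System.out.println("newGS=1000 rejected");
    }
  }
}
```

The difference from the current code is only the `<=`; an equal generation stamp means the client retried the append without incrementing the GS, which is exactly the faulty-client case described in the issue.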



> During append, it is necessary to verify whether the same GS exists in the 
> datanode.
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-17850
>                 URL: https://issues.apache.org/jira/browse/HDFS-17850
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: TangLin
>            Priority: Major
>
> When using a third-party HDFS client, if there is a problem with the client 
> logic and append does not update the GS, HDFS may report corrupted replicas.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
