deepujain opened a new pull request, #8334:
URL: https://github.com/apache/hadoop/pull/8334

   ### Summary
   
   When using a third-party HDFS client, if the client fails to advance the 
generation stamp (GS) on append, the DataNode previously accepted 
`newGS == currentGS`, which can corrupt the replica state. This change 
requires the new generation stamp to be **strictly greater** than the replica's 
current generation stamp: append now rejects both `newGS < currentGS` 
(existing behavior) and `newGS == currentGS` (new), so a misbehaving client 
can no longer silently corrupt replica state.
   
   ### Change
   
   - **FsDatasetImpl.java**: In `append(ExtendedBlock b, long newGS, long 
expectedBlockLen)`, change the validity check from `newGS < 
b.getGenerationStamp()` to `newGS <= b.getGenerationStamp()`, so that equal 
generation stamps are also rejected with the same IOException message.
   - **TestFsDatasetImpl.java**: Add 
`testAppendRejectsSameOrLowerGenerationStamp()`: create a file, fetch its 
finalized block, then assert that `append(block, block.getGenerationStamp(), 
blockLen)` and `append(block, block.getGenerationStamp() - 1, blockLen)` both 
throw an `IOException` containing "should be greater than the replica", and 
that `append(block, block.getGenerationStamp() + 1, blockLen)` succeeds 
(HDFS-17850).
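   The tightened comparison can be sketched in isolation as follows. This is a 
minimal standalone illustration, not the actual `FsDatasetImpl` code; the class 
and method names here are hypothetical, and only the `newGS <= currentGS` 
comparison and the exception's "should be greater than the replica" phrasing 
mirror the patch:
   
   ```java
   import java.io.IOException;
   
   // Illustrative sketch of the append-time generation-stamp check,
   // with the comparison tightened from '<' to '<='.
   public class GenStampCheck {
     /** Reject append unless newGS is strictly greater than currentGS. */
     static void validateGenerationStamp(long newGS, long currentGS)
         throws IOException {
       // Previously only newGS < currentGS was rejected; the fix also
       // rejects newGS == currentGS so a stale stamp cannot be reused.
       if (newGS <= currentGS) {
         throw new IOException("The new generation stamp " + newGS
             + " should be greater than the replica " + currentGS);
       }
     }
   
     public static void main(String[] args) throws IOException {
       long currentGS = 1000L;
       // Strictly greater: accepted.
       validateGenerationStamp(currentGS + 1, currentGS);
       // Equal and lower: both rejected with an IOException.
       for (long bad : new long[] { currentGS, currentGS - 1 }) {
         try {
           validateGenerationStamp(bad, currentGS);
           throw new AssertionError("expected IOException for newGS=" + bad);
         } catch (IOException expected) {
           // rejected as intended
         }
       }
       System.out.println("all checks passed");
     }
   }
   ```
   
   Running `main` exercises the same three cases the new test covers: one 
accepted append with `currentGS + 1` and two rejected appends with `currentGS` 
and `currentGS - 1`.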
   
   ### JIRA
   
   Fixes HDFS-17850
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

