[jira] [Updated] (HDFS-17453) IncrementalBlockReport can have race condition with Edit Log Tailer

2024-04-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17453:
--
Labels: pull-request-available  (was: )

> IncrementalBlockReport can have race condition with Edit Log Tailer
> ---
>
> Key: HDFS-17453
> URL: https://issues.apache.org/jira/browse/HDFS-17453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha, hdfs, namenode
>Affects Versions: 3.3.0, 3.3.1, 2.10.2, 3.3.2, 3.3.5, 3.3.4, 3.3.6
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Major
>  Labels: pull-request-available
>
> h2. Summary
> There is a race condition between IncrementalBlockReports (IBR) and 
> EditLogTailer in Standby NameNode (SNN) which can lead to leaked IBRs and 
> false corrupt blocks after HA Failover. The race condition occurs when the 
> SNN loads the edit logs before it receives the block reports from DataNode 
> (DN).
> h2. Example
> In the following example there is a block (b1) with 3 generation stamps (gs1, 
> gs2, gs3).
>  # SNN1 loads edit logs for b1gs1 and b1gs2.
>  # DN1 sends the IBR for b1gs1 to SNN1.
>  # SNN1 determines that the reported block b1gs1 from DN1 is corrupt, and 
> queues the report for later processing. 
> [BlockManager.java|https://github.com/apache/hadoop/blob/6ed73896f6e8b4b7c720eff64193cb30b3e77fb2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L3447C1-L3464C6]
> {code:java}
>     BlockToMarkCorrupt c = checkReplicaCorrupt(
>         block, reportedState, storedBlock, ucState, dn);
>     if (c != null) {
>       if (shouldPostponeBlocksFromFuture) {
>         // If the block is an out-of-date generation stamp or state,
>         // but we're the standby, we shouldn't treat it as corrupt,
>         // but instead just queue it for later processing.
>         // Storing the reported block for later processing, as that is what
>         // comes from the IBR / FBR and hence what we should use to compare
>         // against the memory state.
>         // See HDFS-6289 and HDFS-15422 for more context.
>         queueReportedBlock(storageInfo, block, reportedState,
>             QUEUE_REASON_CORRUPT_STATE);
>       } else {
>         toCorrupt.add(c);
>       }
>       return storedBlock;
>     } {code}
>  # DN1 sends IBR for b1gs2 and b1gs3 to SNN1.
>  # SNN1 processes b1gs2 and updates the blocks map.
>  # SNN1 queues b1gs3 for later because it determines that b1gs3 is a future 
> genstamp.
>  # SNN1 loads b1gs3 edit logs and processes the queued reports for b1.
>  # SNN1 processes b1gs1 first and puts it back in the queue.
>  # SNN1 processes b1gs3 next and updates the blocks map.
>  # Later, SNN1 becomes the Active NameNode (ANN) during an HA Failover.
>  # SNN1 will catch up to the latest edit logs, then process all queued block 
> reports to become the ANN.
>  # ANN1 will process b1gs1 and mark it as corrupt.
> If the example above happens for every DN which stores b1, then when the HA 
> failover happens, b1 will be incorrectly marked as corrupt. This will be 
> fixed when the first DN sends a FullBlockReport or an IBR for b1.
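The queue-and-replay interaction in the steps above can be sketched with a toy model. This is an illustrative sketch only: the class and method names below are hypothetical and do not mirror the real BlockManager/PendingDataNodeMessages API; it only models reported generation stamps against the stamp the standby has loaded from edits.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the postponed-report race (hypothetical names, not HDFS code).
public class IbrRaceSketch {

    // Returns the fate of each report still queued when the standby becomes
    // active, e.g. "gs1 MARKED CORRUPT".
    static String simulate() {
        Deque<Long> queue = new ArrayDeque<>();
        long storedGs = 2;                    // SNN applied edits for gs1, gs2

        report(1, storedGs, queue, true);     // stale gs1 -> postponed
        report(2, storedGs, queue, true);     // matches stored state
        report(3, storedGs, queue, true);     // future gs3 -> postponed

        storedGs = 3;                         // SNN tails the gs3 edit
        int pending = queue.size();           // replay queued reports on standby
        for (int i = 0; i < pending; i++) {
            report(queue.poll(), storedGs, queue, true);  // gs1 is re-queued
        }

        StringBuilder fate = new StringBuilder();
        while (!queue.isEmpty()) {            // failover: postponement is off
            long gs = queue.poll();
            if (gs != storedGs) {
                fate.append("gs").append(gs).append(" MARKED CORRUPT");
            }
        }
        return fate.toString();
    }

    // Mimics the branch quoted above: while the node is standby
    // (shouldPostponeBlocksFromFuture), a mismatched report is queued instead
    // of being marked corrupt.
    static void report(long reportedGs, long storedGs, Deque<Long> queue,
                       boolean postpone) {
        if (reportedGs != storedGs && postpone) {
            queue.add(reportedGs);
        }
    }

    public static void main(String[] args) {
        System.out.println(simulate());       // prints: gs1 MARKED CORRUPT
    }
}
```

Running this shows the leaked gs1 report surviving every standby replay pass and only being judged, wrongly, as corrupt once the node drains the queue as active.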
> h2. Logs from Active Cluster
> I added the following logs to confirm this issue in an active cluster:
> {code:java}
> BlockToMarkCorrupt c = checkReplicaCorrupt(
>     block, reportedState, storedBlock, ucState, dn);
> if (c != null) {
>   DatanodeStorageInfo storedStorageInfo = storedBlock.findStorageInfo(dn);
>   LOG.info("Found corrupt block {} [{}, {}] from DN {}. Stored block {} from DN {}",
>       block, reportedState.name(), ucState.name(), storageInfo, storedBlock,
>       storedStorageInfo);
>   if (storageInfo.equals(storedStorageInfo) &&
>       storedBlock.getGenerationStamp() > block.getGenerationStamp()) {
>     LOG.info("Stored Block {} from the same DN {} has a newer GenStamp.",
>         storedBlock, storedStorageInfo);
>   }
>   if (shouldPostponeBlocksFromFuture) {
>     // If the block is an out-of-date generation stamp or state,
>     // but we're the standby, we shouldn't treat it as corrupt,
>     // but instead just queue it for later processing.
>     // Storing the reported block for later processing, as that is what
>     // comes from the IBR / FBR and hence what we should use to compare
>     // against the memory state.
>     // See HDFS-6289 and HDFS-15422 for more context.
>     queueReportedBlock(storageInfo, block, reportedState,
>         QUEUE_REASON_CORRUPT_STATE);
>     LOG.info("Queueing the block {} for later processing", block);
>   } else {
>     toCorrupt.add(c);
>     LOG.info("Marking the block {} as corrupt", block);
>   }
>   return storedBlock;
> } {code}
>  
> Logs from nn1 (Active):
> {code:java}
> 2024-04-03T03:00:52.524-0700,INFO,[IPC Server handler 6 on default port 443],org.apache.hadoop.hdfs.server.namenode.FSNamesystem,"updatePipeline(blk_66092666802_65700910634, newGS=65700925027, newLength=10485760, newNodes=[[DN1]:10010, [DN2]:10010, [DN3]:10010, client=client1)"
> 2024-04-03T03:00:52.539-0700,INFO,[IPC Server handler 6 on default port 443],org.apache.hadoop.hdfs.server.namenode.FSNamesystem,"updatePipeline(blk_66092666802_65700910634 => blk_66092666802_65700925027) success"
> 2024-04-03T03:01:07.413-0700,INFO,[IPC Server handler 6 on default port 443],org.apache.hadoop.hdfs.server.namenode.FSNamesystem,"updatePipeline(blk_66092666802_65700925027, newGS=65700933553, newLength=20971520, newNodes=[[DN1]:10010, [DN2]:10010, [DN3]:10010, client=client1)"
> {code}