[ https://issues.apache.org/jira/browse/HDFS-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14568046#comment-14568046 ]

Colin Patrick McCabe commented on HDFS-8486:
--------------------------------------------

Great find, [~daryn].  And nice work fixing it... as usual.

It sounds like this change:
{code}
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -1370,9 +1370,9 @@ void initBlockPool(BPOfferService bpos) throws IOException {
     // failures.
     checkDiskError();
 
-    initDirectoryScanner(conf);
     data.addBlockPool(nsInfo.getBlockPoolID(), conf);
     blockScanner.enableBlockPoolId(bpos.getBlockPoolId());
+    initDirectoryScanner(conf);
{code}
should be sufficient to avoid the problem for the non-federation case, since 
the {{FsDatasetSpi#addBlockPool}} code path will do the initial scan even 
before the {{DirectoryScanner}} is created.
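To illustrate why the reordering matters, here is a toy model of the race (all names and the map-of-paths representation are simplifications invented for this sketch, not the real {{DataNode}} internals): duplicate resolution during pool init always "deletes" one copy on a key collision, so if the scanner populates the replica map first, an identical on-disk replica is counted as a conflict.

```java
import java.util.HashMap;
import java.util.Map;

public class InitOrderSketch {
    // Toy replica map: block ID -> on-disk path.
    static Map<Long, String> replicaMap = new HashMap<>();
    static int deletions = 0;

    // Models pool init's duplicate resolution: any pre-existing entry is
    // treated as a conflict and one copy is "deleted", even if identical.
    static void addBlockPool(long blockId, String path) {
        if (replicaMap.containsKey(blockId)) {
            deletions++;
        }
        replicaMap.put(blockId, path);
    }

    // Models the directory scanner populating the map.
    static void directoryScan(long blockId, String path) {
        replicaMap.putIfAbsent(blockId, path);
    }

    public static void main(String[] args) {
        // Buggy order: scanner wins the race and fills the map first.
        directoryScan(1L, "/data/blk_1");
        addBlockPool(1L, "/data/blk_1");  // sees a "duplicate" -> deletes
        System.out.println("deletions, scan before init: " + deletions);

        // Fixed order: pool init runs against an empty map, scanner after.
        replicaMap.clear();
        deletions = 0;
        addBlockPool(1L, "/data/blk_1");
        directoryScan(1L, "/data/blk_1");
        System.out.println("deletions, scan after init: " + deletions);
    }
}
```

The sketch prints one spurious deletion in the buggy ordering and zero in the fixed one, which is exactly why moving {{initDirectoryScanner}} after {{addBlockPool}} closes the window for the non-federation case.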

The change to {{selectReplicaToDelete}} should guard against the problem in the 
federation case by never deleting a replica merely because a replica with the 
same path is already in the set.  It's a nice robustness improvement.
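A minimal sketch of that guard, with strings standing in for {{ReplicaInfo}} objects and a lexicographic tie-break standing in for the real size/genstamp comparison (both substitutions are assumptions of this sketch, not the actual patch): returning {{null}} means "delete nothing", so a duplicate map entry for the same on-disk file is no longer treated as a real conflict.

```java
public class ReplicaGuardSketch {
    // Hypothetical simplification of the patched selectReplicaToDelete.
    static String selectReplicaToDelete(String r1, String r2) {
        if (r1.equals(r2)) {
            // Same path: a duplicate map entry, not two distinct replicas.
            return null;
        }
        // Arbitrary ordering standing in for the size/GS comparison.
        return r1.compareTo(r2) < 0 ? r1 : r2;
    }

    public static void main(String[] args) {
        // Identical paths: nothing is selected for deletion.
        System.out.println(selectReplicaToDelete("/d1/blk_1", "/d1/blk_1"));
        // Distinct paths: one loser is picked as before.
        System.out.println(selectReplicaToDelete("/d1/blk_1", "/d2/blk_1"));
    }
}
```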

bq. Note I found writing a unit test to be extremely difficult. The 
BlockPoolSlice ctor has numerous side-effects. I instead split out part of 
duplicate resolution into a static method (sigh, makes future mocking 
impossible).

Hmm... it seems like you could create a mock for 
{{BlockPoolSlice#resolveDuplicateReplicas}}, which is the only caller of the 
static method.  For that reason, perhaps we should add {{\@VisibleForTesting}} 
to {{selectReplicaToDelete}}?

+1 pending that change.  Great work, Daryn.

> DN startup may cause severe data loss
> -------------------------------------
>
>                 Key: HDFS-8486
>                 URL: https://issues.apache.org/jira/browse/HDFS-8486
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 0.23.1, 2.0.0-alpha
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Blocker
>         Attachments: HDFS-8486.patch, HDFS-8486.patch
>
>
> A race condition between block pool initialization and the directory scanner 
> may cause a mass deletion of blocks in multiple storages.
> If block pool initialization finds a block on disk that is already in the 
> replica map, it deletes one of the blocks based on size, GS, etc.  
> Unfortunately it _always_ deletes one of the blocks even if identical, thus 
> the replica map _must_ be empty when the pool is initialized.
> The directory scanner starts at a random time within its periodic interval 
> (default 6h).  If the scanner starts very early it races to populate the 
> replica map, causing the block pool init to erroneously delete blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
