[
https://issues.apache.org/jira/browse/HDFS-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038002#comment-17038002
]
Íñigo Goiri commented on HDFS-15170:
------------------------------------
Minor comments:
* Let's expand the imports in TestErasureCodingCorruption.
* Let's add some comments with the main idea of this JIRA to
TestErasureCodingCorruption and the function.
* Instead of repeating 15*1024*1024, let's extract those two occurrences into a constant.
* Let's add a few line breaks to testCorruptionDuringFailover to make it a
little more readable.
* We probably want to set up the builder for MiniDFSCluster on a separate line
to make it more readable and just call build() in the try.
* Should we assert that getCorruptECBlockGroups() is larger than 0 before
restarting or so? I would like to make sure the whole sequence happens as
expected. In addition to that assert, I would add a couple more checks to
guarantee that what we expect is happening.
> EC: Block gets marked as CORRUPT in case of failover and pipeline recovery
> --------------------------------------------------------------------------
>
> Key: HDFS-15170
> URL: https://issues.apache.org/jira/browse/HDFS-15170
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Ayush Saxena
> Assignee: Ayush Saxena
> Priority: Critical
> Attachments: HDFS-15170-01.patch, HDFS-15170-02.patch
>
>
> Steps to Repro:
> 1. Start writing an EC file.
> 2. After more than one stripe has been written, stop one datanode.
> 3. Post pipeline recovery, keep on writing the data.
> 4. Close the file.
> 5. Transition the namenode to standby and back to active.
> 6. Restart the datanode that was shut down in step 2.
> The block report (BR) from the datanode restarted in step 6 will mark the
> block as corrupt, and block invalidation won't remove it, since post-failover
> the blocks would be on stale storage.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)