[
https://issues.apache.org/jira/browse/HDFS-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17037104#comment-17037104
]
Ayush Saxena edited comment on HDFS-15170 at 2/14/20 4:34 PM:
--------------------------------------------------------------
To reproduce using the test, just change this line:
{code:java}
GenericTestUtils
.waitFor(() -> bm.getCorruptECBlockGroups() == 0, 100, 10000);
{code}
to
{code:java}
GenericTestUtils
.waitFor(() -> bm.getCorruptECBlockGroups() == 1, 100, 10000);
{code}
and remove the prod change.
Without the fix, the getCorruptECBlockGroups count will come up as 1 once the datanode's block report (BR) is processed.
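The GenericTestUtils.waitFor call in the test polls a condition every checkEveryMillis until it holds or waitForMillis elapses. A minimal self-contained sketch of that polling idiom is below; it is a simplified stand-in (it returns false on timeout instead of throwing a TimeoutException like the real Hadoop helper), and the corruptEcBlockGroups counter is a hypothetical stand-in for bm.getCorruptECBlockGroups():

{code:java}
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class WaitForSketch {
  // Simplified stand-in for GenericTestUtils.waitFor: polls check every
  // checkEveryMillis until it returns true or waitForMillis has elapsed.
  // Unlike the real helper, it returns false on timeout instead of throwing.
  static boolean waitFor(Supplier<Boolean> check, long checkEveryMillis,
                         long waitForMillis) throws InterruptedException {
    long deadline = System.currentTimeMillis() + waitForMillis;
    while (System.currentTimeMillis() < deadline) {
      if (check.get()) {
        return true;
      }
      Thread.sleep(checkEveryMillis);
    }
    return check.get();
  }

  public static void main(String[] args) throws Exception {
    // Hypothetical counter standing in for bm.getCorruptECBlockGroups():
    // it flips to 1 after a delay, as the real count would once the
    // restarted datanode's block report is processed without the fix.
    AtomicInteger corruptEcBlockGroups = new AtomicInteger(0);
    new Thread(() -> {
      try {
        Thread.sleep(300);
      } catch (InterruptedException ignored) {
      }
      corruptEcBlockGroups.set(1);
    }).start();

    boolean sawCorrupt =
        waitFor(() -> corruptEcBlockGroups.get() == 1, 100, 10000);
    System.out.println("corrupt EC block group observed: " + sawCorrupt);
  }
}
{code}
The test change in the comment is exactly this pattern: flipping the expected count from 0 to 1 makes the wait succeed only when the block is (incorrectly) marked corrupt.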
> EC: Block gets marked as CORRUPT in case of failover and pipeline recovery
> --------------------------------------------------------------------------
>
> Key: HDFS-15170
> URL: https://issues.apache.org/jira/browse/HDFS-15170
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Ayush Saxena
> Assignee: Ayush Saxena
> Priority: Critical
> Attachments: HDFS-15170-01.patch
>
>
> Steps to Repro:
> 1. Start writing an EC file.
> 2. After more than one stripe has been written, stop one datanode.
> 3. Post pipeline recovery, keep writing data.
> 4. Close the file.
> 5. Transition the namenode to standby and back to active.
> 6. Restart the datanode stopped in step 2.
> The block report (BR) from the datanode stopped in step 2 will mark the
> block as corrupt, and block invalidation won't remove it, since post
> failover the blocks are on stale storage.