[ https://issues.apache.org/jira/browse/HDFS-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HDFS-3161:
------------------------------
    Target Version/s:   (was: 1.3.0)

Doing some old JIRA cleanup. I know it's been years, but Uma / Vinay, do you 
know if this issue still applies? If so, we should set updated target versions.

> 20 Append: Excluded DN replica from recovery should be removed from DN.
> -----------------------------------------------------------------------
>
>                 Key: HDFS-3161
>                 URL: https://issues.apache.org/jira/browse/HDFS-3161
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 1.0.0
>            Reporter: suja s
>            Priority: Critical
>
> 1) DN1->DN2->DN3 are in pipeline.
> 2) Client killed abruptly
> 3) One DN, say DN3, has restarted.
> 4) In DN3, info.wasRecoveredOnStartup() will be true.
> 5) NN recovery is triggered; DN3 is skipped from recovery due to the above check.
> 6) Now DN1 and DN2 have blocks with generation stamp 2, DN3 has the older 
> generation stamp, say 1, and DN3 still has this block entry in 
> ongoingCreates.
> 7) As part of recovery, the file is closed with only two live replicas (from 
> DN1 and DN2).
> 8) So, the NN issued the command for replication. Now DN3 also has the replica 
> with the newer generation stamp.
> 9) Now DN3 contains 2 replicas on disk, and one entry in ongoingCreates 
> referring to the blocksBeingWritten directory.
> When we call append/leaseRecovery, it may again skip this node for that 
> recovery, as the blockId entry is still present in ongoingCreates with startup 
> recovery set to true.
> It may keep repeating this dance for every recovery.
> This stale replica will not be cleaned up until we restart the cluster. The 
> actual replica will be transferred to this node only through the replication 
> process.
> Also, that replicated block will unnecessarily get invalidated after the next 
> recoveries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
