[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370138#comment-16370138
 ] 

Daryn Sharp commented on HDFS-12070:
------------------------------------

Back when I filed this, I played around with a fix and didn't use close=false.  I 
too read the append design.  It reads as if the PD is supposed to obtain a new 
genstamp and retry, but I don't think a DN can do that.  The reasoning for 
another round of commit sync wasn't explained.  Perhaps it was due to the 
earlier implementation or concerns over concurrent commit syncs, but the 
recovery id feature should allow the NN to weed out prior commit syncs.
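
Roughly what I have in mind, as a toy model (the class and field names are 
hypothetical, not the actual FSNamesystem/BlockManager code): the NN remembers 
the recovery id it issued for the current round and simply drops any commit 
sync that carries an older id.

{code:java}
// Toy model only -- names are hypothetical, not Hadoop code.
import java.util.HashMap;
import java.util.Map;

class RecoveryIdFilter {
  // blockId -> recovery id (new genstamp) issued for the current recovery round
  private final Map<Long, Long> pendingRecoveryIds = new HashMap<>();

  /** NN starts (or re-triggers) recovery and issues a fresh recovery id. */
  synchronized void startRecovery(long blockId, long recoveryId) {
    pendingRecoveryIds.put(blockId, recoveryId);
  }

  /** Returns true if this commit sync belongs to the latest recovery round. */
  synchronized boolean acceptCommitSync(long blockId, long reportedRecoveryId) {
    Long expected = pendingRecoveryIds.get(blockId);
    // Older (or unknown) recovery ids are from a prior round: weed them out.
    return expected != null && reportedRecoveryId >= expected;
  }

  public static void main(String[] args) {
    RecoveryIdFilter filter = new RecoveryIdFilter();
    filter.startRecovery(1001L, 17L);   // first round
    filter.startRecovery(1001L, 18L);   // NN re-triggers recovery
    System.out.println(filter.acceptCommitSync(1001L, 17L)); // false: stale
    System.out.println(filter.acceptCommitSync(1001L, 18L)); // true: current
  }
}
{code}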

My concern is that the NN has claimed the lease during commit sync.  Append, 
truncate, and non-overwrite creates will trigger an implicit commit sync.  
Normally it completes almost immediately, roughly within the heartbeat interval, 
and the client succeeds on retry.  If another round of commit sync is required 
due to close=false, the client can only re-trigger commit sync after the soft 
lease period (5 mins), and I don't think a client does or should retry for that 
long, which means the operation will unnecessarily fail.  Also, it will take up 
to the hard lease period (1 hour) for the NN to fix the under-replication.
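
To put rough numbers on that (the periods are the ones quoted above plus a 
guessed client retry budget, not values read from the config):

{code:java}
// Back-of-the-envelope timing only; periods are the ones quoted above.
class LeaseTimingSketch {
  static final long HEARTBEAT_MS  = 3_000;        // typical DN heartbeat
  static final long SOFT_LEASE_MS = 5 * 60_000;   // 5 minutes (as above)
  static final long HARD_LEASE_MS = 60 * 60_000;  // 1 hour (as above)

  public static void main(String[] args) {
    long clientRetryBudgetMs = 60_000;  // assume a client retries ~1 minute

    // Normal case: commit sync finishes within roughly a heartbeat,
    // so the client's retry succeeds.
    System.out.println("retry outlives normal recovery: "
        + (clientRetryBudgetMs > HEARTBEAT_MS));

    // close=false case: another round can only be re-triggered after the
    // soft lease expires, long after the client has given up.
    System.out.println("retry outlives close=false recovery: "
        + (clientRetryBudgetMs > SOFT_LEASE_MS));

    // And the NN only forces the issue at the hard limit.
    System.out.println("NN-forced recovery after (minutes): "
        + HARD_LEASE_MS / 60_000);
  }
}
{code}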

In either case (close=true/false), the NN has removed the failed DNs from the 
expected locations.  Bad blocks should be invalidated if/when the "failed" DNs 
block report with the wrong genstamp and/or size, so I think it's safe for the 
PD to ignore failed nodes and close?
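
As a toy illustration of that invalidation path (not the actual BlockManager 
logic, just the shape of the check): a replica reported with a genstamp or 
length that doesn't match what the NN committed gets flagged for invalidation.

{code:java}
// Toy model only -- not the BlockManager, just the shape of the check.
class ReportedReplicaCheck {
  static final class StoredBlock {
    final long genStamp;
    final long numBytes;
    StoredBlock(long genStamp, long numBytes) {
      this.genStamp = genStamp;
      this.numBytes = numBytes;
    }
  }

  /** True if the reported replica is stale and should be invalidated. */
  static boolean shouldInvalidate(StoredBlock committed,
                                  long reportedGenStamp,
                                  long reportedNumBytes) {
    return reportedGenStamp != committed.genStamp
        || reportedNumBytes != committed.numBytes;
  }

  public static void main(String[] args) {
    StoredBlock committed = new StoredBlock(18L, 4096L);
    // A node that missed the recovery round still has the old genstamp/length.
    System.out.println(shouldInvalidate(committed, 17L, 2048L)); // true
    System.out.println(shouldInvalidate(committed, 18L, 4096L)); // false
  }
}
{code}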

> Failed block recovery leaves files open indefinitely and at risk for data loss
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-12070
>                 URL: https://issues.apache.org/jira/browse/HDFS-12070
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.0.0-alpha
>            Reporter: Daryn Sharp
>            Assignee: Kihwal Lee
>            Priority: Major
>         Attachments: HDFS-12070.0.patch, lease.patch
>
>
> Files will remain open indefinitely if block recovery fails, which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery, which 
> involves a 2-stage process.  The primary node removes any candidates that 
> cannot init replica recovery (essentially, alive and aware of the block) to 
> create a sync list.  Stage 2 issues updates to the sync list but, unlike the 
> first stage, _fails if any node fails_.  The NN should be informed of the 
> nodes that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped, so that a connection refused induces the bad node to be pruned from 
> the candidates.  Recovery then succeeds, the lease is released, the 
> under-replication is fixed, and the block is invalidated on the bad node.
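
A rough sketch of the two-stage flow described above (hypothetical names, not 
the actual DataNode recovery code), showing where pruning stage-2 failures 
instead of aborting would let the primary report only the nodes that succeeded:

{code:java}
// Toy sketch of the two-stage flow -- hypothetical names, not DataNode code.
import java.util.ArrayList;
import java.util.List;

class RecoverySketch {
  interface CandidateNode {
    boolean initReplicaRecovery();         // stage 1 probe
    boolean updateReplicaUnderRecovery();  // stage 2 update
  }

  /** Returns the nodes to report to the NN, pruning failures in both stages. */
  static List<CandidateNode> recover(List<CandidateNode> candidates) {
    // Stage 1: build the sync list from nodes that can init recovery.
    List<CandidateNode> syncList = new ArrayList<>();
    for (CandidateNode node : candidates) {
      if (node.initReplicaRecovery()) {
        syncList.add(node);
      }
    }
    // Stage 2: today a single failure fails the whole recovery; pruning the
    // failed node instead lets the primary still commit/close with survivors.
    List<CandidateNode> succeeded = new ArrayList<>();
    for (CandidateNode node : syncList) {
      if (node.updateReplicaUnderRecovery()) {
        succeeded.add(node);
      }
    }
    return succeeded;  // report these (and only these) in the commit sync
  }
}
{code}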


