[ https://issues.apache.org/jira/browse/HDFS-1224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12881539#action_12881539 ]

Thanh Do commented on HDFS-1224:
--------------------------------

"Even so, does this cause any actual problems aside from a shorter pipeline?

I'm not sure, but based on the description, it sounds like dn2 thinks it has a 
block (but it is incomplete), so a client might end up trying to get a block 
from that node and get an incomplete block"

I think this does not cause any problem aside from a shorter pipeline.
dn2 has a replica with an old timestamp because it missed updateBlock(),
so the block at dn2 is eventually deleted.
(But the append semantics are not guaranteed, right? There are 3 live
datanodes, the write to all 3 succeeds, yet the append succeeds at only
2 of them.)
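The stale-pool behavior described in the issue, and one possible mitigation (evict the cached connection on failure and retry once with a fresh one), can be sketched roughly as below. This is a minimal illustration with hypothetical names (Connection, ConnectionPool, call), not the actual Hadoop IPC classes:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a pooled IPC connection.
class Connection {
    boolean stale = false;          // becomes true if the peer restarts
    String call(String method) throws IOException {
        if (stale) throw new IOException("connection reset by peer");
        return "ok:" + method;
    }
}

class ConnectionPool {
    private final Map<String, Connection> pool = new HashMap<>();

    // Mirrors the described lookup: reuse a pooled connection if present,
    // otherwise create one and cache it.
    Connection get(String node) {
        return pool.computeIfAbsent(node, n -> new Connection());
    }

    // Mitigation sketch: on failure, drop the stale entry and retry once
    // on a fresh connection, instead of letting recoverBlock() exclude
    // the (still alive) node from the pipeline.
    String callWithRetry(String node, String method) throws IOException {
        try {
            return get(node).call(method);
        } catch (IOException e) {
            pool.remove(node);             // evict the stale connection
            return get(node).call(method); // retry on a fresh one
        }
    }
}
```

With a single evict-and-retry, a node that merely restarted would stay in the append pipeline rather than being dropped on the first stale-connection exception.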


> Stale connection makes node miss append
> ---------------------------------------
>
>                 Key: HDFS-1224
>                 URL: https://issues.apache.org/jira/browse/HDFS-1224
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.20-append
>            Reporter: Thanh Do
>
> - Summary: if a datanode crashes and restarts, it may miss an append.
>  
> - Setup:
> + # available datanodes = 3
> + # replica = 3 
> + # disks / datanode = 1
> + # failures = 1
> + failure type = crash
> + When/where failure happens = after the first append succeeds
>  
> - Details:
> Since each datanode maintains a pool of IPC connections, whenever it wants
> to make an IPC call it first looks in the pool. If a connection is not
> there, it is created and put into the pool; otherwise the existing
> connection is used.
> Suppose the append pipeline contains dn1, dn2, and dn3, with dn1 as the
> primary. After the client appends to block X successfully, dn2 crashes and
> restarts. The client then writes a new block Y to dn1, dn2, and dn3; the
> write is successful.
> The client starts appending to block Y. It first calls dn1.recoverBlock().
> dn1 creates a proxy for each datanode in the pipeline (in order to make
> RPC calls such as getMetadataInfo() or updateBlock()). However, because
> dn2 has just crashed and restarted, its connection in dn1's pool has become
> stale. dn1 uses this stale connection to call dn2, which raises an
> exception. As a result, the append is made only to dn1 and dn3, even
> though dn2 is alive and the write of block Y to dn2 was successful.
> This bug was found by our Failure Testing Service framework:
> http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-98.html
> For questions, please email us: Thanh Do (than...@cs.wisc.edu) and 
> Haryadi Gunawi (hary...@eecs.berkeley.edu)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
