Last block is temporarily unavailable to readers because of a crashed appender
---------------------------------------------------------------------------

                 Key: HDFS-1226
                 URL: https://issues.apache.org/jira/browse/HDFS-1226
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: data-node
    Affects Versions: 0.20.1
            Reporter: Thanh Do


- Summary: the last block is unavailable to subsequent readers if the appender crashes in the middle of an append workload.
 
- Setup:
# available datanodes = 3
# disks / datanode = 1
# failures = 1
failure type = crash
when/where failure happens = (see below)
 
- Details:
Say a client is appending to block X on three datanodes: dn1, dn2, and dn3. After a successful recoverBlock at the primary datanode, the client calls createOutputStream, which makes all datanodes move the block file and the meta file from the current directory to the tmp directory. Now suppose the client crashes. Since all replicas of block X now sit in the tmp folder of the corresponding datanode, subsequent readers cannot read block X.
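
For reference, here is a minimal client-side sketch of the failing sequence, assuming a 0.20.x cluster with dfs.support.append enabled; the file path and payload are hypothetical:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendCrashRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumption: append support is turned on for the cluster.
        conf.setBoolean("dfs.support.append", true);
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical pre-existing file whose last block X is replicated
        // on dn1, dn2, and dn3.
        Path file = new Path("/user/test/append-crash-test");

        // fs.append() triggers recoverBlock at the primary datanode and
        // createOutputStream, after which every replica of block X has been
        // moved from current/ to tmp/ on its datanode (per the report above).
        FSDataOutputStream out = fs.append(file);
        out.write("partial append".getBytes());

        // Simulate the appender crash: halt() skips close() and all cleanup,
        // so the replicas stay in tmp/ and subsequent readers of block X fail.
        Runtime.getRuntime().halt(1);
    }
}
{code}

Per the report, once the client dies at this point, any subsequent read of the file fails on block X because no replica of it remains in a datanode's current directory.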

