[
https://issues.apache.org/jira/browse/HDFS-1262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12881783#action_12881783
]
sam rash commented on HDFS-1262:
--------------------------------
actually, here's another idea:
3) the NN thinks the client has a lease. it's right. the client just didn't
save enough information to handle the failure.
namenode.append() just returns the last block. The code in DFSClient:
{code}
OutputStream result = new DFSOutputStream(src, buffersize, progress,
    lastBlock, stat, conf.getInt("io.bytes.per.checksum", 512));
leasechecker.put(src, result);
return result;
{code}
if we stored a pair of lastBlock and result in leasechecker, and did the put in a
finally block so it happens even when pipeline creation fails:
{code}
OutputStream result = null;
try {
  result = new DFSOutputStream(src, buffersize, progress,
      lastBlock, stat, conf.getInt("io.bytes.per.checksum", 512));
} finally {
  // record the lease info even if pipeline creation throws, so a later
  // retry can find it (Pair is just a simple two-field holder here)
  leasechecker.put(src, new Pair<LocatedBlock, OutputStream>(lastBlock, result));
}
return result;
{code}
and above, we only call namenode.append() if we don't have a lease already.
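to make that concrete, here's a rough standalone sketch of that flow (not actual
DFSClient code; LeaseEntry, nameNodeAppend, and createOutputStream below are
made-up stand-ins for the LocatedBlock/DFSOutputStream pair and the real RPCs):
check leasechecker first, only ask the NN for the last block when there's no
entry, and do the put in a finally so a failed pipeline setup still leaves
enough state behind to retry:
{code}
import java.util.HashMap;
import java.util.Map;

// hypothetical sketch only; models the retry idea, not the real DFSClient
public class AppendRetrySketch {

  // stand-in for the (LocatedBlock, DFSOutputStream) pair kept by leasechecker
  static class LeaseEntry {
    final String lastBlock;   // placeholder for LocatedBlock
    final Object stream;      // placeholder for DFSOutputStream (null if setup failed)
    LeaseEntry(String lastBlock, Object stream) {
      this.lastBlock = lastBlock;
      this.stream = stream;
    }
  }

  private final Map<String, LeaseEntry> leaseChecker =
      new HashMap<String, LeaseEntry>();

  // pretend NN RPC: grants the lease and returns the file's last block
  private String nameNodeAppend(String src) {
    return "last_block_of_" + src;
  }

  // pretend pipeline setup; in the reported failure this threw 6 times in a row
  private Object createOutputStream(String src, String lastBlock) {
    return new Object();
  }

  public Object openForAppend(String src) {
    LeaseEntry existing = leaseChecker.get(src);
    // only go to the NN if we don't already hold a lease entry for this file
    String lastBlock = (existing != null) ? existing.lastBlock : nameNodeAppend(src);

    Object stream = null;
    try {
      stream = createOutputStream(src, lastBlock);
    } finally {
      // record the lease info even when pipeline creation fails,
      // so the next attempt can skip namenode.append()
      leaseChecker.put(src, new LeaseEntry(lastBlock, stream));
    }
    return stream;
  }

  public static void main(String[] args) {
    AppendRetrySketch client = new AppendRetrySketch();
    Object first = client.openForAppend("/hbase/region-log");
    Object retry = client.openForAppend("/hbase/region-log"); // reuses stored lastBlock
    System.out.println((first != null) && (retry != null));
  }
}
{code}
(the real patch would also have to teach leasechecker's renew and close paths
about the stored pair, but the shape of the retry is the same.)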
again, if we do find a solution, i'm happy to help out on this one
> Failed pipeline creation during append leaves lease hanging on NN
> -----------------------------------------------------------------
>
> Key: HDFS-1262
> URL: https://issues.apache.org/jira/browse/HDFS-1262
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client, name-node
> Affects Versions: 0.20-append
> Reporter: Todd Lipcon
> Priority: Critical
> Fix For: 0.20-append
>
>
> Ryan Rawson came upon this nasty bug in HBase cluster testing. What happened
> was the following:
> 1) File's original writer died
> 2) Recovery client tried to open file for append - looped for a minute or so
> until soft lease expired, then append call initiated recovery
> 3) Recovery completed successfully
> 4) Recovery client calls append again, which succeeds on the NN
> 5) For some reason, the block recovery that happens at the start of append
> pipeline creation failed on all datanodes 6 times, causing the append() call
> to throw an exception back to the HBase master. HBase assumed the file wasn't
> open and put it back on a queue to try later
> 6) Some time later, it tried append again, but the lease was still assigned
> to the same DFS client, so it wasn't able to recover.
> The recovery failure in step 5 is a separate issue, but the problem for this
> JIRA is that the DFS client can think it failed to open a file for append
> while the NN thinks that client still holds the lease. Since the client keeps
> renewing its lease, lease recovery never happens, and no one can open or
> recover the file until the DFS client shuts down.