[
https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14546446#comment-14546446
]
Ravi Prakash commented on HDFS-8344:
------------------------------------
Thanks for the explanation Kihwal! I've been trying to find where in the code
each of these steps happens.
a.
[NameNodeRPCServer.addBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java#L711]
b.
[DataStreamer.nextBlockOutputStream()|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1448]
calls
[DataStreamer.createBlockOutputStream|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1470]
c. I see
[DataXceiver.writeBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java#L809]
->
[BlockReceiver.receiveBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java#L788]
->
[BlockReceiver.receivePacket|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java#L471]
:( but I can't find the call that sends the first IBR. Could you please point me to it?
d. Inside
[DataXceiver.writeBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java#L809]
->
[blockReceiver.receiveBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java#L788]
e. Seems to happen on
[DataXceiver.writeBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java#L833]
->
[DataNode.closeBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2242]
->
[BPOfferService.notifyNamenodeReceivedBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java#L242]
->
[BPServiceActor.notifyNamenodeBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java#L338]
f.
[DFSOutputStream.completeFile|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L811]
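To keep the mapping straight in my head, here is a minimal client-side sketch
(my own illustration using only the public FileSystem API, not code from this
JIRA; the step letters in the comments refer to the list above and reflect my
current understanding):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WritePathSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // create() registers the file and its lease on the NameNode.
    FSDataOutputStream out = fs.create(new Path("/tmp/writepath-sketch.txt"));

    // Queued packets let the DataStreamer thread ask the NameNode for a block
    // (a: addBlock) and set up the pipeline (b: nextBlockOutputStream ->
    // createBlockOutputStream; d: DataXceiver.writeBlock on the DataNode).
    out.write(new byte[64 * 1024]);

    // hflush() pushes the buffered packets to the DataNodes, where they land
    // in BlockReceiver.receivePacket (c).
    out.hflush();

    // close() finishes the last block (e: DataNode.closeBlock -> IBR to the
    // NameNode) and then completes the file (f: DFSOutputStream.completeFile).
    out.close();
    fs.close();
  }
}
{code}
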
> NameNode doesn't recover lease for files with missing blocks
> ------------------------------------------------------------
>
> Key: HDFS-8344
> URL: https://issues.apache.org/jira/browse/HDFS-8344
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.7.0
> Reporter: Ravi Prakash
> Assignee: Ravi Prakash
> Attachments: HDFS-8344.01.patch, HDFS-8344.02.patch
>
>
> I found another(?) instance in which the lease is not recovered. This is
> easily reproducible on a pseudo-distributed single-node cluster:
> # Before you start, it helps to lower the lease limits as follows. This is not
> necessary, but it simply reduces how long you have to wait:
> {code}
> public static final long LEASE_SOFTLIMIT_PERIOD = 30 * 1000;
> public static final long LEASE_HARDLIMIT_PERIOD = 2 *
> LEASE_SOFTLIMIT_PERIOD;
> {code}
> # Client starts to write a file. (It could be less than 1 block, but it is
> hflushed, so some of the data has landed on the datanodes.) The client code is
> along the lines of the sketch after this list; I generate a jar and run it
> using $ hadoop jar TestHadoop.jar.
> # Client crashes. (I simulate this by kill -9'ing the $(hadoop jar
> TestHadoop.jar) process after it has printed "Wrote to the bufferedWriter".)
> # Shoot the datanode. (Since I ran on a pseudo-distributed cluster, there was
> only 1.)
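> A sketch of what the client does (this is an illustration from memory, not
> the exact TestHadoop source; the path and the data are made up):
> {code}
> import java.io.BufferedWriter;
> import java.io.OutputStreamWriter;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class TestHadoop {
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     FSDataOutputStream out = fs.create(new Path("/tmp/testhadoop.txt"));
>     BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(out));
>
>     // Write a little data and hflush, so it reaches the datanode while the
>     // file stays open (under construction) on the Namenode.
>     writer.write("some data that should land on the datanode\n");
>     writer.flush();
>     out.hflush();
>     System.out.println("Wrote to the bufferedWriter");
>
>     // Never close; park here so the process can be kill -9'd at this point.
>     Thread.sleep(Long.MAX_VALUE);
>   }
> }
> {code}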
> I believe the lease should be recovered and the block should be marked
> missing. However, this is not happening; the lease is never recovered.
> The effect of this bug for us was that nodes could not be decommissioned
> cleanly. Although we knew that the client had crashed, the Namenode never
> released the leases, even after restarting the Namenode, even months
> afterwards. There are actually several other cases where we don't consider
> what happens if ALL the datanodes die while the file is being written, but I
> am going to punt on those for another time.
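> For reference, the straightforward way to ask the Namenode to recover a lease
> from a client looks like the sketch below (the path is illustrative and this
> assumes fs.defaultFS points at HDFS); in the scenario above the file never
> gets closed, which is exactly the problem described here.
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
>
> public class RecoverLeaseSketch {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     DistributedFileSystem dfs =
>         (DistributedFileSystem) new Path("/").getFileSystem(conf);
>
>     // Ask the Namenode to start lease recovery for the abandoned file.
>     // Returns true only once the file has been closed.
>     boolean closed = dfs.recoverLease(new Path("/tmp/testhadoop.txt"));
>     System.out.println("recoverLease returned " + closed);
>   }
> }
> {code}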