[
https://issues.apache.org/jira/browse/HDFS-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384726#comment-14384726
]
Hadoop QA commented on HDFS-7996:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12707652/HDFS-7996.000.patch
against trunk revision 05499b1.
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/10092//testReport/
Console output:
https://builds.apache.org/job/PreCommit-HDFS-Build/10092//console
This message is automatically generated.
> After swapping a volume, BlockReceiver reports ReplicaNotFoundException
> -----------------------------------------------------------------------
>
> Key: HDFS-7996
> URL: https://issues.apache.org/jira/browse/HDFS-7996
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.6.0
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
> Priority: Critical
> Attachments: HDFS-7996.000.patch
>
>
> When a disk is removed from an actively writing DataNode, the BlockReceiver
> working on that disk throws {{ReplicaNotFoundException}}, because the replicas
> have already been removed from memory:
> {code}
> 2015-03-26 08:02:43,154 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Removed volume: /data/2/dfs/dn/current
> 2015-03-26 08:02:43,163 INFO org.apache.hadoop.hdfs.server.common.Storage: Removing block level storage: /data/2/dfs/dn/current/BP-51301509-10.20.202.114-1427296597742
> 2015-03-26 08:02:43,163 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run():
> org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-51301509-10.20.202.114-1427296597742:blk_1073742979_2160
>     at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:615)
>     at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1362)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.finalizeBlock(BlockReceiver.java:1281)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1241)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
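> The failure is a race between volume removal and an in-flight write. A
> minimal, deterministic model of that race (plain-Java stand-ins such as
> {{replicaMap}} and {{VolumeRemovalRace}}, not the actual {{FsDatasetImpl}}
> structures):
> {code}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
>
> // Simplified model of the race described above; all names here are
> // illustrative stand-ins, not real Hadoop classes.
> public class VolumeRemovalRace {
>   // Stand-in for the DataNode's in-memory replica map, keyed by block id.
>   static final Map<Long, String> replicaMap = new ConcurrentHashMap<>();
>
>   public static void main(String[] args) throws Exception {
>     final long blockId = 1073742979L;
>     replicaMap.put(blockId, "/data/2/dfs/dn/current/...");
>
>     // "removeVolume" thread: drops every replica stored on the volume.
>     Thread remover = new Thread(() -> replicaMap.remove(blockId));
>     remover.start();
>     remover.join(); // removal wins the race in this deterministic model
>
>     // "PacketResponder" side: looks the replica up again after
>     // BlockReceiver.close() has already released the volume reference.
>     if (replicaMap.get(blockId) == null) {
>       // Corresponds to FsDatasetImpl.getReplicaInfo() throwing
>       // "Cannot append to a non-existent replica".
>       System.err.println("ReplicaNotFoundException: blk_" + blockId);
>     }
>   }
> }
> {code}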
> {{FsVolumeList#removeVolume}} waits for all threads to release the
> {{FsVolumeReference}} they hold on the volume being removed. However,
> {{PacketResponder#finalizeBlock()}} calls:
> {code}
> private void finalizeBlock(long startTime) throws IOException {
>   BlockReceiver.this.close();
>   final long endTime = ClientTraceLog.isInfoEnabled() ? System.nanoTime() : 0;
>   block.setNumBytes(replicaInfo.getNumBytes());
>   datanode.data.finalizeBlock(block);
> {code}
> The {{FsVolumeReference}} is released inside {{BlockReceiver.this.close()}},
> before {{datanode.data.finalizeBlock(block)}} is called. In that window
> {{removeVolume}} can finish removing the volume and drop its replicas from
> memory, so the subsequent finalize lookup fails.
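> A sketch of one possible direction, written as a self-contained model rather
> than the actual patch: keep a closeable reference open across the whole
> finalize sequence, so volume removal cannot complete until the block is
> finalized. The {{Volume}} class and its methods below are hypothetical
> stand-ins, loosely analogous to {{FsVolumeReference}} and
> {{FsVolumeList#removeVolume}}:
> {code}
> import java.util.concurrent.atomic.AtomicInteger;
>
> // Illustrative model only: shows the "hold the reference across finalize"
> // pattern, not the real FsVolumeList/FsVolumeReference implementation.
> public class HoldReferenceAcrossFinalize {
>   static class Volume {
>     final AtomicInteger refCount = new AtomicInteger();
>
>     // Analogue of obtaining a volume reference: bumps the count and
>     // returns a closeable that drops it again.
>     AutoCloseable obtainReference() {
>       refCount.incrementAndGet();
>       return () -> {
>         synchronized (this) {
>           refCount.decrementAndGet();
>           notifyAll();
>         }
>       };
>     }
>
>     // Analogue of removeVolume: blocks until no thread still holds
>     // a reference to this volume.
>     synchronized void remove() throws InterruptedException {
>       while (refCount.get() > 0) {
>         wait();
>       }
>       System.out.println("volume removed");
>     }
>   }
>
>   public static void main(String[] args) throws Exception {
>     Volume vol = new Volume();
>     try (AutoCloseable ref = vol.obtainReference()) {
>       // close()-style cleanup may run here, but the volume reference is
>       // NOT released yet, so a finalizeBlock() analogue would still find
>       // the replica.
>       System.out.println("finalizeBlock completes under the reference");
>     } // the reference is released only after finalize is done
>     vol.remove(); // removeVolume can now proceed safely
>   }
> }
> {code}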
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)