[
https://issues.apache.org/jira/browse/HDFS-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stanley shi updated HDFS-6522:
------------------------------
Description:
My environment: HA cluster with 4 datanodes.
Here are the steps to reproduce:
1. put one file (one block) to HDFS with repl=3; assume dn1, dn2, and dn3 have
the block for this file, and dn4 does not;
2. stop dn1;
3. append content to the file 100 times;
4. stop dn4 and start dn1;
5. append content to the file 100 times again;
The append will fail during these 100 appends.
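With the stock Hadoop 2.x shell the steps above can be sketched roughly as follows; file and path names are illustrative, the daemon commands must be run on the node named in the comment, and this is a sketch of the repro, not a verified script:

```shell
# On a client node: write a small one-block file with replication 3.
hdfs dfs -D dfs.replication=3 -put localfile.txt /tmp/appendtest

# On dn1: take the datanode down so it misses the upcoming appends.
hadoop-daemon.sh stop datanode

# On the client: append 100 times (served by the remaining datanodes).
for i in $(seq 1 100); do
  hdfs dfs -appendToFile chunk.txt /tmp/appendtest
done

# Swap which datanode is down: stop dn4, restart dn1.
hadoop-daemon.sh stop datanode    # on dn4
hadoop-daemon.sh start datanode   # on dn1

# On the client: append 100 more times; some of these fail while dn1
# still holds the out-dated replica.
for i in $(seq 1 100); do
  hdfs dfs -appendToFile chunk.txt /tmp/appendtest
done
```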
Check the datanode log on dn1; many entries like the following appear: {quote}
2014-06-12 12:07:04,442 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304
2014-06-12 12:07:04,442 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hdsh145:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.37.7.146:55594 dest: /10.37.7.145:50010
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:392)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:527)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:92)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:174)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:454)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
	at java.lang.Thread.run(Thread.java:722)
{quote}
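A plausible reading of the trace: after restarting, dn1 still carries the replica as it was before the 100 appends it missed, so when the write pipeline asks it to append at the current generation stamp, FsDatasetImpl.getReplicaInfo finds no matching replica and throws ReplicaNotFoundException. A minimal Python model of that check (the names, the old generation stamp 61204, and the lengths below are illustrative, not HDFS's actual code or values):

```python
class ReplicaNotFoundException(Exception):
    """Mirrors the exception seen in the dn1 log."""

def get_replica_info(replica_map, block_id, gen_stamp):
    """Simplified stand-in for FsDatasetImpl.getReplicaInfo: the append
    pipeline asks for the block at the *current* generation stamp, and a
    datanode whose copy is out-dated (or gone) cannot satisfy it."""
    replica = replica_map.get(block_id)
    if replica is None or replica["gen_stamp"] != gen_stamp:
        raise ReplicaNotFoundException(
            "Cannot append to a non-existent replica blk_%d_%d"
            % (block_id, gen_stamp))
    return replica

# dn1 restarted with the replica as it was before the 100 appends:
dn1_replicas = {1073742928: {"gen_stamp": 61204, "len": 1024}}

# dn2 saw every append and carries the current generation stamp:
dn2_replicas = {1073742928: {"gen_stamp": 61304, "len": 103424}}

assert get_replica_info(dn2_replicas, 1073742928, 61304) is not None
try:
    get_replica_info(dn1_replicas, 1073742928, 61304)  # out-dated copy
except ReplicaNotFoundException as e:
    print(e)  # Cannot append to a non-existent replica blk_1073742928_61304
```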
was:
My environment: HA cluster with 4 datanodes.
Here are the steps to reproduce:
1. put one file (one block) to HDFS with repl=3; assume dn1, dn2, and dn3 have
the block for this file, and dn4 does not;
2. stop dn1;
3. append content to the file 100 times;
4. close dn1 and start dn4;
5. append content to the file 100 times again;
Check the datanode log on dn1; many entries like the following appear: {quote}
2014-06-12 12:07:04,442 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304
2014-06-12 12:07:04,442 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hdsh145.lss.emc.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.37.7.146:55594 dest: /10.37.7.145:50010
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:392)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:527)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:92)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:174)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:454)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
	at java.lang.Thread.run(Thread.java:722)
{quote}
> DN will try to append to non-existent replica if the datanode has out-dated
> block
> ---------------------------------------------------------------------------------
>
> Key: HDFS-6522
> URL: https://issues.apache.org/jira/browse/HDFS-6522
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.2.0
> Reporter: stanley shi
>
> My environment: HA cluster with 4 datanodes.
> Here are the steps to reproduce:
> 1. put one file (one block) to HDFS with repl=3; assume dn1, dn2, and dn3
> have the block for this file, and dn4 does not;
> 2. stop dn1;
> 3. append content to the file 100 times;
> 4. stop dn4 and start dn1;
> 5. append content to the file 100 times again;
> The append will fail during these 100 appends.
> Check the datanode log on dn1; many entries like the following appear: {quote}
> 2014-06-12 12:07:04,442 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304
> 2014-06-12 12:07:04,442 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hdsh145:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.37.7.146:55594 dest: /10.37.7.145:50010
> org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304
> 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:392)
> 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:527)
> 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:92)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:174)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:454)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
> 	at java.lang.Thread.run(Thread.java:722)
> {quote}
--
This message was sent by Atlassian JIRA
(v6.2#6252)