[
https://issues.apache.org/jira/browse/HDFS-11346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
gehaijiang updated HDFS-11346:
------------------------------
Description:
After a DataNode service restart, the DataNode log file contains many errors like the following (the exception message is truncated in this excerpt):
unlinked=false
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:248)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:541)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:745)
2017-01-19 00:10:11,772 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
guomai021141:50010:DataXceiver error processing READ_BLOCK operation src:
/10.11.2.188:54531 dst: /10.17.21.141:50010
was:
After a DataNode restart, the DataNode log file contains many errors like the following (the exception message is truncated in this excerpt):
unlinked=false
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:248)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:541)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:745)
2017-01-19 00:10:11,772 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
guomai021141:50010:DataXceiver error processing READ_BLOCK operation src:
/10.11.2.188:54531 dst: /10.17.21.141:50010
Summary: DataXceiver error processing READ_BLOCK operation (was:
datanode reboot Many error “DataXceiver error processing READ_BLOCK operation”)
> DataXceiver error processing READ_BLOCK operation
> -------------------------------------------------
>
> Key: HDFS-11346
> URL: https://issues.apache.org/jira/browse/HDFS-11346
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.7.1
> Reporter: gehaijiang
>
> After a DataNode service restart, the DataNode log file contains many errors like the following (the exception message is truncated in this excerpt):
> unlinked=false
>     at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:248)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:541)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
>     at java.lang.Thread.run(Thread.java:745)
> 2017-01-19 00:10:11,772 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
> guomai021141:50010:DataXceiver error processing READ_BLOCK operation src:
> /10.11.2.188:54531 dst: /10.17.21.141:50010
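Editor's note (not part of the original report): the stack trace above is the DataNode's block-read path, in which an incoming READ_BLOCK request is dispatched by DataXceiver.readBlock and a BlockSender is constructed for the requested replica; the top frame shows the failure occurring in the BlockSender constructor. The sketch below is a minimal, hypothetical client read that exercises this path; the NameNode address and file path are placeholders, not values taken from the issue.

{code:java}
// Minimal sketch of a client read that reaches the DataNode READ_BLOCK path
// (DataXceiver.readBlock -> BlockSender). Hostname and path are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ReadBlockExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode-host:8020"); // placeholder NameNode address
    FileSystem fs = FileSystem.get(conf);
    // Reading any HDFS file becomes one or more READ_BLOCK operations on the
    // DataNodes holding its block replicas; errors like the one logged above
    // appear when a DataNode cannot serve the requested replica.
    try (FSDataInputStream in = fs.open(new Path("/tmp/some-file"))) {
      IOUtils.copyBytes(in, System.out, 4096, false);
    }
    fs.close();
  }
}
{code}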
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]