[ https://issues.apache.org/jira/browse/HADOOP-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639146#comment-14639146 ]

zhihai xu commented on HADOOP-12260:
------------------------------------

Hi [~butzy92], thanks for reporting this issue.
{code}
2015-07-17 16:33:45,671 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: Die Verbindung wurde vom Kommunikationspartner zurückgesetzt
        at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
        at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:443)
        at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:575)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:728)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:496)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
        at java.lang.Thread.run(Thread.java:745)
{code}
Based on the above stack trace (the German message translates to "Connection reset by peer"), this looks like an HDFS issue, so I updated the component to HDFS.
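For illustration only (this is not Hadoop code): the IOException above surfaces when the reading client aborts its connection while the DataNode is still pushing block data through {{FileChannel.transferTo}}. A minimal sketch of that failure mode, assuming a Linux-like TCP stack, where a peer closing with SO_LINGER=0 sends a RST and the sender's next write fails with "Connection reset by peer":

```python
# Sketch: reproduce "Connection reset by peer" on the sending side
# when the reader aborts mid-transfer (analogous to a DFS client
# dying while BlockSender.sendChunks() is streaming a block).
import socket
import struct

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket()
client.connect(server.getsockname())
conn, _ = server.accept()

# SO_LINGER with timeout 0 makes close() send a TCP RST,
# mimicking a reader that dies instead of closing cleanly.
client.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
client.close()

error = None
try:
    # Keep writing until the kernel notices the reset.
    for _ in range(100):
        conn.sendall(b"x" * 65536)
except OSError as e:
    error = e

print(type(error).__name__)
conn.close()
server.close()
```

On Linux this typically prints {{ConnectionResetError}}, i.e. the same ECONNRESET the DataNode logs. If that is what is happening here, the error reflects client behavior rather than a DataNode or disk fault.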

> BlockSender.sendChunks() exception
> ----------------------------------
>
>                 Key: HADOOP-12260
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12260
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.6.0, 2.7.1
>         Environment: OS: CentOS Linux release 7.1.1503 (Core) 
> Kernel: 3.10.0-229.1.2.el7.x86_64
>            Reporter: Marius
>
> Hi,
> I was running some streaming jobs with Avro files on my Hadoop cluster. 
> They performed poorly, so I checked the logs of my datanodes and found this:
> http://pastebin.com/DXKJJ55z
> The cluster is running on CentOS machines:
> CentOS Linux release 7.1.1503 (Core) 
> This is the Kernel:
> 3.10.0-229.1.2.el7.x86_64
> No one on the user list replied, and I could not find anything helpful on 
> the internet apart from disk failure, which is unlikely to be the cause: 
> several machines are affected, and it is not very likely that all of their 
> disks fail at the same time.
> The error is not reported on the console when running a job; it occurs from 
> time to time, then disappears and comes back again.
> The block size of the cluster is the default value.
> This is my command:
> hadoop jar hadoop-streaming-2.7.1.jar -files mapper.py,reducer.py,avro-1.7.7.jar,avro-mapred-1.7.7-hadoop2.jar -D mapreduce.job.reduces=15 -libjars avro-1.7.7.jar,avro-mapred-1.7.7-hadoop2.jar -input /Y/Y1.avro -output /htest/output -mapper mapper.py -reducer reducer.py -inputformat org.apache.avro.mapred.AvroAsTextInputFormat
> Marius



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
