Andrian Jardan created HDFS-10771:
-------------------------------------

             Summary: Error while reading block java.io.IOException: Need xx bytes, but only yy bytes available
                 Key: HDFS-10771
                 URL: https://issues.apache.org/jira/browse/HDFS-10771
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode
    Affects Versions: 2.6.0
         Environment: Hadoop 2.6.0-cdh5.7.0
Subversion http://github.com/cloudera/hadoop -r c00978c67b0d3fe9f3b896b5030741bd40bf541a
Compiled by jenkins on 2016-03-23T18:36Z
Compiled with protoc 2.5.0
From source with checksum b2eabfa328e763c88cb14168f9b372
This command was run using /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/jars/hadoop-common-2.6.0-cdh5.7.0.jar
            Reporter: Andrian Jardan
            Priority: Minor


We get an error every time we try to "distcp" a file from this cluster, while a plain "cp" of the same file works just fine. Here is what I found in the log on the DataNode that distcp tries to copy from:

{code}
2016-08-17 18:02:49,073 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock BP-626139917-127.0.0.1-1438009948483:blk_1152071533_78503164 received exception java.io.IOException: Need 21925420 bytes, but only 16682940 bytes available
2016-08-17 18:02:49,075 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1, datanodeUuid=b6c35b7e-9ab7-4b1b-9258-69988346142b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=cluster6;nsid=895831559;c=0):Got exception while serving BP-626139917-127.0.0.1-1438009948483:blk_1152071533_78503164 to /127.0.0.2:43758
java.io.IOException: Need 21925420 bytes, but only 16682940 bytes available
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:473)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:241)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:531)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:148)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
        at java.lang.Thread.run(Thread.java:745)
2016-08-17 18:02:49,075 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: dn:50010:DataXceiver error processing READ_BLOCK operation  src: /127.0.0.2:43758 dst: /127.0.0.1:50010
java.io.IOException: Need 21925420 bytes, but only 16682940 bytes available
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:473)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:241)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:531)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:148)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
        at java.lang.Thread.run(Thread.java:745)
{code}

Is this error referring to RAM (heap), or to something else?
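
For reference, here is roughly the check that appears to throw this exception, paraphrased from the stack trace (BlockSender.waitForMinLength); the type and method names below are my approximations, not the CDH source:

{code}
import java.io.IOException;

final class WaitForMinLengthSketch {
    /** Stand-in for the handle a DataNode keeps for a replica still being written. */
    interface ReplicaInPipeline {
        long getBytesOnDisk(); // bytes durably written for this replica so far
    }

    /**
     * Wait briefly for a replica under construction to reach the requested
     * length; give up with the exception seen in the log if it never does.
     */
    static void waitForMinLength(ReplicaInPipeline replica, long needed)
            throws IOException, InterruptedException {
        for (int i = 0; i < 30 && replica.getBytesOnDisk() < needed; i++) {
            Thread.sleep(100); // poll for up to ~3 seconds
        }
        long available = replica.getBytesOnDisk();
        if (available < needed) {
            throw new IOException("Need " + needed
                    + " bytes, but only " + available + " bytes available");
        }
    }
}
{code}

If that reading is right, the two numbers would be bytes of the block replica on the DataNode's disk (a replica still in the write pipeline), not heap memory.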

And why doesn't the read retry from another DataNode? The replication factor for this file is 3.
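
To make the expectation concrete, this is the kind of per-replica failover I assumed the read path would do; purely illustrative pseudologic, none of these names come from the HDFS client:

{code}
import java.io.IOException;
import java.util.List;

final class ReplicaFailoverSketch {
    /** Hypothetical handle for reading one replica from one DataNode. */
    interface DataNodeClient {
        byte[] readBlock(String blockId) throws IOException;
    }

    /** Try each replica location in turn; fail only when all are exhausted. */
    static byte[] readWithFailover(String blockId, List<DataNodeClient> locations)
            throws IOException {
        IOException last = null;
        for (DataNodeClient dn : locations) {
            try {
                return dn.readBlock(blockId);
            } catch (IOException e) {
                last = e; // remember the failure and move on to the next replica
            }
        }
        throw new IOException("all replicas failed for block " + blockId, last);
    }
}
{code}

With three replicas of this file, I expected the distcp read to fall back like that instead of failing outright.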


