[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15959956#comment-15959956 ]

Xiaoyu Yao edited comment on HDFS-11608 at 4/6/17 11:35 PM:
------------------------------------------------------------

Thanks [~xiaobingo] for the contribution, and thanks to all for the reviews and 
discussions. I committed the patch to trunk, branch-2, and branch-2.8. 

[~xiaobingo], can you help prepare a patch for branch-2.7, which has the same 
issue? 


was (Author: xyao):
Thanks [~xiaobingo] for the contribution, and thanks to all for the reviews and 
discussions. I committed the patch to trunk and branch-2. 

Also, I suggest backporting this to branch-2.7 and branch-2.8, which have the 
same issue. 

> HDFS write crashed with block size greater than 2 GB
> ----------------------------------------------------
>
>                 Key: HDFS-11608
>                 URL: https://issues.apache.org/jira/browse/HDFS-11608
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.8.0
>            Reporter: Xiaobing Zhou
>            Assignee: Xiaobing Zhou
>            Priority: Critical
>             Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
>         Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS writes crash when the block size is huge. For example, when 
> writing a 3 GB file with a block size greater than 2 GB (e.g., 3 GB), the 
> HDFS client throws an out-of-memory exception and the DataNode reports an 
> IOException. After raising the heap size limit, a DFSOutputStream 
> ResponseProcessor exception is seen, followed by a broken pipe and pipeline 
> recovery.
> Given below is the DN exception:
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
>         at java.lang.Thread.run(Thread.java:745)
> {noformat}
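The rejected payload size in the log (2147483128, just under Integer.MAX_VALUE) is characteristic of 32-bit arithmetic on a block size that no longer fits in an int. A minimal sketch of that failure mode is below; the method names, the 16 MB packet cap, and the exact arithmetic are illustrative assumptions, not the actual HDFS-11608 code or fix.

```java
// Hypothetical sketch: how narrowing a > 2 GB remaining-byte count to int
// before clamping can yield a nonsense packet payload size.
public class PacketSizeOverflow {
    static final int MAX_PACKET_SIZE = 16 * 1024 * 1024; // assumed cap, not HDFS's

    // Buggy variant: casts the 64-bit remaining count to int first,
    // so anything past Integer.MAX_VALUE wraps around.
    static int buggyPayloadSize(long blockSize, long bytesWritten) {
        int remaining = (int) (blockSize - bytesWritten); // wraps past 2 GB
        return Math.min(remaining, MAX_PACKET_SIZE);
    }

    // Safe variant: clamp in 64-bit arithmetic, then narrow.
    static int fixedPayloadSize(long blockSize, long bytesWritten) {
        return (int) Math.min(blockSize - bytesWritten, (long) MAX_PACKET_SIZE);
    }

    public static void main(String[] args) {
        long blockSize = 3L * 1024 * 1024 * 1024; // a 3 GB block
        // Buggy path produces a negative (wrapped) size instead of the cap.
        System.out.println(buggyPayloadSize(blockSize, 0)); // -1073741824
        System.out.println(fixedPayloadSize(blockSize, 0)); // 16777216
    }
}
```

A receiver that validates the declared payload size against its expected maximum, as the DataNode does above, will reject such wrapped or oversized values with "Incorrect value for packet payload size".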



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
