SocketOutputStream.transferToFully fails for blocks >= 2GB on 32-bit JVM
------------------------------------------------------------------------
Key: HDFS-1527
URL: https://issues.apache.org/jira/browse/HDFS-1527
Project: Hadoop HDFS
Issue Type: Bug
Components: data-node
Affects Versions: 0.23.0
Environment: 32-bit JVM
Reporter: Patrick Kling
Fix For: 0.23.0
On a 32-bit JVM, SocketOutputStream.transferToFully() fails when the block size is >= 2GB: the native transferTo0() call cannot handle a transfer count that large and throws "Value too large for defined data type". We should fall back to a normal buffered transfer in this case. A sketch of one possible fallback appears after the stack trace below.
{code}
2010-12-02 19:04:23,490 ERROR datanode.DataNode (BlockSender.java:sendChunks(399)) - BlockSender.sendChunks() exception: java.io.IOException: Value too large for defined data type
        at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
        at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:418)
        at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:519)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:204)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:386)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:475)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opReadBlock(DataXceiver.java:196)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opReadBlock(DataTransferProtocol.java:356)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:328)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
        at java.lang.Thread.run(Thread.java:619)
{code}
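For illustration, here is a minimal sketch of the proposed fallback. The class and method names, the catch-and-retry shape, and the 64 KB buffer size are all hypothetical (this is not the committed patch): the idea is simply to resume with an ordinary read/write loop when the zero-copy transferTo() path fails.
{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class TransferFallbackSketch {
  // Transfer count bytes starting at position from fileCh to target.
  // If the zero-copy transferTo() path fails (e.g. "Value too large for
  // defined data type" on a 32-bit JVM for counts >= 2GB), fall back to a
  // plain buffered copy. Assumes a blocking target channel.
  static void transferFully(FileChannel fileCh, long position, long count,
                            WritableByteChannel target) throws IOException {
    try {
      while (count > 0) {
        long n = fileCh.transferTo(position, count, target);
        position += n;
        count -= n;
      }
    } catch (IOException e) {
      // Fallback: ordinary read/write loop, resuming from where the
      // zero-copy transfer stopped (position/count only reflect bytes
      // that were actually transferred before the failure).
      ByteBuffer buf = ByteBuffer.allocate(64 * 1024); // hypothetical size
      while (count > 0) {
        buf.clear();
        buf.limit((int) Math.min(buf.capacity(), count));
        int read = fileCh.read(buf, position);
        if (read < 0) {
          throw new IOException("unexpected EOF at offset " + position);
        }
        buf.flip();
        while (buf.hasRemaining()) {
          target.write(buf);
        }
        position += read;
        count -= read;
      }
    }
  }
}
{code}
An alternative worth weighing would be to cap each transferTo() call below 2GB and loop, which keeps the zero-copy path on 32-bit JVMs instead of abandoning it entirely.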