Hi,
I'm using HBase version 0.94.1 and Hadoop version 1.0.3.
I'm running HBase + HDFS on a 4-node cluster (48 GB RAM and 12 TB of disk space on each node):
1 HMaster + NameNode, and
3 HRegionServers + DataNodes.
Replication is set to 2.
I'm running 6 MapReduce jobs (two of which run concurrently).
When the MapReduce jobs are triggered, the datanode log shows exceptions like this:
2012-11-26 17:37:38,672 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-4043001352486758862_3090 received exception java.io.EOFException
2012-11-26 17:37:38,673 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.63.63.249:50010, storageID=DS-778870342-10.63.63.249-50010-1353922061110, infoPort=50075, ipcPort=50020):DataXceiver
java.io.EOFException
        at java.io.DataInputStream.readShort(DataInputStream.java:298)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:351)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:107)
        at java.lang.Thread.run(Thread.java:619)
2012-11-26 17:37:38,675 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_5001084339060873354_3090 src: /10.63.63.249:37109 dest: /10.63.63.249:50010
The xciever value is set as below in hdfs-site.xml (note the property name really is spelled "xcievers"):
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>16384</value>
</property>
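In case it helps, here is a rough sketch of how I'm sanity-checking that the OS open-file limit for the user running the DataNode covers the xciever ceiling (each DataXceiver thread holds file descriptors, so the ulimit should comfortably exceed it). The 16384 below just mirrors the value configured above; this is a quick check I run on each datanode, not part of any Hadoop tooling:

```shell
# Ceiling configured via dfs.datanode.max.xcievers (value from hdfs-site.xml above)
XCIEVERS=16384

# Open-file limit for the current shell; run this as the user that
# starts the DataNode process for a meaningful result.
FD_LIMIT=$(ulimit -n)

# Flag the case where the OS limit is below the configured xciever ceiling.
if [ "$FD_LIMIT" != "unlimited" ] && [ "$FD_LIMIT" -lt "$XCIEVERS" ]; then
    echo "WARNING: ulimit -n ($FD_LIMIT) is below dfs.datanode.max.xcievers ($XCIEVERS)"
else
    echo "OK: ulimit -n ($FD_LIMIT) covers dfs.datanode.max.xcievers ($XCIEVERS)"
fi
```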
Could anyone shed some more light on why this is happening?
Thanks,
Arati Patro