I am receiving a RetriesExhaustedException during HTable.commit;
the cell size is 50 MB (2,500 x 2,500 double entries).
Is there a configuration setting that avoids this problem?
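For reference, my back-of-the-envelope arithmetic for the cell size (assuming 8 bytes per double and ignoring key and serialization overhead):

```java
// Rough size of one matrix block cell: 2,500 x 2,500 double entries.
public class BlockSize {
    public static void main(String[] args) {
        long entries = 2_500L * 2_500L;        // 6,250,000 entries
        long bytes = entries * Double.BYTES;   // 8 bytes per double
        System.out.println(bytes + " bytes");  // 50,000,000 bytes, i.e. ~50 MB
    }
}
```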
Cluster: 4 nodes, 16 cores (Intel(R) Xeon(R) CPU 2.33GHz, SATA hard
disks, 16 GB physical memory)
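In case it matters, I have been wondering whether raising the region server heap would help; my hbase-env.sh still uses the default. A sketch of what I would try (HBASE_HEAPSIZE is in MB; 4000 is only my guess, not a recommended value):

```shell
# conf/hbase-env.sh -- maximum heap for the HBase daemons, in MB.
# The default is 1000; with 50 MB cells and 16 GB of physical memory,
# something larger seems worth trying (4000 here is just a guess).
export HBASE_HEAPSIZE=4000
```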
Thanks.
----
08/12/11 11:40:58 INFO mapred.JobClient: map 100% reduce 76%
08/12/11 11:41:02 INFO mapred.JobClient: map 100% reduce 80%
08/12/11 11:42:07 INFO mapred.JobClient: Task Id :
attempt_200812100956_0044_r_000007_1, Status : FAILED
org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to
contact region server 61.247.201.164:60020 for region
DenseMatrix_randmmnwo,,1228961537371, row '1', but failed after 10
attempts.
Exceptions:
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:863)
at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:964)
at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:950)
at org.apache.hama.DenseMatrix.setBlock(DenseMatrix.java:496)
at org.apache.hama.mapred.BlockingMapRed$BlockingReducer.reduce(BlockingMapRed.java:150)
at org.apache.hama.mapred.BlockingMapRed$BlockingReducer.reduce(BlockingMapRed.java:122)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:318)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
--
Best Regards, Edward J. Yoon @ NHN, corp.
[EMAIL PROTECTED]
http://blog.udanax.org