Hi Edward,

You may want to update again. 

If you are running the latest trunk as of the timestamp of
this reply and you still see the RetriesExhaustedExceptions
(REEs), then they are most likely the result of a region
server going down and/or a failure to reassign a region
after a split. Master log entries for the region(s) in
question would be helpful.
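
For reference, here is a rough sketch of where those buffered
writes actually go out and which client-side settings govern
the "failed after 10 attempts" behavior. This is written
against the 0.19-era client API; the table and row names are
taken from your log, while the column name and the tuned
values are made up:

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.RetriesExhaustedException;
import org.apache.hadoop.hbase.io.BatchUpdate;

public class FlushSketch {
  public static void main(String[] args) throws IOException {
    HBaseConfiguration conf = new HBaseConfiguration();
    // The retry count and the pause between attempts come from
    // these client-side settings (values here are illustrative).
    conf.setInt("hbase.client.retries.number", 10);
    conf.setInt("hbase.client.pause", 10000);

    HTable table = new HTable(conf, "DenseMatrix_randhknkr");
    table.setAutoFlush(false);   // buffer commits client-side

    BatchUpdate update = new BatchUpdate("000000000001168");
    update.put("column:attribute", "value".getBytes()); // made-up column
    table.commit(update);        // queued in the write buffer

    try {
      table.flushCommits();      // the batch goes to the region server here
    } catch (RetriesExhaustedException ree) {
      // The message names the region and row; match them against the
      // master log to see whether that region was ever reassigned.
      System.err.println("Flush gave up: " + ree.getMessage());
      throw ree;
    }
  }
}

If raising the retry count only delays the same failure, the
region is almost certainly stuck unassigned, which is why the
master log is the interesting place to look.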

   - Andy


--- On Tue, 1/6/09, Edward J. Yoon <[email protected]> wrote:

> From: Edward J. Yoon <[email protected]>
> Subject: Exceptions at HTable.flushCommits()
> To: [email protected]
> Cc: [email protected]
> Date: Tuesday, January 6, 2009, 5:52 PM
> After updating HBase, I get 'RetriesExhaustedException'
> while writing records, as shown below. What's wrong?
> 
> ----
> Wrote input for Map #1
> Wrote input for Map #2
> 09/01/07 10:39:51 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> 09/01/07 10:39:51 WARN mapred.JobClient: Use genericOptions for the option -libjars
> 09/01/07 10:39:51 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
> 09/01/07 10:39:51 INFO mapred.FileInputFormat: Total input paths to process : 3
> 09/01/07 10:39:51 INFO mapred.JobClient: Running job: job_200901071022_0001
> 09/01/07 10:39:52 INFO mapred.JobClient:  map 0% reduce 0%
> 09/01/07 10:40:28 INFO mapred.JobClient:  map 33% reduce 0%
> 09/01/07 10:40:31 INFO mapred.JobClient:  map 100% reduce 0%
> 09/01/07 10:40:43 INFO mapred.JobClient:  map 100% reduce 22%
> 09/01/07 10:40:45 INFO mapred.JobClient:  map 100% reduce 44%
> 09/01/07 10:40:47 INFO mapred.JobClient:  map 100% reduce 67%
> 09/01/07 10:40:53 INFO mapred.JobClient:  map 100% reduce 68%
> 09/01/07 10:40:58 INFO mapred.JobClient:  map 100% reduce 69%
> 09/01/07 10:41:07 INFO mapred.JobClient:  map 100% reduce 70%
> 09/01/07 10:41:12 INFO mapred.JobClient:  map 100% reduce 71%
> 09/01/07 10:41:18 INFO mapred.JobClient:  map 100% reduce 72%
> 09/01/07 10:41:25 INFO mapred.JobClient:  map 100% reduce 73%
> 09/01/07 10:41:32 INFO mapred.JobClient:  map 100% reduce 74%
> 09/01/07 10:42:51 INFO mapred.JobClient:  map 100% reduce 0%
> 09/01/07 10:42:51 INFO mapred.JobClient: Task Id : attempt_200901071022_0001_r_000001_0, Status : FAILED
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server Some server for region DenseMatrix_randhknkr,,1231292385276, row '000000000001168', but failed after 10 attempts.
> Exceptions:
> 
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:943)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1344)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1315)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1295)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:71)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:51)
>         at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:405)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:63)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:36)
>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>         at org.apache.hadoop.mapred.Child.main(Child.java:155)
> 
> 09/01/07 10:42:51 INFO mapred.JobClient: Task Id : attempt_200901071022_0001_r_000000_0, Status : FAILED
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server Some server for region DenseMatrix_randhknkr,,1231292385276, row '000000000001053', but failed after 10 attempts.
> Exceptions:
> 
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:943)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1344)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1315)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1295)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:71)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:51)
>         at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:405)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:63)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:36)
>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>         at org.apache.hadoop.mapred.Child.main(Child.java:155)
> 
> 09/01/07 10:42:51 INFO mapred.JobClient: Task Id : attempt_200901071022_0001_r_000002_0, Status : FAILED
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server Some server for region DenseMatrix_randhknkr,,1231292385276, row '000000000001061', but failed after 10 attempts.
> Exceptions:
> 
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:943)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1344)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1315)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1295)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:71)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:51)
>         at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:405)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:63)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:36)
>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>         at org.apache.hadoop.mapred.Child.main(Child.java:155)
> 
> 09/01/07 10:43:01 INFO mapred.JobClient:  map 100% reduce 45%
> 09/01/07 10:43:05 INFO mapred.JobClient:  map 100% reduce 68%
> 09/01/07 10:43:10 INFO mapred.JobClient:  map 100% reduce 69%
> 09/01/07 10:43:15 INFO mapred.JobClient:  map 100% reduce 70%
> 09/01/07 10:43:25 INFO mapred.JobClient:  map 100% reduce 71%
> 09/01/07 10:43:30 INFO mapred.JobClient:  map 100% reduce 72%
> 09/01/07 10:43:40 INFO mapred.JobClient:  map 100% reduce 73%
> 09/01/07 10:43:45 INFO mapred.JobClient:  map 100% reduce 74%
> 09/01/07 10:43:51 INFO mapred.JobClient:  map 100% reduce 75%
> 09/01/07 10:45:11 INFO mapred.JobClient: Task Id : attempt_200901071022_0001_r_000002_1, Status : FAILED
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server Some server for region DenseMatrix_randhknkr,000000000000257,1231292488131, row '000000000001172', but failed after 10 attempts.
> Exceptions:
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:943)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1344)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1315)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1295)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:71)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:51)
>         at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:405)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:63)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:36)
>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>         at org.apache.hadoop.mapred.Child.main(Child.java:155)
> 
> 09/01/07 10:45:11 INFO mapred.JobClient: Task Id : attempt_200901071022_0001_r_000001_1, Status : FAILED
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server Some server for region DenseMatrix_randhknkr,000000000000257,1231292488131, row '000000000001330', but failed after 10 attempts.
> Exceptions:
> 
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:943)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1344)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1315)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1295)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:71)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:51)
>         at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:405)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:63)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:36)
>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>         at org.apache.hadoop.mapred.Child.main(Child.java:155)
> 
> 09/01/07 10:45:11 INFO mapred.JobClient: Task Id : attempt_200901071022_0001_r_000000_1, Status : FAILED
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server Some server for region DenseMatrix_randhknkr,000000000000257,1231292488131, row '000000000001344', but failed after 10 attempts.
> Exceptions:
> 
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:943)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1344)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1315)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1295)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:71)
>         at org.apache.hama.mapred.VectorOutputFormat$TableRecordWriter.write(VectorOutputFormat.java:51)
>         at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:405)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:63)
>         at org.apache.hama.mapred.RandomMatrixReduce.reduce(RandomMatrixReduce.java:36)
>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>         at org.apache.hadoop.mapred.Child.main(Child.java:155)
> 
> 09/01/07 10:45:21 INFO mapred.JobClient:  map 100% reduce 45%
> 09/01/07 10:45:26 INFO mapred.JobClient:  map 100% reduce 67%
> 09/01/07 10:45:31 INFO mapred.JobClient:  map 100% reduce 68%
> 09/01/07 10:45:36 INFO mapred.JobClient:  map 100% reduce 69%
> 09/01/07 10:45:41 INFO mapred.JobClient:  map 100% reduce 70%
> 09/01/07 10:47:03 INFO mapred.JobClient:  map 100% reduce 0%
> 09/01/07 10:47:03 INFO mapred.JobClient: Task Id : attempt_200901071022_0001_r_000002_2, Status : FAILED
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server Some server for region DenseMatrix_randhknkr,000000000000257,1231292627625, row '000000000000635', but failed after 10 attempts.
> Exceptions:
> 
> -- 
> Best Regards, Edward J. Yoon @ NHN, corp.
> [email protected]
> http://blog.udanax.org


      
