Hi Ryan,

The table is online, since other mapred tasks continue to run without failing.
There was a major compaction running on the region server which took almost a minute. I am assuming one minute since there was no log entry for one minute before it completed the compaction. And from the exception it looks like the client tried only once, because it says "1 time".

Thanks
Charan

Sent from my iPhone

On Jan 25, 2011, at 7:42 PM, Ryan Rawson <[email protected]> wrote:

> the problem is the client was talking to the given regionserver, and
> that regionserver kept on rejecting the requests - NSRE. Are you sure
> your table is online? Are all regions online? Anything interesting
> in the master log?
>
> -ryan
>
> On Tue, Jan 25, 2011 at 7:32 PM, charan kumar <[email protected]> wrote:
>> Hi,
>>
>> Map Reduce tasks are failing with the following exception. There was a
>> major compaction running on the region server around the same time.
>>
>> The number of retries is not customized, so it is 10 by default. But
>> the task fails the first time it gets this exception. Any suggestions?
>>
>> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
>> Failed 1 action: NotServingRegionException: 1 time, servers with issues:
>> XXXXXXX:60020,
>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1220)
>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1234)
>>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:819)
>>         at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:675)
>>         at org.apache.hadoop.hbase.client.HTable.put(HTable.java:660)
>>         at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:126)
>>         at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:81)
>>         at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:508)
>>         at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>>         at com.ask.af.segscan.SegmentScanner$WebTableReducer.reduce(SegmentScanner.java:284)
>>         at com.ask.af.segscan.SegmentScanner$WebTableReducer.reduce(SegmentScanner.java:91)
>>         at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
>>         at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:566)
>>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>>         at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>
>> Thanks,
>> Charan
>>
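If the goal is to ride out the brief NotServingRegionException window during a compaction or region move rather than fail the task, one option is to raise the HBase client retry settings in the job configuration that TableOutputFormat picks up. Below is a minimal sketch in Java; the property keys (hbase.client.retries.number, hbase.client.pause) are the standard HBase client settings, while the table name "webtable", the retry/pause values, and the job wiring are placeholders, not taken from this thread.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.mapreduce.Job;
    import com.ask.af.segscan.SegmentScanner;

    public class SegmentScannerJob {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Allow more retry attempts than the default of 10, and wait longer
        // between attempts, so puts can outlast a short NSRE window.
        conf.setInt("hbase.client.retries.number", 20);
        conf.setLong("hbase.client.pause", 2000); // ms between retries (illustrative value)

        Job job = new Job(conf, "SegmentScanner");
        job.setJarByClass(SegmentScannerJob.class);
        // "webtable" is a placeholder output table name; WebTableReducer is the
        // reducer class that appears in the stack trace above.
        TableMapReduceUtil.initTableReducerJob("webtable",
            SegmentScanner.WebTableReducer.class, job);
        // ... mapper and input setup omitted ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Whether more retries actually help depends on why the region was unavailable in the first place, so checking the master log for the region's state around that minute is still worthwhile.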
