Thanks Ted. Is there any way I can fix this in 0.20.6? How can a single Put
refer to two rows? Is there any coding practice with which I can avoid this?
The exception is not fatal in the sense that the job still completes; I just
have a few failed tasks, which wastes time.

Hari

On Tue, Feb 22, 2011 at 9:05 PM, Ted Yu <[email protected]> wrote:

> The put() call handles more than one row, destined for more than one region
> server.
> HConnectionManager wasn't able to find the region server that serves the
> row, hence the error.
>
> Please upgrade to 0.90.1
>
> On Tue, Feb 22, 2011 at 6:27 AM, Hari Sreekumar <[email protected]
> >wrote:
>
> > What does this exception signify:
> >
> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact
> > region server Some server, retryOnlyOne=true, index=0, islastrow=false,
> > tries=9, numtries=10, i=0, listsize=405,
> > region=NwKeywordTest,20927_57901_277247_8728141,1298383184948 for region
> > KeywordTest,20927_57901_277247_8728141,1298383184948, row
> > '20927_57902_277417_8744379', but failed after 10 attempts.
> > Exceptions:
> >
> >         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers$Batch.process(HConnectionManager.java:1157)
> >         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:1238)
> >         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:666)
> >         at org.apache.hadoop.hbase.client.HTable.put(HTable.java:510)
> >         at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:94)
> >         at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:55)
> >         at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:498)
> >         at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
> >         at com.clickable.dataengine.hbase.upload.BulkUploadtoHBase$BulkUploadMapper.map(Unknown Source)
> >         at com.clickable.dataengine.hbase.upload.BulkUploadtoHBase$BulkUploadMapper.map(Unknown Source)
> >         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> >         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
> >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
> >         at org.apache.hadoop.mapred.Child.main(Child.java:170)
> >
> > How can I avoid it?
> >
> > Thanks,
> > Hari
> >
>
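[Editor's note: Ted's point is that a single HTable.put() does not refer to two rows; rather, the client buffers many single-row Puts and, at flushCommits() time, groups them by region (the listsize=405 in the trace is one such batch), so one flush can target several region servers and fail if any region location is stale. The toy sketch below illustrates that grouping; it is a self-contained simulation, not the HBase API, and all class and method names in it are hypothetical.]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/**
 * Toy illustration (not the HBase API): a client-side write buffer
 * that collects many single-row puts and, at flush time, groups them
 * by the region that serves each row key. One flush can therefore
 * touch several region servers; a stale location for any one group
 * produces a retry loop like the one in the stack trace above.
 */
public class WriteBufferSketch {
    // Hypothetical region start keys, sorted; a row is routed to the
    // region whose start key is the greatest key <= the row key,
    // mimicking the client's region lookup.
    private final TreeMap<String, String> regionStarts = new TreeMap<>();
    private final List<String> buffer = new ArrayList<>();

    public WriteBufferSketch(Map<String, String> regions) {
        regionStarts.putAll(regions);
    }

    /** Buffered "put": nothing is sent to a server yet. */
    public void put(String rowKey) {
        buffer.add(rowKey);
    }

    /** Group buffered rows by region, as a real flush would. */
    public Map<String, List<String>> flushGroups() {
        Map<String, List<String>> groups = new TreeMap<>();
        for (String row : buffer) {
            String region = regionStarts.floorEntry(row).getValue();
            groups.computeIfAbsent(region, k -> new ArrayList<>()).add(row);
        }
        buffer.clear();
        return groups;
    }

    public static void main(String[] args) {
        Map<String, String> regions = new TreeMap<>();
        regions.put("", "regionA");   // rows before "m"
        regions.put("m", "regionB");  // rows from "m" onward
        WriteBufferSketch table = new WriteBufferSketch(regions);
        table.put("apple");
        table.put("zebra");           // lands in a different region
        table.put("banana");
        // One flush, two region groups -> two server destinations.
        System.out.println(table.flushGroups());
    }
}
```

In this model, calling put() more often does not send anything; only the flush fans the batch out, which is why a single failed region lookup can fail rows that were written by many separate put() calls.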
