We fixed a lot of the exception handling in 0.90.  The exception text
is much better. Check it out!

-ryan

On Wed, Feb 23, 2011 at 11:18 AM, Jean-Daniel Cryans
<[email protected]> wrote:
> It could be due to slow splits, heavy GC, etc. Make sure your machines
> don't swap at all, that HBase has plenty of memory, and that you're not
> scheduling more CPU work than your machines actually have cores for
> (like configuring 4 map slots on a 4-core machine that's also running
> HBase), etc.
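>
> The map-slot count is set per tasktracker in mapred-site.xml; a sketch
> (pick a value that leaves cores free for the RegionServer and DataNode):
>
>     <property>
>       <name>mapred.tasktracker.map.tasks.maximum</name>
>       <value>3</value>
>     </property>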
>
> Also upgrading to 0.90.1 will help.
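>
> Until then, you can also give the client more room to ride out transient
> region moves; the "numtries=10" in your stack trace is just the
> hbase.client.retries.number default. A hedged sketch against the 0.20.x
> client API (the values are illustrative, not recommendations):
>
>     HBaseConfiguration conf = new HBaseConfiguration();
>     // More attempts, and a longer pause between them, so splits and
>     // reassignments can finish before the client gives up.
>     conf.setInt("hbase.client.retries.number", 20);
>     conf.setLong("hbase.client.pause", 2000); // milliseconds
>     HTable table = new HTable(conf, "KeywordTest");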
>
> J-D
>
> On Tue, Feb 22, 2011 at 10:18 PM, Hari Sreekumar
> <[email protected]> wrote:
>> Thanks Ted, is there any way I can fix this in 0.20.6? How can a single Put
>> refer to two rows? Is there a coding practice with which I can avoid this?
>> The exception isn't fatal in the sense that the process still completes, I
>> just have a few failed tasks, but it wastes time.
>>
>> Hari
>>
>> On Tue, Feb 22, 2011 at 9:05 PM, Ted Yu <[email protected]> wrote:
>>
>>> The put() call handles more than one row, destined for more than one
>>> region server. HConnectionManager wasn't able to find the region server
>>> serving one of those rows, hence the error.
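>>>
>>> To be clear, each Put targets exactly one row; it's the client-side write
>>> buffer that batches many Puts (and thus many rows) into one flush, which
>>> is the batch you see failing in processBatchOfRows(). A minimal sketch
>>> against the 0.20.x client API (family/qualifier names are made up):
>>>
>>>     import org.apache.hadoop.hbase.HBaseConfiguration;
>>>     import org.apache.hadoop.hbase.client.HTable;
>>>     import org.apache.hadoop.hbase.client.Put;
>>>     import org.apache.hadoop.hbase.util.Bytes;
>>>
>>>     HTable table = new HTable(new HBaseConfiguration(), "KeywordTest");
>>>     table.setAutoFlush(false); // buffer puts client-side
>>>     for (String row : new String[] { "row-1", "row-2" }) {
>>>       Put put = new Put(Bytes.toBytes(row)); // one Put == one row
>>>       put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
>>>       table.put(put); // queued in the write buffer, no RPC yet
>>>     }
>>>     table.flushCommits(); // one batch, fanned out per region server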
>>>
>>> Please upgrade to 0.90.1.
>>>
>>> On Tue, Feb 22, 2011 at 6:27 AM, Hari Sreekumar <[email protected]> wrote:
>>>
>>> > What does this exception signify:
>>> >
>>> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server Some server, retryOnlyOne=true, index=0, islastrow=false, tries=9, numtries=10, i=0, listsize=405, region=NwKeywordTest,20927_57901_277247_8728141,1298383184948 for region KeywordTest,20927_57901_277247_8728141,1298383184948, row '20927_57902_277417_8744379', but failed after 10 attempts.
>>> > Exceptions:
>>> >
>>> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers$Batch.process(HConnectionManager.java:1157)
>>> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:1238)
>>> >        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:666)
>>> >        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:510)
>>> >        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:94)
>>> >        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:55)
>>> >        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:498)
>>> >        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>>> >        at com.clickable.dataengine.hbase.upload.BulkUploadtoHBase$BulkUploadMapper.map(Unknown Source)
>>> >        at com.clickable.dataengine.hbase.upload.BulkUploadtoHBase$BulkUploadMapper.map(Unknown Source)
>>> >        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>>> >        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
>>> >        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>>> >        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>> >
>>> > How can I avoid it?
>>> >
>>> > Thanks,
>>> > Hari
>>> >
>>>
>>
>
