Thanks J-D.
I do see this in the region server log:

2011-01-26 03:03:24,459 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call next(5800409546372591083, 1000) from 172.29.253.231:35656: output error
2011-01-26 03:03:24,462 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 256 on 60020 caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:126)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1164)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1125)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:615)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:679)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:943)

 

 ...


 
2011-01-26 03:04:17,961 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: org.apache.hadoop.hbase.UnknownScannerException: Scanner was closed (timed out?) after we renewed it. Could be caused by a very slow scanner or a lengthy garbage collection
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.next(HRegion.java:1865)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1897)
    at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
2011-01-26 03:04:17,966 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 5800409546372591083 lease expired
2011-01-26 03:04:17,966 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 4439572834176684295 lease expired
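
For reference, the UnknownScannerException above means the scanner's lease expired between next() calls, so the usual mitigations are either to make the mapper return to the region server more often or to lengthen the lease. Below is a minimal sketch of the first option, assuming the job is wired up through TableMapReduceUtil on the 0.20.x API; the table name, mapper class, job name, and caching value are illustrative assumptions, not details from this thread.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;

public class ScanJobSetup {

    // Hypothetical mapper; stands in for whatever the real job does per row.
    static class MyMapper extends TableMapper<ImmutableBytesWritable, Result> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context context) {
            // Per-row work goes here; keeping it fast (and GC pauses short)
            // keeps the scanner inside its lease.
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new HBaseConfiguration();   // 0.20.x-style config

        Scan scan = new Scan();
        // Fewer rows buffered per next() RPC means the mapper goes back to the
        // region server more often, renewing the scanner lease each time.
        scan.setCaching(100);

        Job job = new Job(conf, "table-scan-job");       // job name is illustrative
        TableMapReduceUtil.initTableMapperJob(
            "my_table",                                  // assumed table name
            scan,
            MyMapper.class,
            ImmutableBytesWritable.class,
            Result.class,
            job);
        job.setNumReduceTasks(0);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The other knob is hbase.regionserver.lease.period in hbase-site.xml on the region servers (60000 ms by default in 0.20.x); raising it buys a slow mapper more time per next() call, at the cost of dead scanners being held open longer.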


 

-----Original Message-----
From: Jean-Daniel Cryans <[email protected]>
To: [email protected]
Sent: Wed, Jan 26, 2011 5:26 pm
Subject: Re: getting retries exhausted exception


It seems to be coming from the region server side... so one thing you
can check is the region server logs and see if the NPEs are there. If
not, and there's nothing suspicious, then consider enabling DEBUG for
hbase and re-run the job to hopefully get more information.
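
With the stock logging setup that ships with HBase, one way to do that (an assumption about a default install; paths may differ) is to raise the HBase logger to DEBUG in conf/log4j.properties on the region servers and restart them:

    # conf/log4j.properties (illustrative)
    log4j.logger.org.apache.hadoop.hbase=DEBUG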



J-D



On Wed, Jan 26, 2011 at 8:44 AM, Venkatesh <[email protected]> wrote:
>
> Using 0.20.6..any solutions? Occurs during mapper phase..will increasing retry count fix this?
> thanks
>
> here's the stack trace
>
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server null for region , row '', but failed after 10 attempts.
> Exceptions:
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
>
>    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:1045)
>    at org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:2003)
>    at org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1923)
>    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:403)
>    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.restart(TableInputFormatBase.java:110)
>    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.nextKeyValue(TableInputFormatBase.java:210)
>    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:423)
>    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
>    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>    at org.apache.hadoop.mapred.Child.main(Child.java:170)
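
On the quoted question about raising the retry count: "failed after 10 attempts" lines up with the client-side default of hbase.client.retries.number (10 in 0.20.x). It can be raised as in the sketch below (the value 20 is purely illustrative), though since every attempt failed with the same NullPointerException, extra retries may only make the job fail more slowly unless the region server-side cause is fixed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RetryConfigSketch {
    public static void main(String[] args) {
        // Sketch only: bump the client retry count used by HTable scans.
        Configuration conf = new HBaseConfiguration();
        conf.setInt("hbase.client.retries.number", 20);  // default is 10 in 0.20.x
        // ... this conf would then be handed to the Job / TableMapReduceUtil
        // setup, as in the earlier sketch.
    }
}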
