Hi Nishanth,

There are too many things that might be causing that problem; it could
depend on your cluster deployment (cluster size, network, spindles, etc.),
or it could be related to your key design or to the filters you are using in
the scanner that triggers the timeout. Have you checked whether regions are
evenly distributed and balanced across the cluster, and whether data
locality is acceptable? If you could share more details with us, that would
be really useful.
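
In the meantime, one thing worth trying: the log says the scanner timeout
is 60000 ms but ~560s passed between next() calls, which usually means the
mapper spends too long processing each batch of rows before going back to
the region server. Two common mitigations are lowering the scan caching
(fewer rows fetched per RPC, so the lease is renewed more often) and/or
raising the client scanner timeout. As a sketch (property name assumes
HBase 0.96+, which your protobuf stack trace suggests; the 300000 value is
just an example):

```xml
<!-- hbase-site.xml on the client/job side: raise the scanner timeout -->
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>300000</value>
</property>
```

You can also call scan.setCaching() with a smaller value (and
scan.setCacheBlocks(false), which is generally recommended for MR scans)
on the Scan you pass to TableMapReduceUtil.initTableMapperJob(). Treat the
exact numbers as starting points, not tuned values.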

thanks.
esteban.

--
Cloudera, Inc.


On Thu, Jan 22, 2015 at 11:52 AM, Nishanth S <nishanth.2...@gmail.com>
wrote:

> Hi All,
> I am running a map reduce job which scans the hbase table for a particular
> time period and then creates some files from that. The job runs fine for 10
> minutes or so, and around 10% of maps complete successfully. Here is
> the error that I am getting. Can someone help?
>
>
> 15/01/22 19:34:33 INFO mapreduce.TableRecordReaderImpl: recovered from
> org.apache.hadoop.hbase.client.ScannerTimeoutException: 559843ms
> passed since the last invocation, timeout is currently set to 60000
>         at
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:352)
>         at
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:194)
>         at
> org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:138)
>         at
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
>         at
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>         at
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
> Caused by: org.apache.hadoop.hbase.UnknownScannerException:
> org.apache.hadoop.hbase.UnknownScannerException: Name:
> 3432603283499371482, already closed?
>         at
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:2973)
>         at
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
>         at
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>         at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>         at
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>         at
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:277)
>         at
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:198)
>         at
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:57)
>         at
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
>         at
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:96)
>         at
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:336)
>         ... 13 more
>
