One additional problem here (perhaps the more important issue): even though
the MR job failed because of this exception, the underlying index table was
still set to ACTIVE. That looks like a bug to me.
I'm using Phoenix 4.7 against HBase 1.2.1 on EMR.
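
For anyone who wants to double-check what Phoenix recorded, here is a minimal
sketch of reading the index state out of SYSTEM.CATALOG (untested; MY_INDEX
and the local-quorum JDBC URL are placeholders for my actual setup):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckIndexState {
    public static void main(String[] args) throws Exception {
        // SYSTEM.CATALOG keeps a one-character INDEX_STATE code per index
        // ('a' is ACTIVE); after a failed build I'd expect something else
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT TABLE_NAME, INDEX_STATE FROM SYSTEM.CATALOG "
                     + "WHERE TABLE_NAME = 'MY_INDEX' "
                     + "AND INDEX_STATE IS NOT NULL")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " -> " + rs.getString(2));
            }
        }
    }
}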

On Wed, Aug 17, 2016 at 10:26 AM, Nathan Davis <nathan.da...@salesforce.com>
wrote:

> Hi All,
> I'm getting the following error (sorry, it is the full error stack) when I
> run the IndexTool MR job to populate an index I created with ASYNC. I have
> been able to use IndexTool successfully previously.
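> For reference, the index was created along these lines (untested fragment;
> table/index/column names are placeholders, via the Phoenix JDBC driver):
>
>     // ASYNC defers the build, leaving the index in BUILDING state
>     // until the IndexTool MR job populates it
>     try (Connection conn =
>              DriverManager.getConnection("jdbc:phoenix:localhost:2181");
>          Statement stmt = conn.createStatement()) {
>         stmt.execute("CREATE INDEX MY_INDEX ON MY_TABLE (MY_COL) ASYNC");
>     }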
>
>> 2016-08-17 13:30:48,024 INFO  [main] mapreduce.Job: Task Id : attempt_1471372816200_0005_m_000051_0, Status : FAILED
>> Error: java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 60209ms passed since the last invocation, timeout is currently set to 60000
>>     at org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:159)
>>     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:565)
>>     at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>>     at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:796)
>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
>> Caused by: org.apache.phoenix.exception.PhoenixIOException: 60209ms passed since the last invocation, timeout is currently set to 60000
>>     at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>>     at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:65)
>>     at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:110)
>>     at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
>>     at org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:67)
>>     at org.apache.phoenix.iterate.RoundRobinResultIterator$RoundRobinIterator.next(RoundRobinResultIterator.java:309)
>>     at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:97)
>>     at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>>     at org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:152)
>>     ... 11 more
>> Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException: 60209ms passed since the last invocation, timeout is currently set to 60000
>>     at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:438)
>>     at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:370)
>>     at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
>>     ... 18 more
>> Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: 149, already closed?
>>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2374)
>>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
>>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>>     at java.lang.Thread.run(Thread.java:745)
>>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>     at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>     at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>>     at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>>     at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:329)
>>     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:262)
>>     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
>>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>>     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
>>     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
>>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
>>     at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>     at java.lang.Thread.run(Thread.java:745)
>> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Name: 149, already closed?
>>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2374)
>>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
>>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>>     at java.lang.Thread.run(Thread.java:745)
>>     at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1268)
>>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
>>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
>>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
>>     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219)
>>     ... 9 more
>
>
>
> It seems like I need to increase 'phoenix.query.timeoutMs' and/or
> 'hbase.rpc.timeout', but I'm not sure how to configure those settings for
> the MR job's internal HBase client...
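> For example, would something like this before launching the tool work
> (untested guess; the 600000ms values are arbitrary)?
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.hbase.HBaseConfiguration;
>     import org.apache.hadoop.util.ToolRunner;
>     import org.apache.phoenix.mapreduce.index.IndexTool;
>
>     public class RunIndexToolWithTimeouts {
>         public static void main(String[] args) throws Exception {
>             Configuration conf = HBaseConfiguration.create();
>             // client-side query/RPC timeouts the mappers would inherit
>             conf.set("phoenix.query.timeoutMs", "600000");
>             conf.set("hbase.rpc.timeout", "600000");
>             // the ScannerTimeoutException suggests the scanner lease
>             // period is the one that actually expired
>             conf.set("hbase.client.scanner.timeout.period", "600000");
>             System.exit(ToolRunner.run(conf, new IndexTool(), args));
>         }
>     }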
>
> Thanks for the help,
> -nathan
>
>
