I googled and found only this report of the error:
https://issues.apache.org/jira/browse/HBASE-10848

That issue is quite old, but I am on HBase 0.98 and still seem to hit the
problem, so my guess is that this is an HBase issue rather than a Drill one.

@Abhishek, thanks, I will upgrade Drill to 1.0.0 in the near future.
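For what it's worth, the workarounds usually discussed for this scanner error
are to lower the scanner caching (so each next() RPC fetches fewer rows and
finishes within the timeout) or to raise the RPC timeout itself. A rough
sketch of the relevant hbase-site.xml settings - the values below are only
illustrative, not recommendations for any particular cluster:

```xml
<!-- hbase-site.xml: illustrative values only; tune for your own cluster -->
<configuration>
  <!-- Rows fetched per scanner next() call; smaller batches make each
       RPC cheaper and less likely to hit the timeout. -->
  <property>
    <name>hbase.client.scanner.caching</name>
    <value>1000</value>
  </property>
  <!-- Client/server RPC timeout in milliseconds. -->
  <property>
    <name>hbase.rpc.timeout</name>
    <value>120000</value>
  </property>
</configuration>
```

Both properties exist in HBase 0.98; the region servers (and any client
picking up the config) need a restart after changing them.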



On Thu, Jun 11, 2015 at 1:22 PM, Abhishek Girish <[email protected]>
wrote:

> Looking up the stack trace on Google points towards an issue with HBase.
> Maybe you could first confirm that.
>
> Also, on a different note, I see you are using Drill 0.9.0. An updated
> version (v1.0.0) is out if you'd like to try it.
>
> On Wed, Jun 10, 2015 at 10:12 PM, Abhishek Girish <
> [email protected]
> > wrote:
>
> > Are you able to successfully run queries such as select * with limit on
> > Drill? Also, can you do a full table scan via the HBase shell? I'm not
> > sure what the issue is here - maybe someone else can help. But my guess
> > would be an issue with HBase.
> >
> > On Wed, Jun 10, 2015 at 8:50 PM, George Lu <[email protected]>
> wrote:
> >
> >> OK, my query is count(*).
> >>
> >> Error log below:
> >>
> >> Fragment 1:2
> >>
> >> [6171962e-ffa7-4355-b1d2-21d569e1bfe3 on prod8:31010]
> >> at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:465) ~[drill-common-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:262) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:232) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-0.9.0-rebuffed.jar:0.9.0]
> >> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_25]
> >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_25]
> >> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
> >> Caused by: org.apache.drill.common.exceptions.DrillRuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> >> at org.apache.drill.exec.store.hbase.HBaseRecordReader.next(HBaseRecordReader.java:191) ~[drill-storage-hbase-0.9.0.jar:0.9.0]
> >> at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:170) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:101) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:91) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:144) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:101) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:91) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:96) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:144) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:101) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:91) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:130) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:144) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:74) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:91) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:64) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:199) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:193) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_25]
> >> at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_25]
> >> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) ~[hadoop-common-2.4.1.jar:na]
> >> at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:193) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> ... 4 common frames omitted
> >> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> >> at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:410) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> at org.apache.drill.exec.store.hbase.HBaseRecordReader.next(HBaseRecordReader.java:184) ~[drill-storage-hbase-0.9.0.jar:0.9.0]
> >> ... 32 common frames omitted
> >> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 2199 number_of_rows: 4000 close_scanner: false next_call_seq: 0
> >> at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3195)
> >> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29941)
> >> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2029)
> >> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> >> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:112)
> >> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:92)
> >> at java.lang.Thread.run(Thread.java:745)
> >>
> >> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_25]
> >> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_25]
> >> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_25]
> >> at java.lang.reflect.Constructor.newInstance(Constructor.java:408) ~[na:1.8.0_25]
> >> at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) ~[hadoop-common-2.4.1.jar:na]
> >> at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) ~[hadoop-common-2.4.1.jar:na]
> >> at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:284) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:355) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> ... 33 common frames omitted
> >> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 2199 number_of_rows: 4000 close_scanner: false next_call_seq: 0
> >> at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3195)
> >> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29941)
> >> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2029)
> >> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> >> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:112)
> >> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:92)
> >> at java.lang.Thread.run(Thread.java:745)
> >>
> >> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:30328) ~[hbase-protocol-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:174) ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
> >> ... 37 common frames omitted
> >> 2015-06-11 11:42:35,517 [BitServer-5] INFO  o.a.drill.exec.work.foreman.Foreman - State change requested.  RUNNING --> FAILED
> >> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> >>
> >> Fragment 1:2
> >>
> >> [6171962e-ffa7-4355-b1d2-21d569e1bfe3 on prod8:31010]
> >> at org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:409) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.work.batch.ControlHandlerImpl.handle(ControlHandlerImpl.java:81) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.rpc.control.ControlServer.handle(ControlServer.java:60) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.rpc.control.ControlServer.handle(ControlServer.java:38) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:57) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:194) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:173) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> >> at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) [netty-codec-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:161) [netty-codec-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> >> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-common-4.0.24.Final.jar:4.0.24.Final]
> >> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
> >> 2015-06-11 11:42:35,547 [BitServer-5] INFO  o.a.drill.exec.work.foreman.Foreman - foreman cleaning up.
> >> 2015-06-11 11:42:35,548 [BitServer-5] INFO  o.a.drill.exec.work.foreman.Foreman - State change requested.  FAILED --> COMPLETED
> >> 2015-06-11 11:42:35,548 [BitServer-5] WARN  o.a.drill.exec.work.foreman.Foreman - Dropping request to move to COMPLETED state as query is already at FAILED state (which is terminal).
> >>
> >> On Thu, Jun 11, 2015 at 11:42 AM, Abhishek Girish <
> >> [email protected]
> >> > wrote:
> >>
> >> > It does look like an issue with an HBase timeout. Does it occur
> >> > consistently?
> >> >
> >> > Can you share more details about the queries you ran and the error
> >> > messages in the logs?
> >> >
> >> > -Abhishek
> >> >
> >> > On Wed, Jun 10, 2015 at 8:32 PM, George Lu <[email protected]>
> >> wrote:
> >> >
> >> > > Query failed: SYSTEM ERROR: org.apache.hadoop.hbase.DoNotRetryIOException:
> >> > > Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> >> > >
> >> >
> >>
> >
> >
>
