[ https://issues.apache.org/jira/browse/HBASE-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273978#comment-15273978 ]

Robert Fiser commented on HBASE-15757:
--------------------------------------

The row is not big: one column family and at most 10 columns holding integer or double 
values. After narrowing the start and stop row, results come back quite fast.
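
A minimal sketch of the reversed scan in question, assuming the 0.98-era HTable client API; the table name, row keys, and caching value are placeholders rather than the production code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ReverseScanSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();

            // Placeholder keys; for a reversed scan the start key is the
            // lexicographically larger one, so the arguments are swapped
            // relative to the forward Scan(startRow, stopRow).
            byte[] startRow = Bytes.toBytes("row-0000");
            byte[] stopRow  = Bytes.toBytes("row-0100");

            try (HTable table = new HTable(conf, "metrics")) {   // placeholder table name
                Scan scan = new Scan(stopRow, startRow);
                scan.setReversed(true);
                scan.setCaching(100); // would match number_of_rows: 100 in the failing RPC below

                try (ResultScanner scanner = table.getScanner(scan)) {
                    for (Result r : scanner) {
                        // at most 10 integer/double columns per row
                    }
                }
            }
        }
    }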

Maybe I forgot to mention one important thing. It is a Phoenix table, and the first 
implementation read the data through a Phoenix client. That caused the same 
OutOfOrderScannerNextException, only much more often. The second implementation is 
the current one with a plain scan. Maybe it has something to do with the Phoenix 
coprocessors that are still present on the table:

{TABLE_ATTRIBUTES => {
  coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|',
  coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
  coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|',
  coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|',
  coprocessor$5 => '|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec',
  coprocessor$6 => '|org.apache.hadoop.hbase.regionserver.LocalIndexSplitter|805306366|'}}
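
For completeness, the first implementation read the same table through the Phoenix thick (JDBC) client, roughly along the lines of the sketch below; the ZooKeeper quorum, table name, columns, and query are placeholders, not the real schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PhoenixReadSketch {
        public static void main(String[] args) throws Exception {
            // The Phoenix thick client goes through the same region servers and
            // the coprocessors listed above (ScanRegionObserver etc.).
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 PreparedStatement ps = conn.prepareStatement(
                         // Placeholder query: a key-range read in descending order,
                         // which Phoenix can serve with a reversed scan underneath.
                         "SELECT * FROM METRICS WHERE ID = ? AND TS < ? ORDER BY TS DESC LIMIT 100")) {
                ps.setLong(1, 42L);
                ps.setLong(2, 1000L);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // read the handful of integer/double columns per row
                    }
                }
            }
        }
    }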

> Reverse scan fails with no obvious cause
> ----------------------------------------
>
>                 Key: HBASE-15757
>                 URL: https://issues.apache.org/jira/browse/HBASE-15757
>             Project: HBase
>          Issue Type: Bug
>          Components: Client, Scanners
>    Affects Versions: 0.98.12
>         Environment: ubuntu 14.04, amazon cloud; 10 datanodes d2.4xlarge - 16 cores, 12x200GB HDD, 122GB RAM
>            Reporter: Robert Fiser
>
> Related issue on Stack Overflow: 
> http://stackoverflow.com/questions/37001169/hbase-reverse-scan-error?noredirect=1#comment61558097_37001169
> This works well:
>     scan = new Scan(startRow, stopRow);
> This sometimes throws an exception:
>     scan = new Scan(stopRow, startRow);
>     scan.setReversed(true);
> The exception is thrown while traffic is at least 100 req/s. There are actually no 
> timeouts; the exception is fired immediately for 1-10% of requests.
> hbase: 0.98.12-hadoop2
> hadoop: 2.7.0
> cluster in AWS, 10 datanodes: d2.4xlarge
> I think it may be related to this issue, but I'm not using any filters: 
> http://apache-hbase.679495.n3.nabble.com/Exception-during-a-reverse-scan-with-filter-td4069721.html
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
>     at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
>     at com.socialbakers.broker.client.hbase.htable.AbstractHtableListScanner.scanToList(AbstractHtableListScanner.java:30)
>     at com.socialbakers.broker.client.hbase.htable.AbstractHtableListSingleScanner.invokeOperation(AbstractHtableListSingleScanner.java:23)
>     at com.socialbakers.broker.client.hbase.htable.AbstractHtableListSingleScanner.invokeOperation(AbstractHtableListSingleScanner.java:11)
>     at com.socialbakers.broker.client.hbase.AbstractHbaseApi.endPointMethod(AbstractHbaseApi.java:40)
>     at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:497)
>     at com.socialbakers.broker.client.Route.invoke(Route.java:241)
>     at com.socialbakers.broker.client.handler.EndpointHandler.invoke(EndpointHandler.java:173)
>     at com.socialbakers.broker.client.handler.EndpointHandler.process(EndpointHandler.java:69)
>     at com.thetransactioncompany.jsonrpc2.server.Dispatcher.process(Dispatcher.java:196)
>     at com.socialbakers.broker.client.RejectableRunnable.run(RejectableRunnable.java:38)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
>     at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:430)
>     at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:333)
>     at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:91)
>     ... 15 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 2 But the nextCallSeq got from client: 1; request=scanner_id: 27700695 number_of_rows: 100 close_scanner: false next_call_seq: 1
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3231)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30946)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2093)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>     at java.lang.Thread.run(Thread.java:745)
>     at sun.reflect.GeneratedConstructorAccessor16.newInstance(Unknown Source)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>     at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>     at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>     at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:287)
>     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:214)
>     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:58)
>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:115)
>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:91)
>     at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:375)
>     ... 17 more
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException): org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 2 But the nextCallSeq got from client: 1; request=scanner_id: 27700695 number_of_rows: 100 close_scanner: false next_call_seq: 1
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3231)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30946)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2093)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>     at java.lang.Thread.run(Thread.java:745)
>     at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
>     at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
>     at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31392)
>     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:173)
>     ... 21 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
