[ https://issues.apache.org/jira/browse/PHOENIX-3754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979798#comment-15979798 ]

liyong commented on PHOENIX-3754:
---------------------------------

I hit the same problem.

I was upgrading from Phoenix 4.4 (bundled with Ambari) to the latest Phoenix 4.10 by 
manually copying the 4.10 jars over the ones that ship with Ambari. After that I ran 
the sample commands to verify the installation. When I ran sqlline.py I also hit an 
error message pointing to upgrade issues, as in 
https://issues.apache.org/jira/browse/PHOENIX-2067

And when running this command: psql.py localhost ../examples/web_stat.sql 
../examples/web_stat.csv ../examples/web_stat_queries.sql, I hit the error reported 
in this issue.
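
For reference, the failing statement can also be reproduced from a plain JDBC client, 
which takes psql.py out of the picture. This is only a minimal sketch of mine (not 
from this issue), assuming the thick-client URL jdbc:phoenix:localhost, the 
phoenix-client jar on the classpath, and that the WEB_STAT sample table has already 
been created and loaded:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WebStatRepro {
    public static void main(String[] args) throws Exception {
        // Thick-client connection string; adjust the ZooKeeper quorum for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // The aggregate query from web_stat_queries.sql that triggers the failure.
             ResultSet rs = stmt.executeQuery(
                 "SELECT HOST, SUM(ACTIVE_VISITOR) TOTAL_ACTIVE_VISITORS "
                     + "FROM WEB_STAT WHERE DB > (CORE * 10) GROUP BY HOST")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}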

A more complete exception trace is below. I believe the root cause is not an RPC 
timeout (which is set long enough):
Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after retry 
of OutOfOrderScannerNextException: was there a rpc timeout?
        at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
        at 
org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:65)
        at 
org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:139)
        ... 11 more
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of 
OutOfOrderScannerNextException: was there a rpc timeout?
        at 
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:483)
        at 
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:371)
        at 
org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
        ... 12 more
Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 20 
number_of_rows: 100 close_scanner: false next_call_seq: 0 
client_handles_partials: true client_handles_heartbeats: true 10: 0
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2278)
        at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:326)
        at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:257)
        at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:63)
        at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
        at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
        at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
        at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
        at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
        ... 3 more

And regionserver.log shows entries like the following:

2017-04-22 14:20:10,918 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: 
totalSize=794.63 KB, freeSize=400.82 MB, max=401.60 MB, blockCount=45, 
accesses=228, hits=183, hitRatio=80.26%, , cachingAccesses=228, 
cachingHits=183, cachingHitsRatio=80.26%, evictions=29, evicted=0, 
evictedPerRun=0.0
2017-04-22 14:20:42,174 INFO  [bigtest,16020,1492841703487_ChoreService_1] 
regionserver.HRegionServer: bigtest,16020,1492841703487-MemstoreFlusherChore 
requesting flush for region hbase:meta,,1.1588230740 after a delay of 8565
2017-04-22 14:20:50,750 INFO  [MemStoreFlusher.1] regionserver.HRegion: Started 
memstore flush for hbase:meta,,1.1588230740, current region memstore size 
188.87 KB, and 1/1 column families' memstores are being flushed.
2017-04-22 14:20:50,871 INFO  [MemStoreFlusher.1] 
regionserver.DefaultStoreFlusher: Flushed, sequenceid=2159, memsize=188.9 K, 
hasBloomFilter=false, into tmp file 
hdfs://bigtest:8020/apps/hbase/data/data/hbase/meta/1588230740/.tmp/12a0f1b7158b4ee89a2f8ba850ff72a4
2017-04-22 14:20:51,039 INFO  [MemStoreFlusher.1] regionserver.HStore: Added 
hdfs://bigtest:8020/apps/hbase/data/data/hbase/meta/1588230740/info/12a0f1b7158b4ee89a2f8ba850ff72a4,
 entries=789, sequenceid=2159, filesize=97.8 K
2017-04-22 14:20:51,042 INFO  [MemStoreFlusher.1] regionserver.HRegion: 
Finished memstore flush of ~188.87 KB/193400, currentsize=0 B/0 for region 
hbase:meta,,1.1588230740 in 292ms, sequenceid=2159, compaction requested=true
2017-04-22 14:20:51,068 INFO  
[regionserver/bigtest/192.168.100.155:16020-shortCompactions-1492842051044] 
regionserver.HRegion: Starting compaction on info in region 
hbase:meta,,1.1588230740
2017-04-22 14:20:51,068 INFO  
[regionserver/bigtest/192.168.100.155:16020-shortCompactions-1492842051044] 
regionserver.HStore: Starting compaction of 3 file(s) in info of 
hbase:meta,,1.1588230740 into 
tmpdir=hdfs://bigtest:8020/apps/hbase/data/data/hbase/meta/1588230740/.tmp, 
totalSize=423.1 K
2017-04-22 14:20:51,077 INFO  
[regionserver/bigtest/192.168.100.155:16020-shortCompactions-1492842051044] 
hfile.CacheConfig: blockCache=LruBlockCache{blockCount=45, currentSize=813704, 
freeSize=420294424, maxSize=421108128, heapSize=813704, minSize=400052704, 
minFactor=0.95, multiSize=200026352, multiFactor=0.5, singleSize=100013176, 
singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false
2017-04-22 14:20:51,300 INFO  
[regionserver/bigtest/192.168.100.155:16020-shortCompactions-1492842051044] 
regionserver.HStore: Completed compaction of 3 (all) file(s) in info of 
hbase:meta,,1.1588230740 into faabb65d36de4ec1846629c3361f91f0(size=411.9 K), 
total size for store is 411.9 K. This selection was in queue for 0sec, and took 
0sec to execute.
2017-04-22 14:20:51,301 INFO  
[regionserver/bigtest/192.168.100.155:16020-shortCompactions-1492842051044] 
regionserver.CompactSplitThread: Completed compaction: Request = 
regionName=hbase:meta,,1.1588230740, storeName=info, fileCount=3, 
fileSize=423.1 K, priority=7, time=909897626168; duration=0sec
2017-04-22 14:25:10,918 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: 
totalSize=974.31 KB, freeSize=400.65 MB, max=401.60 MB, blockCount=66, 
accesses=409, hits=331, hitRatio=80.93%, , cachingAccesses=357, 
cachingHits=291, cachingHitsRatio=81.51%, evictions=59, evicted=0, 
evictedPerRun=0.0
2017-04-22 14:25:30,742 ERROR [IndexRpcServer.handler=0,queue=0,port=16020] 
ipc.RpcServer: Unexpected throwable object 
java.lang.IllegalArgumentException: Cell 
EUSalesforce.com\x00Login\x00\x80\x00\x01<,F(\xC8/STATS:\x80\x0D/1492835991910/Put/vlen=4/seqid=5
 with column qualifier 13 belongs at index 13. It cannot be added at the 
position 11 to which the previous next() or previous() was pointing to.
        at 
org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList$ListItr.set(EncodedColumnQualiferCellsList.java:528)
        at 
org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList$ListItr.set(EncodedColumnQualiferCellsList.java:473)
        at java.util.Collections.sort(Collections.java:221)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateFromJoinedHeap(HRegion.java:5475)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5714)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5440)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5426)
        at 
org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver$1.next(GroupedAggregateRegionObserver.java:497)
        at 
org.apache.phoenix.coprocessor.BaseRegionScanner.next(BaseRegionScanner.java:44)
        at 
org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:59)
        at 
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
        at 
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
        at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:283)
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2396)
        at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)
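
The client-side OutOfOrderScannerNextException looks like a secondary symptom here: 
the scanner died on the region server because Collections.sort() tried to write a 
cell back into EncodedColumnQualiferCellsList at a position other than the slot its 
encoded column qualifier maps to, and that list rejects out-of-position writes. Below 
is a toy sketch (my own illustration, not Phoenix code) of the same failure mode, 
using a list whose elements each belong to the slot equal to their own value:

import java.util.AbstractList;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class PositionalListDemo extends AbstractList<Integer> {
    // Fixed slots: element N may only ever live in slot N (a stand-in for
    // "cell with encoded qualifier N lives at array position N").
    private final Integer[] slots = new Integer[32];
    // Occupied slot numbers, kept in ascending order; iteration walks these.
    private final List<Integer> occupied = new ArrayList<>();

    @Override
    public boolean add(Integer e) {
        slots[e] = e;
        occupied.add(e);
        Collections.sort(occupied);
        return true;
    }

    @Override
    public Integer get(int index) {
        return slots[occupied.get(index)];
    }

    @Override
    public int size() {
        return occupied.size();
    }

    @Override
    public Integer set(int index, Integer element) {
        int slot = occupied.get(index);
        if (element != slot) {
            // Same shape as the region server error above.
            throw new IllegalArgumentException("Element with qualifier " + element
                + " belongs at index " + element
                + ". It cannot be added at the position " + slot);
        }
        Integer previous = slots[slot];
        slots[slot] = element;
        return previous;
    }

    public static void main(String[] args) {
        PositionalListDemo list = new PositionalListDemo();
        list.add(11);
        list.add(13);
        // Sorting the elements into an order that disagrees with their slot
        // order makes Collections.sort() call ListIterator.set(13) while the
        // iterator is positioned on slot 11 -> IllegalArgumentException.
        Collections.sort(list, Comparator.reverseOrder());
    }
}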

> The query in the Getting Started page fails (OutOfOrderScannerNextException)
> ----------------------------------------------------------------------------
>
>                 Key: PHOENIX-3754
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3754
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.10.0
>         Environment: Phoenix: 4.10
> HBase: 1.2.4
>            Reporter: Yasunari Shimizu
>
> The following command fails on 4.10.0, passes on 4.9.0.
> > psql.py localhost ../examples/web_stat.sql ../examples/web_stat.csv 
> > ../examples/web_stat_queries.sql
> The exception occurred at the following query in web_stat_queries.sql.
> --
> SELECT HOST, SUM(ACTIVE_VISITOR) TOTAL_ACTIVE_VISITORS 
> FROM WEB_STAT 
> WHERE DB > (CORE * 10) 
> GROUP BY HOST;
> --
> # check data
> 0: jdbc:phoenix:zookeeper-1.vnet> SELECT * FROM WEB_STAT WHERE DB > (CORE * 
> 10);
> +-------+-----------------+----------+--------------------------+-------+-----+-----------------+
> | HOST  |     DOMAIN      | FEATURE  |           DATE           | CORE  | DB  | ACTIVE_VISITOR  |
> +-------+-----------------+----------+--------------------------+-------+-----+-----------------+
> | EU    | Salesforce.com  | Login    | 2013-01-12 01:01:01.000  | 5     | 62  | 150             |
> | NA    | Salesforce.com  | Login    | 2013-01-04 06:01:21.000  | 3     | 52  | 1               |
> +-------+-----------------+----------+--------------------------+-------+-----+-----------------+
> 2 rows selected (0.077 seconds)
> LOG
> ---
> 17/03/29 04:44:25 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
> no rows upserted
> Time: 2.289 sec(s)
> csv columns from database.
> CSV Upsert complete. 39 rows upserted
> Time: 0.07 sec(s)
> DOMAIN          AVERAGE_CPU_USAGE  AVERAGE_DB_USAGE
> --------------  -----------------  ----------------
> Salesforce.com            260.727           257.636
> Google.com                212.875            213.75
> Apple.com                 114.111           119.556
> Time: 0.039 sec(s)
> DAY                      TOTAL_CPU_USAGE  MIN_CPU_USAGE  MAX_CPU_USAGE
> -----------------------  ---------------  -------------  -------------
> 2013-01-01 00:00:00.000               35             35             35
> 2013-01-02 00:00:00.000              150             25            125
> 2013-01-03 00:00:00.000               88             88             88
> 2013-01-04 00:00:00.000               26              3             23
> 2013-01-05 00:00:00.000              550             75            475
> 2013-01-06 00:00:00.000               12             12             12
> 2013-01-08 00:00:00.000              345            345            345
> 2013-01-09 00:00:00.000              390             35            355
> 2013-01-10 00:00:00.000              345            345            345
> 2013-01-11 00:00:00.000              335            335            335
> 2013-01-12 00:00:00.000                5              5              5
> 2013-01-13 00:00:00.000              355            355            355
> 2013-01-14 00:00:00.000                5              5              5
> 2013-01-15 00:00:00.000              720             65            655
> 2013-01-16 00:00:00.000              785            785            785
> 2013-01-17 00:00:00.000             1590            355           1235
> Time: 0.037 sec(s)
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>       at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>       at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>       at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>       at 
> org.apache.phoenix.iterate.RowKeyOrderedAggregateResultIterator.getIterators(RowKeyOrderedAggregateResultIterator.java:61)
>       at 
> org.apache.phoenix.iterate.RowKeyOrderedAggregateResultIterator.nextTuple(RowKeyOrderedAggregateResultIterator.java:108)
>       at 
> org.apache.phoenix.iterate.RowKeyOrderedAggregateResultIterator.advance(RowKeyOrderedAggregateResultIterator.java:146)
>       at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
>       at 
> org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:65)
>       at 
> org.apache.phoenix.iterate.RowKeyOrderedAggregateResultIterator.next(RowKeyOrderedAggregateResultIterator.java:133)
>       at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>       at 
> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:372)
>       at 
> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:318)
>       at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:265)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>       at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>       at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>       at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:847)
>       ... 11 more
>  :
>  :



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
