[ https://issues.apache.org/jira/browse/PHOENIX-3320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514437#comment-15514437 ]

James Taylor commented on PHOENIX-3320:
---------------------------------------

Have you tried this in our calcite branch, [~jleach]? FWIW, this is where we'd add support for any missing syntax. FYI, [~maryannxue].

> TPCH 100G: Query 20 Execution Exception
> ---------------------------------------
>
>                 Key: PHOENIX-3320
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3320
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: John Leach
>
> {noformat}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> 16/09/13 20:52:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 16/09/13 20:52:22 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 1/1 SELECT
>     S_NAME,
>     S_ADDRESS
> FROM
>     TPCH.SUPPLIER,
>     TPCH.NATION
> WHERE
>     S_SUPPKEY IN (
>         SELECT PS_SUPPKEY
>         FROM TPCH.PARTSUPP
>         WHERE PS_PARTKEY IN (
>                 SELECT P_PARTKEY
>                 FROM TPCH.PART
>                 WHERE P_NAME LIKE 'FOREST%'
>             )
>             AND PS_AVAILQTY > (
>                 SELECT 0.5 * SUM(L_QUANTITY)
>                 FROM TPCH.LINEITEM
>                 WHERE L_PARTKEY = PS_PARTKEY
>                     AND L_SUPPKEY = PS_SUPPKEY
>                     AND L_SHIPDATE >= TO_DATE('1994-01-01')
>                     AND L_SHIPDATE < TO_DATE('1995-01-01')
>             )
>     )
>     AND S_NATIONKEY = N_NATIONKEY
>     AND N_NAME = 'CANADA'
> ORDER BY
>     S_NAME
> ;
> 16/09/13 20:55:51 WARN client.ScannerCallable: Ignore, probably already closed
> org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 is closing. Call id=411932, waitTime=149848
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:275)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:318)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32831)
>     at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:356)
>     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:196)
>     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:144)
>     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>     at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
>     at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:258)
>     at org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:241)
>     at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:534)
>     at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
>     at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
>     at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:126)
>     at org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:254)
>     at org.apache.phoenix.iterate.OrderedResultIterator.peek(OrderedResultIterator.java:277)
>     at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
>     at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 is closing. Call id=411932, waitTime=149848
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1057)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:856)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:575)
> [The identical "16/09/13 20:55:51 WARN client.ScannerCallable: Ignore, probably already closed" warning and stack trace were logged six more times, for Call id=411900 (waitTime=149850), Call id=429309 (waitTime=141264), Call id=410547 (waitTime=150442), Call id=398391 (waitTime=157015), Call id=411906 (waitTime=149849), and Call id=432939 (waitTime=133167).]
> Error: Encountered exception in sub plan [1] execution. (state=,code=0)
> java.sql.SQLException: Encountered exception in sub plan [1] execution.
>     at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:198)
>     at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:143)
>     at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:138)
>     at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:281)
>     at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
>     at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:807)
>     at sqlline.SqlLine.runCommands(SqlLine.java:1710)
>     at sqlline.Commands.run(Commands.java:1285)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>     at sqlline.SqlLine.dispatch(SqlLine.java:803)
>     at sqlline.SqlLine.initArgs(SqlLine.java:613)
>     at sqlline.SqlLine.begin(SqlLine.java:656)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: java.sql.SQLException: Encountered exception in sub plan [1] execution.
>     at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:198)
>     at org.apache.phoenix.execute.TupleProjectionPlan.iterator(TupleProjectionPlan.java:69)
>     at org.apache.phoenix.execute.TupleProjectionPlan.iterator(TupleProjectionPlan.java:64)
>     at org.apache.phoenix.execute.TupleProjectionPlan.iterator(TupleProjectionPlan.java:59)
>     at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:394)
>     at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:167)
>     at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:163)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: TPCH.L_PART_IDX,,1473763283275.507bfe656ccfe78af0f755f08c3146e3.: Requested memory of 309116 bytes could not be allocated. Using memory of 3213134264 bytes from global pool of 3213174374 bytes after waiting for 10000ms.
>     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
>     at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:53)
>     at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:249)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1318)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2261)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 309116 bytes could not be allocated. Using memory of 3213134264 bytes from global pool of 3213174374 bytes after waiting for 10000ms.
>     at org.apache.phoenix.memory.GlobalMemoryManager.waitForBytesToFree(GlobalMemoryManager.java:94)
>     at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:76)
>     at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:105)
>     at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:111)
>     at org.apache.phoenix.cache.aggcache.SpillableGroupByCache.<init>(SpillableGroupByCache.java:150)
>     at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver$GroupByCacheFactory.newCache(GroupedAggregateRegionObserver.java:356)
>     at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:388)
>     at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:162)
>     at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:215)
>     ... 12 more
>     at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>     at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:774)
>     at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:720)
>     at org.apache.phoenix.iterate.MergeSortResultIterator.getMinHeap(MergeSortResultIterator.java:72)
>     at org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:93)
>     at org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:58)
>     at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>     at org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
>     at org.apache.phoenix.execute.TupleProjectionPlan$1.next(TupleProjectionPlan.java:73)
>     at org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:107)
>     at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
>     at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:385)
>     ... 7 more
> Caused by: java.util.concurrent.ExecutionException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: TPCH.L_PART_IDX,,1473763283275.507bfe656ccfe78af0f755f08c3146e3.: Requested memory of 309116 bytes could not be allocated. Using memory of 3213134264 bytes from global pool of 3213174374 bytes after waiting for 10000ms.
>     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
>     at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:53)
>     at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:249)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1318)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2261)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 309116 bytes could not be allocated. Using memory of 3213134264 bytes from global pool of 3213174374 bytes after waiting for 10000ms.
>     at org.apache.phoenix.memory.GlobalMemoryManager.waitForBytesToFree(GlobalMemoryManager.java:94)
>     at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:76)
>     at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:105)
>     at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:111)
>     at org.apache.phoenix.cache.aggcache.SpillableGroupByCache.<init>(SpillableGroupByCache.java:150)
>     at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver$GroupByCacheFactory.newCache(GroupedAggregateRegionObserver.java:356)
>     at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:388)
>     at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:162)
>     at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:215)
>     ... 12 more
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:202)
>     at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:769)
>     ... 17 more
> Caused by: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: TPCH.L_PART_IDX,,1473763283275.507bfe656ccfe78af0f755f08c3146e3.: Requested memory of 309116 bytes could not be allocated. Using memory of 3213134264 bytes from global pool of 3213174374 bytes after waiting for 10000ms.
>     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
>     at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:53)
>     at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:249)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1318)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2261)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 309116 bytes could not be allocated. Using memory of 3213134264 bytes from global pool of 3213174374 bytes after waiting for 10000ms.
> at > org.apache.phoenix.memory.GlobalMemoryManager.waitForBytesToFree(GlobalMemoryManager.java:94) > at > org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:76) > at > org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:105) > at > org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:111) > at > org.apache.phoenix.cache.aggcache.SpillableGroupByCache.<init>(SpillableGroupByCache.java:150) > at > org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver$GroupByCacheFactory.newCache(GroupedAggregateRegionObserver.java:356) > at > org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:388) > at > org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:162) > at > org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:215) > ... 12 more > at > org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111) > at > org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:174) > at > org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:124) > at > org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:254) > at > org.apache.phoenix.iterate.OrderedResultIterator.peek(OrderedResultIterator.java:277) > at > org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121) > at > org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106) > ... 5 more > Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: > org.apache.hadoop.hbase.DoNotRetryIOException: > TPCH.L_PART_IDX,,1473763283275.507bfe656ccfe78af0f755f08c3146e3.: Requested > memory of 309116 bytes could not be allocated. Using memory of 3213134264 > bytes from global pool of 3213174374 bytes after waiting for 10000ms. 
> at > org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87) > at > org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:53) > at > org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:249) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1318) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2261) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested > memory of 309116 bytes could not be allocated. Using memory of 3213134264 > bytes from global pool of 3213174374 bytes after waiting for 10000ms. 
> at > org.apache.phoenix.memory.GlobalMemoryManager.waitForBytesToFree(GlobalMemoryManager.java:94) > at > org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:76) > at > org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:105) > at > org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:111) > at > org.apache.phoenix.cache.aggcache.SpillableGroupByCache.<init>(SpillableGroupByCache.java:150) > at > org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver$GroupByCacheFactory.newCache(GroupedAggregateRegionObserver.java:356) > at > org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:388) > at > org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:162) > at > org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:215) > ... 12 more > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:325) > at > org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:381) > at > org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:200) > at > org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) 
> at > org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:350) > at > org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:324) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126) > at > org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64) > ... 3 more > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): > org.apache.hadoop.hbase.DoNotRetryIOException: > TPCH.L_PART_IDX,,1473763283275.507bfe656ccfe78af0f755f08c3146e3.: Requested > memory of 309116 bytes could not be allocated. Using memory of 3213134264 > bytes from global pool of 3213174374 bytes after waiting for 10000ms. > at > org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87) > at > org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:53) > at > org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:249) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1318) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2261) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205) > at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested > memory of 309116 bytes could not be allocated. Using memory of 3213134264 > bytes from global pool of 3213174374 bytes after waiting for 10000ms. > at > org.apache.phoenix.memory.GlobalMemoryManager.waitForBytesToFree(GlobalMemoryManager.java:94) > at > org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:76) > at > org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:105) > at > org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:111) > at > org.apache.phoenix.cache.aggcache.SpillableGroupByCache.<init>(SpillableGroupByCache.java:150) > at > org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver$GroupByCacheFactory.newCache(GroupedAggregateRegionObserver.java:356) > at > org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:388) > at > org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:162) > at > org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:215) > ... 
12 more > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1235) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:318) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32831) > at > org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:373) > ... 10 more > Aborting command set because "force" is false and command failed: "SELECT > S_NAME, > S_ADDRESS > FROM > TPCH.SUPPLIER, > TPCH.NATION > WHERE > S_SUPPKEY IN ( > SELECT PS_SUPPKEY > FROM > TPCH.PARTSUPP > WHERE > PS_PARTKEY IN ( > SELECT P_PARTKEY > FROM > TPCH.PART > WHERE > P_NAME LIKE 'FOREST%' > ) > AND PS_AVAILQTY > ( > SELECT 0.5 * SUM(L_QUANTITY) > FROM > TPCH.LINEITEM > WHERE > L_PARTKEY = PS_PARTKEY > AND L_SUPPKEY = PS_SUPPKEY > AND L_SHIPDATE >= TO_DATE('1994-01-01') > AND L_SHIPDATE < TO_DATE('1995-01-01') > ) > ) > AND S_NATIONKEY = N_NATIONKEY > AND N_NAME = 'CANADA' > ORDER BY > S_NAME > ;" > Closing: org.apache.phoenix.jdbc.PhoenixConnection > {NOFORMAT} -- This message was sent by Atlassian JIRA (v6.3.4#6332)