[
https://issues.apache.org/jira/browse/PHOENIX-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514567#comment-15514567
]
Aaron Molitor commented on PHOENIX-3322:
----------------------------------------
Thanks [~jamestaylor]. The second error occurred with
{{phoenix.query.maxServerCacheBytes = 2147483648}} set. I'll have to try the
query hint.
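For reference, a sketch of what I assume the suggested hint would look like (I have not run this yet; {{USE_SORT_MERGE_JOIN}} is the hint documented on the Phoenix joins page for avoiding the broadcast hash cache entirely):
{code:sql}
-- Untested sketch: switch the join strategy to sort-merge so no client-built
-- hash cache has to be serialized and shipped to the region servers.
SELECT /*+ USE_SORT_MERGE_JOIN */
    S_ACCTBAL, S_NAME, N_NAME, P_PARTKEY, P_MFGR, S_ADDRESS, S_PHONE, S_COMMENT
FROM TPCH.PART, TPCH.SUPPLIER, TPCH.PARTSUPP, TPCH.NATION, TPCH.REGION
-- ... remainder of TPC-H Q2 (WHERE clause, subquery, ORDER BY, LIMIT) unchanged
{code}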
Here's the explain plan generated for this query (I should have included it
originally):
{noformat}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 16:22:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/13 16:22:18 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
1/1 EXPLAIN SELECT
S_ACCTBAL,
S_NAME,
N_NAME,
P_PARTKEY,
P_MFGR,
S_ADDRESS,
S_PHONE,
S_COMMENT
FROM
TPCH.PART,
TPCH.SUPPLIER,
TPCH.PARTSUPP,
TPCH.NATION,
TPCH.REGION
WHERE
P_PARTKEY = PS_PARTKEY
AND S_SUPPKEY = PS_SUPPKEY
AND P_SIZE = 15
AND P_TYPE LIKE '%BRASS'
AND S_NATIONKEY = N_NATIONKEY
AND N_REGIONKEY = R_REGIONKEY
AND R_NAME = 'EUROPE'
AND PS_SUPPLYCOST = (
SELECT MIN(PS_SUPPLYCOST)
FROM
TPCH.PARTSUPP,
TPCH.SUPPLIER,
TPCH.NATION,
TPCH.REGION
WHERE
P_PARTKEY = PS_PARTKEY
AND S_SUPPKEY = PS_SUPPKEY
AND S_NATIONKEY = N_NATIONKEY
AND N_REGIONKEY = R_REGIONKEY
AND R_NAME = 'EUROPE'
)
ORDER BY
S_ACCTBAL DESC,
N_NAME,
S_NAME,
P_PARTKEY
LIMIT 100
;
+----------------------------------------------------------------------------------------------------------------------------------+
|                                                               PLAN                                                               |
+----------------------------------------------------------------------------------------------------------------------------------+
| CLIENT 26-CHUNK 19045195 ROWS 7549747668 BYTES PARALLEL 26-WAY FULL SCAN OVER TPCH.PART |
| SERVER FILTER BY (P_SIZE = 15 AND P_TYPE LIKE '%BRASS') |
| SERVER TOP 100 ROWS SORTED BY [TPCH.SUPPLIER.S_ACCTBAL DESC, TPCH.NATION.N_NAME, TPCH.SUPPLIER.S_NAME, TPCH.PART.P_PARTKEY] |
| CLIENT MERGE SORT |
| PARALLEL INNER-JOIN TABLE 0 |
| CLIENT 2-CHUNK 879771 ROWS 314572800 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.SUPPLIER |
| PARALLEL INNER-JOIN TABLE 1(DELAYED EVALUATION) |
| CLIENT 81-CHUNK 74110723 ROWS 21076381005 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.PARTSUPP |
| PARALLEL INNER-JOIN TABLE 2(DELAYED EVALUATION) |
| CLIENT 1-CHUNK 0 ROWS 0 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.NATION |
| PARALLEL INNER-JOIN TABLE 3(DELAYED EVALUATION) (SKIP MERGE) |
| CLIENT 1-CHUNK 0 ROWS 0 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.REGION |
| SERVER FILTER BY R_NAME = 'EUROPE' |
| PARALLEL INNER-JOIN TABLE 4(DELAYED EVALUATION) (SKIP MERGE) |
| CLIENT 81-CHUNK 74110723 ROWS 21076381005 BYTES PARALLEL 1-WAY FULL SCAN OVER TPCH.PARTSUPP |
| SERVER AGGREGATE INTO ORDERED DISTINCT ROWS BY [TPCH.PARTSUPP.PS_PARTKEY] |
| PARALLEL INNER-JOIN TABLE 0 |
| CLIENT 2-CHUNK 879771 ROWS 314572800 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.SUPPLIER |
| PARALLEL INNER-JOIN TABLE 1(DELAYED EVALUATION) |
| CLIENT 1-CHUNK 0 ROWS 0 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.NATION |
| PARALLEL INNER-JOIN TABLE 2(DELAYED EVALUATION) (SKIP MERGE) |
| CLIENT 1-CHUNK 0 ROWS 0 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TPCH.REGION |
| SERVER FILTER BY R_NAME = 'EUROPE' |
| DYNAMIC SERVER FILTER BY TPCH.PART.P_PARTKEY IN (TPCH.PARTSUPP.PS_PARTKEY) |
| DYNAMIC SERVER FILTER BY TPCH.PART.P_PARTKEY IN ($1.$2) |
+----------------------------------------------------------------------------------------------------------------------------------+
25 rows selected (0.357 seconds)
Closing: org.apache.phoenix.jdbc.PhoenixConnection
sqlline version 1.1.9
{noformat}
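For anyone else hitting the first error: the 104857600-byte cap in the original stack trace is the default for {{phoenix.query.maxServerCacheBytes}}. A sketch of how I raised it, assuming the usual client-side {{hbase-site.xml}} mechanism (property name per the Phoenix tuning docs):
{code:xml}
<!-- Client-side hbase-site.xml fragment: raises Phoenix's hash cache cap
     from the 100 MB default (104857600 bytes) to 2 GB. -->
<property>
  <name>phoenix.query.maxServerCacheBytes</name>
  <value>2147483648</value>
</property>
{code}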
> TPCH 100 query 2 exceeds size of hash cache
> -------------------------------------------
>
> Key: PHOENIX-3322
> URL: https://issues.apache.org/jira/browse/PHOENIX-3322
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.8.0
> Environment: HDP 2.4.2 + 4.8.0 binary download
> Reporter: Aaron Molitor
>
> Executing TPC-H query 2 results in the following error:
> h5. output from sqlline:
> {noformat}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> 16/09/13 20:35:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 16/09/13 20:35:30 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 1/1 SELECT
> S_ACCTBAL,
> S_NAME,
> N_NAME,
> P_PARTKEY,
> P_MFGR,
> S_ADDRESS,
> S_PHONE,
> S_COMMENT
> FROM
> TPCH.PART,
> TPCH.SUPPLIER,
> TPCH.PARTSUPP,
> TPCH.NATION,
> TPCH.REGION
> WHERE
> P_PARTKEY = PS_PARTKEY
> AND S_SUPPKEY = PS_SUPPKEY
> AND P_SIZE = 15
> AND P_TYPE LIKE '%BRASS'
> AND S_NATIONKEY = N_NATIONKEY
> AND N_REGIONKEY = R_REGIONKEY
> AND R_NAME = 'EUROPE'
> AND PS_SUPPLYCOST = (
> SELECT MIN(PS_SUPPLYCOST)
> FROM
> TPCH.PARTSUPP,
> TPCH.SUPPLIER,
> TPCH.NATION,
> TPCH.REGION
> WHERE
> P_PARTKEY = PS_PARTKEY
> AND S_SUPPKEY = PS_SUPPKEY
> AND S_NATIONKEY = N_NATIONKEY
> AND N_REGIONKEY = R_REGIONKEY
> AND R_NAME = 'EUROPE'
> )
> ORDER BY
> S_ACCTBAL DESC,
> N_NAME,
> S_NAME,
> P_PARTKEY
> LIMIT 100
> ;
> Error: Encountered exception in sub plan [0] execution. (state=,code=0)
> java.sql.SQLException: Encountered exception in sub plan [0] execution.
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:198)
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:143)
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:138)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:281)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:807)
> at sqlline.SqlLine.runCommands(SqlLine.java:1710)
> at sqlline.Commands.run(Commands.java:1285)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
> at sqlline.SqlLine.dispatch(SqlLine.java:803)
> at sqlline.SqlLine.initArgs(SqlLine.java:613)
> at sqlline.SqlLine.begin(SqlLine.java:656)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size of hash cache (104857615 bytes) exceeds the maximum allowed size (104857600 bytes)
> at org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:110)
> at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
> at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:385)
> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:167)
> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:163)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Aborting command set because "force" is false and command failed: "SELECT
> S_ACCTBAL,
> S_NAME,
> N_NAME,
> P_PARTKEY,
> P_MFGR,
> S_ADDRESS,
> S_PHONE,
> S_COMMENT
> FROM
> TPCH.PART,
> TPCH.SUPPLIER,
> TPCH.PARTSUPP,
> TPCH.NATION,
> TPCH.REGION
> WHERE
> P_PARTKEY = PS_PARTKEY
> AND S_SUPPKEY = PS_SUPPKEY
> AND P_SIZE = 15
> AND P_TYPE LIKE '%BRASS'
> AND S_NATIONKEY = N_NATIONKEY
> AND N_REGIONKEY = R_REGIONKEY
> AND R_NAME = 'EUROPE'
> AND PS_SUPPLYCOST = (
> SELECT MIN(PS_SUPPLYCOST)
> FROM
> TPCH.PARTSUPP,
> TPCH.SUPPLIER,
> TPCH.NATION,
> TPCH.REGION
> WHERE
> P_PARTKEY = PS_PARTKEY
> AND S_SUPPKEY = PS_SUPPKEY
> AND S_NATIONKEY = N_NATIONKEY
> AND N_REGIONKEY = R_REGIONKEY
> AND R_NAME = 'EUROPE'
> )
> ORDER BY
> S_ACCTBAL DESC,
> N_NAME,
> S_NAME,
> P_PARTKEY
> LIMIT 100
> ;"
> Closing: org.apache.phoenix.jdbc.PhoenixConnection
> {noformat}
> Adjusting the hash cache size to 2 GB results in the following:
> {noformat}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> 16/09/14 03:58:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 16/09/14 03:58:12 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 1/1 SELECT
> S_ACCTBAL,
> S_NAME,
> N_NAME,
> P_PARTKEY,
> P_MFGR,
> S_ADDRESS,
> S_PHONE,
> S_COMMENT
> FROM
> TPCH.PART,
> TPCH.SUPPLIER,
> TPCH.PARTSUPP,
> TPCH.NATION,
> TPCH.REGION
> WHERE
> P_PARTKEY = PS_PARTKEY
> AND S_SUPPKEY = PS_SUPPKEY
> AND P_SIZE = 15
> AND P_TYPE LIKE '%BRASS'
> AND S_NATIONKEY = N_NATIONKEY
> AND N_REGIONKEY = R_REGIONKEY
> AND R_NAME = 'EUROPE'
> AND PS_SUPPLYCOST = (
> SELECT MIN(PS_SUPPLYCOST)
> FROM
> TPCH.PARTSUPP,
> TPCH.SUPPLIER,
> TPCH.NATION,
> TPCH.REGION
> WHERE
> P_PARTKEY = PS_PARTKEY
> AND S_SUPPKEY = PS_SUPPKEY
> AND S_NATIONKEY = N_NATIONKEY
> AND N_REGIONKEY = R_REGIONKEY
> AND R_NAME = 'EUROPE'
> )
> ORDER BY
> S_ACCTBAL DESC,
> N_NAME,
> S_NAME,
> P_PARTKEY
> LIMIT 100
> ;
> 16/09/14 03:59:03 WARN execute.HashJoinPlan: Hash plan [0] execution seems too slow. Earlier hash cache(s) might have expired on servers.
> 16/09/14 03:59:08 WARN execute.HashJoinPlan: Hash plan [0] execution seems too slow. Earlier hash cache(s) might have expired on servers.
> Error: Encountered exception in sub plan [1] execution. (state=,code=0)
> java.sql.SQLException: Encountered exception in sub plan [1] execution.
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:198)
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:143)
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:138)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:281)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:807)
> at sqlline.SqlLine.runCommands(SqlLine.java:1710)
> at sqlline.Commands.run(Commands.java:1285)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
> at sqlline.SqlLine.dispatch(SqlLine.java:803)
> at sqlline.SqlLine.initArgs(SqlLine.java:613)
> at sqlline.SqlLine.begin(SqlLine.java:656)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
> at java.util.Arrays.copyOf(Arrays.java:2271)
> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
> at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
> at java.io.DataOutputStream.write(DataOutputStream.java:107)
> at org.apache.phoenix.util.TupleUtil.write(TupleUtil.java:153)
> at org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:108)
> at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
> at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:385)
> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:167)
> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:163)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Aborting command set because "force" is false and command failed: "SELECT
> S_ACCTBAL,
> S_NAME,
> N_NAME,
> P_PARTKEY,
> P_MFGR,
> S_ADDRESS,
> S_PHONE,
> S_COMMENT
> FROM
> TPCH.PART,
> TPCH.SUPPLIER,
> TPCH.PARTSUPP,
> TPCH.NATION,
> TPCH.REGION
> WHERE
> P_PARTKEY = PS_PARTKEY
> AND S_SUPPKEY = PS_SUPPKEY
> AND P_SIZE = 15
> AND P_TYPE LIKE '%BRASS'
> AND S_NATIONKEY = N_NATIONKEY
> AND N_REGIONKEY = R_REGIONKEY
> AND R_NAME = 'EUROPE'
> AND PS_SUPPLYCOST = (
> SELECT MIN(PS_SUPPLYCOST)
> FROM
> TPCH.PARTSUPP,
> TPCH.SUPPLIER,
> TPCH.NATION,
> TPCH.REGION
> WHERE
> P_PARTKEY = PS_PARTKEY
> AND S_SUPPKEY = PS_SUPPKEY
> AND S_NATIONKEY = N_NATIONKEY
> AND N_REGIONKEY = R_REGIONKEY
> AND R_NAME = 'EUROPE'
> )
> ORDER BY
> S_ACCTBAL DESC,
> N_NAME,
> S_NAME,
> P_PARTKEY
> LIMIT 100
> ;"
> Closing: org.apache.phoenix.jdbc.PhoenixConnection
> {noformat}