konstantinb commented on code in PR #6244:
URL: https://github.com/apache/hive/pull/6244#discussion_r2766430057
##########
ql/src/test/results/clientpositive/llap/pessimistic_stat_combiner_ndv.q.out:
##########
@@ -0,0 +1,495 @@
+PREHOOK: query: CREATE TABLE t1 (cat INT, val BIGINT, data STRING)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@t1
+POSTHOOK: query: CREATE TABLE t1 (cat INT, val BIGINT, data STRING)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@t1
+PREHOOK: query: ALTER TABLE t1 UPDATE STATISTICS SET('numRows'='1000000','rawDataSize'='100000000')
+PREHOOK: type: ALTERTABLE_UPDATETABLESTATS
+PREHOOK: Input: default@t1
+PREHOOK: Output: default@t1
+POSTHOOK: query: ALTER TABLE t1 UPDATE STATISTICS SET('numRows'='1000000','rawDataSize'='100000000')
+POSTHOOK: type: ALTERTABLE_UPDATETABLESTATS
+POSTHOOK: Input: default@t1
+POSTHOOK: Output: default@t1
+PREHOOK: query: ALTER TABLE t1 UPDATE STATISTICS FOR COLUMN cat SET('numDVs'='100','numNulls'='0')
+PREHOOK: type: ALTERTABLE_UPDATETABLESTATS
+PREHOOK: Input: default@t1
+PREHOOK: Output: default@t1
+POSTHOOK: query: ALTER TABLE t1 UPDATE STATISTICS FOR COLUMN cat SET('numDVs'='100','numNulls'='0')
+POSTHOOK: type: ALTERTABLE_UPDATETABLESTATS
+POSTHOOK: Input: default@t1
+POSTHOOK: Output: default@t1
+PREHOOK: query: EXPLAIN
+SELECT x, COUNT(*)
+FROM (SELECT IF(cat > 50, 'A', 'B') x FROM t1) sub
+GROUP BY x
+PREHOOK: type: QUERY
+PREHOOK: Input: default@t1
+#### A masked pattern was here ####
+POSTHOOK: query: EXPLAIN
+SELECT x, COUNT(*)
+FROM (SELECT IF(cat > 50, 'A', 'B') x FROM t1) sub
+GROUP BY x
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@t1
+#### A masked pattern was here ####
+STAGE DEPENDENCIES:
+ Stage-1 is a root stage
+ Stage-0 depends on stages: Stage-1
+
+STAGE PLANS:
+ Stage: Stage-1
+ Tez
+#### A masked pattern was here ####
+ Edges:
+ Reducer 2 <- Map 1 (SIMPLE_EDGE)
+#### A masked pattern was here ####
+ Vertices:
+ Map 1
+ Map Operator Tree:
+ TableScan
+ alias: t1
+ Statistics: Num rows: 1000000 Data size: 4000000 Basic stats: COMPLETE Column stats: COMPLETE
+ Select Operator
+ expressions: if((cat > 50), 'A', 'B') (type: string)
+ outputColumnNames: _col0
+ Statistics: Num rows: 1000000 Data size: 4000000 Basic stats: COMPLETE Column stats: COMPLETE
+ Group By Operator
+ aggregations: count()
+ keys: _col0 (type: string)
+ minReductionHashAggr: 0.99
+ mode: hash
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 500000 Data size: 46500000 Basic stats: COMPLETE Column stats: COMPLETE
+ Reduce Output Operator
+ key expressions: _col0 (type: string)
+ null sort order: z
+ sort order: +
+ Map-reduce partition columns: _col0 (type: string)
+ Statistics: Num rows: 500000 Data size: 46500000 Basic stats: COMPLETE Column stats: COMPLETE
+ value expressions: _col1 (type: bigint)
+ Execution mode: vectorized, llap
+ LLAP IO: all inputs
+ Reducer 2
+ Execution mode: vectorized, llap
+ Reduce Operator Tree:
+ Group By Operator
+ aggregations: count(VALUE._col0)
+ keys: KEY._col0 (type: string)
+ mode: mergepartial
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 250000 Data size: 23250000 Basic stats: COMPLETE Column stats: COMPLETE
+ File Output Operator
+ compressed: false
+ Statistics: Num rows: 250000 Data size: 23250000 Basic stats: COMPLETE Column stats: COMPLETE
+ table:
+ input format: org.apache.hadoop.mapred.SequenceFileInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+
+ Stage: Stage-0
+ Fetch Operator
+ limit: -1
+ Processor Tree:
+ ListSink
+
+PREHOOK: query: EXPLAIN
+SELECT x, COUNT(*)
+FROM (
+ SELECT CASE WHEN cat < 30 THEN 'X' WHEN cat < 60 THEN 'Y' ELSE 'Z' END x
+ FROM t1
+) sub
+GROUP BY x
+PREHOOK: type: QUERY
+PREHOOK: Input: default@t1
+#### A masked pattern was here ####
+POSTHOOK: query: EXPLAIN
+SELECT x, COUNT(*)
+FROM (
+ SELECT CASE WHEN cat < 30 THEN 'X' WHEN cat < 60 THEN 'Y' ELSE 'Z' END x
+ FROM t1
+) sub
+GROUP BY x
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@t1
+#### A masked pattern was here ####
+STAGE DEPENDENCIES:
+ Stage-1 is a root stage
+ Stage-0 depends on stages: Stage-1
+
+STAGE PLANS:
+ Stage: Stage-1
+ Tez
+#### A masked pattern was here ####
+ Edges:
+ Reducer 2 <- Map 1 (SIMPLE_EDGE)
+#### A masked pattern was here ####
+ Vertices:
+ Map 1
+ Map Operator Tree:
+ TableScan
+ alias: t1
+ Statistics: Num rows: 1000000 Data size: 4000000 Basic stats: COMPLETE Column stats: COMPLETE
+ Select Operator
+ expressions: CASE WHEN ((cat < 30)) THEN ('X') WHEN ((cat < 60)) THEN ('Y') ELSE ('Z') END (type: string)
+ outputColumnNames: _col0
+ Statistics: Num rows: 1000000 Data size: 4000000 Basic stats: COMPLETE Column stats: COMPLETE
+ Group By Operator
+ aggregations: count()
+ keys: _col0 (type: string)
+ minReductionHashAggr: 0.99
+ mode: hash
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 500000 Data size: 46500000 Basic stats: COMPLETE Column stats: COMPLETE
+ Reduce Output Operator
+ key expressions: _col0 (type: string)
+ null sort order: z
+ sort order: +
+ Map-reduce partition columns: _col0 (type: string)
+ Statistics: Num rows: 500000 Data size: 46500000 Basic stats: COMPLETE Column stats: COMPLETE
+ value expressions: _col1 (type: bigint)
+ Execution mode: vectorized, llap
+ LLAP IO: all inputs
+ Reducer 2
+ Execution mode: vectorized, llap
+ Reduce Operator Tree:
+ Group By Operator
+ aggregations: count(VALUE._col0)
+ keys: KEY._col0 (type: string)
+ mode: mergepartial
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 250000 Data size: 23250000 Basic stats: COMPLETE Column stats: COMPLETE
+ File Output Operator
+ compressed: false
+ Statistics: Num rows: 250000 Data size: 23250000 Basic stats: COMPLETE Column stats: COMPLETE
+ table:
+ input format: org.apache.hadoop.mapred.SequenceFileInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+
+ Stage: Stage-0
+ Fetch Operator
+ limit: -1
+ Processor Tree:
+ ListSink
+
+PREHOOK: query: EXPLAIN
+SELECT x, COUNT(*)
+FROM (
+  SELECT CASE cat WHEN 1 THEN 'A' WHEN 2 THEN 'B' WHEN 3 THEN 'C' ELSE 'D' END x
+ FROM t1
+) sub
+GROUP BY x
+PREHOOK: type: QUERY
+PREHOOK: Input: default@t1
+#### A masked pattern was here ####
+POSTHOOK: query: EXPLAIN
+SELECT x, COUNT(*)
+FROM (
+  SELECT CASE cat WHEN 1 THEN 'A' WHEN 2 THEN 'B' WHEN 3 THEN 'C' ELSE 'D' END x
+ FROM t1
+) sub
+GROUP BY x
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@t1
+#### A masked pattern was here ####
+STAGE DEPENDENCIES:
+ Stage-1 is a root stage
+ Stage-0 depends on stages: Stage-1
+
+STAGE PLANS:
+ Stage: Stage-1
+ Tez
+#### A masked pattern was here ####
+ Edges:
+ Reducer 2 <- Map 1 (SIMPLE_EDGE)
+#### A masked pattern was here ####
+ Vertices:
+ Map 1
+ Map Operator Tree:
+ TableScan
+ alias: t1
+ Statistics: Num rows: 1000000 Data size: 4000000 Basic stats: COMPLETE Column stats: COMPLETE
+ Select Operator
+ expressions: CASE WHEN ((cat = 1)) THEN ('A') WHEN ((cat = 2)) THEN ('B') WHEN ((cat = 3)) THEN ('C') ELSE ('D') END (type: string)
+ outputColumnNames: _col0
+ Statistics: Num rows: 1000000 Data size: 4000000 Basic stats: COMPLETE Column stats: COMPLETE
+ Group By Operator
+ aggregations: count()
+ keys: _col0 (type: string)
+ minReductionHashAggr: 0.99
+ mode: hash
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 500000 Data size: 46500000 Basic stats: COMPLETE Column stats: COMPLETE
+ Reduce Output Operator
+ key expressions: _col0 (type: string)
+ null sort order: z
+ sort order: +
+ Map-reduce partition columns: _col0 (type: string)
+ Statistics: Num rows: 500000 Data size: 46500000 Basic stats: COMPLETE Column stats: COMPLETE
+ value expressions: _col1 (type: bigint)
+ Execution mode: vectorized, llap
+ LLAP IO: all inputs
+ Reducer 2
+ Execution mode: vectorized, llap
+ Reduce Operator Tree:
+ Group By Operator
+ aggregations: count(VALUE._col0)
+ keys: KEY._col0 (type: string)
+ mode: mergepartial
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 250000 Data size: 23250000 Basic stats: COMPLETE Column stats: COMPLETE
+ File Output Operator
+ compressed: false
+ Statistics: Num rows: 250000 Data size: 23250000 Basic stats: COMPLETE Column stats: COMPLETE
+ table:
+ input format: org.apache.hadoop.mapred.SequenceFileInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+
+ Stage: Stage-0
+ Fetch Operator
+ limit: -1
+ Processor Tree:
+ ListSink
+
+PREHOOK: query: CREATE TABLE t2 (key STRING, v1 STRING)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@t2
+POSTHOOK: query: CREATE TABLE t2 (key STRING, v1 STRING)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@t2
+PREHOOK: query: ALTER TABLE t1 UPDATE STATISTICS FOR COLUMN val SET('numDVs'='1000000','numNulls'='0')
+PREHOOK: type: ALTERTABLE_UPDATETABLESTATS
+PREHOOK: Input: default@t1
+PREHOOK: Output: default@t1
+POSTHOOK: query: ALTER TABLE t1 UPDATE STATISTICS FOR COLUMN val SET('numDVs'='1000000','numNulls'='0')
+POSTHOOK: type: ALTERTABLE_UPDATETABLESTATS
+POSTHOOK: Input: default@t1
+POSTHOOK: Output: default@t1
+PREHOOK: query: ALTER TABLE t1 UPDATE STATISTICS FOR COLUMN data SET('numDVs'='5000000','numNulls'='0','avgColLen'='500.0','maxColLen'='600')
+PREHOOK: type: ALTERTABLE_UPDATETABLESTATS
+PREHOOK: Input: default@t1
+PREHOOK: Output: default@t1
+POSTHOOK: query: ALTER TABLE t1 UPDATE STATISTICS FOR COLUMN data SET('numDVs'='5000000','numNulls'='0','avgColLen'='500.0','maxColLen'='600')
+POSTHOOK: type: ALTERTABLE_UPDATETABLESTATS
+POSTHOOK: Input: default@t1
+POSTHOOK: Output: default@t1
+PREHOOK: query: ALTER TABLE t2 UPDATE STATISTICS SET('numRows'='1000000','rawDataSize'='100000000')
+PREHOOK: type: ALTERTABLE_UPDATETABLESTATS
+PREHOOK: Input: default@t2
+PREHOOK: Output: default@t2
+POSTHOOK: query: ALTER TABLE t2 UPDATE STATISTICS SET('numRows'='1000000','rawDataSize'='100000000')
+POSTHOOK: type: ALTERTABLE_UPDATETABLESTATS
+POSTHOOK: Input: default@t2
+POSTHOOK: Output: default@t2
+PREHOOK: query: ALTER TABLE t2 UPDATE STATISTICS FOR COLUMN key SET('numDVs'='1000000','numNulls'='0','avgColLen'='50.0','maxColLen'='100')
+PREHOOK: type: ALTERTABLE_UPDATETABLESTATS
+PREHOOK: Input: default@t2
+PREHOOK: Output: default@t2
+POSTHOOK: query: ALTER TABLE t2 UPDATE STATISTICS FOR COLUMN key SET('numDVs'='1000000','numNulls'='0','avgColLen'='50.0','maxColLen'='100')
+POSTHOOK: type: ALTERTABLE_UPDATETABLESTATS
+POSTHOOK: Input: default@t2
+POSTHOOK: Output: default@t2
+PREHOOK: query: ALTER TABLE t2 UPDATE STATISTICS FOR COLUMN v1 SET('numDVs'='1000000','numNulls'='0','avgColLen'='50.0','maxColLen'='100')
+PREHOOK: type: ALTERTABLE_UPDATETABLESTATS
+PREHOOK: Input: default@t2
+PREHOOK: Output: default@t2
+POSTHOOK: query: ALTER TABLE t2 UPDATE STATISTICS FOR COLUMN v1 SET('numDVs'='1000000','numNulls'='0','avgColLen'='50.0','maxColLen'='100')
+POSTHOOK: type: ALTERTABLE_UPDATETABLESTATS
+POSTHOOK: Input: default@t2
+POSTHOOK: Output: default@t2
+PREHOOK: query: EXPLAIN
+SELECT a.k, a.total, a.sample, b.v1
+FROM (
+ SELECT
+ k,
+ SUM(val) as total,
+ MAX(data) as sample
+ FROM (
+ SELECT
+ CASE
+ WHEN cat BETWEEN 0 AND 4 THEN 'K00'
+ WHEN cat BETWEEN 5 AND 9 THEN 'K01'
+ WHEN cat BETWEEN 10 AND 14 THEN 'K02'
+ WHEN cat BETWEEN 15 AND 19 THEN 'K03'
+ WHEN cat BETWEEN 20 AND 24 THEN 'K04'
+ WHEN cat BETWEEN 25 AND 29 THEN 'K05'
+ WHEN cat BETWEEN 30 AND 34 THEN 'K06'
+ WHEN cat BETWEEN 35 AND 39 THEN 'K07'
+ WHEN cat BETWEEN 40 AND 44 THEN 'K08'
+ WHEN cat BETWEEN 45 AND 49 THEN 'K09'
+ WHEN cat BETWEEN 50 AND 54 THEN 'K10'
+ WHEN cat BETWEEN 55 AND 59 THEN 'K11'
+ WHEN cat BETWEEN 60 AND 64 THEN 'K12'
+ WHEN cat BETWEEN 65 AND 69 THEN 'K13'
+ WHEN cat BETWEEN 70 AND 74 THEN 'K14'
+ WHEN cat BETWEEN 75 AND 79 THEN 'K15'
+ WHEN cat BETWEEN 80 AND 84 THEN 'K16'
+ WHEN cat BETWEEN 85 AND 89 THEN 'K17'
+ WHEN cat BETWEEN 90 AND 94 THEN 'K18'
+ ELSE 'K19'
+ END as k,
+ val,
+ data
+ FROM t1
+ ) s
+ GROUP BY k
+) a
+JOIN t2 b ON a.k = b.key
+PREHOOK: type: QUERY
+PREHOOK: Input: default@t1
+PREHOOK: Input: default@t2
+#### A masked pattern was here ####
+POSTHOOK: query: EXPLAIN
+SELECT a.k, a.total, a.sample, b.v1
+FROM (
+ SELECT
+ k,
+ SUM(val) as total,
+ MAX(data) as sample
+ FROM (
+ SELECT
+ CASE
+ WHEN cat BETWEEN 0 AND 4 THEN 'K00'
+ WHEN cat BETWEEN 5 AND 9 THEN 'K01'
+ WHEN cat BETWEEN 10 AND 14 THEN 'K02'
+ WHEN cat BETWEEN 15 AND 19 THEN 'K03'
+ WHEN cat BETWEEN 20 AND 24 THEN 'K04'
+ WHEN cat BETWEEN 25 AND 29 THEN 'K05'
+ WHEN cat BETWEEN 30 AND 34 THEN 'K06'
+ WHEN cat BETWEEN 35 AND 39 THEN 'K07'
+ WHEN cat BETWEEN 40 AND 44 THEN 'K08'
+ WHEN cat BETWEEN 45 AND 49 THEN 'K09'
+ WHEN cat BETWEEN 50 AND 54 THEN 'K10'
+ WHEN cat BETWEEN 55 AND 59 THEN 'K11'
+ WHEN cat BETWEEN 60 AND 64 THEN 'K12'
+ WHEN cat BETWEEN 65 AND 69 THEN 'K13'
+ WHEN cat BETWEEN 70 AND 74 THEN 'K14'
+ WHEN cat BETWEEN 75 AND 79 THEN 'K15'
+ WHEN cat BETWEEN 80 AND 84 THEN 'K16'
+ WHEN cat BETWEEN 85 AND 89 THEN 'K17'
+ WHEN cat BETWEEN 90 AND 94 THEN 'K18'
+ ELSE 'K19'
+ END as k,
+ val,
+ data
+ FROM t1
+ ) s
+ GROUP BY k
+) a
+JOIN t2 b ON a.k = b.key
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@t1
+POSTHOOK: Input: default@t2
+#### A masked pattern was here ####
+STAGE DEPENDENCIES:
+ Stage-1 is a root stage
+ Stage-0 depends on stages: Stage-1
+
+STAGE PLANS:
+ Stage: Stage-1
+ Tez
+#### A masked pattern was here ####
+ Edges:
+ Reducer 2 <- Map 1 (SIMPLE_EDGE)
+ Reducer 3 <- Map 4 (SIMPLE_EDGE), Reducer 2 (SIMPLE_EDGE)
+#### A masked pattern was here ####
+ Vertices:
+ Map 1
+ Map Operator Tree:
+ TableScan
+ alias: t1
+ Statistics: Num rows: 1000000 Data size: 596000000 Basic stats: COMPLETE Column stats: COMPLETE
+ Select Operator
+ expressions: CASE WHEN (cat BETWEEN 0 AND 4) THEN ('K00') WHEN (cat BETWEEN 5 AND 9) THEN ('K01') WHEN (cat BETWEEN 10 AND 14) THEN ('K02') WHEN (cat BETWEEN 15 AND 19) THEN ('K03') WHEN (cat BETWEEN 20 AND 24) THEN ('K04') WHEN (cat BETWEEN 25 AND 29) THEN ('K05') WHEN (cat BETWEEN 30 AND 34) THEN ('K06') WHEN (cat BETWEEN 35 AND 39) THEN ('K07') WHEN (cat BETWEEN 40 AND 44) THEN ('K08') WHEN (cat BETWEEN 45 AND 49) THEN ('K09') WHEN (cat BETWEEN 50 AND 54) THEN ('K10') WHEN (cat BETWEEN 55 AND 59) THEN ('K11') WHEN (cat BETWEEN 60 AND 64) THEN ('K12') WHEN (cat BETWEEN 65 AND 69) THEN ('K13') WHEN (cat BETWEEN 70 AND 74) THEN ('K14') WHEN (cat BETWEEN 75 AND 79) THEN ('K15') WHEN (cat BETWEEN 80 AND 84) THEN ('K16') WHEN (cat BETWEEN 85 AND 89) THEN ('K17') WHEN (cat BETWEEN 90 AND 94) THEN ('K18') ELSE ('K19') END (type: string), val (type: bigint), data (type: string)
+ outputColumnNames: _col0, _col1, _col2
+ Statistics: Num rows: 1000000 Data size: 596000000 Basic stats: COMPLETE Column stats: COMPLETE
+ Group By Operator
+ aggregations: sum(_col1), max(_col2)
+ keys: _col0 (type: string)
+ minReductionHashAggr: 0.99
+ mode: hash
+ outputColumnNames: _col0, _col1, _col2
+ Statistics: Num rows: 500000 Data size: 139500000 Basic stats: COMPLETE Column stats: COMPLETE
+ Reduce Output Operator
+ key expressions: _col0 (type: string)
+ null sort order: z
+ sort order: +
+ Map-reduce partition columns: _col0 (type: string)
+ Statistics: Num rows: 500000 Data size: 139500000 Basic stats: COMPLETE Column stats: COMPLETE
+ value expressions: _col1 (type: bigint), _col2 (type: string)
Review Comment:
@okumin thank you very much for your feedback. I got a bit carried away and
overlooked the inflated estimation numbers. I am trying a fix that calculates an
"honest" NDV for multi-branch constant expressions before falling back to the
"Pessimistic" combiner.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]