Re: Hive generating different DAGs from the same query

2018-09-11 Thread Sungwoo Park
Hello Gopal,

I have been looking further into this issue, and have found that the
non-deterministic behavior of Hive in generating DAGs is actually due to
the logic in AggregateStatsCache.findBestMatch(), called from
AggregateStatsCache.get(), combined with the disproportionate number of
nulls in __HIVE_DEFAULT_PARTITION__ (in the case of the TPC-DS dataset).

Here is what is happening. Let me use the web_sales table and the
ws_web_site_sk column in the 10TB TPC-DS dataset as a running example.

1. In the course of running the TPC-DS queries, Hive asks the MetaStore
for the column statistics of 1823 partNames for the
web_sales/ws_web_site_sk combination, either without
__HIVE_DEFAULT_PARTITION__ or with __HIVE_DEFAULT_PARTITION__.

  --- Without __HIVE_DEFAULT_PARTITION__, it reports a total of 901180
nulls.

  --- With __HIVE_DEFAULT_PARTITION__, however, it reports a total of
1800087 nulls, almost twice as many.

2. The first call to the MetaStore returns the correct result, but all
subsequent requests are likely to return the same result from the cache,
regardless of whether __HIVE_DEFAULT_PARTITION__ is included. This is
because AggregateStatsCache.findBestMatch() treats
__HIVE_DEFAULT_PARTITION__ the same way as any other partName, and the
difference in the size of partNames[] is just 1. The outcome depends on
the timing of intervening queries, so everything becomes
non-deterministic.

3. If a wrong value of numNulls is returned, Hive generates a different
DAG, which usually takes much longer than the correct one (e.g., from
150s to 1000s for the first part of query 24, and from 40s to 120s for
query 5). I guess the problem is particularly pronounced here because of
the huge number of nulls in __HIVE_DEFAULT_PARTITION__. It is ironic
that the query optimizer is so efficient that a single wrong guess of
numNulls creates a very inefficient DAG.

Note that this behavior cannot be avoided by setting
hive.metastore.aggregate.stats.cache.max.variance to zero, because the
difference in the size of partNames[] between the argument and the entry
in the cache is just 1.
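To illustrate the effect, here is a toy model in Python (with made-up
names; this is not Hive's actual AggregateStatsCache code) of how a
cached aggregate can be returned for two partName requests that differ
only in __HIVE_DEFAULT_PARTITION__:

```python
# Toy model of the claimed behavior: a one-element difference in the
# partName list is within the match tolerance, so the stale aggregate
# is returned. Names and numbers are illustrative, not Hive internals.

DEFAULT_PARTITION = "__HIVE_DEFAULT_PARTITION__"

class ToyAggregateStatsCache:
    """Keeps one cached entry per (table, column); a lookup is a 'best
    match' if the requested partName list is close enough in size."""

    def __init__(self, max_size_diff=1):
        self.entries = {}          # (table, column) -> (partNames, numNulls)
        self.max_size_diff = max_size_diff

    def get(self, table, column, part_names, compute):
        key = (table, column)
        if key in self.entries:
            cached_parts, cached_nulls = self.entries[key]
            # A difference of a single partName passes the tolerance, so the
            # cached aggregate is returned even though the two lists differ
            # in whether they include __HIVE_DEFAULT_PARTITION__.
            if abs(len(cached_parts) - len(part_names)) <= self.max_size_diff:
                return cached_nulls
        num_nulls = compute(part_names)
        self.entries[key] = (set(part_names), num_nulls)
        return num_nulls

def compute_nulls(part_names):
    # Null counts loosely modeled on the web_sales/ws_web_site_sk example.
    base = 901180                  # nulls outside the default partition
    extra = 898907                 # nulls in __HIVE_DEFAULT_PARTITION__
    return base + (extra if DEFAULT_PARTITION in part_names else 0)

cache = ToyAggregateStatsCache()
parts = [f"ws_sold_date_sk={d}" for d in range(1823)]

first = cache.get("web_sales", "ws_web_site_sk", parts, compute_nulls)
# The second request adds the default partition, but still hits the cache:
second = cache.get("web_sales", "ws_web_site_sk",
                   parts + [DEFAULT_PARTITION], compute_nulls)
print(first, second)   # both reflect the first request's aggregate
```

Under this model the second request silently reuses the first request's
numNulls, which matches the non-determinism described above.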

I think that AggregateStatsCache.findBestMatch() should treat
__HIVE_DEFAULT_PARTITION__ in a special way: it should not return the
cached result if the argument and the cached entry differ in whether
they include the partName __HIVE_DEFAULT_PARTITION__ (or it should
provide the user with an option to activate this behavior). However, I
am testing only with the TPC-DS data, so please take my claim with a
grain of salt.
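As a rough sketch of the guard I have in mind (again Python pseudocode
with made-up names, not Hive's actual findBestMatch signature):

```python
# Proposed guard (illustrative only): before accepting a cached match,
# require that the request and the cached entry agree on whether
# __HIVE_DEFAULT_PARTITION__ is present.

DEFAULT_PARTITION = "__HIVE_DEFAULT_PARTITION__"

def is_acceptable_match(requested_parts, cached_parts, max_size_diff=1):
    """Accept a cached aggregate only if, in addition to the usual size
    tolerance, both partName lists agree on __HIVE_DEFAULT_PARTITION__."""
    if (DEFAULT_PARTITION in requested_parts) != (DEFAULT_PARTITION in cached_parts):
        return False   # never mix with/without the default partition
    return abs(len(requested_parts) - len(cached_parts)) <= max_size_diff

parts = [f"p={i}" for i in range(1823)]
print(is_acceptable_match(parts, parts))                        # True
print(is_acceptable_match(parts + [DEFAULT_PARTITION], parts))  # False
```

With this guard, a request that adds __HIVE_DEFAULT_PARTITION__ would
fall through to the MetaStore instead of reusing the stale aggregate.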

--- Sungwoo


On Fri, Jul 20, 2018 at 2:54 PM Gopal Vijayaraghavan wrote:

> > My conclusion is that a query can update some internal states of
> HiveServer2, affecting DAG generation for subsequent queries.
>
> Other than the automatic reoptimization feature, there are two other
> potential suspects.
>
> The first would be to disable the in-memory stats cache's variance param,
> which might be triggering some residual effects.
>
> hive.metastore.aggregate.stats.cache.max.variance
>
> I set it to 0.0 when I suspect that feature is messing with the runtime
> plans or just disable the cache entirely with
>
> set hive.metastore.aggregate.stats.cache.enabled=false;
>
> Other than that, query24 is an interesting query.
>
> It's probably one of the corner cases where the predicate push-down is
> actually hurting the shared work optimizer.
>
> Also cross-check whether you have accidentally loaded store_sales with
> ss_item_sk as int while item's i_item_sk is a bigint (type mismatches will
> trigger a slow join algorithm, but without any consistency issues).
>
> Cheers,
> Gopal


Re: Hive generating different DAGs from the same query

2018-07-19 Thread Gopal Vijayaraghavan
> My conclusion is that a query can update some internal states of HiveServer2, 
> affecting DAG generation for subsequent queries. 

Other than the automatic reoptimization feature, there are two other potential
suspects.

The first would be to disable the in-memory stats cache's variance param, which
might be triggering some residual effects.

hive.metastore.aggregate.stats.cache.max.variance

I set it to 0.0 when I suspect that feature is messing with the runtime plans 
or just disable the cache entirely with

set hive.metastore.aggregate.stats.cache.enabled=false;

Other than that, query24 is an interesting query.

It's probably one of the corner cases where the predicate push-down is actually
hurting the shared work optimizer.

Also cross-check whether you have accidentally loaded store_sales with
ss_item_sk as int while item's i_item_sk is a bigint (type mismatches will
trigger a slow join algorithm, but without any consistency issues).

Cheers,
Gopal




Fwd: Hive generating different DAGs from the same query

2018-07-19 Thread Sungwoo Park
Hello Zoltan,

I tested further and found no exception (such as
MapJoinMemoryExhaustionError) during the run, so the query ran fine. My
conclusion is that a query can update some internal state of
HiveServer2, affecting DAG generation for subsequent queries. Moreover,
the same query may or may not affect DAG generation.

This issue is not related to query reexecution, as even with query
reexecution disabled (hive.query.reexecution.enabled set to false), I still
see this problem occurring.

--- Sungwoo Park

On Fri, Jul 13, 2018 at 4:48 PM, Zoltan Haindrich  wrote:

> Hello Sungwoo!
>
> I think it's possible that reoptimization is kicking in because the
> first execution has bumped into an exception.
>
> I think the plans should not be changing permanently; unless
> "hive.query.reexecution.stats.persist.scope" is set to a wider scope than
> query.
>
> To check whether reoptimization is indeed happening (or not), look for:
>
> cat > patterns << EOF
> org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionError
> reexec
> Driver.java:execute
> SessionState.java:printError
> EOF
>
> cat patterns
>
> fgrep -Ff patterns --color=yes /var/log/hive/hiveserver2.log | grep -v
> DEBUG
>
> cheers,
> Zoltan
>
>
> On 07/11/2018 10:40 AM, Sungwoo Park wrote:
>
>> Hello,
>>
>> I am running the TPC-DS benchmark using Hive 3.0, and I find that Hive
>> sometimes produces different DAGs from the same query. These are the two
>> scenarios for the experiment. The execution engine is tez, and the TPC-DS
>> scale factor is 3TB.
>>
>> 1. Run query 19 to query 24 sequentially in the same session. The first
>> part of query 24 takes about 156 seconds:
>>
>> 100 rows selected (58.641 seconds) <-- query 19
>> 100 rows selected (16.117 seconds)
>> 100 rows selected (9.841 seconds)
>> 100 rows selected (35.195 seconds)
>> 1 row selected (258.441 seconds)
>> 59 rows selected (213.156 seconds)
>> 4,643 rows selected (156.982 seconds) <-- the first part of query 24
>> 1,656 rows selected (136.382 seconds)
>>
>> 2. Now run query 1 to query 24 sequentially in the same session. This
>> time the first part of query 24 takes more than 1000 seconds:
>>
>> 100 rows selected (94.981 seconds) <-- query 1
>> 2,513 rows selected (30.804 seconds)
>> 100 rows selected (11.076 seconds)
>> 100 rows selected (225.646 seconds)
>> 100 rows selected (44.186 seconds)
>> 52 rows selected (11.436 seconds)
>> 100 rows selected (21.968 seconds)
>> 11 rows selected (14.05 seconds)
>> 1 row selected (35.619 seconds)
>> 100 rows selected (27.062 seconds)
>> 100 rows selected (134.098 seconds)
>> 100 rows selected (7.65 seconds)
>> 1 row selected (14.54 seconds)
>> 100 rows selected (143.965 seconds)
>> 100 rows selected (101.676 seconds)
>> 100 rows selected (19.742 seconds)
>> 1 row selected (245.381 seconds)
>> 100 rows selected (71.617 seconds)
>> 100 rows selected (23.017 seconds)
>> 100 rows selected (10.888 seconds)
>> 100 rows selected (11.149 seconds)
>> 100 rows selected (7.919 seconds)
>> 100 rows selected (29.527 seconds)
>> 1 row selected (220.516 seconds)
>> 59 rows selected (204.363 seconds)
>> 4,643 rows selected (1008.514 seconds) <-- the first part of query 24
>> 1,656 rows selected (141.279 seconds)
>>
>> Here are a few findings from the experiment:
>>
>> 1. The two DAGs for the first part of query 24 are quite similar, but
>> actually different. The DAG from the first scenario contains 17 vertices,
>> whereas the DAG from the second scenario contains 18 vertices and skips
>> part of a map-side join that is performed in the first scenario.
>>
>> 2. The configuration (HiveConf) inside HiveServer2 is precisely the same
>> before running the first part of query 24 (except for minor keys).
>>
>> So, I wonder how Hive can produce different DAGs from the same query. For
>> example, is there some internal configuration key in HiveConf that
>> enables/disables some optimization depending on the accumulated statistics
>> in HiveServer2? (I haven't tested it yet, but I can also test with Hive
>> 2.x.)
>>
>> Thank you in advance,
>>
>> --- Sungwoo Park
>>
>>


Re: Hive generating different DAGs from the same query

2018-07-13 Thread Zoltan Haindrich

Hello Sungwoo!

I think it's possible that reoptimization is kicking in because the first
execution has bumped into an exception.

I think the plans should not be changing permanently; unless 
"hive.query.reexecution.stats.persist.scope" is set to a wider scope than query.

To check whether reoptimization is indeed happening (or not), look for:

cat > patterns << EOF
org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionError
reexec
Driver.java:execute
SessionState.java:printError
EOF

cat patterns

fgrep -Ff patterns --color=yes /var/log/hive/hiveserver2.log | grep -v DEBUG

cheers,
Zoltan

On 07/11/2018 10:40 AM, Sungwoo Park wrote:

Hello,

I am running the TPC-DS benchmark using Hive 3.0, and I find that Hive sometimes produces different DAGs from the same query. These are the two scenarios for the 
experiment. The execution engine is tez, and the TPC-DS scale factor is 3TB.


1. Run query 19 to query 24 sequentially in the same session. The first part of 
query 24 takes about 156 seconds:

100 rows selected (58.641 seconds) <-- query 19
100 rows selected (16.117 seconds)
100 rows selected (9.841 seconds)
100 rows selected (35.195 seconds)
1 row selected (258.441 seconds)
59 rows selected (213.156 seconds)
4,643 rows selected (156.982 seconds) <-- the first part of query 24
1,656 rows selected (136.382 seconds)

2. Now run query 1 to query 24 sequentially in the same session. This time the 
first part of query 24 takes more than 1000 seconds:

100 rows selected (94.981 seconds) <-- query 1
2,513 rows selected (30.804 seconds)
100 rows selected (11.076 seconds)
100 rows selected (225.646 seconds)
100 rows selected (44.186 seconds)
52 rows selected (11.436 seconds)
100 rows selected (21.968 seconds)
11 rows selected (14.05 seconds)
1 row selected (35.619 seconds)
100 rows selected (27.062 seconds)
100 rows selected (134.098 seconds)
100 rows selected (7.65 seconds)
1 row selected (14.54 seconds)
100 rows selected (143.965 seconds)
100 rows selected (101.676 seconds)
100 rows selected (19.742 seconds)
1 row selected (245.381 seconds)
100 rows selected (71.617 seconds)
100 rows selected (23.017 seconds)
100 rows selected (10.888 seconds)
100 rows selected (11.149 seconds)
100 rows selected (7.919 seconds)
100 rows selected (29.527 seconds)
1 row selected (220.516 seconds)
59 rows selected (204.363 seconds)
4,643 rows selected (1008.514 seconds) <-- the first part of query 24
1,656 rows selected (141.279 seconds)

Here are a few findings from the experiment:

1. The two DAGs for the first part of query 24 are quite similar, but actually
different. The DAG from the first scenario contains 17 vertices, whereas the
DAG from the second scenario contains 18 vertices and skips part of a map-side
join that is performed in the first scenario.


2. The configuration (HiveConf) inside HiveServer2 is precisely the same before 
running the first part of query 24 (except for minor keys).

So, I wonder how Hive can produce different DAGs from the same query. For
example, is there some internal configuration key in HiveConf that
enables/disables some optimization depending on the accumulated statistics in
HiveServer2? (I haven't tested it yet, but I can also test with Hive 2.x.)


Thank you in advance,

--- Sungwoo Park



Hive generating different DAGs from the same query

2018-07-11 Thread Sungwoo Park
Hello,

I am running the TPC-DS benchmark using Hive 3.0, and I find that Hive
sometimes produces different DAGs from the same query. These are the two
scenarios for the experiment. The execution engine is tez, and the TPC-DS
scale factor is 3TB.

1. Run query 19 to query 24 sequentially in the same session. The first
part of query 24 takes about 156 seconds:

100 rows selected (58.641 seconds) <-- query 19
100 rows selected (16.117 seconds)
100 rows selected (9.841 seconds)
100 rows selected (35.195 seconds)
1 row selected (258.441 seconds)
59 rows selected (213.156 seconds)
4,643 rows selected (156.982 seconds) <-- the first part of query 24
1,656 rows selected (136.382 seconds)

2. Now run query 1 to query 24 sequentially in the same session. This time
the first part of query 24 takes more than 1000 seconds:

100 rows selected (94.981 seconds) <-- query 1
2,513 rows selected (30.804 seconds)
100 rows selected (11.076 seconds)
100 rows selected (225.646 seconds)
100 rows selected (44.186 seconds)
52 rows selected (11.436 seconds)
100 rows selected (21.968 seconds)
11 rows selected (14.05 seconds)
1 row selected (35.619 seconds)
100 rows selected (27.062 seconds)
100 rows selected (134.098 seconds)
100 rows selected (7.65 seconds)
1 row selected (14.54 seconds)
100 rows selected (143.965 seconds)
100 rows selected (101.676 seconds)
100 rows selected (19.742 seconds)
1 row selected (245.381 seconds)
100 rows selected (71.617 seconds)
100 rows selected (23.017 seconds)
100 rows selected (10.888 seconds)
100 rows selected (11.149 seconds)
100 rows selected (7.919 seconds)
100 rows selected (29.527 seconds)
1 row selected (220.516 seconds)
59 rows selected (204.363 seconds)
4,643 rows selected (1008.514 seconds) <-- the first part of query 24
1,656 rows selected (141.279 seconds)

Here are a few findings from the experiment:

1. The two DAGs for the first part of query 24 are quite similar, but
actually different. The DAG from the first scenario contains 17 vertices,
whereas the DAG from the second scenario contains 18 vertices and skips
part of a map-side join that is performed in the first scenario.

2. The configuration (HiveConf) inside HiveServer2 is precisely the same
before running the first part of query 24 (except for minor keys).

So, I wonder how Hive can produce different DAGs from the same query. For
example, is there some internal configuration key in HiveConf that
enables/disables some optimization depending on the accumulated statistics
in HiveServer2? (I haven't tested it yet, but I can also test with Hive
2.x.)

Thank you in advance,

--- Sungwoo Park