Re: Hive Row number Use case

2018-07-13 Thread Anup Tiwari
Hi All,

Can someone look into this and reply if possible?

Thanks.


On Thu, 12 Jul 2018 12:56 Anup Tiwari,  wrote:

> Hi All,
>
> We have a use case where we want to assign a row number to a table based
> on 3 columns (uid, update_date, flag), i.e. whenever the value of any of
> these columns changes, we want to reset the number. Please find below sample
> input data and expected output data.
>
> Also please note that we have tried row_number() over(partition by uid,
> update_date, flag order by update_time asc), but this breaks the actual input
> ordering. I believe the partition by clause groups the rows by the specified
> columns and then starts the row number within each group, which is why the
> original ordering is lost. So I just wanted to know whether there is any
> function available in Hive which can give us the result below, or whether we
> are missing something in the window function?
>
>
> *Input Data :- *
>
> *uid* *update_date* *update_time* *flag*
> 468730 2017-07-12 12/07/2017 22:59:17 1
> 468730 2017-07-12 12/07/2017 23:02:14 0
> 468730 2017-07-12 12/07/2017 23:07:40 0
> 468730 2017-07-12 12/07/2017 23:12:41 0
> 468730 2017-07-12 12/07/2017 23:22:06 0
> 468730 2017-07-12 12/07/2017 23:38:35 0
> 468730 2017-07-12 12/07/2017 23:44:19 0
> 468730 2017-07-12 12/07/2017 23:47:49 1
> 468730 2017-07-12 12/07/2017 23:48:49 1
> 468730 2017-07-12 12/07/2017 23:53:31 0
> 468730 2017-07-12 12/07/2017 23:57:01 1
> 468730 2017-07-13 13/07/2017 00:03:10 1
> 468730 2017-07-13 13/07/2017 00:06:35 0
> 468730 2017-07-13 13/07/2017 00:07:29 1
> 468731 2017-07-13 12/07/2017 12:59:17 1
> 468731 2017-07-13 12/07/2017 13:02:14 0
> 468731 2017-07-13 12/07/2017 13:07:40 0
> 468731 2017-07-13 12/07/2017 13:12:41 0
>
>
> *Output Data :-*
>
> *uid* *update_date* *update_time* *flag* *required_row_num*
> 468730 2017-07-12 12/07/2017 22:59:17 1 1
> 468730 2017-07-12 12/07/2017 23:02:14 0 1
> 468730 2017-07-12 12/07/2017 23:07:40 0 2
> 468730 2017-07-12 12/07/2017 23:12:41 0 3
> 468730 2017-07-12 12/07/2017 23:22:06 0 4
> 468730 2017-07-12 12/07/2017 23:38:35 0 5
> 468730 2017-07-12 12/07/2017 23:44:19 0 6
> 468730 2017-07-12 12/07/2017 23:47:49 1 1
> 468730 2017-07-12 12/07/2017 23:48:49 1 2
> 468730 2017-07-12 12/07/2017 23:53:31 0 1
> 468730 2017-07-12 12/07/2017 23:57:01 1 1
> 468730 2017-07-13 13/07/2017 00:03:10 1 1
> 468730 2017-07-13 13/07/2017 00:06:35 0 1
> 468730 2017-07-13 13/07/2017 00:07:29 1 1
> 468731 2017-07-13 12/07/2017 12:59:17 1 1
> 468731 2017-07-13 12/07/2017 13:02:14 0 1
> 468731 2017-07-13 12/07/2017 13:07:40 0 2
> 468731 2017-07-13 12/07/2017 13:12:41 0 3
> *FYI:* We are on Hive 2.3.1.
>
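
One way to get required_row_num above is the usual "gaps and islands" trick: mark every row where (uid, update_date, flag) differs from the previous row, turn a running sum of those marks into a group id, and then number the rows within each group. The sketch below is not from the thread and has not been tested on Hive 2.3.1; "events" is a placeholder for the actual table, and it assumes update_time sorts chronologically (e.g. a real timestamp rather than the dd/MM/yyyy string shown above).

-- sketch only: reset the row number whenever uid, update_date or flag changes
WITH marked AS (
  SELECT uid, update_date, update_time, flag,
         CASE WHEN uid         = LAG(uid)         OVER (ORDER BY uid, update_time)
               AND update_date = LAG(update_date) OVER (ORDER BY uid, update_time)
               AND flag        = LAG(flag)        OVER (ORDER BY uid, update_time)
              THEN 0 ELSE 1 END AS grp_start        -- 1 = a new group begins here
  FROM events
),
grouped AS (
  SELECT marked.*,
         SUM(grp_start) OVER (ORDER BY uid, update_time
                              ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS grp_id
  FROM marked
)
SELECT uid, update_date, update_time, flag,
       ROW_NUMBER() OVER (PARTITION BY grp_id ORDER BY update_time) AS required_row_num
FROM grouped;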


Re: Cannot INSERT OVERWRITE on clustered table with > 8 buckets

2018-07-13 Thread Gopal Vijayaraghavan


> I'm using Hive 1.2.1 with LLAP on HDP 2.6.5. Tez AM is 3GB, there are 3 
> daemons for a total of 34816 MB.

I'm assuming you're actually on Hive 2 here (since you have LLAP). LLAP kind of 
sucks for ETL workloads, but that's a different problem.

> PARTITIONED BY (DATAPASSAGGIO string, ORAPASSAGGIO string)
> CLUSTERED BY (ID_TICKETTYPE, ID_PERSONTYPE, NPOOLNR, NKASSANR) INTO 8 BUCKETS 
> STORED AS ORC
...
> Total number of partitions is 137k.

20 GB divided by 137k partitions makes for very poorly written ORC files; I'd guess 
each file ends up with too few rows (much smaller than one HDFS block). Partitioning 
this finely is also a performance issue at compile time.

You can make this insert work by changing the insert shuffle mechanism (run an 
explain with and without the setting below to see the difference).

set hive.optimize.sort.dynamic.partition=true; -- https://issues.apache.org/jira/browse/HIVE-6455
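
For instance (a sketch, not the exact statements from this thread; tgt and src are hypothetical stand-ins for the clustered target table and a staging source with matching columns):

-- dynamic partitioning has to be allowed for this kind of insert
set hive.exec.dynamic.partition.mode=nonstrict;

set hive.optimize.sort.dynamic.partition=false;
explain insert overwrite table tgt partition (DATAPASSAGGIO, ORAPASSAGGIO)
select * from src;

set hive.optimize.sort.dynamic.partition=true;
explain insert overwrite table tgt partition (DATAPASSAGGIO, ORAPASSAGGIO)
select * from src;

-- with the flag on, the second plan should gain a reduce stage that sorts on the
-- dynamic partition (and bucket) columns, so each writer handles one file at a time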

But I suspect you will be very disappointed by the performance of the read 
queries after this insert.

>  ,NPOOLNR decimal(4,0)
> ,NZUTRNR decimal(3,0)
> ,NKASSANR decimal(3,0)
> ,ID_TICKETTYPE decimal(5,0)
> ,ID_PERSONTYPE decimal(6,0)
> ,ID_TICKETPERSONTYPEDEF decimal(6,0)

That's also going to hurt. Your schema raises a lot of the red flags I tend to see 
when people first migrate to Hive.

https://www.slideshare.net/t3rmin4t0r/data-organization-hive-meetup/

In general, you need to fix the partition count, the bucketing structure (note that 
"clustered by" does not actually "cluster" the rows; you need an additional "sorted 
by"), and the zero-scale decimals.
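
A rough sketch of what that could look like (not a drop-in DDL: passaggi_fixed is a made-up table name, only the columns quoted in this thread are listed, and whether ORAPASSAGGIO really can be demoted from a partition column to a plain column is your call):

create table passaggi_fixed (
  NPOOLNR int,
  NZUTRNR int,
  NKASSANR int,
  ID_TICKETTYPE int,
  ID_PERSONTYPE int,
  ID_TICKETPERSONTYPEDEF int,
  ORAPASSAGGIO string      -- kept as data, not as a partition, to cut the partition count
)
partitioned by (DATAPASSAGGIO string)
clustered by (ID_TICKETTYPE, ID_PERSONTYPE, NPOOLNR, NKASSANR)
sorted by (ID_TICKETTYPE, ID_PERSONTYPE, NPOOLNR, NKASSANR)
into 8 buckets
stored as orc;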

Can you try running with the following and see what your query read-perf looks like?

https://gist.github.com/t3rmin4t0r/087b61f79514673c307bb9a88327a4db

Cheers,
Gopal




Re: Hive generating different DAGs from the same query

2018-07-13 Thread Zoltan Haindrich

Hello Sungwoo!

I think it's possible that reoptimization is kicking in, because the first 
execution bumped into an exception.

I think the plans should not be changing permanently unless 
"hive.query.reexecution.stats.persist.scope" is set to a wider scope than query.

To check whether reoptimization is indeed happening (or not), look for:

cat > patterns << EOF
org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionError
reexec
Driver.java:execute
SessionState.java:printError
EOF

cat patterns

grep -Ff patterns --color=yes /var/log/hive/hiveserver2.log | grep -v DEBUG

cheers,
Zoltan

On 07/11/2018 10:40 AM, Sungwoo Park wrote:

Hello,

I am running the TPC-DS benchmark using Hive 3.0, and I find that Hive sometimes produces different DAGs from the same query. Here are the two scenarios for the 
experiment. The execution engine is Tez, and the TPC-DS scale factor is 3 TB.


1. Run query 19 to query 24 sequentially in the same session. The first part of 
query 24 takes about 156 seconds:

100 rows selected (58.641 seconds) <-- query 19
100 rows selected (16.117 seconds)
100 rows selected (9.841 seconds)
100 rows selected (35.195 seconds)
1 row selected (258.441 seconds)
59 rows selected (213.156 seconds)
4,643 rows selected (156.982 seconds) <-- the first part of query 24
1,656 rows selected (136.382 seconds)

2. Now run query 1 to query 24 sequentially in the same session. This time the 
first part of query 24 takes more than 1000 seconds:

100 rows selected (94.981 seconds) <-- query 1
2,513 rows selected (30.804 seconds)
100 rows selected (11.076 seconds)
100 rows selected (225.646 seconds)
100 rows selected (44.186 seconds)
52 rows selected (11.436 seconds)
100 rows selected (21.968 seconds)
11 rows selected (14.05 seconds)
1 row selected (35.619 seconds)
100 rows selected (27.062 seconds)
100 rows selected (134.098 seconds)
100 rows selected (7.65 seconds)
1 row selected (14.54 seconds)
100 rows selected (143.965 seconds)
100 rows selected (101.676 seconds)
100 rows selected (19.742 seconds)
1 row selected (245.381 seconds)
100 rows selected (71.617 seconds)
100 rows selected (23.017 seconds)
100 rows selected (10.888 seconds)
100 rows selected (11.149 seconds)
100 rows selected (7.919 seconds)
100 rows selected (29.527 seconds)
1 row selected (220.516 seconds)
59 rows selected (204.363 seconds)
4,643 rows selected (1008.514 seconds) <-- the first part of query 24
1,656 rows selected (141.279 seconds)

Here are a few findings from the experiment:

1. The two DAGs for the first part of query 24 are quite similar, but actually different. The DAG from the first scenario contains 17 vertices, whereas the DAG from the 
second scenario contains 18 vertices and skips part of a map-side join that is performed in the first scenario.


2. The configuration (HiveConf) inside HiveServer2 is precisely the same before 
running the first part of query 24 (except for minor keys).

So, I wonder how Hive can produce different DAGs from the same query. For example, is there some internal configuration key in HiveConf that enables/disables some 
optimization depending on the statistics accumulated in HiveServer2? (I haven't tested it yet, but I can also test with Hive 2.x.)


Thank you in advance,

--- Sungwoo Park