[jira] [Created] (HIVE-22046) Differentiate among column stats computed by different engines

2019-07-24 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-22046:
--

 Summary: Differentiate among column stats computed by different 
engines
 Key: HIVE-22046
 URL: https://issues.apache.org/jira/browse/HIVE-22046
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez


The goal is to prevent column stats computed by different engines, e.g., Hive and Impala, from stepping on each other. In the longer term, we may introduce a common 
representation for the column statistics stored by different engines.

For this issue, we will add a new column 'engine' to the TAB_COL_STATS HMS table 
(unpartitioned tables) and to the PART_COL_STATS HMS table (partitioned tables). 
This will prevent conflicts in column-level stats.
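As a rough illustration (not the actual HMS schema or API; all names here are hypothetical), keying stats by (table, column, engine) rather than (table, column) is what stops one engine's stats from overwriting another's:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: an in-memory stand-in for the proposed 'engine' column,
// showing that Hive and Impala stats for the same column coexist.
public class EngineScopedStats {
    private final Map<String, Long> ndv = new HashMap<>();

    // Hypothetical composite key mirroring the proposed extra column.
    static String key(String table, String column, String engine) {
        return table + "/" + column + "/" + engine;
    }

    public void put(String table, String column, String engine, long distinctValues) {
        ndv.put(key(table, column, engine), distinctValues);
    }

    public Long get(String table, String column, String engine) {
        return ndv.get(key(table, column, engine));
    }
}
```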



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (HIVE-22045) HIVE-21711 introduced regression in data load

2019-07-24 Thread Vineet Garg (JIRA)
Vineet Garg created HIVE-22045:
--

 Summary: HIVE-21711 introduced regression in data load
 Key: HIVE-22045
 URL: https://issues.apache.org/jira/browse/HIVE-22045
 Project: Hive
  Issue Type: Bug
Affects Versions: 4.0.0
Reporter: Vineet Garg
Assignee: Vineet Garg


A better fix for HIVE-21711 is to specialize the handling of CTAS/Create MV 
statements to avoid the intermediate rename operation, while keeping the 
intermediate rename for INSERT etc. statements, since otherwise the final 
file-by-file move is significantly slow for such statements.
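A minimal sketch of the proposed dispatch, with illustrative names (Hive's actual code paths differ):

```java
// Sketch: CTAS / CREATE MATERIALIZED VIEW can write straight to the final
// location because the table does not exist yet (no readers to protect),
// while INSERT-style statements keep the staging-dir + rename protocol.
public class WritePathPolicy {
    enum StatementType { CTAS, CREATE_MV, INSERT, INSERT_OVERWRITE }

    static boolean writesDirectlyToFinalLocation(StatementType type) {
        switch (type) {
            case CTAS:
            case CREATE_MV:
                return true;   // new table: skip the intermediate rename
            default:
                return false;  // existing table: keep staging + rename
        }
    }
}
```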





[jira] [Created] (HIVE-22044) No way to start HiveServer2

2019-07-24 Thread Ivan Kostyuk (JIRA)
Ivan Kostyuk created HIVE-22044:
---

 Summary: No way to start HiveServer2
 Key: HIVE-22044
 URL: https://issues.apache.org/jira/browse/HIVE-22044
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 2.3.5
Reporter: Ivan Kostyuk


Download apache-hive-2.3.5-bin.tar.gz

Extract and start

$ hive --service hiveserver2
java.lang.NoClassDefFoundError: org/eclipse/jetty/http/PreEncodedHttpField
at org.apache.hive.http.HttpServer.<init>(HttpServer.java:98) ~[hive-common-2.3.5.jar:2.3.5]
at org.apache.hive.http.HttpServer.<init>(HttpServer.java:80) ~[hive-common-2.3.5.jar:2.3.5]
at org.apache.hive.http.HttpServer$Builder.build(HttpServer.java:133) ~[hive-common-2.3.5.jar:2.3.5]
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:227) ~[hive-service-2.3.5.jar:2.3.5]
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:607) [hive-service-2.3.5.jar:2.3.5]
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:100) [hive-service-2.3.5.jar:2.3.5]
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:855) [hive-service-2.3.5.jar:2.3.5]
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:724) [hive-service-2.3.5.jar:2.3.5]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_211]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_211]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_211]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_211]
at org.apache.hadoop.util.RunJar.run(RunJar.java:244) [hadoop-common-2.9.2.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:158) [hadoop-common-2.9.2.jar:?]
Caused by: java.lang.ClassNotFoundException: org.eclipse.jetty.http.PreEncodedHttpField
at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_211]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_211]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) ~[?:1.8.0_211]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_211]
 
This class was first introduced in Jetty 9.3, yet the build declares the jetty-all artifact
(https://github.com/apache/hive/blob/rel/release-2.3.5/common/pom.xml)
at version 7.6.0.v20120127
(https://github.com/apache/hive/blob/rel/release-2.3.5/pom.xml).

No other version of Jetty is mentioned in the POMs.
However, the lib directory contains:
jetty-6.1.26.jar
jetty-all-7.6.0.v20120127.jar
jetty-client-9.2.5.v20141112.jar
jetty-continuation-9.2.5.v20141112.jar
jetty-http-9.2.5.v20141112.jar
jetty-io-9.2.5.v20141112.jar
jetty-proxy-9.2.5.v20141112.jar
jetty-security-9.2.5.v20141112.jar
jetty-server-9.2.5.v20141112.jar
jetty-servlet-9.2.5.v20141112.jar
jetty-servlets-9.2.5.v20141112.jar
jetty-sslengine-6.1.26.jar
jetty-util-6.1.26.jar
jetty-util-9.2.5.v20141112.jar
 
After all unrelated jars were removed, it started to fail with:
java.lang.NoClassDefFoundError: org/eclipse/jetty/http/HttpField
at org.apache.hive.http.HttpServer.<init>(HttpServer.java:98) ~[hive-common-2.3.5.jar:2.3.5]
at org.apache.hive.http.HttpServer.<init>(HttpServer.java:80) ~[hive-common-2.3.5.jar:2.3.5]
at org.apache.hive.http.HttpServer$Builder.build(HttpServer.java:133) ~[hive-common-2.3.5.jar:2.3.5]
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:227) ~[hive-service-2.3.5.jar:2.3.5]
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:607) [hive-service-2.3.5.jar:2.3.5]
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:100) [hive-service-2.3.5.jar:2.3.5]
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:855) [hive-service-2.3.5.jar:2.3.5]
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:724) [hive-service-2.3.5.jar:2.3.5]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_211]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_211]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_211]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_211]
at org.apache.hadoop.util.RunJar.run(RunJar.java:244) [hadoop-common-2.9.2.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:158) [hadoop-common-2.9.2.jar:?]
Caused by: java.lang.ClassNotFoundException: org.eclipse.jetty.http.HttpField
at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_211]
at 
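For diagnosing this kind of failure, a small self-contained probe (independent of Hive) can confirm whether a given class is actually visible on the JVM's classpath before starting HiveServer2:

```java
// Minimal classpath diagnostic: try to load a class by name without
// initializing it, and report whether the current classloader can see it.
public class ClasspathProbe {
    static boolean isLoadable(String className) {
        try {
            Class.forName(className, false, ClasspathProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // PreEncodedHttpField only exists in Jetty >= 9.3, hence the failure above.
        System.out.println("PreEncodedHttpField loadable: "
                + isLoadable("org.eclipse.jetty.http.PreEncodedHttpField"));
    }
}
```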

Re: Review Request 71133: HIVE-12971: Add Support for Kudu Tables

2019-07-24 Thread Grant Henke

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71133/
---

(Updated July 24, 2019, 6:31 p.m.)


Review request for hive.


Bugs: HIVE-12971
https://issues.apache.org/jira/browse/HIVE-12971


Repository: hive-git


Description
---

This patch adds an initial integration for Apache Kudu backed tables
by supporting the creation of external tables pointed at existing
underlying Kudu tables.

SELECT queries can read from the tables including pushing most
predicates/filters into the Kudu scanners. Future work should
complete support for Kudu predicates.

INSERT queries can write to the tables. However, they currently
use Kudu UPSERT operations when writing. Future work should
complete support for INSERT, UPDATE, and DELETE.

Note: The table properties and class names match the values used by
Apache Impala when creating HMS entries for Kudu tables. This
means tables created by Impala can be used by Hive and vice versa.
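The predicate pushdown described above can be sketched generically (this is not Kudu's or Hive's actual API): conjuncts the scanner supports are pushed down, and the rest become a residual filter evaluated by Hive:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Generic sketch of splitting a conjunction into a pushable part
// (evaluated by the storage layer's scanners) and a residual part
// (evaluated by the query engine itself).
public class PredicateSplit {
    final List<String> pushed = new ArrayList<>();
    final List<String> residual = new ArrayList<>();

    PredicateSplit(List<String> conjuncts, Predicate<String> supportedByScanner) {
        for (String c : conjuncts) {
            (supportedByScanner.test(c) ? pushed : residual).add(c);
        }
    }
}
```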


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java f002c6e931 
  itests/pom.xml 345e9220df 
  itests/qtest-kudu/pom.xml PRE-CREATION 
  
itests/qtest-kudu/src/test/java/org/apache/hadoop/hive/cli/TestKuduCliDriver.java
 PRE-CREATION 
  itests/qtest-kudu/src/test/java/org/apache/hadoop/hive/cli/package-info.java 
PRE-CREATION 
  itests/util/pom.xml 607fd4724e 
  itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java 
5c17e1ade6 
  
itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CoreKuduCliDriver.java
 PRE-CREATION 
  itests/util/src/main/java/org/apache/hadoop/hive/kudu/KuduTestSetup.java 
PRE-CREATION 
  itests/util/src/main/java/org/apache/hadoop/hive/kudu/package-info.java 
PRE-CREATION 
  itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestMiniClusters.java 
bd4c76ed66 
  kudu-handler/pom.xml PRE-CREATION 
  kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduHiveUtils.java 
PRE-CREATION 
  kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduInputFormat.java 
PRE-CREATION 
  kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduOutputFormat.java 
PRE-CREATION 
  kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduPredicateHandler.java 
PRE-CREATION 
  kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduSerDe.java PRE-CREATION 
  kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduStorageHandler.java 
PRE-CREATION 
  kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduWritable.java 
PRE-CREATION 
  kudu-handler/src/java/org/apache/hadoop/hive/kudu/package-info.java 
PRE-CREATION 
  kudu-handler/src/test/org/apache/hadoop/hive/kudu/KuduTestUtils.java 
PRE-CREATION 
  kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduInputFormat.java 
PRE-CREATION 
  kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduOutputFormat.java 
PRE-CREATION 
  
kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduPredicateHandler.java 
PRE-CREATION 
  kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduSerDe.java 
PRE-CREATION 
  kudu-handler/src/test/org/apache/hadoop/hive/kudu/package-info.java 
PRE-CREATION 
  kudu-handler/src/test/queries/positive/kudu_queries.q PRE-CREATION 
  kudu-handler/src/test/results/positive/kudu_queries.q.out PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/AsyncTaskCopyAuxJars.java
 7b2e32bea2 
  pom.xml 7649af10a7 
  ql/src/java/org/apache/hadoop/hive/ql/io/sarg/ConvertAstToSearchArg.java 
27fe828b75 


Diff: https://reviews.apache.org/r/71133/diff/5/

Changes: https://reviews.apache.org/r/71133/diff/4-5/


Testing
---


Thanks,

Grant Henke



[jira] [Created] (HIVE-22043) Make LLAP's Yarn package dir on HDFS configurable

2019-07-24 Thread Adam Szita (JIRA)
Adam Szita created HIVE-22043:
-

 Summary: Make LLAP's Yarn package dir on HDFS configurable
 Key: HIVE-22043
 URL: https://issues.apache.org/jira/browse/HIVE-22043
 Project: Hive
  Issue Type: New Feature
Reporter: Adam Szita
Assignee: Adam Szita


Currently, at LLAP launch we use a hardwired HDFS directory to upload the libs 
and configs required by the LLAP daemons: the Hive user's home directory plus 
/.yarn.

I propose to make this configurable instead.
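A sketch of the proposed lookup, using a hypothetical property name and java.util.Properties in place of Hive's configuration classes:

```java
import java.util.Properties;

// Sketch of the proposed behavior: read the package dir from configuration,
// falling back to the current hardwired default of <hive user home>/.yarn.
public class LlapPackageDir {
    static final String KEY = "hive.llap.yarn.package.dir"; // illustrative name

    static String resolve(Properties conf, String hiveUserHome) {
        return conf.getProperty(KEY, hiveUserHome + "/.yarn");
    }
}
```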





[jira] [Created] (HIVE-22042) Set hive.exec.dynamic.partition.mode=nonstrict by default

2019-07-24 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-22042:
--

 Summary: Set hive.exec.dynamic.partition.mode=nonstrict by default
 Key: HIVE-22042
 URL: https://issues.apache.org/jira/browse/HIVE-22042
 Project: Hive
  Issue Type: Bug
Reporter: Jesus Camacho Rodriguez








Review Request 71156: Tez: Use a pre-parsed TezConfiguration from DagUtils

2019-07-24 Thread Attila Magyar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71156/
---

Review request for hive, Laszlo Bodor, Gopal V, and Jesús Camacho Rodríguez.


Bugs: HIVE-21828
https://issues.apache.org/jira/browse/HIVE-21828


Repository: hive-git


Description
---

The HS2 tez-site.xml does not change dynamically, so the XML-parsed contents of 
the config can be obtained statically and kept across sessions.

This allows replacing "new TezConfiguration()" with an HS2-local version 
instead.

The configuration object, however, has to reference the right resource file 
(i.e., the location of tez-site.xml) without reparsing it for each query.
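The parse-once-and-reuse idea can be sketched generically (this is not the actual TezConfigurationFactory added by the patch):

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Generic sketch: parse the static config file once per process and reuse
// the parsed result across sessions, instead of re-parsing per query.
public class ParsedConfigCache {
    final AtomicInteger parseCount = new AtomicInteger();
    private volatile Map<String, String> cached;

    Map<String, String> get(Supplier<Map<String, String>> parser) {
        Map<String, String> local = cached;
        if (local == null) {
            synchronized (this) {
                if (cached == null) {
                    parseCount.incrementAndGet(); // expensive XML parse happens once
                    cached = parser.get();
                }
                local = cached;
            }
        }
        return local;
    }
}
```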


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 440d761f03d 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java 3278dfea061 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezConfigurationFactory.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java dd7ccd4764d 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFRegExp.java 
3bf3cfd3d9e 
  ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestTezTask.java befeb4f2dd4 
  ql/src/test/org/apache/hive/testutils/HiveTestEnvSetup.java f872da02a3c 
  ql/src/test/queries/clientpositive/mm_loaddata.q 7e5787f2a65 


Diff: https://reviews.apache.org/r/71156/diff/1/


Testing
---

unittests


Thanks,

Attila Magyar



[jira] [Created] (HIVE-22041) HiveServer2 creates Delegation Token crc files and never deletes them

2019-07-24 Thread Alessandro Di Diego (JIRA)
Alessandro Di Diego created HIVE-22041:
--

 Summary: HiveServer2 creates Delegation Token crc files and never 
deletes them
 Key: HIVE-22041
 URL: https://issues.apache.org/jira/browse/HIVE-22041
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 1.1.0
Reporter: Alessandro Di Diego


It seems that in secure clusters, HiveServer2 creates an unbounded number of 
crc files related to the Delegation Token, e.g.:
{quote}
# ls -latr .hive_hadoop_delegation_token*.tmp.crc | more
-rw-r--r-- 1 hive hive 12 Jul 22 18:41 .hive_hadoop_delegation_token7336761305415229297.tmp.crc
-rw-r--r-- 1 hive hive 12 Jul 22 18:41 .hive_hadoop_delegation_token2098027444797756507.tmp.crc
-rw-r--r-- 1 hive hive 12 Jul 22 18:41 .hive_hadoop_delegation_token570615377517838929.tmp.crc
-rw-r--r-- 1 hive hive 12 Jul 22 18:41 .hive_hadoop_delegation_token2806157469169711507.tmp.crc
-rw-r--r-- 1 hive hive 12 Jul 22 18:41 .hive_hadoop_delegation_token4743780849236152782.tmp.crc
-rw-r--r-- 1 hive hive 12 Jul 22 18:41 .hive_hadoop_delegation_token4779276962989484605.tmp.crc
-rw-r--r-- 1 hive hive 12 Jul 22 18:41 .hive_hadoop_delegation_token614278562135964419.tmp.crc
-rw-r--r-- 1 hive hive 12 Jul 22 18:41 .hive_hadoop_delegation_token5163699580858054526.tmp.crc
-rw-r--r-- 1 hive hive 12 Jul 22 18:41 .hive_hadoop_delegation_token6764054090932425303.tmp.crc
-rw-r--r-- 1 hive hive 12 Jul 22 18:42 .hive_hadoop_delegation_token6389909203128084522.tmp.crc
-rw-r--r-- 1 hive hive 12 Jul 22 18:42 .hive_hadoop_delegation_token7810958545651446366.tmp.crc
[...]
{quote}
This can quickly fill the inode table of the filesystem (usually the one 
mounted on /tmp).

I've experienced it in Hive 1.1.0 (CDH 5.16.2), but I think the same issue is 
present in the current master branch, since here:

[https://github.com/apache/hive/blob/d1343a69e6be7a312b7c0bb2aeebefaa40535b65/ql/src/java/org/apache/hadoop/hive/ql/exec/SecureCmdDoAs.java#L71]

the crc file gets created, but only the token file gets deleted:

[https://github.com/apache/hive/blob/d1343a69e6be7a312b7c0bb2aeebefaa40535b65/ql/src/java/org/apache/hadoop/hive/ql/exec/SecureCmdDoAs.java#L80]

HIVE-13883 is closely related to this issue; I think the same logic could be 
used to fix it.

I also wonder whether the Credentials class should expose a delete method 
that also deletes the crc file...
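A sketch of the cleanup idea, assuming Hadoop's ChecksumFileSystem convention of naming the checksum sibling ".<name>.crc"; the helper names here are hypothetical, not the actual SecureCmdDoAs code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: when deleting a local token file written through a checksummed
// filesystem, also delete its ".<name>.crc" sibling in the same directory.
public class TokenCleanup {
    static Path crcSibling(Path tokenFile) {
        return tokenFile.resolveSibling("." + tokenFile.getFileName() + ".crc");
    }

    static void deleteWithCrc(Path tokenFile) throws IOException {
        Files.deleteIfExists(tokenFile);
        Files.deleteIfExists(crcSibling(tokenFile));
    }
}
```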





[jira] [Created] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-24 Thread xiepengjie (JIRA)
xiepengjie created HIVE-22040:
-

 Summary: Drop partition throws exception with 'Failed to delete 
parent: File does not exist' when the partition's parent path does not exists
 Key: HIVE-22040
 URL: https://issues.apache.org/jira/browse/HIVE-22040
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 1.2.1
Reporter: xiepengjie
Assignee: xiepengjie


I created a managed table with multiple partition columns. When I try to drop a 
partition whose parent path does not exist, the Metastore throws an exception 
with 'Failed to delete parent: File does not exist'. The partition's metadata in 
MySQL has been deleted, but the exception is still thrown, so the drop fails 
when connecting to HiveServer2 via JDBC from Java. I think this is very 
unfriendly and we should fix it.

Example:

-- First, create a managed table with multiple partition columns, and add a partition:
{code:java}
drop table if exists t1;

create table t1 (c1 int) partitioned by (year string, month string, day string);

alter table t1 add partition(year='2019', month='07', day='01');{code}
-- Second, delete the path of partition 'month=07':
{code:java}
hadoop fs -rm -r 
/user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
 

-- Third, when I try to drop the partition, the Metastore throws an exception 
with 'Failed to delete parent: File does not exist':

{code:java}
alter table t1 drop partition (year='2019', month='07', day='01');
{code}
The exception looks like this:

{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File does 
not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
(state=08S01,code=1)
 {code}
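The defensive behavior being asked for can be sketched with plain java.nio (the real fix would live in the Metastore's filesystem-cleanup code): treat an already-missing path as successfully deleted rather than an error:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: when cleaning up a partition directory, an already-missing path
// (or parent) means the data is gone, which is the desired end state, so
// report success instead of surfacing "File does not exist".
public class DropPartitionCleanup {
    static boolean deleteIfPresent(Path partitionDir) throws IOException {
        if (!Files.exists(partitionDir)) {
            return true; // nothing to do: already deleted out of band
        }
        Files.delete(partitionDir);
        return true;
    }
}
```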





[jira] [Created] (HIVE-22039) Query with CBO crashes HS2 in corner cases

2019-07-24 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HIVE-22039:
---

 Summary: Query with CBO crashes HS2 in corner cases 
 Key: HIVE-22039
 URL: https://issues.apache.org/jira/browse/HIVE-22039
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 2.3.4, 3.1.1
Reporter: Rajesh Balamohan


Here is a very simple repro for this case.

With CBO enabled, the query below crashes HS2: it runs into an infinite loop, 
creates a huge number of RexCalls, and finally OOMs.

This is observed in 2.x and 3.x.

With 4.x (master branch) it does not happen. Master has 
{{calcite-core-1.19.0.jar}}, whereas 3.x has {{calcite-core-1.16.0.jar}}. 

{noformat}

drop table if exists tableA;
drop table if exists tableB;

create table if not exists tableA(id int, reporting_date string) stored as orc;
create table if not exists tableB(id int, reporting_date string) partitioned by 
(datestr string) stored as orc;



explain with tableA_cte as (
select
id,
reporting_date
from tableA
  ),

tableA_cte_2 as (
select
0 as id,
reporting_date
from tableA
  ),

tableA_cte_5 as (
  select * from tableA_cte
  union 
  select * from tableA_cte_2  
),

tableB_cte_0 as (
select
id,
reporting_date
from tableB   
where reporting_date  = '2018-10-29'
  ),

tableB_cte_1 as (
select
0 as id,
reporting_date
from tableB  
where datestr = '2018-10-29'  
  ),


tableB_cte_4 as (
select * from tableB_cte_0
union 
select * from tableB_cte_1
  )

select
  a.id as id,
  b.reporting_date
from tableA_cte_5 a
join tableB_cte_4 b on (a.id = b.id and a.reporting_date = b.reporting_date);

{noformat}





[jira] [Created] (HIVE-22038) Fix memory related sideeffects of opening/closing sessions

2019-07-24 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created HIVE-22038:
---

 Summary: Fix memory related sideeffects of opening/closing sessions
 Key: HIVE-22038
 URL: https://issues.apache.org/jira/browse/HIVE-22038
 Project: Hive
  Issue Type: Bug
Reporter: Zoltan Haindrich
Assignee: Zoltan Haindrich








Re: Review Request 71135: HIVE-22031. HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-24 Thread Artem Velykorodnyi

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71135/
---

(Updated July 24, 2019, 11:18 a.m.)


Review request for hive, Jesús Camacho Rodríguez and Zoltan Haindrich.


Repository: hive-git


Description
---

Init commit


Diffs (updated)
-

  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRelDecorrelator.java
 dd5eb41d3f 
  ql/src/test/queries/clientpositive/subquery_exists.q 17d0a98426 
  ql/src/test/results/clientpositive/subquery_exists.q.out 9a65531e22 


Diff: https://reviews.apache.org/r/71135/diff/2/

Changes: https://reviews.apache.org/r/71135/diff/1-2/


Testing
---


Thanks,

Artem Velykorodnyi



[jira] [Created] (HIVE-22037) HS2 should log when shutting down due to OOM

2019-07-24 Thread Barnabas Maidics (JIRA)
Barnabas Maidics created HIVE-22037:
---

 Summary: HS2 should log when shutting down due to OOM
 Key: HIVE-22037
 URL: https://issues.apache.org/jira/browse/HIVE-22037
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Barnabas Maidics
Assignee: Barnabas Maidics


Currently, if HS2 runs into an OOM issue, ThreadPoolExecutorWithOomHook kicks in 
and runs the oomHook, which stops HS2. Everything happens without logging; in 
the log you can only see that HS2 stopped.

We should log the stack trace.
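A sketch of what the hook could record before stopping the server (names are illustrative, not the actual ThreadPoolExecutorWithOomHook code):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Sketch: build a log message containing the OOM error and its full stack
// trace, so the shutdown is explained in the log rather than silent.
public class LoggingOomHook {
    static String describe(Throwable oom) {
        StringWriter sw = new StringWriter();
        oom.printStackTrace(new PrintWriter(sw, true));
        return "Shutting down HiveServer2 due to: " + sw;
    }
}
```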





[jira] [Created] (HIVE-22036) HMS should identify events corresponding to replicated database for Atlas HMS hook

2019-07-24 Thread Ashutosh Bapat (JIRA)
Ashutosh Bapat created HIVE-22036:
-

 Summary: HMS should identify events corresponding to replicated 
database for Atlas HMS hook
 Key: HIVE-22036
 URL: https://issues.apache.org/jira/browse/HIVE-22036
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Ashutosh Bapat
Assignee: Ashutosh Bapat


The HMS Atlas hook allows Atlas to create/update/delete its metadata based on 
the corresponding events in HMS. But Atlas replication happens outside of, and 
before, Hive replication, so any events generated during Hive replication may 
modify Atlas data that has already been replicated, interfering with Atlas 
replication. Hence, provide an HMS interface which the hook can use to identify 
events caused by the Hive replication flow.
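One way such an interface could look, sketched with a hypothetical event-parameter flag (the actual mechanism chosen in HMS may differ):

```java
import java.util.Map;

// Sketch: the hook asks whether an HMS event was generated by the
// replication flow and, if so, skips it so it does not disturb
// already-replicated Atlas metadata. The flag name is hypothetical.
public class ReplicationEventFilter {
    static final String REPL_FLAG = "repl.event"; // illustrative parameter name

    static boolean isReplicationEvent(Map<String, String> eventParams) {
        return eventParams != null && "true".equals(eventParams.get(REPL_FLAG));
    }
}
```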


