[jira] [Commented] (HIVE-14933) include argparse with LLAP scripts to support antique Python versions

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617479#comment-15617479
 ] 

Hive QA commented on HIVE-14933:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835894/HIVE-14933.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10626 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=91)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1877/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1877/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1877/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12835894 - PreCommit-HIVE-Build

> include argparse with LLAP scripts to support antique Python versions
> -
>
> Key: HIVE-14933
> URL: https://issues.apache.org/jira/browse/HIVE-14933
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14933.01.patch, HIVE-14933.02.patch, 
> HIVE-14933.patch
>
>
> The module is a standalone file, and it's under a Python license that is 
> compatible with Apache. In the long term we should probably just move the 
> LlapServiceDriver code entirely to Java, as right now it's a combination of 
> part-py, part-java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14943) Base Implementation

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617416#comment-15617416
 ] 

Hive QA commented on HIVE-14943:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835888/HIVE-14943.4.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10671 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_bulk] 
(batchId=89)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[columnstats_part_coltype]
 (batchId=148)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testUpdateWithSubquery 
(batchId=268)
org.apache.hadoop.hive.ql.parse.TestMergeStatement.testNegative6 (batchId=251)
org.apache.hive.spark.client.TestSparkClient.testJobSubmission (batchId=272)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1876/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1876/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1876/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}


ATTACHMENT ID: 12835888 - PreCommit-HIVE-Build

> Base Implementation
> ---
>
> Key: HIVE-14943
> URL: https://issues.apache.org/jira/browse/HIVE-14943
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14943.2.patch, HIVE-14943.3.patch, 
> HIVE-14943.4.patch, HIVE-14943.5.patch, HIVE-14943.patch
>
>
> Create the 1st pass functional implementation of MERGE
> This should run e2e and produce correct results.  





[jira] [Updated] (HIVE-14943) Base Implementation

2016-10-28 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14943:
--
Attachment: HIVE-14943.5.patch

> Base Implementation
> ---
>
> Key: HIVE-14943
> URL: https://issues.apache.org/jira/browse/HIVE-14943
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14943.2.patch, HIVE-14943.3.patch, 
> HIVE-14943.4.patch, HIVE-14943.5.patch, HIVE-14943.patch
>
>
> Create the 1st pass functional implementation of MERGE
> This should run e2e and produce correct results.  





[jira] [Updated] (HIVE-14943) Base Implementation

2016-10-28 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14943:
--
Attachment: (was: HIVE-14943.patch)

> Base Implementation
> ---
>
> Key: HIVE-14943
> URL: https://issues.apache.org/jira/browse/HIVE-14943
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14943.2.patch, HIVE-14943.3.patch, 
> HIVE-14943.4.patch, HIVE-14943.5.patch, HIVE-14943.patch
>
>
> Create the 1st pass functional implementation of MERGE
> This should run e2e and produce correct results.  





[jira] [Updated] (HIVE-14943) Base Implementation

2016-10-28 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14943:
--
Attachment: (was: HIVE-14943.2.patch)

> Base Implementation
> ---
>
> Key: HIVE-14943
> URL: https://issues.apache.org/jira/browse/HIVE-14943
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14943.2.patch, HIVE-14943.3.patch, 
> HIVE-14943.4.patch, HIVE-14943.patch, HIVE-14943.patch
>
>
> Create the 1st pass functional implementation of MERGE
> This should run e2e and produce correct results.  





[jira] [Commented] (HIVE-14943) Base Implementation

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617362#comment-15617362
 ] 

Hive QA commented on HIVE-14943:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835888/HIVE-14943.4.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10671 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_bulk] 
(batchId=89)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[columnstats_part_coltype]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=90)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testUpdateWithSubquery 
(batchId=268)
org.apache.hadoop.hive.ql.parse.TestMergeStatement.testNegative6 (batchId=251)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1875/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1875/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1875/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}


ATTACHMENT ID: 12835888 - PreCommit-HIVE-Build

> Base Implementation
> ---
>
> Key: HIVE-14943
> URL: https://issues.apache.org/jira/browse/HIVE-14943
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14943.2.patch, HIVE-14943.2.patch, 
> HIVE-14943.3.patch, HIVE-14943.4.patch, HIVE-14943.patch, HIVE-14943.patch
>
>
> Create the 1st pass functional implementation of MERGE
> This should run e2e and produce correct results.  





[jira] [Commented] (HIVE-15096) hplsql registerUDF conflicts with pom.xml

2016-10-28 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617292#comment-15617292
 ] 

Fei Hui commented on HIVE-15096:


Could anyone review it?

> hplsql registerUDF conflicts with pom.xml
> -
>
> Key: HIVE-15096
> URL: https://issues.apache.org/jira/browse/HIVE-15096
> Project: Hive
>  Issue Type: Bug
>  Components: hpl/sql
>Affects Versions: 2.0.0, 2.1.0, 2.0.1
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 2.2.0
>
>
> in the hplsql code, the registerUDF code is
> sql.add("ADD JAR " + dir + "hplsql.jar");
> sql.add("ADD JAR " + dir + "antlr-runtime-4.5.jar");
> sql.add("ADD FILE " + dir + Conf.SITE_XML);
> but the pom configuration is
>   <parent>
>     <groupId>org.apache.hive</groupId>
>     <artifactId>hive</artifactId>
>     <version>2.2.0-SNAPSHOT</version>
>     <relativePath>../pom.xml</relativePath>
>   </parent>
>   <artifactId>hive-hplsql</artifactId>
>   <packaging>jar</packaging>
>   <name>Hive HPL/SQL</name>
>   <dependency>
>     <groupId>org.antlr</groupId>
>     <artifactId>antlr4-runtime</artifactId>
>     <version>4.5</version>
>   </dependency>
> when running hplsql, errors occur as below
>  Error while processing statement: 
> /opt/apps/apache-hive-2.0.0-bin/lib/hplsql.jar does not exist
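The mismatch can be illustrated by comparing the jar name registerUDF hardcodes against the name a Maven build derives from its coordinates. A minimal sketch, assuming hypothetical helper names (not from the Hive codebase):

```java
// Sketch of the mismatch: the code hardcodes "hplsql.jar", but a Maven build
// with artifactId hive-hplsql and version 2.2.0-SNAPSHOT produces
// hive-hplsql-2.2.0-SNAPSHOT.jar, so the ADD JAR statement points at a file
// that does not exist under lib/.
public class RegisterUdfSketch {
    // Builds the ADD JAR statement the way the code above does.
    static String addJarStatement(String dir, String jarName) {
        return "ADD JAR " + dir + jarName;
    }

    // Derives the jar name from the Maven coordinates, the way the build does.
    static String mavenJarName(String artifactId, String version) {
        return artifactId + "-" + version + ".jar";
    }

    public static void main(String[] args) {
        String dir = "/opt/apps/apache-hive-2.0.0-bin/lib/";
        // Hardcoded name: points at a jar the build never produced.
        System.out.println(addJarStatement(dir, "hplsql.jar"));
        // Name derived from the pom coordinates: matches the built artifact.
        System.out.println(addJarStatement(dir, mavenJarName("hive-hplsql", "2.2.0-SNAPSHOT")));
    }
}
```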





[jira] [Updated] (HIVE-15096) hplsql registerUDF conflicts with pom.xml

2016-10-28 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HIVE-15096:
---
Assignee: Fei Hui
Target Version/s: 2.2.0
  Status: Patch Available  (was: Open)

diff --git a/hplsql/src/main/java/org/apache/hive/hplsql/Exec.java 
b/hplsql/src/main/java/org/apache/hive/hplsql/Exec.java
index 6da4f5b..1e14361 100644
--- a/hplsql/src/main/java/org/apache/hive/hplsql/Exec.java
+++ b/hplsql/src/main/java/org/apache/hive/hplsql/Exec.java
@@ -615,9 +615,13 @@ public void registerUdf() {
 }
ArrayList<String> sql = new ArrayList<String>();
 String dir = Utils.getExecDir();
-sql.add("ADD JAR " + dir + "hplsql.jar");
-sql.add("ADD JAR " + dir + "antlr-runtime-4.5.jar");
-sql.add("ADD FILE " + dir + Conf.SITE_XML);
+sql.add("ADD JAR " + dir + "hive-hplsql-2.2.0-SNAPSHOT.jar");
+sql.add("ADD JAR " + dir + "antlr4-runtime-4.5.jar");
+if(!conf.getLocation().equals("")) {
+  sql.add("ADD FILE " + conf.getLocation());
+} else {
+  sql.add("ADD FILE " + dir + Conf.SITE_XML);
+}
 if (dotHplsqlrcExists) {
   sql.add("ADD FILE " + dir + Conf.DOT_HPLSQLRC);
 }


> hplsql registerUDF conflicts with pom.xml
> -
>
> Key: HIVE-15096
> URL: https://issues.apache.org/jira/browse/HIVE-15096
> Project: Hive
>  Issue Type: Bug
>  Components: hpl/sql
>Affects Versions: 2.0.1, 2.1.0, 2.0.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 2.2.0
>
>
> in the hplsql code, the registerUDF code is
> sql.add("ADD JAR " + dir + "hplsql.jar");
> sql.add("ADD JAR " + dir + "antlr-runtime-4.5.jar");
> sql.add("ADD FILE " + dir + Conf.SITE_XML);
> but the pom configuration is
>   <parent>
>     <groupId>org.apache.hive</groupId>
>     <artifactId>hive</artifactId>
>     <version>2.2.0-SNAPSHOT</version>
>     <relativePath>../pom.xml</relativePath>
>   </parent>
>   <artifactId>hive-hplsql</artifactId>
>   <packaging>jar</packaging>
>   <name>Hive HPL/SQL</name>
>   <dependency>
>     <groupId>org.antlr</groupId>
>     <artifactId>antlr4-runtime</artifactId>
>     <version>4.5</version>
>   </dependency>
> when running hplsql, errors occur as below
>  Error while processing statement: 
> /opt/apps/apache-hive-2.0.0-bin/lib/hplsql.jar does not exist





[jira] [Commented] (HIVE-15060) Remove the autoCommit warning from beeline

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617281#comment-15617281
 ] 

Hive QA commented on HIVE-15060:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835878/HIVE-15060.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10628 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=90)
org.apache.hive.spark.client.TestSparkClient.testJobSubmission (batchId=272)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1874/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1874/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1874/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}


ATTACHMENT ID: 12835878 - PreCommit-HIVE-Build

> Remove the autoCommit warning from beeline
> --
>
> Key: HIVE-15060
> URL: https://issues.apache.org/jira/browse/HIVE-15060
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-15060.1.patch, HIVE-15060.2.patch
>
>
> WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not 
> support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://ctr-e89-1466633100028-0275-01
> By default, this beeline setting is false, while Hive only supports 
> autoCommit=true for now. So this warning does not make sense and should be 
> removed.





[jira] [Commented] (HIVE-15054) Hive insertion query execution fails on Hive on Spark

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617224#comment-15617224
 ] 

Hive QA commented on HIVE-15054:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835872/HIVE-15054.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10626 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=91)
org.apache.hive.spark.client.TestSparkClient.testJobSubmission (batchId=272)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1873/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1873/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1873/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}


ATTACHMENT ID: 12835872 - PreCommit-HIVE-Build

> Hive insertion query execution fails on Hive on Spark
> -
>
> Key: HIVE-15054
> URL: https://issues.apache.org/jira/browse/HIVE-15054
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15054.1.patch, HIVE-15054.2.patch, 
> HIVE-15054.3.patch
>
>
> The query of {{insert overwrite table tbl1}} sometimes fails with the 
> following errors. It seems we are constructing the taskAttemptId from the 
> partitionId, which is not unique if there are multiple attempts.
> {noformat}
> java.lang.IllegalStateException: Hit error while closing operators - failing 
> tree: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename 
> output from: 
> hdfs://table1/.hive-staging_hive_2016-06-14_01-53-17_386_3231646810118049146-9/_task_tmp.-ext-10002/_tmp.002148_0
>  to: 
> hdfs://table1/.hive-staging_hive_2016-06-14_01-53-17_386_3231646810118049146-9/_tmp.-ext-10002/002148_0
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:202)
> at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
> at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:106)
> at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at 
> org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$15.apply(AsyncRDDActions.scala:120)
> {noformat}
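The rename collision in the stack trace above comes from two attempts of the same partition writing to the same temp path. A hedged sketch of the distinction (helper names are hypothetical, not the actual Hive/Spark code):

```java
// Sketch: if the temp file name is derived only from the partition id, two
// attempts of the same partition collide on the same path; folding the
// attempt number into the name makes each attempt's path unique.
public class TaskTmpPathSketch {
    // Problematic scheme: the suffix is fixed, so the name depends only on
    // the partition id.
    static String tmpPathFromPartition(String stagingDir, int partitionId) {
        return String.format("%s/_tmp.%06d_0", stagingDir, partitionId);
    }

    // Safer scheme: the attempt number participates in the name.
    static String tmpPathFromAttempt(String stagingDir, int partitionId, int attempt) {
        return String.format("%s/_tmp.%06d_%d", stagingDir, partitionId, attempt);
    }

    public static void main(String[] args) {
        String dir = "hdfs://table1/.hive-staging/_task_tmp.-ext-10002";
        // Two attempts of partition 2148 collide under the first scheme...
        System.out.println(tmpPathFromPartition(dir, 2148));
        System.out.println(tmpPathFromPartition(dir, 2148));
        // ...but get distinct paths when the attempt number is included.
        System.out.println(tmpPathFromAttempt(dir, 2148, 0));
        System.out.println(tmpPathFromAttempt(dir, 2148, 1));
    }
}
```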





[jira] [Commented] (HIVE-14990) run all tests for MM tables and fix the issues that are found

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617158#comment-15617158
 ] 

Hive QA commented on HIVE-14990:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835873/HIVE-14990.04.patch

{color:green}SUCCESS:{color} +1 due to 13 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1187 failed/errored test(s), 9965 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key2]
 (batchId=215)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key]
 (batchId=215)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_joins] 
(batchId=215)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=215)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=215)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=215)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_join] (batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] (batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_vectorization] 
(batchId=58)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_vectorization_project]
 (batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[allcolref_in_udf] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_concatenate_indexed_table]
 (batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_merge] (batchId=24)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_merge_2] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_merge_2_orc] 
(batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_merge_3] 
(batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_merge_stats] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_numbuckets_partitioned_table2_h23]
 (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_numbuckets_partitioned_table_h23]
 (batchId=61)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_change_col]
 (batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_coltype] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_cascade] 
(batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_partition_drop]
 (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_serde2] 
(batchId=24)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[analyze_table_null_partition]
 (batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_filter] 
(batchId=8)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_groupby] 
(batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_limit] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_part] 
(batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_select] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_table] 
(batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_union] 
(batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_1_sql_std] 
(batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_create_temp_table]
 (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_insert] 
(batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_load] 
(batchId=8)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_parts] 
(batchId=43)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_3] 
(batchId=50)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_5] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_6] 
(batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_8] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_9] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join24] (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_reordering_values]
 (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_10] 
(batchId=65)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_11] 
(batchId=77)

[jira] [Commented] (HIVE-15068) Run ClearDanglingScratchDir periodically inside HS2

2016-10-28 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617146#comment-15617146
 ] 

Thejas M Nair commented on HIVE-15068:
--

[~daijy] Can you please add a reviewboard link or pull request ?


> Run ClearDanglingScratchDir periodically inside HS2
> ---
>
> Key: HIVE-15068
> URL: https://issues.apache.org/jira/browse/HIVE-15068
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Attachments: HIVE-15068.1.patch
>
>
> In HIVE-13429, we introduced a tool which clears dangling scratch 
> directories. In this ticket, we want to invoke the tool automatically on a 
> Hive cluster. Options are:
> 1. A cron job, which would involve manual cron job setup.
> 2. As a metastore thread. However, it is possible we will run the metastore 
> without hdfs in the future (e.g., managing s3 files). ClearDanglingScratchDir 
> needs support which only exists in hdfs, so it won't work if that scenario 
> happens.
> 3. As an HS2 thread. The downside is that if no HS2 is running, the tool will 
> not run automatically. But we expect HS2 will be a required component down 
> the road.
> Here I choose approach 3 in the implementation.
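Approach 3 amounts to a periodic background task owned by HS2. A minimal, hypothetical sketch of that wiring with a ScheduledExecutorService (thread name and interval are illustrative, not Hive's actual configuration):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

// Sketch of approach 3: HS2 owns a daemon scheduler that periodically runs
// the scratch-dir cleanup. The Runnable below stands in for the real
// ClearDanglingScratchDir tool.
public class ScratchDirCleanerSketch {
    // Daemon threads so the cleaner never blocks an HS2 shutdown.
    static ThreadFactory cleanerThreadFactory() {
        return r -> {
            Thread t = new Thread(r, "cleardanglingscratchdir");
            t.setDaemon(true);
            return t;
        };
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(cleanerThreadFactory());
        // Run once immediately, then at a fixed interval for the life of HS2.
        scheduler.scheduleAtFixedRate(
            () -> System.out.println("scanning scratch dirs for dangling entries..."),
            0, 1, TimeUnit.HOURS);
        Thread.sleep(100);        // let the first run happen (demo only)
        scheduler.shutdownNow();  // HS2 stop() would do this
    }
}
```

Running the cleanup on a daemon thread is the key design point: if HS2 exits, the scheduler dies with it instead of pinning the JVM.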





[jira] [Updated] (HIVE-12813) LLAP: issues in setup, shutdown

2016-10-28 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-12813:
--
Labels:   (was: TODOC2.0)

> LLAP: issues in setup, shutdown
> ---
>
> Key: HIVE-12813
> URL: https://issues.apache.org/jira/browse/HIVE-12813
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.0.0
>
> Attachments: HIVE-12813.01.patch, HIVE-12813.patch
>
>
> 1) Due to YARN-4562, we should package ssl-server.xml if available; SSL 
> settings are not read from LLAP configs.
> 2) The bean removal can fail during shutdown.
> 3) LlapWebServices creates its own config object but uses the one provided by 
> AbstractService instead.
> 4) Setting name for ACL is used by Hadoop to generate the setting name for 
> the host list, which happens to collide with the existing LLAP host list 
> setting name, resulting in all hosts being prevented from connecting to 
> daemon protocol.





[jira] [Commented] (HIVE-12813) LLAP: issues in setup, shutdown

2016-10-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617140#comment-15617140
 ] 

Lefty Leverenz commented on HIVE-12813:
---

[~sladymon] documented *hive.llap.daemon.acl* and *hive.llap.management.acl* in 
the wiki (thanks!) so I'm removing the TODOC2.0 label.

* [hive.llap.daemon.acl | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.daemon.acl]
* [hive.llap.management.acl | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.management.acl]

> LLAP: issues in setup, shutdown
> ---
>
> Key: HIVE-12813
> URL: https://issues.apache.org/jira/browse/HIVE-12813
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.0.0
>
> Attachments: HIVE-12813.01.patch, HIVE-12813.patch
>
>
> 1) Due to YARN-4562, we should package ssl-server.xml if available; SSL 
> settings are not read from LLAP configs.
> 2) The bean removal can fail during shutdown.
> 3) LlapWebServices creates its own config object but uses the one provided by 
> AbstractService instead.
> 4) Setting name for ACL is used by Hadoop to generate the setting name for 
> the host list, which happens to collide with the existing LLAP host list 
> setting name, resulting in all hosts being prevented from connecting to 
> daemon protocol.





[jira] [Updated] (HIVE-12341) LLAP: add security to daemon protocol endpoint (excluding shuffle)

2016-10-28 Thread Shannon Ladymon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shannon Ladymon updated HIVE-12341:
---
Labels:   (was: TODOC2.0)

> LLAP: add security to daemon protocol endpoint (excluding shuffle)
> --
>
> Key: HIVE-12341
> URL: https://issues.apache.org/jira/browse/HIVE-12341
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.0.0
>
> Attachments: HIVE-12341.01.patch, HIVE-12341.02.patch, 
> HIVE-12341.03.patch, HIVE-12341.03.patch, HIVE-12341.04.patch, 
> HIVE-12341.05.patch, HIVE-12341.06.patch, HIVE-12341.07.patch, 
> HIVE-12341.08.patch, HIVE-12341.09.patch, HIVE-12341.patch
>
>






[jira] [Commented] (HIVE-12341) LLAP: add security to daemon protocol endpoint (excluding shuffle)

2016-10-28 Thread Shannon Ladymon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617135#comment-15617135
 ] 

Shannon Ladymon commented on HIVE-12341:


Doc Done:
* [Configuration Properties - hive.llap.daemon.service.principal | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.daemon.service.principal]
* [Configuration Properties - hive.llap.daemon.keytab.file | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.daemon.keytab.file]
* [Configuration Properties - hive.llap.zk.sm.principal | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.zk.sm.principal]
* [Configuration Properties - hive.llap.zk.sm.keytab.file | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.zk.sm.keytab.file]
* [Configuration Properties - hive.llap.zk.sm.connectionString | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.zk.sm.connectionString]
* [Configuration Properties - hive.llap.daemon.acl | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.daemon.acl]
* [Configuration Properties - hive.llap.management.acl | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.management.acl]
* [Configuration Properties - hive.llap.daemon.delegation.token.lifetime | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.daemon.delegation.token.lifetime]
 * [Configuration Properties - hive.llap.management.rpc.port | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.management.rpc.port]

TODOC label removed.

> LLAP: add security to daemon protocol endpoint (excluding shuffle)
> --
>
> Key: HIVE-12341
> URL: https://issues.apache.org/jira/browse/HIVE-12341
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.0.0
>
> Attachments: HIVE-12341.01.patch, HIVE-12341.02.patch, 
> HIVE-12341.03.patch, HIVE-12341.03.patch, HIVE-12341.04.patch, 
> HIVE-12341.05.patch, HIVE-12341.06.patch, HIVE-12341.07.patch, 
> HIVE-12341.08.patch, HIVE-12341.09.patch, HIVE-12341.patch
>
>






[jira] [Commented] (HIVE-15061) Metastore types are sometimes case sensitive

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15617092#comment-15617092
 ] 

Hive QA commented on HIVE-15061:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835876/HIVE-15061.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10626 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals
 (batchId=229)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1871/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1871/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1871/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}


ATTACHMENT ID: 12835876 - PreCommit-HIVE-Build

> Metastore types are sometimes case sensitive
> 
>
> Key: HIVE-15061
> URL: https://issues.apache.org/jira/browse/HIVE-15061
> Project: Hive
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Chaoyu Tang
> Attachments: HIVE-15061.1.patch, HIVE-15061.1.patch, HIVE-15061.patch
>
>
> Impala recently encountered an issue with the metastore 
> ([IMPALA-4260|https://issues.cloudera.org/browse/IMPALA-4260]) where column 
> stats would get dropped when adding a column to a table.
> The reason seems to be that Hive does a case sensitive check on the column 
> stats types during an "alter table" and expects the types to be all lower 
> case. This case sensitive check doesn't appear to happen when the stats are 
> set in the first place.
> We're solving this on the Impala end by storing types in the metastore as all 
> lower case, but Hive's behavior here is very confusing. It should either 
> always be case sensitive, so that you can't create column stats with types 
> that Hive considers invalid, or it should never be case sensitive.
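The normalization described above (store and compare metastore type names in 
lower case) can be sketched as follows; `TypeCompat` and its methods are 
hypothetical names for illustration, not Hive's actual API:

```java
import java.util.Locale;

public class TypeCompat {
    // Normalize a metastore type name the way Impala now does: trimmed,
    // all lower case, so "BIGINT" and "bigint" compare equal.
    public static String normalize(String type) {
        return type == null ? null : type.trim().toLowerCase(Locale.ROOT);
    }

    // Case-insensitive equality check for two type names.
    public static boolean typesMatch(String a, String b) {
        String na = normalize(a);
        return na != null && na.equals(normalize(b));
    }
}
```

Applying the same normalization on both write and check paths would make the 
behavior consistent regardless of which side is case sensitive today.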





[jira] [Updated] (HIVE-12470) Allow splits to provide custom consistent locations, instead of being tied to data locality

2016-10-28 Thread Shannon Ladymon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shannon Ladymon updated HIVE-12470:
---
Labels:   (was: TODOC2.0)

> Allow splits to provide custom consistent locations, instead of being tied to 
> data locality
> ---
>
> Key: HIVE-12470
> URL: https://issues.apache.org/jira/browse/HIVE-12470
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.0.0
>
> Attachments: HIVE-12470.1.txt, HIVE-12470.1.wip.txt, HIVE-12470.2.txt
>
>
> LLAP instances may not run on the same nodes as HDFS, or may run on a subset 
> of the cluster.
> Using split locations based on FileSystem locality is not very useful in such 
> cases - since that guarantees not getting any locality.
> Allow a split to map to a specific location - so that there's a chance of 
> getting cache locality across different queries.
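The idea of consistent custom locations can be sketched by hashing a stable 
split identifier onto the set of LLAP hosts, so the same data maps to the 
same daemon across queries; the names here are illustrative, not Hive's 
actual implementation:

```java
import java.util.List;

public class ConsistentSplitLocations {
    // Map a split to one of the known LLAP daemon hosts by hashing a stable
    // split identifier (e.g. its file path). The same split always maps to
    // the same host, giving a chance of cache locality across queries, even
    // when LLAP does not run on the HDFS nodes holding the data.
    public static String locationFor(String splitPath, List<String> llapHosts) {
        // floorMod keeps the index non-negative for negative hash codes.
        int idx = Math.floorMod(splitPath.hashCode(), llapHosts.size());
        return llapHosts.get(idx);
    }
}
```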





[jira] [Commented] (HIVE-12470) Allow splits to provide custom consistent locations, instead of being tied to data locality

2016-10-28 Thread Shannon Ladymon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617086#comment-15617086
 ] 

Shannon Ladymon commented on HIVE-12470:


And TODOC label removed.

> Allow splits to provide custom consistent locations, instead of being tied to 
> data locality
> ---
>
> Key: HIVE-12470
> URL: https://issues.apache.org/jira/browse/HIVE-12470
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.0.0
>
> Attachments: HIVE-12470.1.txt, HIVE-12470.1.wip.txt, HIVE-12470.2.txt
>
>
> LLAP instances may not run on the same nodes as HDFS, or may run on a subset 
> of the cluster.
> Using split locations based on FileSystem locality is not very useful in such 
> cases - since that guarantees not getting any locality.
> Allow a split to map to a specific location - so that there's a chance of 
> getting cache locality across different queries.





[jira] [Commented] (HIVE-12470) Allow splits to provide custom consistent locations, instead of being tied to data locality

2016-10-28 Thread Shannon Ladymon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617084#comment-15617084
 ] 

Shannon Ladymon commented on HIVE-12470:


Doc Done:
* [Configuration Properties - hive.llap.client.consistent.splits | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.client.consistent.splits]
* [Configuration Properties - hive.llap.daemon.service.hosts | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.daemon.service.hosts]

> Allow splits to provide custom consistent locations, instead of being tied to 
> data locality
> ---
>
> Key: HIVE-12470
> URL: https://issues.apache.org/jira/browse/HIVE-12470
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>  Labels: TODOC2.0
> Fix For: 2.0.0
>
> Attachments: HIVE-12470.1.txt, HIVE-12470.1.wip.txt, HIVE-12470.2.txt
>
>
> LLAP instances may not run on the same nodes as HDFS, or may run on a subset 
> of the cluster.
> Using split locations based on FileSystem locality is not very useful in such 
> cases - since that guarantees not getting any locality.
> Allow a split to map to a specific location - so that there's a chance of 
> getting cache locality across different queries.





[jira] [Updated] (HIVE-15095) TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails

2016-10-28 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15095:
---
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

> TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails
> --
>
> Key: HIVE-15095
> URL: https://issues.apache.org/jira/browse/HIVE-15095
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Jesus Camacho Rodriguez
> Fix For: 2.2.0
>
> Attachments: HIVE-15095.patch
>
>
> {noformat}
> junit.framework.AssertionFailedError: 
> expected:<[[2009-12-31T16:00:00.000-08:00/2010-04-01T23:00:00.000-07:00], 
> [2010-04-01T23:00:00.000-07:00/2010-07-02T05:00:00.000-07:00], 
> [2010-07-02T05:00:00.000-07:00/2010-10-01T11:00:00.000-07:00], 
> [2010-10-01T11:00:00.000-07:00/2010-12-31T16:00:00.000-08:00]]> but 
> was:<[[2010-01-01T00:00:00.000Z/2010-04-02T06:00:00.000Z], 
> [2010-04-02T06:00:00.000Z/2010-07-02T12:00:00.000Z], 
> [2010-07-02T12:00:00.000Z/2010-10-01T18:00:00.000Z], 
> [2010-10-01T18:00:00.000Z/2011-01-01T00:00:00.000Z]]>  at 
> org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals(TestHiveDruidQueryBasedInputFormat.java:54)
> {noformat}
> Seems offset by 7-8 hours.
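The expected/actual mismatch above is the classic default-time-zone 
dependence: the same instants render differently in UTC and in a fixed 
-08:00 offset, which is exactly the 7-8 hour shift observed. A minimal 
java.time illustration (not the test's actual code):

```java
import java.time.Instant;
import java.time.ZoneOffset;

public class IntervalZones {
    // Render an ISO instant as a local date-time in UTC.
    public static String inUtc(String iso) {
        return Instant.parse(iso).atZone(ZoneOffset.UTC)
                .toLocalDateTime().toString();
    }

    // Render the same instant in a fixed -08:00 offset: the wall-clock
    // value shifts back eight hours, which is why the interval boundaries
    // differ when the test runs with a Pacific default time zone.
    public static String inPacific(String iso) {
        return Instant.parse(iso).atZone(ZoneOffset.ofHours(-8))
                .toLocalDateTime().toString();
    }
}
```

Pinning the test (or the formatting code) to UTC removes the dependence on 
the machine's default zone.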





[jira] [Commented] (HIVE-15095) TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails

2016-10-28 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617044#comment-15617044
 ] 

Jesus Camacho Rodriguez commented on HIVE-15095:


Thanks [~sershe], I forgot to add a file to the HIVE-15046 addendum; I am 
pushing the fix since it only contains test changes.

> TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails
> --
>
> Key: HIVE-15095
> URL: https://issues.apache.org/jira/browse/HIVE-15095
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-15095.patch
>
>
> {noformat}
> junit.framework.AssertionFailedError: 
> expected:<[[2009-12-31T16:00:00.000-08:00/2010-04-01T23:00:00.000-07:00], 
> [2010-04-01T23:00:00.000-07:00/2010-07-02T05:00:00.000-07:00], 
> [2010-07-02T05:00:00.000-07:00/2010-10-01T11:00:00.000-07:00], 
> [2010-10-01T11:00:00.000-07:00/2010-12-31T16:00:00.000-08:00]]> but 
> was:<[[2010-01-01T00:00:00.000Z/2010-04-02T06:00:00.000Z], 
> [2010-04-02T06:00:00.000Z/2010-07-02T12:00:00.000Z], 
> [2010-07-02T12:00:00.000Z/2010-10-01T18:00:00.000Z], 
> [2010-10-01T18:00:00.000Z/2011-01-01T00:00:00.000Z]]>  at 
> org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals(TestHiveDruidQueryBasedInputFormat.java:54)
> {noformat}
> Seems offset by 7-8 hours.





[jira] [Commented] (HIVE-15095) TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails

2016-10-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617039#comment-15617039
 ] 

Sergey Shelukhin commented on HIVE-15095:
-

+1

> TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails
> --
>
> Key: HIVE-15095
> URL: https://issues.apache.org/jira/browse/HIVE-15095
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-15095.patch
>
>
> {noformat}
> junit.framework.AssertionFailedError: 
> expected:<[[2009-12-31T16:00:00.000-08:00/2010-04-01T23:00:00.000-07:00], 
> [2010-04-01T23:00:00.000-07:00/2010-07-02T05:00:00.000-07:00], 
> [2010-07-02T05:00:00.000-07:00/2010-10-01T11:00:00.000-07:00], 
> [2010-10-01T11:00:00.000-07:00/2010-12-31T16:00:00.000-08:00]]> but 
> was:<[[2010-01-01T00:00:00.000Z/2010-04-02T06:00:00.000Z], 
> [2010-04-02T06:00:00.000Z/2010-07-02T12:00:00.000Z], 
> [2010-07-02T12:00:00.000Z/2010-10-01T18:00:00.000Z], 
> [2010-10-01T18:00:00.000Z/2011-01-01T00:00:00.000Z]]>  at 
> org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals(TestHiveDruidQueryBasedInputFormat.java:54)
> {noformat}
> Seems offset by 7-8 hours.





[jira] [Commented] (HIVE-9635) LLAP: I'm the decider

2016-10-28 Thread Shannon Ladymon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617037#comment-15617037
 ] 

Shannon Ladymon commented on HIVE-9635:
---

Doc Note:

This patch added the configuration properties *hive.llap.auto.enforce.tree*, 
*hive.llap.auto.enforce.vectorized*, *hive.llap.auto.enforce.stats*, 
*hive.llap.auto.max.input.size*, *hive.llap.auto.max.output.size*, and 
*hive.llap.execution.mode* to the wiki:
* [Configuration Properties - hive.llap.auto.enforce.tree | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.auto.enforce.tree]
* [Configuration Properties - hive.llap.auto.enforce.vectorized | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.auto.enforce.vectorized]
* [Configuration Properties - hive.llap.auto.enforce.stats | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.auto.enforce.stats]
* [Configuration Properties - hive.llap.auto.max.input.size | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.auto.max.input.size]
* [Configuration Properties - hive.llap.auto.max.output.size | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.auto.max.output.size]
* [Configuration Properties - hive.llap.execution.mode | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.execution.mode]


> LLAP: I'm the decider
> -
>
> Key: HIVE-9635
> URL: https://issues.apache.org/jira/browse/HIVE-9635
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-9635.1.patch, HIVE-9635.2.patch
>
>
> https://www.youtube.com/watch?v=r8VbzrZ9yHQ
> Physical optimizer to choose what to run inside/outside LLAP. Tests first 
> whether user code has to be shipped, then whether the specific query 
> fragment is suitable to run.
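The two-step decision described above could look roughly like the following; 
this is an illustrative sketch with stand-in names, not the actual Hive 
decider class:

```java
public class LlapDeciderSketch {
    enum Mode { NONE, ALL, AUTO }

    // Decide whether a query fragment should run inside LLAP. In AUTO mode,
    // first test whether user code (UDFs, custom formats) would have to be
    // shipped, then whether the fragment itself is suitable (e.g. vectorized,
    // within size limits).
    public static boolean runInLlap(Mode mode, boolean hasUserCode,
            boolean fragmentSuitable) {
        if (mode == Mode.NONE) return false;
        if (mode == Mode.ALL) return true;
        return !hasUserCode && fragmentSuitable; // AUTO
    }
}
```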





[jira] [Updated] (HIVE-15095) TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails

2016-10-28 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15095:
---
Attachment: HIVE-15095.patch

> TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails
> --
>
> Key: HIVE-15095
> URL: https://issues.apache.org/jira/browse/HIVE-15095
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-15095.patch
>
>
> {noformat}
> junit.framework.AssertionFailedError: 
> expected:<[[2009-12-31T16:00:00.000-08:00/2010-04-01T23:00:00.000-07:00], 
> [2010-04-01T23:00:00.000-07:00/2010-07-02T05:00:00.000-07:00], 
> [2010-07-02T05:00:00.000-07:00/2010-10-01T11:00:00.000-07:00], 
> [2010-10-01T11:00:00.000-07:00/2010-12-31T16:00:00.000-08:00]]> but 
> was:<[[2010-01-01T00:00:00.000Z/2010-04-02T06:00:00.000Z], 
> [2010-04-02T06:00:00.000Z/2010-07-02T12:00:00.000Z], 
> [2010-07-02T12:00:00.000Z/2010-10-01T18:00:00.000Z], 
> [2010-10-01T18:00:00.000Z/2011-01-01T00:00:00.000Z]]>  at 
> org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals(TestHiveDruidQueryBasedInputFormat.java:54)
> {noformat}
> Seems offset by 7-8 hours.





[jira] [Work started] (HIVE-15095) TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails

2016-10-28 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-15095 started by Jesus Camacho Rodriguez.
--
> TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails
> --
>
> Key: HIVE-15095
> URL: https://issues.apache.org/jira/browse/HIVE-15095
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Jesus Camacho Rodriguez
>
> {noformat}
> junit.framework.AssertionFailedError: 
> expected:<[[2009-12-31T16:00:00.000-08:00/2010-04-01T23:00:00.000-07:00], 
> [2010-04-01T23:00:00.000-07:00/2010-07-02T05:00:00.000-07:00], 
> [2010-07-02T05:00:00.000-07:00/2010-10-01T11:00:00.000-07:00], 
> [2010-10-01T11:00:00.000-07:00/2010-12-31T16:00:00.000-08:00]]> but 
> was:<[[2010-01-01T00:00:00.000Z/2010-04-02T06:00:00.000Z], 
> [2010-04-02T06:00:00.000Z/2010-07-02T12:00:00.000Z], 
> [2010-07-02T12:00:00.000Z/2010-10-01T18:00:00.000Z], 
> [2010-10-01T18:00:00.000Z/2011-01-01T00:00:00.000Z]]>  at 
> org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals(TestHiveDruidQueryBasedInputFormat.java:54)
> {noformat}
> Seems offset by 7-8 hours.





[jira] [Updated] (HIVE-15095) TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails

2016-10-28 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15095:
---
Status: Patch Available  (was: In Progress)

> TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails
> --
>
> Key: HIVE-15095
> URL: https://issues.apache.org/jira/browse/HIVE-15095
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Jesus Camacho Rodriguez
>
> {noformat}
> junit.framework.AssertionFailedError: 
> expected:<[[2009-12-31T16:00:00.000-08:00/2010-04-01T23:00:00.000-07:00], 
> [2010-04-01T23:00:00.000-07:00/2010-07-02T05:00:00.000-07:00], 
> [2010-07-02T05:00:00.000-07:00/2010-10-01T11:00:00.000-07:00], 
> [2010-10-01T11:00:00.000-07:00/2010-12-31T16:00:00.000-08:00]]> but 
> was:<[[2010-01-01T00:00:00.000Z/2010-04-02T06:00:00.000Z], 
> [2010-04-02T06:00:00.000Z/2010-07-02T12:00:00.000Z], 
> [2010-07-02T12:00:00.000Z/2010-10-01T18:00:00.000Z], 
> [2010-10-01T18:00:00.000Z/2011-01-01T00:00:00.000Z]]>  at 
> org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals(TestHiveDruidQueryBasedInputFormat.java:54)
> {noformat}
> Seems offset by 7-8 hours.





[jira] [Assigned] (HIVE-15095) TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails

2016-10-28 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-15095:
--

Assignee: Jesus Camacho Rodriguez

> TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails
> --
>
> Key: HIVE-15095
> URL: https://issues.apache.org/jira/browse/HIVE-15095
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Jesus Camacho Rodriguez
>
> {noformat}
> junit.framework.AssertionFailedError: 
> expected:<[[2009-12-31T16:00:00.000-08:00/2010-04-01T23:00:00.000-07:00], 
> [2010-04-01T23:00:00.000-07:00/2010-07-02T05:00:00.000-07:00], 
> [2010-07-02T05:00:00.000-07:00/2010-10-01T11:00:00.000-07:00], 
> [2010-10-01T11:00:00.000-07:00/2010-12-31T16:00:00.000-08:00]]> but 
> was:<[[2010-01-01T00:00:00.000Z/2010-04-02T06:00:00.000Z], 
> [2010-04-02T06:00:00.000Z/2010-07-02T12:00:00.000Z], 
> [2010-07-02T12:00:00.000Z/2010-10-01T18:00:00.000Z], 
> [2010-10-01T18:00:00.000Z/2011-01-01T00:00:00.000Z]]>  at 
> org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals(TestHiveDruidQueryBasedInputFormat.java:54)
> {noformat}
> Seems offset by 7-8 hours.





[jira] [Commented] (HIVE-14884) Test result cleanup before 2.1.1 release

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617005#comment-15617005
 ] 

Hive QA commented on HIVE-14884:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835678/HIVE-14884.03-branch-2.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 74 failed/errored test(s), 10462 tests 
executed
*Failed tests:*
{noformat}
TestJdbcWithMiniHA - did not produce a TEST-*.xml file (likely timed out) 
(batchId=494)
TestJdbcWithMiniMr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=491)
TestMsgBusConnection - did not produce a TEST-*.xml file (likely timed out) 
(batchId=362)
TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file (likely 
timed out) (batchId=484)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit 
(batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats 
(batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_orig_table_use_metadata
 (batchId=109)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 
(batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_ppd_schema_evol_3a 
(batchId=97)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_order_null (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_part
 (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_table
 (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_part
 (batchId=142)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_table
 (batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_part
 (batchId=84)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_table
 (batchId=65)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_part
 (batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_table
 (batchId=126)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part
 (batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_table
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_part
 (batchId=136)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_table
 (batchId=58)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part
 (batchId=112)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vec_mapwork_table
 (batchId=43)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_part
 (batchId=6)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_table
 (batchId=132)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_null_optimizer 
(batchId=154)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_fast_stats 
(batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_between_in 
(batchId=99)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_ppd_basic 
(batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_part
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_table
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_part
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_table
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_part
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_table
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_part
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_table
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_table
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_part
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_table
 (batchId=521)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part
 (batchId=521)

[jira] [Commented] (HIVE-15060) Remove the autoCommit warning from beeline

2016-10-28 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616975#comment-15616975
 ] 

Thejas M Nair commented on HIVE-15060:
--

+1 Pending tests


> Remove the autoCommit warning from beeline
> --
>
> Key: HIVE-15060
> URL: https://issues.apache.org/jira/browse/HIVE-15060
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-15060.1.patch, HIVE-15060.2.patch
>
>
> WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not 
> support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://ctr-e89-1466633100028-0275-01
> By default, this beeline setting is false, while Hive only supports 
> autoCommit=true for now. So this warning does not make sense and should be 
> removed.
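One way to read the proposed change: since Hive can only ever run with 
autoCommit=true, a request for the (default) unsupported value is silently 
ignored rather than warned about on every connection. An illustrative sketch 
of that policy, not the actual HiveConnection code:

```java
public class AutoCommitPolicy {
    private boolean autoCommit = true; // the only mode Hive supports

    // Returns true if a warning should be emitted. A request for false is
    // beeline's default, so warning on it produces noise on every connect;
    // the request is ignored silently instead.
    public boolean setAutoCommit(boolean requested) {
        if (requested) {
            autoCommit = true; // supported, nothing to warn about
        }
        return false; // never warn under the proposed behavior
    }

    public boolean isAutoCommit() {
        return autoCommit;
    }
}
```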





[jira] [Comment Edited] (HIVE-15062) create backward compat checking for metastore APIs

2016-10-28 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616849#comment-15616849
 ] 

Sushanth Sowmyan edited comment on HIVE-15062 at 10/28/16 10:53 PM:


I like the fundamental idea, and I think this makes sense. I would suggest a 
couple of things, though.

a) checkClient is not really handling "checking" client compatibility. Instead, 
it is acting as a gating assert for entering sections of code where we make an 
assumption that a feature exists. I think an API style that did the following 
would be more usable:

{code}
private boolean checkClientCompatible(ClientCapabilities capabilities,
    ClientCapability value, String what, String call) {
  // return true if compatible, false if not. Simply test, no more.
  ...
}

private void assertClientCompatible(ClientCapabilities capabilities,
    ClientCapability value, String what, String call) throws MetaException {
  if (!checkClientCompatible(capabilities, value, what, call)) {
    // throw exception similar to existing checkClient
  }
}
{code}

Then, this allows us to assert that something should be compatible if it 
requires it, and allows us to write backward compatible code if possible by 
checking capability.

b) Also, I would advise cautious use of the capabilities notion, since the 
metastore API is a public API (as is the HCatClient API that sits on top of 
it), and thus all manner of tools use it, not just Hive. For example, in the 
case you describe, of ACID tables not being visible, or worse, erroring out 
if a user's capabilities are null, this will break other existing tools 
written against the metastore API, such as tools that do a UI display of the 
warehouse (data explorers, etc.), or existing warehouse management tools that 
set table properties for expiry/cleanup. They can be updated (with 
recompiling) by adding capabilities to each explicitly, but that then becomes 
a moving target for them unless we define something like ALL, at which point 
everyone starts using ALL and the point gets lost. On the other hand, 
warehouses that have tools like that could also disable the compatibility 
check altogether, and it is a good thing you include that option, since it 
allows them to continue unbroken. But that then causes issues if we write 
code that necessarily depends on the compatibility check acting as a gate.

Thus, while this is a useful capability, the possibility of accidental misuse 
breaking backward compatibility is high. I would still like this to be 
introduced, but maybe with more documented warnings about why one must be 
careful when implementing it.


was (Author: sushanth):
I like the fundamental idea, and I think this makes sense. I would suggest a 
couple of things, though.

a) checkClient is not really handling "checking" client compatibility. Instead, 
it is acting as a gating assert for entering sections of code where we make an 
assumption that a feature exists. I think an API style that did the following 
would be more usable:

{code}
private boolean checkClientCompatible(ClientCapabilities capabilities,
    ClientCapability value, String what, String call) {
  // return true if compatible, false if not. Simply test, no more.
  ...
}

private void assertClientCompatible(ClientCapabilities capabilities,
    ClientCapability value, String what, String call) throws MetaException {
  if (!checkClientCompatible(capabilities, value, what, call)) {
    // throw exception similar to existing checkClient
  }
}
{code}

Then, this allows us to assert that something should be compatible if it 
requires it, and allows us to write backward compatible code if possible by 
checking capability.

Also, I would advise cautious use of the capabilities notion, since the 
metastore API is a public API (as is the HCatClient API that sits on top of 
it), and thus all manner of tools use it, not just Hive. For example, in the 
case you describe, of ACID tables not being visible, or worse, erroring out 
if a user's capabilities are null, this will break other tools written 
against the metastore API, such as tools that do a UI display of the 
warehouse (data explorers, etc.), or existing warehouse management tools that 
set table properties for expiry/cleanup.

Thus, while this is a useful capability, the possibility of accidental misuse 
breaking backward compatibility is high. I would still like this to be 
introduced, but maybe with more documented warnings about why one must be 
careful when implementing it.

> create backward compat checking for metastore APIs
> --
>
> Key: HIVE-15062
> URL: 

[jira] [Commented] (HIVE-15062) create backward compat checking for metastore APIs

2016-10-28 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616849#comment-15616849
 ] 

Sushanth Sowmyan commented on HIVE-15062:
-

I like the fundamental idea, and I think this makes sense. I would suggest a 
couple of things, though.

a) checkClient is not really handling "checking" client compatibility. Instead, 
it is acting as a gating assert for entering sections of code where we make an 
assumption that a feature exists. I think an API style that did the following 
would be more usable:

{code}
private boolean checkClientCompatible(ClientCapabilities capabilities,
    ClientCapability value, String what, String call) {
  // return true if compatible, false if not. Simply test, no more.
  ...
}

private void assertClientCompatible(ClientCapabilities capabilities,
    ClientCapability value, String what, String call) throws MetaException {
  if (!checkClientCompatible(capabilities, value, what, call)) {
    // throw exception similar to existing checkClient
  }
}
{code}

Then, this allows us to assert that something should be compatible if it 
requires it, and allows us to write backward compatible code if possible by 
checking capability.
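Consumed together, the boolean check and the throwing assert allow both 
gating and graceful fallback. A self-contained sketch with stand-in types, 
not the actual metastore code:

```java
import java.util.EnumSet;

public class CapabilityGate {
    enum ClientCapability { ACID_TABLES }

    // Boolean form: lets callers branch to a backward-compatible path
    // instead of failing outright.
    static boolean checkClientCompatible(EnumSet<ClientCapability> caps,
            ClientCapability needed) {
        return caps != null && caps.contains(needed);
    }

    // Throwing form: gates sections of code that strictly require the
    // capability, mirroring the existing checkClient behavior.
    static void assertClientCompatible(EnumSet<ClientCapability> caps,
            ClientCapability needed) {
        if (!checkClientCompatible(caps, needed)) {
            throw new IllegalStateException("client lacks capability " + needed);
        }
    }
}
```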

Also, I would advise cautious use of the capabilities notion, since the 
metastore API is a public API (as is the HCatClient API that sits on top of 
it), and thus all manner of tools use it, not just Hive. For example, in the 
case you describe, of ACID tables not being visible, or worse, erroring out 
if a user's capabilities are null, this will break other tools written 
against the metastore API, such as tools that do a UI display of the 
warehouse (data explorers, etc.), or existing warehouse management tools that 
set table properties for expiry/cleanup.

Thus, while this is a useful capability, the possibility of accidental misuse 
breaking backward compatibility is high. I would still like this to be 
introduced, but maybe with more documented warnings about why one must be 
careful when implementing it.

> create backward compat checking for metastore APIs
> --
>
> Key: HIVE-15062
> URL: https://issues.apache.org/jira/browse/HIVE-15062
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15062.01.nogen.patch, HIVE-15062.01.patch, 
> HIVE-15062.02.nogen.patch, HIVE-15062.02.patch, HIVE-15062.03.nogen.patch, 
> HIVE-15062.03.patch, HIVE-15062.nogen.patch, HIVE-15062.patch
>
>
> This is to add client capability checking to Hive metastore.
> This could have been used, for example, when introducing ACID tables - a 
> client trying to get_table on such a table without specifying that it is 
> aware of ACID tables would get an error by default.





[jira] [Resolved] (HIVE-14944) Handle additional predicates

2016-10-28 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman resolved HIVE-14944.
---
Resolution: Duplicate

included in HIVE-14943

> Handle additional predicates
> --------------------------------------------
>
> Key: HIVE-14944
> URL: https://issues.apache.org/jira/browse/HIVE-14944
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> Add support for (AND ) in  WHEN MATCHED AND X
> (and WHEN NOT MATCHED)





[jira] [Commented] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616769#comment-15616769
 ] 

Tao Li commented on HIVE-14476:
---

We are seeing test failures in branch-1 as well: 
https://issues.apache.org/jira/browse/HIVE-15049
[~spena] Do you have quick thoughts on how to fix these failures?

> Fix logging issue for branch-1
> --
>
> Key: HIVE-14476
> URL: https://issues.apache.org/jira/browse/HIVE-14476
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14476.1-branch-1.2.patch
>
>
> This issue is in the branch-1 code that decides whether a log entry is an 
> operational log (operational logs are visible to the client). The problem is 
> that the code checks the logging mode at the beginning of the decide() 
> method, while the logging mode is only updated after that check. Because of 
> this, an operational log can be filtered out if it is the very first log 
> checked by this method; as a result, that particular log does not show up 
> for the end user.
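The check-before-update ordering described above can be shown with a tiny sketch. This is an illustration of the bug pattern, not Hive's actual branch-1 classes:

```python
# Illustrative sketch of the ordering bug: the buggy filter reads the logging
# mode *before* updating it from the incoming event, so the very first
# operational log entry is misclassified and filtered out.

class LogDivertFilterSketch:
    def __init__(self):
        self.in_operational_mode = False  # mode starts unset

    def decide_buggy(self, event):
        verdict = self.in_operational_mode               # checked too early
        self.in_operational_mode = event["operational"]  # updated too late
        return verdict

    def decide_fixed(self, event):
        self.in_operational_mode = event["operational"]  # update first...
        return self.in_operational_mode                  # ...then decide


first_event = {"operational": True}
assert LogDivertFilterSketch().decide_buggy(first_event) is False  # log lost
assert LogDivertFilterSketch().decide_fixed(first_event) is True   # log kept
```

Every event after the first is classified correctly either way, which is why only the very first operational log goes missing.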





[jira] [Commented] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616731#comment-15616731
 ] 

Hive QA commented on HIVE-14476:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835870/HIVE-14476.1-branch-1.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 137 failed/errored test(s), 7897 tests 
executed
*Failed tests:*
{noformat}
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=339)
TestAuthorizationPreEventListener - did not produce a TEST-*.xml file (likely 
timed out) (batchId=370)
TestAuthzApiEmbedAuthorizerInEmbed - did not produce a TEST-*.xml file (likely 
timed out) (batchId=349)
TestAuthzApiEmbedAuthorizerInRemote - did not produce a TEST-*.xml file (likely 
timed out) (batchId=355)
TestBeeLineWithArgs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=377)
TestCLIAuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=393)
TestClientSideAuthorizationProvider - did not produce a TEST-*.xml file (likely 
timed out) (batchId=369)
TestCompactor - did not produce a TEST-*.xml file (likely timed out) 
(batchId=359)
TestCreateUdfEntities - did not produce a TEST-*.xml file (likely timed out) 
(batchId=358)
TestCustomAuthentication - did not produce a TEST-*.xml file (likely timed out) 
(batchId=378)
TestDBTokenStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=324)
TestDDLWithRemoteMetastoreSecondNamenode - did not produce a TEST-*.xml file 
(likely timed out) (batchId=357)
TestDynamicSerDe - did not produce a TEST-*.xml file (likely timed out) 
(batchId=327)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=336)
TestEmbeddedThriftBinaryCLIService - did not produce a TEST-*.xml file (likely 
timed out) (batchId=381)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=331)
TestFolderPermissions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=364)
TestHS2AuthzContext - did not produce a TEST-*.xml file (likely timed out) 
(batchId=396)
TestHS2AuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=397)
TestHS2ImpersonationWithRemoteMS - did not produce a TEST-*.xml file (likely 
timed out) (batchId=385)
TestHiveAuthorizerCheckInvocation - did not produce a TEST-*.xml file (likely 
timed out) (batchId=373)
TestHiveAuthorizerShowFilters - did not produce a TEST-*.xml file (likely timed 
out) (batchId=372)
TestHiveHistory - did not produce a TEST-*.xml file (likely timed out) 
(batchId=375)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=351)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=341)
TestHiveMetaTool - did not produce a TEST-*.xml file (likely timed out) 
(batchId=354)
TestHiveServer2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=399)
TestHiveServer2SessionTimeout - did not produce a TEST-*.xml file (likely timed 
out) (batchId=400)
TestHiveSessionImpl - did not produce a TEST-*.xml file (likely timed out) 
(batchId=382)
TestHs2Hooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=356)
TestHs2HooksWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=428)
TestJdbcDriver2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=387)
TestJdbcMetadataApiAuth - did not produce a TEST-*.xml file (likely timed out) 
(batchId=398)
TestJdbcWithLocalClusterSpark - did not produce a TEST-*.xml file (likely timed 
out) (batchId=392)
TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=389)
TestJdbcWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=425)
TestJdbcWithMiniKdcCookie - did not produce a TEST-*.xml file (likely timed 
out) (batchId=424)
TestJdbcWithMiniKdcSQLAuthBinary - did not produce a TEST-*.xml file (likely 
timed out) (batchId=422)
TestJdbcWithMiniKdcSQLAuthHttp - did not produce a TEST-*.xml file (likely 
timed out) (batchId=427)
TestJdbcWithMiniMr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=388)
TestJdbcWithSQLAuthUDFBlacklist - did not produce a TEST-*.xml file (likely 
timed out) (batchId=394)
TestJdbcWithSQLAuthorization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=395)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=362)
TestMTQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=360)
TestMarkPartition - did not produce a TEST-*.xml file (likely timed out) 
(batchId=348)
TestMarkPartitionRemote - did not produce a TEST-*.xml file (likely timed out) 
(batchId=352)
TestMetaStoreAuthorization - did not produce a TEST-*.xml file (likely timed 
out) 

[jira] [Commented] (HIVE-15078) Flaky dummy

2016-10-28 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616692#comment-15616692
 ] 

Siddharth Seth commented on HIVE-15078:
---

For CliDriver tests, Tez/Spark sessions are re-used across tests. I believe 
the Hive client session gets reset each time - this is supposed to make sure 
tests start with a clean base and that whatever settings are in the qfile get 
applied. I wonder if this is not working as it should.
Also, I read a comment somewhere about tests modifying the configuration and 
writing it back to disk. That would cause all kinds of problems when running 
in a batch.

Thanks for looking into this.

> Flaky dummy
> ---
>
> Key: HIVE-15078
> URL: https://issues.apache.org/jira/browse/HIVE-15078
> Project: Hive
>  Issue Type: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-15078.1.patch, HIVE-15078.1.patch
>
>
> I think it would be interesting to see what would happen if all currently 
> known flaky tests were ignored...





[jira] [Updated] (HIVE-14933) include argparse with LLAP scripts to support antique Python versions

2016-10-28 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14933:

Attachment: HIVE-14933.02.patch

Newer version of argparse

> include argparse with LLAP scripts to support antique Python versions
> -
>
> Key: HIVE-14933
> URL: https://issues.apache.org/jira/browse/HIVE-14933
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14933.01.patch, HIVE-14933.02.patch, 
> HIVE-14933.patch
>
>
> The module is a standalone file, and it's under Python license that is 
> compatible with Apache. In the long term we should probably just move 
> LlapServiceDriver code entirely to Java, as right now it's a combination of 
> part-py, part-java.
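A minimal sketch of what bundling a standalone argparse next to the LLAP scripts might look like: prefer the stdlib module, and fall back to the shipped copy on antique Pythons (< 2.7). The "llap_py_deps" directory name is a hypothetical placeholder, not the actual patch layout:

```python
# Sketch of the import fallback, assuming the bundled argparse.py lives in a
# hypothetical "llap_py_deps" directory next to this script.
import os
import sys

try:
    import argparse
except ImportError:
    # Antique Pythons lack argparse; point sys.path at the bundled copy first.
    bundled = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                           "llap_py_deps")
    sys.path.insert(0, bundled)
    import argparse

parser = argparse.ArgumentParser(description="LLAP service driver (sketch)")
parser.add_argument("--instances", type=int, default=1,
                    help="number of LLAP instances to run")
args = parser.parse_args(["--instances", "4"])
```

On any Python with argparse available the fallback branch is never taken, so the same script runs unchanged on both old and new interpreters.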





[jira] [Commented] (HIVE-15062) create backward compat checking for metastore APIs

2016-10-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616619#comment-15616619
 ] 

Sergey Shelukhin commented on HIVE-15062:
-

Known flaky tests for Spark, and for explain - 
https://issues.apache.org/jira/browse/HIVE-15084

The Druid test fails on master for me too - filed 
https://issues.apache.org/jira/browse/HIVE-15095

[~thejas] does the patch make sense now?

> create backward compat checking for metastore APIs
> --
>
> Key: HIVE-15062
> URL: https://issues.apache.org/jira/browse/HIVE-15062
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15062.01.nogen.patch, HIVE-15062.01.patch, 
> HIVE-15062.02.nogen.patch, HIVE-15062.02.patch, HIVE-15062.03.nogen.patch, 
> HIVE-15062.03.patch, HIVE-15062.nogen.patch, HIVE-15062.patch
>
>
> This is to add client capability checking to Hive metastore.
> This could have been used, for example, when introducing ACID tables - a 
> client trying to get_table on such a table without specifying that it is 
> aware of ACID tables would get an error by default.





[jira] [Commented] (HIVE-15095) TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails

2016-10-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616616#comment-15616616
 ] 

Sergey Shelukhin commented on HIVE-15095:
-

[~jcamachorodriguez] fyi

> TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals fails
> --
>
> Key: HIVE-15095
> URL: https://issues.apache.org/jira/browse/HIVE-15095
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> {noformat}
> junit.framework.AssertionFailedError: 
> expected:<[[2009-12-31T16:00:00.000-08:00/2010-04-01T23:00:00.000-07:00], 
> [2010-04-01T23:00:00.000-07:00/2010-07-02T05:00:00.000-07:00], 
> [2010-07-02T05:00:00.000-07:00/2010-10-01T11:00:00.000-07:00], 
> [2010-10-01T11:00:00.000-07:00/2010-12-31T16:00:00.000-08:00]]> but 
> was:<[[2010-01-01T00:00:00.000Z/2010-04-02T06:00:00.000Z], 
> [2010-04-02T06:00:00.000Z/2010-07-02T12:00:00.000Z], 
> [2010-07-02T12:00:00.000Z/2010-10-01T18:00:00.000Z], 
> [2010-10-01T18:00:00.000Z/2011-01-01T00:00:00.000Z]]>  at 
> org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals(TestHiveDruidQueryBasedInputFormat.java:54)
> {noformat}
> Seems offset by 7-8 hours.
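The expected and actual interval boundaries above look like the same instants rendered in different time zones (a US Pacific JVM default zone vs UTC), which would explain the 7-8 hour gap. A small sketch can confirm the boundary case; this is an illustration, not the Druid input-format code itself:

```python
# Sketch of the suspected timezone mismatch: a fixed -08:00 offset (PST) is
# assumed here; the failing test also shows -07:00 where DST applies.
from datetime import datetime, timedelta, timezone

pst = timezone(timedelta(hours=-8))

# "Expected" boundary 2009-12-31T16:00:00-08:00 vs
# "actual" boundary   2010-01-01T00:00:00Z
expected_start = datetime(2009, 12, 31, 16, 0, tzinfo=pst)
actual_start = datetime(2010, 1, 1, 0, 0, tzinfo=timezone.utc)

# Both denote the same instant; only the rendering zone differs.
assert expected_start == actual_start
```

If that is the cause, pinning the test (or the interval formatting) to UTC rather than the JVM default zone would make it machine-independent.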





[jira] [Commented] (HIVE-13000) Hive returns useless parsing error

2016-10-28 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616606#comment-15616606
 ] 

Eugene Koifman commented on HIVE-13000:
---

could you add a test case?

> Hive returns useless parsing error 
> ---
>
> Key: HIVE-13000
> URL: https://issues.apache.org/jira/browse/HIVE-13000
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0, 1.0.0, 1.2.1, 2.2.0
>Reporter: Alina Abramova
>Assignee: Alina Abramova
>Priority: Minor
> Attachments: HIVE-13000.1.patch, HIVE-13000.2.patch, 
> HIVE-13000.3.patch, HIVE-13000.4.patch
>
>
> When I run queries like this I receive an unclear exception:
> hive> SELECT record FROM ctest GROUP BY record.instance_id;
> FAILED: SemanticException Error in parsing 
> It would be clearer if it were:
> hive> SELECT record FROM ctest GROUP BY record.instance_id;
> FAILED: SemanticException  Expression not in GROUP BY key record





[jira] [Commented] (HIVE-15007) Hive 1.2.2 release planning

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616596#comment-15616596
 ] 

Sergio Peña commented on HIVE-15007:


[~vgumashta] All those tests are part of branch-2.1, including 
TestSparkCliDriver and TestMiniSparkOnYarnCliDriver, which are generated by 
itests/qtest-spark/pom.xml

{noformat}
⟫ git status
On branch branch-1.2
Your branch is up-to-date with 'apache/branch-1.2'.
nothing to commit, working tree clean


TestAdminUser
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAdminUser.java
TestAuthorizationPreEventListener
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestAuthorizationPreEventListener.java
TestAuthzApiEmbedAuthorizerInEmbed   
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthzApiEmbedAuthorizerInEmbed.java
TestAuthzApiEmbedAuthorizerInRemote  
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthzApiEmbedAuthorizerInRemote.java
TestBeeLineWithArgs  
./itests/hive-unit/src/test/java/org/apache/hive/beeline/TestBeeLineWithArgs.java
TestCLIAuthzSessionContext   
./itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestCLIAuthzSessionContext.java
TestClientSideAuthorizationProvider  
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestClientSideAuthorizationProvider.java
TestCompactor
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java
TestCreateUdfEntities
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestCreateUdfEntities.java
TestCustomAuthentication 
./itests/hive-unit/src/test/java/org/apache/hive/service/auth/TestCustomAuthentication.java
TestDBTokenStore 
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/thrift/TestDBTokenStore.java
TestDDLWithRemoteMetastoreSecondNamenode 
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestDDLWithRemoteMetastoreSecondNamenode.java
TestDynamicSerDe 
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/serde2/dynamic_type/TestDynamicSerDe.java
TestEmbeddedHiveMetaStore
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestEmbeddedHiveMetaStore.java
TestEmbeddedThriftBinaryCLIService   
./itests/hive-unit/src/test/java/org/apache/hive/service/cli/TestEmbeddedThriftBinaryCLIService.java
TestFilterHooks  
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestFilterHooks.java
TestFolderPermissions
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestFolderPermissions.java
TestHS2AuthzContext  
./itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestHS2AuthzContext.java
TestHS2AuthzSessionContext   
./itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestHS2AuthzSessionContext.java
TestHS2ImpersonationWithRemoteMS 
./itests/hive-unit/src/test/java/org/apache/hive/service/TestHS2ImpersonationWithRemoteMS.java
TestHiveAuthorizerCheckInvocation
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveAuthorizerCheckInvocation.java
TestHiveAuthorizerShowFilters
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveAuthorizerShowFilters.java
TestHiveHistory  
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/history/TestHiveHistory.java
TestHiveMetaStoreTxns
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreTxns.java
TestHiveMetaStoreWithEnvironmentContext  
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreWithEnvironmentContext.java
TestHiveMetaTool 
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaTool.java
TestHiveServer2  
./itests/hive-unit/src/test/java/org/apache/hive/jdbc/miniHS2/TestHiveServer2.java
TestHiveServer2SessionTimeout
./itests/hive-unit/src/test/java/org/apache/hive/jdbc/miniHS2/TestHiveServer2SessionTimeout.java
TestHiveSessionImpl  
./itests/hive-unit/src/test/java/org/apache/hive/service/cli/session/TestHiveSessionImpl.java
TestHs2Hooks 
./itests/hive-unit/src/test/java/org/apache/hadoop/hive/hooks/TestHs2Hooks.java
TestHs2HooksWithMiniKdc  
./itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestHs2HooksWithMiniKdc.java
TestJdbcDriver2  
./itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcDriver2.java
TestJdbcMetadataApiAuth  

[jira] [Commented] (HIVE-15084) Flaky test: TestMiniTezCliDriver:explainanalyze_2

2016-10-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616581#comment-15616581
 ] 

Sergey Shelukhin commented on HIVE-15084:
-

Same for 3 and 4

> Flaky test: TestMiniTezCliDriver:explainanalyze_2
> -
>
> Key: HIVE-15084
> URL: https://issues.apache.org/jira/browse/HIVE-15084
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>






[jira] [Updated] (HIVE-14943) Base Implementation

2016-10-28 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14943:
--
Attachment: HIVE-14943.4.patch

> Base Implementation
> ---
>
> Key: HIVE-14943
> URL: https://issues.apache.org/jira/browse/HIVE-14943
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14943.2.patch, HIVE-14943.2.patch, 
> HIVE-14943.3.patch, HIVE-14943.4.patch, HIVE-14943.patch, HIVE-14943.patch
>
>
> Create the 1st pass functional implementation of MERGE
> This should run e2e and produce correct results.  





[jira] [Commented] (HIVE-15007) Hive 1.2.2 release planning

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616560#comment-15616560
 ] 

Sergio Peña commented on HIVE-15007:


Well, I don't see TestDruidSerDe anymore, and there are fewer test failures 
(before 198, now 138). I'll investigate what other issues we have.

> Hive 1.2.2 release planning
> ---
>
> Key: HIVE-15007
> URL: https://issues.apache.org/jira/browse/HIVE-15007
> Project: Hive
>  Issue Type: Task
>Affects Versions: 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-15007-branch-1.2.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.2.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.2.patch, 
> HIVE-15007.branch-1.2.patch
>
>
> Discussed triggering unit test runs for the 1.2.2 release with [~spena]; 
> creating a patch that triggers precommit runs looks like a good way to do it.





[jira] [Comment Edited] (HIVE-14735) Build Infra: Spark artifacts download takes a long time

2016-10-28 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616460#comment-15616460
 ] 

Zoltan Haindrich edited comment on HIVE-14735 at 10/28/16 8:57 PM:
---

[~spena] it's good to know these things...i've tried this out - because in 
gradle this would be easy...i assumed maven could do it too...well, it seems it 
does! but it needs quite a bunch of XML to do something like this ;)

i've experimented with it...and it looks like it works - i would like to submit 
a test ptest run to check that everything is all right - but since my own 
server serves this spark-related maven repo, I don't really want it to go in ;)

i've published a preliminary "conceptual" repackaging tool here:
https://github.com/kgyrtkirk/hive-14735

[~spena], can you take a look at it and see if it could be a viable 
alternative to the current artifact delivery method (or not)...I've tried it 
out locally...in the readme i've sketched the steps of how I tried it out - 
hope it helps in evaluating it!

I think this will eventually work...download/unpack/etc is done by maven 
plugins which should be highly portable.

notes:

* {{mvn clean}} clears the unpacked things - which is good
* unpacking a new version doesn't remove the old files, it just pastes the new 
tree on top of them...but if someone switches between distinct branches I think 
they will use {{mvn clean}} or a harder {{git clean -dfx}} - so it should be ok
* there is a log4j2 properties file which gets copied into this unpacked 
directory...it could be included in the artifact...or should we keep it like 
this?





> Build Infra: Spark artifacts download takes a long time
> ---
>
> Key: HIVE-14735
> URL: https://issues.apache.org/jira/browse/HIVE-14735
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Vaibhav Gumashta
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14735.1.patch
>
>
> In particular this command:
> {{curl -Sso ./../thirdparty/spark-1.6.0-bin-hadoop2-without-hive.tgz 
> http://d3jw87u4immizc.cloudfront.net/spark-tarball/spark-1.6.0-bin-hadoop2-without-hive.tgz}}
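A maven-based alternative to the curl download above could use the maven-dependency-plugin's unpack goal; the coordinates, version metadata, and output directory in this sketch are purely illustrative assumptions, not the actual setup of the repackaging tool:

```xml
<!-- Sketch only: artifact coordinates and paths are hypothetical. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>unpack-spark-tarball</id>
      <phase>generate-resources</phase>
      <goals>
        <goal>unpack</goal>
      </goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>org.example.thirdparty</groupId>
            <artifactId>spark-bin-hadoop2-without-hive</artifactId>
            <version>1.6.0</version>
            <type>tar.gz</type>
            <outputDirectory>${project.build.directory}/spark</outputDirectory>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>
```

This would let maven's local repository cache the tarball, so the download happens once per machine instead of once per build.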





[jira] [Commented] (HIVE-15062) create backward compat checking for metastore APIs

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616530#comment-15616530
 ] 

Hive QA commented on HIVE-15062:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835862/HIVE-15062.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10626 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=91)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_4] 
(batchId=91)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=90)
org.apache.hadoop.hive.druid.TestHiveDruidQueryBasedInputFormat.testCreateSplitsIntervals
 (batchId=229)
org.apache.hive.spark.client.TestSparkClient.testJobSubmission (batchId=272)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1868/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1868/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1868/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12835862 - PreCommit-HIVE-Build

> create backward compat checking for metastore APIs
> --
>
> Key: HIVE-15062
> URL: https://issues.apache.org/jira/browse/HIVE-15062
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15062.01.nogen.patch, HIVE-15062.01.patch, 
> HIVE-15062.02.nogen.patch, HIVE-15062.02.patch, HIVE-15062.03.nogen.patch, 
> HIVE-15062.03.patch, HIVE-15062.nogen.patch, HIVE-15062.patch
>
>
> This is to add client capability checking to Hive metastore.
> This could have been used, for example, when introducing ACID tables - a 
> client trying to get_table on such a table without specifying that it is 
> aware of ACID tables would get an error by default.





[jira] [Commented] (HIVE-15007) Hive 1.2.2 release planning

2016-10-28 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616513#comment-15616513
 ] 

Vaibhav Gumashta commented on HIVE-15007:
-

[~spena] Looks like the same issue as before.

> Hive 1.2.2 release planning
> ---
>
> Key: HIVE-15007
> URL: https://issues.apache.org/jira/browse/HIVE-15007
> Project: Hive
>  Issue Type: Task
>Affects Versions: 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-15007-branch-1.2.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.2.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.2.patch, 
> HIVE-15007.branch-1.2.patch
>
>
> Discussed triggering unit test runs for the 1.2.2 release with [~spena]; 
> creating a patch that triggers precommit runs looks like a good way to do it.





[jira] [Commented] (HIVE-15078) Flaky dummy

2016-10-28 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616509#comment-15616509
 ] 

Zoltan Haindrich commented on HIVE-15078:
-

i forgot to re-attach the patch...

very interesting...i suspect that there are a few problematic cases in there 
which fail when they get into the same batch - but these unstable tests may 
even help bugs play hide and seek ;)

I will start some standalone mvn test executions...which will probably take a 
few days - but their outputs might be interesting...
until then...i will reschedule this a few times...because it doesn't contain 
any real change, just some disabled tests - this may detect some fluctuating 
tests

> Flaky dummy
> ---
>
> Key: HIVE-15078
> URL: https://issues.apache.org/jira/browse/HIVE-15078
> Project: Hive
>  Issue Type: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-15078.1.patch, HIVE-15078.1.patch
>
>
> I think it would be interesting to see what would happen if all currently 
> known flaky tests were ignored...





[jira] [Updated] (HIVE-14943) Base Implementation

2016-10-28 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14943:
--
Attachment: HIVE-14943.3.patch

#3 - extra predicate

> Base Implementation
> ---
>
> Key: HIVE-14943
> URL: https://issues.apache.org/jira/browse/HIVE-14943
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14943.2.patch, HIVE-14943.2.patch, 
> HIVE-14943.3.patch, HIVE-14943.patch, HIVE-14943.patch
>
>
> Create the 1st pass functional implementation of MERGE
> This should run e2e and produce correct results.  





[jira] [Commented] (HIVE-15094) Fix test failures for 2.1.1 regarding schema evolution with DECIMAL types

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-15094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616501#comment-15616501
 ] 

Sergio Peña commented on HIVE-15094:


It's interesting, though, that HIVE-13380 was also reverted from master, but 
the tests are not failing there. How was it fixed there? 

> Fix test failures for 2.1.1 regarding schema evolution with DECIMAL types
> -
>
> Key: HIVE-15094
> URL: https://issues.apache.org/jira/browse/HIVE-15094
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Sergio Peña
>
> Several test failures related to schema evolution are happening on 
> branch-2.1 due to a patch that was reverted in the past.
> {noformat}
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vec_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_part
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_vec_mapwork_table
> 

[jira] [Commented] (HIVE-15094) Fix test failures for 2.1.1 regarding schema evolution with DECIMAL types

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-15094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616474#comment-15616474
 ] 

Sergio Peña commented on HIVE-15094:


[~jcamachorodriguez] I dug a little more, and this is what is causing all those 
failures. We just need to update the .q files to remove or change the tests 
causing the failures with DECIMAL -> FLOAT,DOUBLE

> Fix test failures for 2.1.1 regarding schema evolution with DECIMAL types
> -
>
> Key: HIVE-15094
> URL: https://issues.apache.org/jira/browse/HIVE-15094
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Sergio Peña
>
> Several tests failures related to schema evolution are happening on 
> branch-2.1 due to a patch reverted in the past.
> {noformat}
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vec_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_table
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_acid_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_table
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_part
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_part
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_schema_evol_text_vec_mapwork_table
> 

[jira] [Commented] (HIVE-14992) Relocate several common libraries in hive jdbc uber jar

2016-10-28 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616485#comment-15616485
 ] 

Tao Li commented on HIVE-14992:
---

Added another iteration to add the below ones to relocation:

com.beust.jcommander
com.lmax.disruptor
org.jamon
javolution

> Relocate several common libraries in hive jdbc uber jar
> ---
>
> Key: HIVE-14992
> URL: https://issues.apache.org/jira/browse/HIVE-14992
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14992.1.patch, HIVE-14992.2.patch, 
> HIVE-14992.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14992) Relocate several common libraries in hive jdbc uber jar

2016-10-28 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HIVE-14992:
--
Attachment: HIVE-14992.3.patch

> Relocate several common libraries in hive jdbc uber jar
> ---
>
> Key: HIVE-14992
> URL: https://issues.apache.org/jira/browse/HIVE-14992
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14992.1.patch, HIVE-14992.2.patch, 
> HIVE-14992.3.patch
>
>






[jira] [Updated] (HIVE-15078) Flaky dummy

2016-10-28 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-15078:

Attachment: HIVE-15078.1.patch

> Flaky dummy
> ---
>
> Key: HIVE-15078
> URL: https://issues.apache.org/jira/browse/HIVE-15078
> Project: Hive
>  Issue Type: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-15078.1.patch, HIVE-15078.1.patch
>
>
> I think it would be interesting to see what would happen if all currently known 
> flaky tests were ignored...





[jira] [Updated] (HIVE-15025) Secure-Socket-Layer (SSL) support for HMS

2016-10-28 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15025:

Attachment: HIVE-15025.addendum

> Secure-Socket-Layer (SSL) support for HMS
> -
>
> Key: HIVE-15025
> URL: https://issues.apache.org/jira/browse/HIVE-15025
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-15025.1.patch, HIVE-15025.2.patch, 
> HIVE-15025.3.patch, HIVE-15025.addendum
>
>
> The HMS server should support SSL encryption. When the server is Kerberos-enabled, 
> encryption can be enabled. But if Kerberos is not enabled, then there is 
> no encryption between HS2 and HMS. 
> As with HS2, we should support encryption in both cases.





[jira] [Updated] (HIVE-14735) Build Infra: Spark artifacts download takes a long time

2016-10-28 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14735:

Attachment: HIVE-14735.1.patch

[~spena] it's good to know these things...i've tried this out - because in 
gradle this would be easy...i assumed maven can do it too...well, it seems it 
does! but it needs quite a bunch of xml to do something like this ;)

i've experimented with it...and it looks like it works - i would like to submit 
a test ptest run to check that everything is all right - but since my own 
server serves this spark related maven repo, I don't really want it to go in ;)

i've published a preliminary "conceptual" repackaging tool here:
https://github.com/kgyrtkirk/hive-14735

[~spena], you might want to try it out locally, i've sketched a readme in that 
git repo - hope it helps in evaluating this maven option.

I think this will eventually work...download/unpack/etc is done by maven 
plugins which should be highly portable.

notes:

* {{mvn clean}} clears the unpacked things - which is good
* unpacking a new version doesn't remove the old files, just pastes the new 
tree on top of it...but if someone switches between branches I think they 
will use {{mvn clean}} or a harder {{git clean -dfx}} - so it should be ok
* there is a log4j2 properties file which gets copied into this unpacked 
directory...it can be included in the artifact...or should we keep it like this?
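The caching behavior this maven setup is after can be sketched in a few lines of Python (a toy illustration of the download-if-absent idea; the function name and paths are hypothetical, not part of the patch):

```python
import os
import tempfile

def need_download(dest):
    # First build: the tarball is absent, so the slow fetch
    # (the step this issue is about) has to run.
    # Later builds: the cached copy is found and the fetch is skipped.
    return not os.path.exists(dest)

cache_dir = tempfile.mkdtemp()
tarball = os.path.join(cache_dir, "spark-1.6.0-bin-hadoop2-without-hive.tgz")

print(need_download(tarball))  # True: nothing cached yet
open(tarball, "wb").close()    # stand-in for a completed download
print(need_download(tarball))  # False: cache hit, download skipped
```

{{mvn clean}} would map to deleting the unpacked tree, which matches the first note above.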


> Build Infra: Spark artifacts download takes a long time
> ---
>
> Key: HIVE-14735
> URL: https://issues.apache.org/jira/browse/HIVE-14735
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Vaibhav Gumashta
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14735.1.patch
>
>
> In particular this command:
> {{curl -Sso ./../thirdparty/spark-1.6.0-bin-hadoop2-without-hive.tgz 
> http://d3jw87u4immizc.cloudfront.net/spark-tarball/spark-1.6.0-bin-hadoop2-without-hive.tgz}}





[jira] [Updated] (HIVE-14735) Build Infra: Spark artifacts download takes a long time

2016-10-28 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14735:

Status: Patch Available  (was: Open)

> Build Infra: Spark artifacts download takes a long time
> ---
>
> Key: HIVE-14735
> URL: https://issues.apache.org/jira/browse/HIVE-14735
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Vaibhav Gumashta
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14735.1.patch
>
>
> In particular this command:
> {{curl -Sso ./../thirdparty/spark-1.6.0-bin-hadoop2-without-hive.tgz 
> http://d3jw87u4immizc.cloudfront.net/spark-tarball/spark-1.6.0-bin-hadoop2-without-hive.tgz}}





[jira] [Updated] (HIVE-15060) Remove the autoCommit warning from beeline

2016-10-28 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HIVE-15060:
--
Attachment: HIVE-15060.2.patch

> Remove the autoCommit warning from beeline
> --
>
> Key: HIVE-15060
> URL: https://issues.apache.org/jira/browse/HIVE-15060
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-15060.1.patch, HIVE-15060.2.patch
>
>
> WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not 
> support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://ctr-e89-1466633100028-0275-01
> By default, this beeline setting is false, while Hive only supports 
> autoCommit=true for now. So this warning does not make sense and should be 
> removed.





[jira] [Assigned] (HIVE-14735) Build Infra: Spark artifacts download takes a long time

2016-10-28 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-14735:
---

Assignee: Zoltan Haindrich

> Build Infra: Spark artifacts download takes a long time
> ---
>
> Key: HIVE-14735
> URL: https://issues.apache.org/jira/browse/HIVE-14735
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Vaibhav Gumashta
>Assignee: Zoltan Haindrich
>
> In particular this command:
> {{curl -Sso ./../thirdparty/spark-1.6.0-bin-hadoop2-without-hive.tgz 
> http://d3jw87u4immizc.cloudfront.net/spark-tarball/spark-1.6.0-bin-hadoop2-without-hive.tgz}}





[jira] [Commented] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616417#comment-15616417
 ] 

Sergio Peña commented on HIVE-14476:


Correct.

> Fix logging issue for branch-1
> --
>
> Key: HIVE-14476
> URL: https://issues.apache.org/jira/browse/HIVE-14476
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14476.1-branch-1.2.patch
>
>
> This issue is in the branch-1 code that decides whether a log entry is an 
> operational log (operational logs are visible to the client). The 
> problem is that the code checks the logging mode at the beginning of the 
> decide() method, while the logging mode is updated after that check. Because 
> of this, an operational log could be filtered out if it is the very first 
> log checked by this method. As a result, that particular log does not show 
> up for the end user.
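The ordering bug can be illustrated with a small Python sketch (hypothetical class and method names; the real logic lives in the branch-1 operational-log code):

```python
class LogFilterSketch:
    """Toy model of the decide() ordering bug, not Hive's actual code."""

    def __init__(self):
        self.mode = None  # unknown until the first entry arrives

    def decide_buggy(self, entry_mode):
        # Checks the stale mode first, then updates it: the very first
        # operational entry is judged against mode=None and filtered out.
        visible = self.mode == "OPERATIONAL"
        self.mode = entry_mode
        return visible

    def decide_fixed(self, entry_mode):
        # Updating the mode before the check means the first entry is
        # judged against its own mode and stays visible.
        self.mode = entry_mode
        return self.mode == "OPERATIONAL"

print(LogFilterSketch().decide_buggy("OPERATIONAL"))  # False: first log lost
print(LogFilterSketch().decide_fixed("OPERATIONAL"))  # True
```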





[jira] [Commented] (HIVE-10901) Optimize multi column distinct queries

2016-10-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616435#comment-15616435
 ] 

Ashutosh Chauhan commented on HIVE-10901:
-

We can use the old method implemented in AggregateExpandDistinctAggregatesRule, 
which does this by computing a distinct count on each branch and then doing a 
join. The grouping-set approach is likely more efficient, but the join approach may 
be an improvement on the state of the art in certain cases.
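The join-based rewrite can be pictured with a toy Python example (illustrative only; the real rewrite happens on the query plan, not row-by-row):

```python
# Toy relation with two columns; the goal is
# SELECT COUNT(DISTINCT c1), COUNT(DISTINCT c2) FROM rows.
rows = [(1, "a"), (1, "b"), (2, "a"), (1, "a")]

# Each distinct aggregate becomes its own branch...
distinct_c1 = len({c1 for c1, _ in rows})  # branch 1: COUNT(DISTINCT c1)
distinct_c2 = len({c2 for _, c2 in rows})  # branch 2: COUNT(DISTINCT c2)

# ...and the single-row branch results are joined back together,
# instead of expanding grouping sets over both columns at once.
print((distinct_c1, distinct_c2))  # (2, 2)
```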

> Optimize multi column distinct queries 
> 
>
> Key: HIVE-10901
> URL: https://issues.apache.org/jira/browse/HIVE-10901
> Project: Hive
>  Issue Type: New Feature
>  Components: CBO, Logical Optimizer
>Affects Versions: 1.2.0
>Reporter: Mostafa Mokhtar
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-10901.patch
>
>
> HIVE-10568 is useful only when there is a distinct on one column. It can be 
> expanded for multiple column cases too.





[jira] [Updated] (HIVE-15061) Metastore types are sometimes case sensitive

2016-10-28 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-15061:
---
Attachment: (was: HIVE-15061.1.patch)

> Metastore types are sometimes case sensitive
> 
>
> Key: HIVE-15061
> URL: https://issues.apache.org/jira/browse/HIVE-15061
> Project: Hive
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Chaoyu Tang
> Attachments: HIVE-15061.1.patch, HIVE-15061.1.patch, HIVE-15061.patch
>
>
> Impala recently encountered an issue with the metastore 
> ([IMPALA-4260|https://issues.cloudera.org/browse/IMPALA-4260] ) where column 
> stats would get dropped when adding a column to a table.
> The reason seems to be that Hive does a case-sensitive check on the column 
> stats types during an "alter table" and expects the types to be all lower 
> case. This case-sensitive check doesn't appear to happen when the stats are 
> set in the first place.
> We're solving this on the Impala end by storing types in the metastore as all 
> lower case, but Hive's behavior here is very confusing. It should either 
> always be case-sensitive, so that you can't create column stats with types 
> that Hive considers invalid, or it should never be case-sensitive.
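The asymmetry can be shown with a short Python sketch (hypothetical helper names, not the metastore code):

```python
def stats_type_matches(declared, stored):
    # Case-sensitive comparison, as on the alter-table path: "INT" is
    # not "int", so the column stats would be treated as invalid and dropped.
    return declared == stored

def stats_type_matches_insensitive(declared, stored):
    # Normalizing both sides removes the asymmetry between the write
    # path (no check) and the alter path (strict lower-case check).
    return declared.lower() == stored.lower()

print(stats_type_matches("INT", "int"))              # False -> stats dropped
print(stats_type_matches_insensitive("INT", "int"))  # True  -> stats kept
```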





[jira] [Commented] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616407#comment-15616407
 ] 

Tao Li commented on HIVE-14476:
---

Thanks [~spena]. I removed the original patch file to avoid confusion.

Based on my patch name, the test should be against branch-1.2, right?

> Fix logging issue for branch-1
> --
>
> Key: HIVE-14476
> URL: https://issues.apache.org/jira/browse/HIVE-14476
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14476.1-branch-1.2.patch
>
>
> This issue is in the branch-1 code that decides whether a log entry is an 
> operational log (operational logs are visible to the client). The 
> problem is that the code checks the logging mode at the beginning of the 
> decide() method, while the logging mode is updated after that check. Because 
> of this, an operational log could be filtered out if it is the very first 
> log checked by this method. As a result, that particular log does not show 
> up for the end user.





[jira] [Updated] (HIVE-15061) Metastore types are sometimes case sensitive

2016-10-28 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-15061:
---
Attachment: HIVE-15061.1.patch

> Metastore types are sometimes case sensitive
> 
>
> Key: HIVE-15061
> URL: https://issues.apache.org/jira/browse/HIVE-15061
> Project: Hive
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Chaoyu Tang
> Attachments: HIVE-15061.1.patch, HIVE-15061.1.patch, HIVE-15061.patch
>
>
> Impala recently encountered an issue with the metastore 
> ([IMPALA-4260|https://issues.cloudera.org/browse/IMPALA-4260] ) where column 
> stats would get dropped when adding a column to a table.
> The reason seems to be that Hive does a case-sensitive check on the column 
> stats types during an "alter table" and expects the types to be all lower 
> case. This case-sensitive check doesn't appear to happen when the stats are 
> set in the first place.
> We're solving this on the Impala end by storing types in the metastore as all 
> lower case, but Hive's behavior here is very confusing. It should either 
> always be case-sensitive, so that you can't create column stats with types 
> that Hive considers invalid, or it should never be case-sensitive.





[jira] [Commented] (HIVE-15054) Hive insertion query execution fails on Hive on Spark

2016-10-28 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616426#comment-15616426
 ] 

Aihua Xu commented on HIVE-15054:
-

[~lirui] You are right that Hive needs to use the same id to figure out it's 
the same task. Spark has a different taskId for each task attempt, so 
partitionId seems the closest choice. 

> Hive insertion query execution fails on Hive on Spark
> -
>
> Key: HIVE-15054
> URL: https://issues.apache.org/jira/browse/HIVE-15054
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15054.1.patch, HIVE-15054.2.patch, 
> HIVE-15054.3.patch
>
>
> The query {{insert overwrite table tbl1}} sometimes fails with the 
> following errors. It seems we are constructing taskAttemptId from partitionId, 
> which is not unique if there are multiple attempts.
> {noformat}
> java.lang.IllegalStateException: Hit error while closing operators - failing 
> tree: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename 
> output from: 
> hdfs://table1/.hive-staging_hive_2016-06-14_01-53-17_386_3231646810118049146-9/_task_tmp.-ext-10002/_tmp.002148_0
>  to: 
> hdfs://table1/.hive-staging_hive_2016-06-14_01-53-17_386_3231646810118049146-9/_tmp.-ext-10002/002148_0
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:202)
> at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
> at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:106)
> at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at 
> org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$15.apply(AsyncRDDActions.scala:120)
> {noformat}





[jira] [Commented] (HIVE-14933) include argparse with LLAP scripts to support antique Python versions

2016-10-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616416#comment-15616416
 ] 

Sergey Shelukhin commented on HIVE-14933:
-

[~gopalv] the latest patch does add the note
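Bundling the module amounts to an import fallback; a minimal sketch, assuming the bundled copy sits on a known path next to the LLAP scripts (the path and arguments below are illustrative):

```python
import sys

try:
    import argparse  # stdlib from Python 2.7 onward
except ImportError:
    # Antique Pythons (< 2.7) lack argparse; fall back to the copy
    # shipped alongside the LLAP scripts (hypothetical location).
    sys.path.insert(0, "scripts/llap/py")
    import argparse

parser = argparse.ArgumentParser(description="LLAP service driver (sketch)")
parser.add_argument("--instances", type=int, default=1)
args = parser.parse_args(["--instances", "2"])
print(args.instances)  # 2
```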

> include argparse with LLAP scripts to support antique Python versions
> -
>
> Key: HIVE-14933
> URL: https://issues.apache.org/jira/browse/HIVE-14933
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14933.01.patch, HIVE-14933.patch
>
>
> The module is a standalone file, and it's under a Python license that is 
> compatible with Apache. In the long term we should probably just move 
> the LlapServiceDriver code entirely to Java, as right now it's a combination of 
> part-py, part-java.





[jira] [Commented] (HIVE-15007) Hive 1.2.2 release planning

2016-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616418#comment-15616418
 ] 

Hive QA commented on HIVE-15007:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12835855/HIVE-15007-branch-1.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 138 failed/errored test(s), 7897 tests 
executed
*Failed tests:*
{noformat}
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=339)
TestAuthorizationPreEventListener - did not produce a TEST-*.xml file (likely 
timed out) (batchId=370)
TestAuthzApiEmbedAuthorizerInEmbed - did not produce a TEST-*.xml file (likely 
timed out) (batchId=349)
TestAuthzApiEmbedAuthorizerInRemote - did not produce a TEST-*.xml file (likely 
timed out) (batchId=355)
TestBeeLineWithArgs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=377)
TestCLIAuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=393)
TestClientSideAuthorizationProvider - did not produce a TEST-*.xml file (likely 
timed out) (batchId=369)
TestCompactor - did not produce a TEST-*.xml file (likely timed out) 
(batchId=359)
TestCreateUdfEntities - did not produce a TEST-*.xml file (likely timed out) 
(batchId=358)
TestCustomAuthentication - did not produce a TEST-*.xml file (likely timed out) 
(batchId=378)
TestDBTokenStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=324)
TestDDLWithRemoteMetastoreSecondNamenode - did not produce a TEST-*.xml file 
(likely timed out) (batchId=357)
TestDynamicSerDe - did not produce a TEST-*.xml file (likely timed out) 
(batchId=327)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=336)
TestEmbeddedThriftBinaryCLIService - did not produce a TEST-*.xml file (likely 
timed out) (batchId=381)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=331)
TestFolderPermissions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=364)
TestHS2AuthzContext - did not produce a TEST-*.xml file (likely timed out) 
(batchId=396)
TestHS2AuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=397)
TestHS2ImpersonationWithRemoteMS - did not produce a TEST-*.xml file (likely 
timed out) (batchId=385)
TestHiveAuthorizerCheckInvocation - did not produce a TEST-*.xml file (likely 
timed out) (batchId=373)
TestHiveAuthorizerShowFilters - did not produce a TEST-*.xml file (likely timed 
out) (batchId=372)
TestHiveHistory - did not produce a TEST-*.xml file (likely timed out) 
(batchId=375)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=351)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=341)
TestHiveMetaTool - did not produce a TEST-*.xml file (likely timed out) 
(batchId=354)
TestHiveServer2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=399)
TestHiveServer2SessionTimeout - did not produce a TEST-*.xml file (likely timed 
out) (batchId=400)
TestHiveSessionImpl - did not produce a TEST-*.xml file (likely timed out) 
(batchId=382)
TestHs2Hooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=356)
TestHs2HooksWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=428)
TestJdbcDriver2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=387)
TestJdbcMetadataApiAuth - did not produce a TEST-*.xml file (likely timed out) 
(batchId=398)
TestJdbcWithLocalClusterSpark - did not produce a TEST-*.xml file (likely timed 
out) (batchId=392)
TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=389)
TestJdbcWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=425)
TestJdbcWithMiniKdcCookie - did not produce a TEST-*.xml file (likely timed 
out) (batchId=424)
TestJdbcWithMiniKdcSQLAuthBinary - did not produce a TEST-*.xml file (likely 
timed out) (batchId=422)
TestJdbcWithMiniKdcSQLAuthHttp - did not produce a TEST-*.xml file (likely 
timed out) (batchId=427)
TestJdbcWithMiniMr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=388)
TestJdbcWithSQLAuthUDFBlacklist - did not produce a TEST-*.xml file (likely 
timed out) (batchId=394)
TestJdbcWithSQLAuthorization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=395)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=362)
TestMTQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=360)
TestMarkPartition - did not produce a TEST-*.xml file (likely timed out) 
(batchId=348)
TestMarkPartitionRemote - did not produce a TEST-*.xml file (likely timed out) 
(batchId=352)
TestMetaStoreAuthorization - did not produce a TEST-*.xml file (likely timed 
out) 

[jira] [Updated] (HIVE-15054) Hive insertion query execution fails on Hive on Spark

2016-10-28 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15054:

Status: Patch Available  (was: Open)

patch-3: switch to partitionId_attemptNumber. Spark will have a different 
taskId for the different attempts of the same task, while Hive needs to use the 
same id to figure out whether the data are duplicates. So it seems we have to use 
partitionId_attemptNumber here. 
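The naming scheme can be sketched as follows (a toy rendering of the tmp-file layout seen in the stack trace; the helper is hypothetical, not the SparkMapRecordHandler code):

```python
def tmp_output_name(partition_id, attempt_number):
    # Suffixing the attempt number keeps retried attempts from
    # colliding on the same _tmp file, while the shared partition id
    # still lets Hive recognize both attempts as the same task.
    return "_tmp.%06d_%d" % (partition_id, attempt_number)

print(tmp_output_name(2148, 0))  # _tmp.002148_0 (first attempt)
print(tmp_output_name(2148, 1))  # _tmp.002148_1 (retry no longer collides)
```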


> Hive insertion query execution fails on Hive on Spark
> -
>
> Key: HIVE-15054
> URL: https://issues.apache.org/jira/browse/HIVE-15054
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15054.1.patch, HIVE-15054.2.patch, 
> HIVE-15054.3.patch
>
>
> The query {{insert overwrite table tbl1}} sometimes fails with the 
> following errors. It seems we are constructing taskAttemptId from partitionId, 
> which is not unique if there are multiple attempts.
> {noformat}
> java.lang.IllegalStateException: Hit error while closing operators - failing 
> tree: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename 
> output from: 
> hdfs://table1/.hive-staging_hive_2016-06-14_01-53-17_386_3231646810118049146-9/_task_tmp.-ext-10002/_tmp.002148_0
>  to: 
> hdfs://table1/.hive-staging_hive_2016-06-14_01-53-17_386_3231646810118049146-9/_tmp.-ext-10002/002148_0
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:202)
> at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
> at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:106)
> at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at 
> org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$15.apply(AsyncRDDActions.scala:120)
> {noformat}





[jira] [Updated] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HIVE-14476:
--
Attachment: (was: HIVE-14476.1-branch-1.2.patch)

> Fix logging issue for branch-1
> --
>
> Key: HIVE-14476
> URL: https://issues.apache.org/jira/browse/HIVE-14476
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14476.1-branch-1.2.patch
>
>
> This issue is in the branch-1 code that decides whether a log entry is an 
> operational log (operational logs are visible to the client). The problem is 
> that the code checks the logging mode at the beginning of the decide() 
> method, while the logging mode is updated after that check. As a result, an 
> operational log can be filtered out if it is the very first log checked by 
> this method, so that particular log does not show up for the end user.
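As a hypothetical sketch of the ordering bug described above (the class and
method names here are illustrative, not Hive's actual branch-1 code), the
stale-read pattern and its fix look roughly like this:

```java
public class LoggingFilter {
    enum Mode { NONE, OPERATIONAL }
    private Mode mode = Mode.NONE;

    // Buggy ordering: the mode is read before it is updated, so the very
    // first operational entry routed through decide() is misclassified.
    boolean decideBuggy(String logger) {
        Mode current = mode;                 // read the stale mode first...
        mode = resolveMode(logger);          // ...then update it
        return current == Mode.OPERATIONAL;  // first call always sees NONE
    }

    // Fixed ordering: update the mode, then check it.
    boolean decideFixed(String logger) {
        mode = resolveMode(logger);
        return mode == Mode.OPERATIONAL;
    }

    // Stand-in for the real mode-resolution logic.
    private Mode resolveMode(String logger) {
        return logger.startsWith("operation") ? Mode.OPERATIONAL : Mode.NONE;
    }

    public static void main(String[] args) {
        // The very first operational entry is dropped by the buggy version...
        System.out.println(new LoggingFilter().decideBuggy("operation.log"));
        // ...but kept once the ordering is fixed.
        System.out.println(new LoggingFilter().decideFixed("operation.log"));
    }
}
```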



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14990) run all tests for MM tables and fix the issues that are found

2016-10-28 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14990:

Attachment: HIVE-14990.04.patch

Again

> run all tests for MM tables and fix the issues that are found
> -
>
> Key: HIVE-14990
> URL: https://issues.apache.org/jira/browse/HIVE-14990
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14990.01.patch, HIVE-14990.02.patch, 
> HIVE-14990.03.patch, HIVE-14990.04.patch, HIVE-14990.04.patch, 
> HIVE-14990.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15054) Hive insertion query execution fails on Hive on Spark

2016-10-28 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15054:

Status: Open  (was: Patch Available)

> Hive insertion query execution fails on Hive on Spark
> -
>
> Key: HIVE-15054
> URL: https://issues.apache.org/jira/browse/HIVE-15054
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15054.1.patch, HIVE-15054.2.patch, 
> HIVE-15054.3.patch
>
>
> A query such as {{insert overwrite table tbl1}} sometimes fails with the 
> following errors. It seems we construct the taskAttemptId from the 
> partitionId, which is not unique when there are multiple attempts.
> {noformat}
> java.lang.IllegalStateException: Hit error while closing operators - failing 
> tree: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename 
> output from: 
> hdfs://table1/.hive-staging_hive_2016-06-14_01-53-17_386_3231646810118049146-9/_task_tmp.-ext-10002/_tmp.002148_0
>  to: 
> hdfs://table1/.hive-staging_hive_2016-06-14_01-53-17_386_3231646810118049146-9/_tmp.-ext-10002/002148_0
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:202)
> at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
> at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:106)
> at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at 
> org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$15.apply(AsyncRDDActions.scala:120)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15054) Hive insertion query execution fails on Hive on Spark

2016-10-28 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15054:

Attachment: HIVE-15054.3.patch

> Hive insertion query execution fails on Hive on Spark
> -
>
> Key: HIVE-15054
> URL: https://issues.apache.org/jira/browse/HIVE-15054
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15054.1.patch, HIVE-15054.2.patch, 
> HIVE-15054.3.patch
>
>
> A query such as {{insert overwrite table tbl1}} sometimes fails with the 
> following errors. It seems we construct the taskAttemptId from the 
> partitionId, which is not unique when there are multiple attempts.
> {noformat}
> java.lang.IllegalStateException: Hit error while closing operators - failing 
> tree: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename 
> output from: 
> hdfs://table1/.hive-staging_hive_2016-06-14_01-53-17_386_3231646810118049146-9/_task_tmp.-ext-10002/_tmp.002148_0
>  to: 
> hdfs://table1/.hive-staging_hive_2016-06-14_01-53-17_386_3231646810118049146-9/_tmp.-ext-10002/002148_0
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:202)
> at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
> at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:106)
> at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at 
> org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$15.apply(AsyncRDDActions.scala:120)
> {noformat}
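A minimal sketch of the collision described above, under the assumption that
the output file name is derived from the partition id alone; the names and
format string below are illustrative, not Hive's actual code:

```java
public class TaskFileNames {
    // Derived only from the partition id: every attempt of the same
    // partition produces the same name, so retries race on the rename target.
    static String nameFromPartition(int partitionId) {
        return String.format("%06d_0", partitionId);
    }

    // Derived from the partition id AND the attempt number: unique per attempt.
    static String nameFromAttempt(int partitionId, int attemptNumber) {
        return String.format("%06d_%d", partitionId, attemptNumber);
    }

    public static void main(String[] args) {
        System.out.println(nameFromPartition(2148));   // same name for every attempt
        System.out.println(nameFromAttempt(2148, 1));  // distinct name for the retry
    }
}
```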



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15093) For S3-to-S3 renames, files should be moved individually rather than at a directory level

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616362#comment-15616362
 ] 

Sergio Peña commented on HIVE-15093:


I'm good with moving the logic to BlobStorageUtils for now.

> For S3-to-S3 renames, files should be moved individually rather than at a 
> directory level
> -
>
> Key: HIVE-15093
> URL: https://issues.apache.org/jira/browse/HIVE-15093
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-15093.1.patch
>
>
> Hive's MoveTask uses the Hive.moveFile method to move data within a 
> distributed filesystem as well as blobstore filesystems.
> If the move is done within the same filesystem:
> 1: If the source path is a subdirectory of the destination path, files will 
> be moved one by one using a threadpool of workers
> 2: If the source path is not a subdirectory of the destination path, a single 
> rename operation is used to move the entire directory
> The second option may not work well on blobstores such as S3. Renames are not 
> metadata operations and require copying all the data. Client connectors to 
> blobstores may not efficiently rename directories. Worst case, the connector 
> will copy each file one by one, sequentially rather than using a threadpool 
> of workers to copy the data (e.g. HADOOP-13600).
> Hive already has code to rename files using a threadpool of workers, but this 
> only occurs in case number 1.
> This JIRA aims to modify the code so that case 1 is triggered when copying 
> within a blobstore. The focus is on copies within a blobstore because 
> needToCopy will return true if the src and target filesystems are different, 
> in which case a different code path is triggered.
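The per-file, threadpool-based move that case 1 uses can be sketched roughly
as follows. This is a simplified local illustration using java.nio rather
than Hive's actual Hive.moveFile / Hadoop FileSystem code; the pool size and
error handling are assumptions:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

public class ParallelMove {
    // Move every file under src into dst using a pool of workers, instead of
    // one directory-level rename (which a blobstore connector may turn into
    // a sequential copy of each object).
    static void moveFiles(Path src, Path dst, int workers)
            throws IOException, InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<?>> futures = new ArrayList<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(src)) {
            for (Path file : files) {
                futures.add(pool.submit(() -> {
                    Files.move(file, dst.resolve(file.getFileName()));
                    return null;
                }));
            }
        }
        for (Future<?> f : futures) {
            f.get(); // surface any per-file failure
        }
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        Path src = Files.createTempDirectory("src");
        Path dst = Files.createTempDirectory("dst");
        for (int i = 0; i < 4; i++) {
            Files.createFile(src.resolve("part-" + i));
        }
        moveFiles(src, dst, 2);
        try (DirectoryStream<Path> moved = Files.newDirectoryStream(dst)) {
            int n = 0;
            for (Path ignored : moved) n++;
            System.out.println(n);
        }
    }
}
```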



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-14476:
---
Assignee: Tao Li  (was: Sergio Peña)

> Fix logging issue for branch-1
> --
>
> Key: HIVE-14476
> URL: https://issues.apache.org/jira/browse/HIVE-14476
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14476.1-branch-1.2.patch, 
> HIVE-14476.1-branch-1.2.patch
>
>
> This issue is in the branch-1 code that decides whether a log entry is an 
> operational log (operational logs are visible to the client). The problem is 
> that the code checks the logging mode at the beginning of the decide() 
> method, while the logging mode is updated after that check. As a result, an 
> operational log can be filtered out if it is the very first log checked by 
> this method, so that particular log does not show up for the end user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña reassigned HIVE-14476:
--

Assignee: Sergio Peña  (was: Tao Li)

> Fix logging issue for branch-1
> --
>
> Key: HIVE-14476
> URL: https://issues.apache.org/jira/browse/HIVE-14476
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Sergio Peña
> Attachments: HIVE-14476.1-branch-1.2.patch, 
> HIVE-14476.1-branch-1.2.patch
>
>
> This issue is in the branch-1 code that decides whether a log entry is an 
> operational log (operational logs are visible to the client). The problem is 
> that the code checks the logging mode at the beginning of the decide() 
> method, while the logging mode is updated after that check. As a result, an 
> operational log can be filtered out if it is the very first log checked by 
> this method, so that particular log does not show up for the end user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-14476:
---
Attachment: HIVE-14476.1-branch-1.2.patch

Attach file to retrigger tests.

> Fix logging issue for branch-1
> --
>
> Key: HIVE-14476
> URL: https://issues.apache.org/jira/browse/HIVE-14476
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Sergio Peña
> Attachments: HIVE-14476.1-branch-1.2.patch, 
> HIVE-14476.1-branch-1.2.patch
>
>
> This issue is in the branch-1 code that decides whether a log entry is an 
> operational log (operational logs are visible to the client). The problem is 
> that the code checks the logging mode at the beginning of the decide() 
> method, while the logging mode is updated after that check. As a result, an 
> operational log can be filtered out if it is the very first log checked by 
> this method, so that particular log does not show up for the end user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616347#comment-15616347
 ] 

Sergio Peña commented on HIVE-14476:


I will delete the HiveQA to retrigger the testing. There was an issue on the 
ptest server where the 'master' classes were left in the source code directory, 
and ptest was attempting to run them on branch-1 even though they never existed 
there.

> Fix logging issue for branch-1
> --
>
> Key: HIVE-14476
> URL: https://issues.apache.org/jira/browse/HIVE-14476
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14476.1-branch-1.2.patch
>
>
> This issue is in the branch-1 code that decides whether a log entry is an 
> operational log (operational logs are visible to the client). The problem is 
> that the code checks the logging mode at the beginning of the decide() 
> method, while the logging mode is updated after that check. As a result, an 
> operational log can be filtered out if it is the very first log checked by 
> this method, so that particular log does not show up for the end user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616347#comment-15616347
 ] 

Sergio Peña edited comment on HIVE-14476 at 10/28/16 7:47 PM:
--

I will retrigger the testing. There was an issue on the ptest server where the 
'master' classes were left in the source code directory, and ptest was 
attempting to run them on branch-1 even though they never existed there.


was (Author: spena):
I will delete the HiveQA to retrigger the testing. There was an issue on the 
ptest server where the 'master' classes were left in the source code directory, 
and ptest was attempting to run them on branch-1 even though they never existed 
there.

> Fix logging issue for branch-1
> --
>
> Key: HIVE-14476
> URL: https://issues.apache.org/jira/browse/HIVE-14476
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14476.1-branch-1.2.patch
>
>
> This issue is in the branch-1 code that decides whether a log entry is an 
> operational log (operational logs are visible to the client). The problem is 
> that the code checks the logging mode at the beginning of the decide() 
> method, while the logging mode is updated after that check. As a result, an 
> operational log can be filtered out if it is the very first log checked by 
> this method, so that particular log does not show up for the end user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15093) For S3-to-S3 renames, files should be moved individually rather than at a directory level

2016-10-28 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616344#comment-15616344
 ] 

Sahil Takiar commented on HIVE-15093:
-

[~poeppt], [~spena]

I agree that ideally this should be fixed in s3a. This patch is more of a 
stop-gap until HADOOP-13600 lands; however, that could be a while. I don't know 
when the s3a fix will land: the JIRA currently has a Target Version of 2.9.0, 
and I don't know when that will be released.

I think it's still worth adding this to Hive even if it will be reverted once 
Hadoop 2.9.0 is released (whenever that is). We have seen this portion of the 
code become a bottleneck for Hive queries on S3, because all data on S3 needs 
to be renamed sequentially, by a single process.

I think the burden is still on the blobstore connectors to implement efficient 
renames of directories; we can make this particular patch s3a-specific.

I like the idea of moving the logic into the BlobStorageUtils class so that 
other components can re-use this optimization. That should also make it much 
easier to remove in the future.

> For S3-to-S3 renames, files should be moved individually rather than at a 
> directory level
> -
>
> Key: HIVE-15093
> URL: https://issues.apache.org/jira/browse/HIVE-15093
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-15093.1.patch
>
>
> Hive's MoveTask uses the Hive.moveFile method to move data within a 
> distributed filesystem as well as blobstore filesystems.
> If the move is done within the same filesystem:
> 1: If the source path is a subdirectory of the destination path, files will 
> be moved one by one using a threadpool of workers
> 2: If the source path is not a subdirectory of the destination path, a single 
> rename operation is used to move the entire directory
> The second option may not work well on blobstores such as S3. Renames are not 
> metadata operations and require copying all the data. Client connectors to 
> blobstores may not efficiently rename directories. Worst case, the connector 
> will copy each file one by one, sequentially rather than using a threadpool 
> of workers to copy the data (e.g. HADOOP-13600).
> Hive already has code to rename files using a threadpool of workers, but this 
> only occurs in case number 1.
> This JIRA aims to modify the code so that case 1 is triggered when copying 
> within a blobstore. The focus is on copies within a blobstore because 
> needToCopy will return true if the src and target filesystems are different, 
> in which case a different code path is triggered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14476) Fix logging issue for branch-1

2016-10-28 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616332#comment-15616332
 ] 

Thejas M Nair commented on HIVE-14476:
--

Unfortunately, there are too many failures in branch-1. It's hard to tell 
whether any of them are related.


> Fix logging issue for branch-1
> --
>
> Key: HIVE-14476
> URL: https://issues.apache.org/jira/browse/HIVE-14476
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14476.1-branch-1.2.patch
>
>
> This issue is in the branch-1 code that decides whether a log entry is an 
> operational log (operational logs are visible to the client). The problem is 
> that the code checks the logging mode at the beginning of the decide() 
> method, while the logging mode is updated after that check. As a result, an 
> operational log can be filtered out if it is the very first log checked by 
> this method, so that particular log does not show up for the end user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15051) Test framework integration with findbugs, rat checks etc.

2016-10-28 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616325#comment-15616325
 ] 

Thejas M Nair commented on HIVE-15051:
--

[~pvary]
Thanks for looking into this. This will be very valuable.
Hadoop builds have had this for a while.


> Test framework integration with findbugs, rat checks etc.
> -
>
> Key: HIVE-15051
> URL: https://issues.apache.org/jira/browse/HIVE-15051
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Peter Vary
>Assignee: Peter Vary
>
> Find a way to integrate code analysis tools such as findbugs and rat checks 
> into the PreCommit tests, removing from reviewers the burden of checking code 
> style and other things that can be verified automatically. 
> It might be worth taking a look at Yetus, but keep in mind that Hive has its 
> own parallel test framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15060) Remove the autoCommit warning from beeline

2016-10-28 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616322#comment-15616322
 ] 

Tao Li commented on HIVE-15060:
---

[~thejas] Good point. Will do.

> Remove the autoCommit warning from beeline
> --
>
> Key: HIVE-15060
> URL: https://issues.apache.org/jira/browse/HIVE-15060
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-15060.1.patch
>
>
> WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not 
> support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://ctr-e89-1466633100028-0275-01
> By default, this beeline setting is false, while Hive only supports 
> autoCommit=true for now. So this warning does not make sense and should be 
> removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14992) Relocate several common libraries in hive jdbc uber jar

2016-10-28 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616317#comment-15616317
 ] 

Tao Li commented on HIVE-14992:
---

We can revisit if these artifacts are really needed in a separate JIRA: 
https://issues.apache.org/jira/browse/HIVE-15080

Regarding the relocation, I am actually not concerned about extra build time 
from relocation (I guess that should not cause any obvious difference). My 
hunch is that we can relocate the artifacts lazily, and I think jars such as 
those from "com.lmax.disruptor" are unlikely to be used alongside the JDBC 
driver. But I am also fine with relocating all of them in one shot.

> Relocate several common libraries in hive jdbc uber jar
> ---
>
> Key: HIVE-14992
> URL: https://issues.apache.org/jira/browse/HIVE-14992
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14992.1.patch, HIVE-14992.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15060) Remove the autoCommit warning from beeline

2016-10-28 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616315#comment-15616315
 ] 

Thejas M Nair commented on HIVE-15060:
--

[~taoli-hwx] Can you add a test as well? See 
TestBeelineArgParsing.testBeelineOpts for an example.


> Remove the autoCommit warning from beeline
> --
>
> Key: HIVE-15060
> URL: https://issues.apache.org/jira/browse/HIVE-15060
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-15060.1.patch
>
>
> WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not 
> support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://ctr-e89-1466633100028-0275-01
> By default, this beeline setting is false, while Hive only supports 
> autoCommit=true for now. So this warning does not make sense and should be 
> removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14992) Relocate several common libraries in hive jdbc uber jar

2016-10-28 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616286#comment-15616286
 ] 

Thejas M Nair commented on HIVE-14992:
--

bq. Regarding the below ones, I am not sure how likely they will be used, so I 
would defer relocating those for now.
Do you know if we need them in the jar? (A best guess based on their 
functionality would be OK.)
What is the cost of relocating them? Are you concerned about additional build 
time? (I don't have a good sense of how much time this adds.)


> Relocate several common libraries in hive jdbc uber jar
> ---
>
> Key: HIVE-14992
> URL: https://issues.apache.org/jira/browse/HIVE-14992
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14992.1.patch, HIVE-14992.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15093) For S3-to-S3 renames, files should be moved individually rather than at a directory level

2016-10-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616281#comment-15616281
 ] 

Sergio Peña commented on HIVE-15093:


Agree with [~poeppt]. We should have some implementation of S3-to-S3 renames 
in the BlobStorageUtils class. There might be other cases in the Hive code 
where renames are handled serially, and we wouldn't want to repeat code when 
those cases are found.

Also, if these serial copies happen at the rename() level, then doing this 
work on the Hadoop side would probably benefit other components as well.

> For S3-to-S3 renames, files should be moved individually rather than at a 
> directory level
> -
>
> Key: HIVE-15093
> URL: https://issues.apache.org/jira/browse/HIVE-15093
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-15093.1.patch
>
>
> Hive's MoveTask uses the Hive.moveFile method to move data within a 
> distributed filesystem as well as blobstore filesystems.
> If the move is done within the same filesystem:
> 1: If the source path is a subdirectory of the destination path, files will 
> be moved one by one using a threadpool of workers
> 2: If the source path is not a subdirectory of the destination path, a single 
> rename operation is used to move the entire directory
> The second option may not work well on blobstores such as S3. Renames are not 
> metadata operations and require copying all the data. Client connectors to 
> blobstores may not efficiently rename directories. Worst case, the connector 
> will copy each file one by one, sequentially rather than using a threadpool 
> of workers to copy the data (e.g. HADOOP-13600).
> Hive already has code to rename files using a threadpool of workers, but this 
> only occurs in case number 1.
> This JIRA aims to modify the code so that case 1 is triggered when copying 
> within a blobstore. The focus is on copies within a blobstore because 
> needToCopy will return true if the src and target filesystems are different, 
> in which case a different code path is triggered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14979) Removing stale Zookeeper locks at HiveServer2 initialization

2016-10-28 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616277#comment-15616277
 ] 

Thejas M Nair commented on HIVE-14979:
--

The current approach of cleanup on restart relies on the restart happening on 
the same node. In cloud environments, there are more frequent instances of 
nodes going down. For on-prem instances, a hardware failure could leave a 
node/IP unavailable for some time, and a new HS2 instance might get started on 
a different node with a different IP address. 
Also, the current approach doesn't handle the case of multiple HS2 instances 
running on the same host.

I think going with [persistent 
ephemeral|http://curator.apache.org/curator-recipes/persistent-ephemeral-node.html]
 nodes is a better approach. That approach is not as resilient as I would 
wish, because the very fact that this curator recipe exists shows that there 
is some flakiness around ephemeral nodes staying up when they should. So I 
think we should still keep the session timeout on the order of minutes.

Regarding the session timeout - 
it looks like the original setting was 10 minutes, and HIVE-9119 changed it to 
20 minutes. 
For zookeeper service discovery, it is not a major issue if the entry in 
zookeeper stays around longer. A larger timeout provides better resilience 
against temporary GC or network issues, so 10 minutes might still be OK for 
this purpose.

For the locks, however, we want to wait as little as possible before cleanup, 
so that after an improper shutdown the entries are removed sooner. I think we 
would still want it to be a couple of minutes for the sake of resiliency. 
Since the requirements are different, we could create a separate config for 
the lock zk session timeout.



> Removing stale Zookeeper locks at HiveServer2 initialization
> 
>
> Key: HIVE-14979
> URL: https://issues.apache.org/jira/browse/HIVE-14979
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14979.3.patch, HIVE-14979.4.patch, 
> HIVE-14979.5.patch, HIVE-14979.patch
>
>
> HiveServer2 can use Zookeeper to store tokens indicating that particular 
> tables are locked, via the creation of persistent Zookeeper objects. 
> A problem occurs when a HiveServer2 instance creates a lock on a table and 
> then crashes ("Out of Memory", for example), so the locks are never released 
> in Zookeeper. Such a lock then remains until it is manually cleared by an 
> admin.
> There should be a way to remove stale locks at HiveServer2 initialization, 
> making the admin's life easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15062) create backward compat checking for metastore APIs

2016-10-28 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-15062:

Attachment: HIVE-15062.03.patch
HIVE-15062.03.nogen.patch

Renamed the value and added a comment to make it more explicit.

> create backward compat checking for metastore APIs
> --
>
> Key: HIVE-15062
> URL: https://issues.apache.org/jira/browse/HIVE-15062
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15062.01.nogen.patch, HIVE-15062.01.patch, 
> HIVE-15062.02.nogen.patch, HIVE-15062.02.patch, HIVE-15062.03.nogen.patch, 
> HIVE-15062.03.patch, HIVE-15062.nogen.patch, HIVE-15062.patch
>
>
> This is to add client capability checking to Hive metastore.
> This could have been used, for example, when introducing ACID tables - a 
> client trying to get_table on such a table without specifying that it is 
> aware of ACID tables would get an error by default.
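A hedged sketch of what such a capability check could look like; the enum and
method names below are invented for illustration and are not the actual
metastore API:

```java
import java.util.EnumSet;
import java.util.Set;

public class CapabilityCheck {
    // Hypothetical capability flag a client would declare on connection.
    enum ClientCapability { ACID_TABLES }

    // Reject access to an ACID table unless the client declared support.
    static void checkCanReadAcidTable(Set<ClientCapability> declared) {
        if (!declared.contains(ClientCapability.ACID_TABLES)) {
            throw new IllegalStateException(
                "client does not declare support for ACID tables");
        }
    }

    public static void main(String[] args) {
        // A client that declared the capability passes the check.
        checkCanReadAcidTable(EnumSet.of(ClientCapability.ACID_TABLES));
        // A client that did not declare it gets an error by default.
        try {
            checkCanReadAcidTable(EnumSet.noneOf(ClientCapability.class));
        } catch (IllegalStateException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```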



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15062) create backward compat checking for metastore APIs

2016-10-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616248#comment-15616248
 ] 

Sergey Shelukhin commented on HIVE-15062:
-

[~thejas] I tested it locally by writing to a file, removing the value, 
rebuilding, and reading the file back; the unknown value is simply converted 
to null. It may break in cases where the enum field is required, but it is OK 
for lists.


> create backward compat checking for metastore APIs
> --
>
> Key: HIVE-15062
> URL: https://issues.apache.org/jira/browse/HIVE-15062
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15062.01.nogen.patch, HIVE-15062.01.patch, 
> HIVE-15062.02.nogen.patch, HIVE-15062.02.patch, HIVE-15062.nogen.patch, 
> HIVE-15062.patch
>
>
> This is to add client capability checking to Hive metastore.
> This could have been used, for example, when introducing ACID tables - a 
> client trying to get_table on such a table without specifying that it is 
> aware of ACID tables would get an error by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15081) RetryingMetaStoreClient.getProxy(HiveConf, Boolean) doesn't match constructor of HiveMetaStoreClient

2016-10-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616193#comment-15616193
 ] 

Sergey Shelukhin commented on HIVE-15081:
-

+1

> RetryingMetaStoreClient.getProxy(HiveConf, Boolean) doesn't match constructor 
> of HiveMetaStoreClient
> 
>
> Key: HIVE-15081
> URL: https://issues.apache.org/jira/browse/HIVE-15081
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-15081.1.patch
>
>
> Calling RetryingMetaStoreClient.getProxy(HiveConf, Boolean) will result in 
> error
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1661)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:81)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:131)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:87)
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(org.apache.hadoop.hive.conf.HiveConf,
>  java.lang.Boolean)
> {noformat}
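The stack trace is the classic reflection pitfall: `Class.getDeclaredConstructor` matches parameter types exactly and does not unbox, so a lookup with `java.lang.Boolean` fails when the class only declares a constructor taking the primitive `boolean` (or lacks that constructor entirely). A minimal standalone illustration of the mechanism, using plain stand-in classes rather than the actual Hive ones (the real `HiveMetaStoreClient` constructor mismatch may differ in detail):

```java
public class ReflectionMismatch {
    // Stand-in for a class whose constructor takes a primitive boolean.
    static class Client {
        final boolean allowEmbedded;
        Client(boolean allowEmbedded) { this.allowEmbedded = allowEmbedded; }
    }

    public static void main(String[] args) throws Exception {
        // Exact match on the primitive type succeeds; newInstance will
        // happily unbox the Boolean argument at invocation time.
        Client ok = Client.class.getDeclaredConstructor(boolean.class)
                                .newInstance(true);
        System.out.println("primitive lookup ok: " + ok.allowEmbedded);

        // Looking up with the boxed Boolean.class fails: reflection does
        // not auto-unbox the *parameter types* of the lookup, so from its
        // point of view no such constructor exists.
        try {
            Client.class.getDeclaredConstructor(Boolean.class);
        } catch (NoSuchMethodException expected) {
            System.out.println("boxed lookup failed as expected");
        }
    }
}
```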





[jira] [Updated] (HIVE-15007) Hive 1.2.2 release planning

2016-10-28 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-15007:

Attachment: HIVE-15007-branch-1.2.patch

> Hive 1.2.2 release planning
> ---
>
> Key: HIVE-15007
> URL: https://issues.apache.org/jira/browse/HIVE-15007
> Project: Hive
>  Issue Type: Task
>Affects Versions: 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-15007-branch-1.2.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.2.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.2.patch, 
> HIVE-15007.branch-1.2.patch
>
>
> Discussed with [~spena] about triggering unit test runs for the 1.2.2 
> release; creating a patch that will trigger precommits looks like a good way.





[jira] [Updated] (HIVE-14883) Checks for Acid operation/bucket table write are in the wrong place

2016-10-28 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14883:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

this is part of HIVE-14943

> Checks for Acid operation/bucket table write are in the wrong place
> ---
>
> Key: HIVE-14883
> URL: https://issues.apache.org/jira/browse/HIVE-14883
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning, Transactions
>Affects Versions: 1.2.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 2.2.0
>
> Attachments: HIVE-14883.2.patch, HIVE-14883.3.patch, 
> HIVE-14883.4.patch, HIVE-14883.5.patch, HIVE-14883.patch
>
>
> The following code in SemanticAnalyzer.getMetaData(QB qb, ReadEntity parentInput) 
> {noformat}
>   // Disallow INSERT INTO on bucketized tables
>   boolean isAcid = AcidUtils.isAcidTable(tab);
>   boolean isTableWrittenTo = 
> qb.getParseInfo().isInsertIntoTable(tab.getDbName(), tab.getTableName());
>   if (isTableWrittenTo &&
>   tab.getNumBuckets() > 0 && !isAcid) {
> throw new SemanticException(ErrorMsg.INSERT_INTO_BUCKETIZED_TABLE.
> getMsg("Table: " + tabName));
>   }
>   // Disallow update and delete on non-acid tables
>   if ((updating() || deleting()) && !isAcid && isTableWrittenTo) {
> //isTableWrittenTo: delete from acidTbl where a in (select id from 
> nonAcidTable)
> //so only assert this if we are actually writing to this table
> // Whether we are using an acid compliant transaction manager has 
> already been caught in
> // UpdateDeleteSemanticAnalyzer, so if we are updating or deleting 
> and getting nonAcid
> // here, it means the table itself doesn't support it.
> throw new SemanticException(ErrorMsg.ACID_OP_ON_NONACID_TABLE, 
> tabName);
>   }
> {noformat}
> is done in the loop "for (String alias : tabAliases) {", which iterates over 
> tables being read.
> It should instead be done in the "for (String name : qbp.getClauseNamesForDest()) {" loop.
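The placement matters because the read-alias loop visits tables that are merely read; a write-only validation placed there can fire spuriously on a table referenced in a subquery. The intended shape can be sketched with simplified stand-in types (not the real SemanticAnalyzer classes):

```java
import java.util.List;

public class CheckPlacement {
    static class Table {
        final String name; final boolean acid; final int numBuckets;
        Table(String name, boolean acid, int numBuckets) {
            this.name = name; this.acid = acid; this.numBuckets = numBuckets;
        }
    }

    // Validate only tables that are actually written to: INSERT INTO a
    // bucketed table is allowed only when the table is ACID.
    static void validateWrites(List<Table> destinations) {
        for (Table t : destinations) {          // loop over write targets,
            if (t.numBuckets > 0 && !t.acid) {  // not over every table read
                throw new IllegalStateException(
                    "INSERT INTO bucketed non-ACID table: " + t.name);
            }
        }
    }

    public static void main(String[] args) {
        Table acidDest = new Table("acid_tbl", true, 4);
        // A bucketed non-ACID table that is only read (e.g. in a subquery)
        // is never passed to validateWrites, so it cannot trigger a
        // spurious error; only the genuine destination is checked.
        validateWrites(List.of(acidDest));
        System.out.println("write validation passed");
    }
}
```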





[jira] [Issue Comment Deleted] (HIVE-15087) integrate MM tables into ACID: replace "hivecommit" property with ACID property

2016-10-28 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-15087:
--
Comment: was deleted

(was: never mind.  The changes I was expecting to see were in another check-in, 
in HIVE-14878...)

> integrate MM tables into ACID: replace "hivecommit" property with ACID 
> property
> ---
>
> Key: HIVE-15087
> URL: https://issues.apache.org/jira/browse/HIVE-15087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Fix For: hive-14535
>
> Attachments: HIVE-15087.1.patch
>
>
> Previously declared DDL
> {code}
> create table t1 (key int, key2 int)  tblproperties("hivecommit"="true");
> {code}
> should be replaced with:
> {code}
> create table t1 (key int, key2 int)  tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> {code}
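A table carrying the new properties can be recognized by reading its tblproperties. A rough sketch of such a check follows; it is illustrative only — the real detection logic lives in AcidUtils and may differ:

```java
import java.util.Map;

public class MmTableCheck {
    // Returns true when the properties mark an insert-only (MM)
    // transactional table, i.e. the replacement for "hivecommit"="true".
    static boolean isInsertOnlyTable(Map<String, String> props) {
        boolean transactional =
            Boolean.parseBoolean(props.getOrDefault("transactional", "false"));
        String txnProps = props.getOrDefault("transactional_properties", "");
        return transactional && "insert_only".equalsIgnoreCase(txnProps);
    }

    public static void main(String[] args) {
        Map<String, String> mm = Map.of(
            "transactional", "true",
            "transactional_properties", "insert_only");
        Map<String, String> fullAcid = Map.of("transactional", "true");
        System.out.println(isInsertOnlyTable(mm));       // insert-only MM table
        System.out.println(isInsertOnlyTable(fullAcid)); // full ACID, not MM
    }
}
```

Note that the old "hivecommit" key is simply absent from this check, matching the replacement described above.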





[jira] [Commented] (HIVE-15087) integrate MM tables into ACID: replace "hivecommit" property with ACID property

2016-10-28 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616090#comment-15616090
 ] 

Eugene Koifman commented on HIVE-15087:
---

never mind.  The changes I was expecting to see were in another check-in, in 
HIVE-14878...

> integrate MM tables into ACID: replace "hivecommit" property with ACID 
> property
> ---
>
> Key: HIVE-15087
> URL: https://issues.apache.org/jira/browse/HIVE-15087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Fix For: hive-14535
>
> Attachments: HIVE-15087.1.patch
>
>
> Previously declared DDL
> {code}
> create table t1 (key int, key2 int)  tblproperties("hivecommit"="true");
> {code}
> should be replaced with:
> {code}
> create table t1 (key int, key2 int)  tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> {code}





[jira] [Commented] (HIVE-15087) integrate MM tables into ACID: replace "hivecommit" property with ACID property

2016-10-28 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616083#comment-15616083
 ] 

Wei Zheng commented on HIVE-15087:
--

No, it is. There was an earlier commit that added the new property, but at that 
time I didn't touch the existing "hivecommit"="true" table property that Sergey 
had created.

In this JIRA I removed all those "hivecommit" properties and did the 
replacement. This can be seen as part 2 of 2.

> integrate MM tables into ACID: replace "hivecommit" property with ACID 
> property
> ---
>
> Key: HIVE-15087
> URL: https://issues.apache.org/jira/browse/HIVE-15087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Fix For: hive-14535
>
> Attachments: HIVE-15087.1.patch
>
>
> Previously declared DDL
> {code}
> create table t1 (key int, key2 int)  tblproperties("hivecommit"="true");
> {code}
> should be replaced with:
> {code}
> create table t1 (key int, key2 int)  tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> {code}





[jira] [Commented] (HIVE-15087) integrate MM tables into ACID: replace "hivecommit" property with ACID property

2016-10-28 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616086#comment-15616086
 ] 

Wei Zheng commented on HIVE-15087:
--

Forgot the JIRA number: HIVE-14878

> integrate MM tables into ACID: replace "hivecommit" property with ACID 
> property
> ---
>
> Key: HIVE-15087
> URL: https://issues.apache.org/jira/browse/HIVE-15087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Fix For: hive-14535
>
> Attachments: HIVE-15087.1.patch
>
>
> Previously declared DDL
> {code}
> create table t1 (key int, key2 int)  tblproperties("hivecommit"="true");
> {code}
> should be replaced with:
> {code}
> create table t1 (key int, key2 int)  tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> {code}





[jira] [Commented] (HIVE-15087) integrate MM tables into ACID: replace "hivecommit" property with ACID property

2016-10-28 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616071#comment-15616071
 ] 

Eugene Koifman commented on HIVE-15087:
---

I see now.  The patch attached here is not what was committed...

> integrate MM tables into ACID: replace "hivecommit" property with ACID 
> property
> ---
>
> Key: HIVE-15087
> URL: https://issues.apache.org/jira/browse/HIVE-15087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Fix For: hive-14535
>
> Attachments: HIVE-15087.1.patch
>
>
> Previously declared DDL
> {code}
> create table t1 (key int, key2 int)  tblproperties("hivecommit"="true");
> {code}
> should be replaced with:
> {code}
> create table t1 (key int, key2 int)  tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> {code}





[jira] [Comment Edited] (HIVE-15087) integrate MM tables into ACID: replace "hivecommit" property with ACID property

2016-10-28 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616054#comment-15616054
 ] 

Wei Zheng edited comment on HIVE-15087 at 10/28/16 5:42 PM:


[~ekoifman] All the changes were committed to branch hive-14535
https://github.com/apache/hive/blob/hive-14535/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java

Please let me know if anything has to be changed.


was (Author: wzheng):
[~ekoifman] Are the changes were committed to branch hive-14535
https://github.com/apache/hive/blob/hive-14535/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java

Please let me know if anything has to be changed.

> integrate MM tables into ACID: replace "hivecommit" property with ACID 
> property
> ---
>
> Key: HIVE-15087
> URL: https://issues.apache.org/jira/browse/HIVE-15087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Fix For: hive-14535
>
> Attachments: HIVE-15087.1.patch
>
>
> Previously declared DDL
> {code}
> create table t1 (key int, key2 int)  tblproperties("hivecommit"="true");
> {code}
> should be replaced with:
> {code}
> create table t1 (key int, key2 int)  tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> {code}





[jira] [Commented] (HIVE-15087) integrate MM tables into ACID: replace "hivecommit" property with ACID property

2016-10-28 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616054#comment-15616054
 ] 

Wei Zheng commented on HIVE-15087:
--

[~ekoifman] All the changes were committed to branch hive-14535
https://github.com/apache/hive/blob/hive-14535/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java

Please let me know if anything has to be changed.

> integrate MM tables into ACID: replace "hivecommit" property with ACID 
> property
> ---
>
> Key: HIVE-15087
> URL: https://issues.apache.org/jira/browse/HIVE-15087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Fix For: hive-14535
>
> Attachments: HIVE-15087.1.patch
>
>
> Previously declared DDL
> {code}
> create table t1 (key int, key2 int)  tblproperties("hivecommit"="true");
> {code}
> should be replaced with:
> {code}
> create table t1 (key int, key2 int)  tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> {code}




