[jira] [Commented] (HIVE-12189) The list in pushdownPreds of ppd.ExprWalkerInfo should not be allowed to grow very large

2015-10-17 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962169#comment-14962169
 ] 

Yongzhi Chen commented on HIVE-12189:
-

The 4 failures are not related to this patch; their ages are more than 14, so they are pre-existing failures. 

> The list in pushdownPreds of ppd.ExprWalkerInfo should not be allowed to grow 
> very large
> 
>
> Key: HIVE-12189
> URL: https://issues.apache.org/jira/browse/HIVE-12189
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 1.1.0, 2.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Attachments: HIVE-12189.1.patch
>
>
> Some queries are very slow to compile; for example, the following query
> {noformat}
> select * from tt1 nf 
> join tt2 a1 on (nf.col1 = a1.col1 and nf.hdp_databaseid = a1.hdp_databaseid) 
> join tt3 a2 on(a2.col2 = a1.col2 and a2.col3 = nf.col3 and 
> a2.hdp_databaseid = nf.hdp_databaseid) 
> join tt4 a3 on  (a3.col4 = a2.col4 and a3.col3 = a2.col3) 
> join tt5 a4 on (a4.col4 = a2.col4 and a4.col5 = a2.col5 and a4.col3 = 
> a2.col3 and a4.hdp_databaseid = nf.hdp_databaseid) 
> join tt6 a5 on  (a5.col3 = a2.col3 and a5.col2 = a2.col2 and 
> a5.hdp_databaseid = nf.hdp_databaseid) 
> JOIN tt7 a6 ON (a2.col3 = a6.col3 and a2.col2 = a6.col2 and a6.hdp_databaseid 
> = nf.hdp_databaseid) 
> JOIN tt8 a7 ON (a2.col3 = a7.col3 and a2.col2 = a7.col2 and a7.hdp_databaseid 
> = nf.hdp_databaseid)
> where nf.hdp_databaseid = 102 limit 10;
> {noformat}
> takes around 120 seconds to compile in Hive 1.1 when
> hive.mapred.mode=strict;
> hive.optimize.ppd=true;
> and Hive is not in test mode.
> All the above tables have a single partition column, but they are all empty. 
> If the tables are not empty, the compile is reportedly so slow that it looks 
> like Hive is hanging. 
> In Hive 2.0 the compile is much faster (explain takes 6.6 seconds), but that 
> is still a lot of time. One of the problems slowing PPD down is that the list 
> in pushdownPreds can grow very large, which gives extractPushdownPreds bad 
> performance:
> {noformat}
> public static ExprWalkerInfo extractPushdownPreds(OpWalkerInfo opContext,
> Operator<? extends OperatorDesc> op, List<ExprNodeDesc> preds)
> {noformat}
> While running the query above, at the following breakpoint preds has a size 
> of 12051, and most of the entries in the list are: 
> GenericUDFOPEqual(Column[hdp_databaseid], Const int 102), 
> GenericUDFOPEqual(Column[hdp_databaseid], Const int 102), 
> GenericUDFOPEqual(Column[hdp_databaseid], Const int 102), 
> GenericUDFOPEqual(Column[hdp_databaseid], Const int 102), 
> The following code in extractPushdownPreds clones all the nodes in preds and 
> does the walk. Hive 2.0 is faster because HIVE-11652 (and other JIRAs) makes 
> startWalking much faster, but we still clone thousands of nodes with the same 
> expression. Should we store so many identical predicates in the list, or is 
> just one good enough?  
> {noformat}
> List<ExprNodeDesc> startNodes = new ArrayList<ExprNodeDesc>();
> List<ExprNodeDesc> clonedPreds = new ArrayList<ExprNodeDesc>();
> for (ExprNodeDesc node : preds) {
>   ExprNodeDesc clone = node.clone();
>   clonedPreds.add(clone);
>   exprContext.getNewToOldExprMap().put(clone, node);
> }
> startNodes.addAll(clonedPreds);
> egw.startWalking(startNodes, null);
> {noformat}
> Should we change the methods in java/org/apache/hadoop/hive/ql/ppd/ExprWalkerInfo.java,
> public void addFinalCandidate(String alias, ExprNodeDesc expr) 
> and
> public void addPushDowns(String alias, List<ExprNodeDesc> pushDowns), 
> so that they only add an expr which is not already in the pushdown list for an alias?
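As a rough illustration of the de-duplication the description asks about, here is a minimal sketch in the spirit of addPushDowns. The field and helper names are hypothetical, and the structural comparison assumes ExprNodeDesc.isSame; the real ExprWalkerInfo internals may differ.

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;

// Sketch only: an alias -> pushdown-predicate map with the duplicate check
// proposed above. Field and helper names are illustrative, not the actual
// ExprWalkerInfo members.
public class PushdownPredsSketch {
  private final Map<String, List<ExprNodeDesc>> pushdownPreds =
      new HashMap<String, List<ExprNodeDesc>>();

  public void addPushDowns(String alias, List<ExprNodeDesc> pushDowns) {
    List<ExprNodeDesc> predList = pushdownPreds.get(alias);
    if (predList == null) {
      predList = new ArrayList<ExprNodeDesc>();
      pushdownPreds.put(alias, predList);
    }
    for (ExprNodeDesc expr : pushDowns) {
      // Only add a predicate when no structurally equal expression is present,
      // so thousands of identical GenericUDFOPEqual entries collapse to one.
      if (!containsEqualExpr(predList, expr)) {
        predList.add(expr);
      }
    }
  }

  private boolean containsEqualExpr(List<ExprNodeDesc> preds, ExprNodeDesc expr) {
    for (ExprNodeDesc p : preds) {
      if (p.isSame(expr)) {   // assumes ExprNodeDesc.isSame for structural equality
        return true;
      }
    }
    return false;
  }
}
{code}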



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10807) Invalidate basic stats for insert queries if autogather=false

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962171#comment-14962171
 ] 

Hive QA commented on HIVE-10807:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767181/HIVE-10807.6.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 9697 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_dynamic
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_join_unencrypted_tbl
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_insert_into1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket_map_join_1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket_map_join_2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_insert_into1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_stats3
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5699/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5699/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5699/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12767181 - PreCommit-HIVE-TRUNK-Build

> Invalidate basic stats for insert queries if autogather=false
> -
>
> Key: HIVE-10807
> URL: https://issues.apache.org/jira/browse/HIVE-10807
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 1.2.0
>Reporter: Gopal V
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-10807.2.patch, HIVE-10807.3.patch, 
> HIVE-10807.4.patch, HIVE-10807.5.patch, HIVE-10807.6.patch, HIVE-10807.patch
>
>
> Setting stats.autogather=false leads to incorrect basic stats in the case of 
> insert statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11777) implement an option to have single ETL strategy for multiple directories

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962108#comment-14962108
 ] 

Hive QA commented on HIVE-11777:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767119/HIVE-11777.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9697 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5695/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5695/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5695/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12767119 - PreCommit-HIVE-TRUNK-Build

> implement an option to have single ETL strategy for multiple directories
> 
>
> Key: HIVE-11777
> URL: https://issues.apache.org/jira/browse/HIVE-11777
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11777.01.patch, HIVE-11777.02.patch, 
> HIVE-11777.03.patch, HIVE-11777.patch
>
>
> In the case of metastore footer PPD, we don't want to make a PPD call, with 
> all the attendant SARG, MS, and HBase overhead, for each directory. If we wait 
> for some time (10ms? some fraction of the inputs?) we can do one call without 
> losing overall perf. 
> For now, make it time based.
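To make the time-based idea concrete, here is a minimal sketch of batching directories for a single PPD call. The names (evaluatePpd, batchWindowMs) are hypothetical; this is not the actual patch.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch only: accumulate directories and issue one footer-PPD call per batch
// instead of one call per directory. Names are hypothetical, not Hive's API.
public class TimeBasedPpdBatcher {
  private final long batchWindowMs;                     // e.g. 10 ms
  private final Consumer<List<String>> evaluatePpd;     // one call for the whole batch
  private final List<String> pending = new ArrayList<String>();
  private long firstArrivalMs = -1;

  public TimeBasedPpdBatcher(long batchWindowMs, Consumer<List<String>> evaluatePpd) {
    this.batchWindowMs = batchWindowMs;
    this.evaluatePpd = evaluatePpd;
  }

  public void add(String directory) {
    if (pending.isEmpty()) {
      firstArrivalMs = System.currentTimeMillis();
    }
    pending.add(directory);
    // Flush once the oldest pending directory has waited longer than the window.
    if (System.currentTimeMillis() - firstArrivalMs >= batchWindowMs) {
      flush();
    }
  }

  public void flush() {
    if (!pending.isEmpty()) {
      evaluatePpd.accept(new ArrayList<String>(pending));
      pending.clear();
    }
  }
}
{code}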



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11981) ORC Schema Evolution Issues (Vectorized, ACID, and Non-Vectorized)

2015-10-17 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-11981:

Attachment: HIVE-11981.07.patch

Didn't build -- rebase and try again.

> ORC Schema Evolution Issues (Vectorized, ACID, and Non-Vectorized)
> --
>
> Key: HIVE-11981
> URL: https://issues.apache.org/jira/browse/HIVE-11981
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-11981.01.patch, HIVE-11981.02.patch, 
> HIVE-11981.03.patch, HIVE-11981.05.patch, HIVE-11981.06.patch, 
> HIVE-11981.07.patch, ORC Schema Evolution Issues.docx
>
>
> High priority issues with schema evolution for the ORC file format.
> Schema evolution here is limited to adding new columns and a few cases of 
> column type-widening (e.g. int to bigint).
> Renaming columns, deleting columns, moving columns, and other schema evolution 
> cases were not pursued due to lack of importance and lack of time.  Also, it 
> appears much more sophisticated metadata would be needed to support them.
> The biggest issues for users have been adding new columns for ACID tables 
> (HIVE-11421 Support Schema evolution for ACID tables) and vectorization 
> (HIVE-10598 Vectorization borks when column is added to table).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12200) INSERT INTO table using a select statement w/o a FROM clause fails

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962135#comment-14962135
 ] 

Hive QA commented on HIVE-12200:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767128/HIVE-12200.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9697 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5696/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5696/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5696/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12767128 - PreCommit-HIVE-TRUNK-Build

> INSERT INTO table using a select statement w/o a FROM clause fails
> --
>
> Key: HIVE-12200
> URL: https://issues.apache.org/jira/browse/HIVE-12200
> Project: Hive
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-12200.1.patch
>
>
> Here is the stack trace:
> {noformat}
> FailedPredicateException(regularBody,{$s.tree.getChild(1) !=null}?)
>   at 
> org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:41047)
>   at 
> org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:40222)
>   at 
> org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:40092)
>   at 
> org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1656)
>   at 
> org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1140)
>   at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202)
>   at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:407)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:312)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1162)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1215)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1091)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1081)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:225)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:177)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:388)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:323)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:731)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:704)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:633)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> FAILED: ParseException line 1:29 Failed to recognize predicate ''. 
> Failed rule: 'regularBody' in statement
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12061) add file type support to file metadata by expr call

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962136#comment-14962136
 ] 

Hive QA commented on HIVE-12061:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767137/HIVE-12061.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5697/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5697/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5697/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseConnection.java:[68,13]
 error: cannot find symbol
[ERROR] interface HBaseConnection
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseConnection.java:[84,2]
 error: cannot find symbol
[ERROR] interface HBaseConnection
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseConnection.java:[94,2]
 error: cannot find symbol
[ERROR] interface HBaseConnection
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[29,30]
 error: package org.apache.hadoop.hbase does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[30,30]
 error: package org.apache.hadoop.hbase does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[31,30]
 error: package org.apache.hadoop.hbase does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[32,37]
 error: package org.apache.hadoop.hbase.client does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[33,37]
 error: package org.apache.hadoop.hbase.client does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[34,37]
 error: package org.apache.hadoop.hbase.client does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[35,37]
 error: package org.apache.hadoop.hbase.client does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[36,37]
 error: package org.apache.hadoop.hbase.client does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[37,37]
 error: package org.apache.hadoop.hbase.client does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[38,37]
 error: package org.apache.hadoop.hbase.client does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[39,37]
 error: package org.apache.hadoop.hbase.client does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[40,37]
 error: package org.apache.hadoop.hbase.filter does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[41,37]
 error: package org.apache.hadoop.hbase.filter does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[42,37]
 error: package org.apache.hadoop.hbase.filter does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[43,37]
 error: package org.apache.hadoop.hbase.filter does not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java:[44,62]
 error: package org.apache.hadoop.hbase.protobuf.generated.ClientProtos does 
not exist
[ERROR] 
/data/hive-ptest/working/apache-github-source-source/metastore/src/java/org/apache/hadoop/hive/metastore/hbase/PartitionKeyComparator.java:[30,37]
 

[jira] [Commented] (HIVE-11591) upgrade thrift to 0.9.3 and change generation to use undated annotations

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962155#comment-14962155
 ] 

Hive QA commented on HIVE-11591:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767147/HIVE-11591.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9697 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5698/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5698/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5698/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12767147 - PreCommit-HIVE-TRUNK-Build

> upgrade thrift to 0.9.3 and change generation to use undated annotations
> 
>
> Key: HIVE-11591
> URL: https://issues.apache.org/jira/browse/HIVE-11591
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11591.WIP.patch, HIVE-11591.nogen.patch, 
> HIVE-11591.patch
>
>
> Thrift has added class annotations to generated classes; these contain the 
> generation date. Because of this, all the generated Java Thrift files change on 
> every re-gen, even if you only make a small change that should not affect a 
> bazillion files. We should use undated annotations to avoid this problem.
> This depends on upgrading to Thrift 0.9.3, -which doesn't exist yet-.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11981) ORC Schema Evolution Issues (Vectorized, ACID, and Non-Vectorized)

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962192#comment-14962192
 ] 

Hive QA commented on HIVE-11981:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767233/HIVE-11981.07.patch

{color:green}SUCCESS:{color} +1 due to 15 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 9718 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_ppr_all
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactAfterAbort
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactWhileStreaming
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactAfterAbort
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactWhileStreaming
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.testStatsAfterCompactionPartTbl
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5700/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5700/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5700/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12767233 - PreCommit-HIVE-TRUNK-Build

> ORC Schema Evolution Issues (Vectorized, ACID, and Non-Vectorized)
> --
>
> Key: HIVE-11981
> URL: https://issues.apache.org/jira/browse/HIVE-11981
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-11981.01.patch, HIVE-11981.02.patch, 
> HIVE-11981.03.patch, HIVE-11981.05.patch, HIVE-11981.06.patch, 
> HIVE-11981.07.patch, ORC Schema Evolution Issues.docx
>
>
> High priority issues with schema evolution for the ORC file format.
> Schema evolution here is limited to adding new columns and a few cases of 
> column type-widening (e.g. int to bigint).
> Renaming columns, deleting columns, moving columns, and other schema evolution 
> cases were not pursued due to lack of importance and lack of time.  Also, it 
> appears much more sophisticated metadata would be needed to support them.
> The biggest issues for users have been adding new columns for ACID tables 
> (HIVE-11421 Support Schema evolution for ACID tables) and vectorization 
> (HIVE-10598 Vectorization borks when column is added to table).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11721) non-ascii characters shows improper with "insert into"

2015-10-17 Thread Aleksei S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksei S updated HIVE-11721:
-
Attachment: HIVE-11721.patch

I debugged the issue and found that the contents of a virtual table are 
written as bytes while keeping only the lower 8 bits of each character, which 
doesn't work with non-ASCII characters.
The fix is to create a Text object (which is the storage format used for 
virtual tables) and encode the values with it.
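To illustrate the difference, here is a standalone sketch (not the actual Hive code path for VALUES virtual tables): keeping only the lower 8 bits of each char corrupts anything outside Latin-1, while going through org.apache.hadoop.io.Text yields proper UTF-8 bytes.

{code}
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.io.Text;

// Sketch only; not the actual Hive code path for VALUES virtual tables.
public class VirtualTableEncodingSketch {

  // Broken approach: one byte per char, dropping everything above the low 8 bits.
  static byte[] lower8Bits(String value) {
    byte[] out = new byte[value.length()];
    for (int i = 0; i < value.length(); i++) {
      out[i] = (byte) value.charAt(i);   // high bits of the UTF-16 code unit are lost
    }
    return out;
  }

  // Fix in spirit: let Text encode the string as UTF-8.
  static byte[] utf8ViaText(String value) {
    return new Text(value).copyBytes();
  }

  public static void main(String[] args) {
    String s = "Garçu 谢谢 Kôkaku";
    // Garbled: the multi-byte characters cannot be reconstructed from single bytes.
    System.out.println(new String(lower8Bits(s), StandardCharsets.UTF_8));
    // Round-trips cleanly.
    System.out.println(new String(utf8ViaText(s), StandardCharsets.UTF_8));
  }
}
{code}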

> non-ascii characters shows improper with "insert into"
> --
>
> Key: HIVE-11721
> URL: https://issues.apache.org/jira/browse/HIVE-11721
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 1.1.0, 1.2.1, 2.0.0
>Reporter: Jun Yin
> Attachments: HIVE-11721.patch
>
>
> Hive: 1.1.0
> hive> create table char_255_noascii as select cast("Garçu 谢谢 Kôkaku 
> ありがとうございますkidôtai한국어" as char(255));
> hive> select * from char_255_noascii;
> OK
> Garçu 谢谢 Kôkaku ありがとうございますkidôtai한국어
> It displays correctly, and it also works fine with "LOAD DATA", 
> but when I try another way of inserting data, as below:
> hive> create table nonascii(t1 char(255));
> OK
> Time taken: 0.125 seconds
> hive> insert into nonascii values("Garçu 谢谢 Kôkaku ありがとうございますkidôtai한국어");
> hive> select * from nonascii;
> OK
> Gar�u "" K�kaku B�LhFTVD~Ykid�tai\m� 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-11721) non-ascii characters shows improper with "insert into"

2015-10-17 Thread Aleksei S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksei S reassigned HIVE-11721:


Assignee: Aleksei S

> non-ascii characters shows improper with "insert into"
> --
>
> Key: HIVE-11721
> URL: https://issues.apache.org/jira/browse/HIVE-11721
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 1.1.0, 1.2.1, 2.0.0
>Reporter: Jun Yin
>Assignee: Aleksei S
> Attachments: HIVE-11721.patch
>
>
> Hive: 1.1.0
> hive> create table char_255_noascii as select cast("Garçu 谢谢 Kôkaku 
> ありがとうございますkidôtai한국어" as char(255));
> hive> select * from char_255_noascii;
> OK
> Garçu 谢谢 Kôkaku ありがとうございますkidôtai한국어
> It displays correctly, and it also works fine with "LOAD DATA", 
> but when I try another way of inserting data, as below:
> hive> create table nonascii(t1 char(255));
> OK
> Time taken: 0.125 seconds
> hive> insert into nonascii values("Garçu 谢谢 Kôkaku ありがとうございますkidôtai한국어");
> hive> select * from nonascii;
> OK
> Gar�u "" K�kaku B�LhFTVD~Ykid�tai\m� 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11785) Support escaping carriage return and new line for LazySimpleSerDe

2015-10-17 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961768#comment-14961768
 ] 

Lefty Leverenz commented on HIVE-11785:
---

Does this need to be documented in the wiki?  (If so, please add a TODOC2.0 
label.)

> Support escaping carriage return and new line for LazySimpleSerDe
> -
>
> Key: HIVE-11785
> URL: https://issues.apache.org/jira/browse/HIVE-11785
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Fix For: 2.0.0
>
> Attachments: HIVE-11785.2.patch, HIVE-11785.3.patch, 
> HIVE-11785.patch, test.parquet
>
>
> Create the table and perform the queries as follows. You will see different 
> results when the setting changes. 
> The expected result should be:
> {noformat}
> 1 newline
> here
> 2 carriage return
> 3 both
> here
> {noformat}
> {noformat}
> hive> create table repo (lvalue int, charstring string) stored as parquet;
> OK
> Time taken: 0.34 seconds
> hive> load data inpath '/tmp/repo/test.parquet' overwrite into table repo;
> Loading data to table default.repo
> chgrp: changing ownership of 
> 'hdfs://nameservice1/user/hive/warehouse/repo/test.parquet': User does not 
> belong to hive
> Table default.repo stats: [numFiles=1, numRows=0, totalSize=610, 
> rawDataSize=0]
> OK
> Time taken: 0.732 seconds
> hive> set hive.fetch.task.conversion=more;
> hive> select * from repo;
> OK
> 1 newline
> here
> here  carriage return
> 3 both
> here
> Time taken: 0.253 seconds, Fetched: 3 row(s)
> hive> set hive.fetch.task.conversion=none;
> hive> select * from repo;
> Query ID = root_20150909113535_e081db8b-ccd9-4c44-aad9-d990ffb8edf3
> Total jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_1441752031022_0006, Tracking URL = 
> http://host-10-17-81-63.coe.cloudera.com:8088/proxy/application_1441752031022_0006/
> Kill Command = 
> /opt/cloudera/parcels/CDH-5.4.5-1.cdh5.4.5.p0.7/lib/hadoop/bin/hadoop job  
> -kill job_1441752031022_0006
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
> 2015-09-09 11:35:54,127 Stage-1 map = 0%,  reduce = 0%
> 2015-09-09 11:36:04,664 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.98 
> sec
> MapReduce Total cumulative CPU time: 2 seconds 980 msec
> Ended Job = job_1441752031022_0006
> MapReduce Jobs Launched:
> Stage-Stage-1: Map: 1   Cumulative CPU: 2.98 sec   HDFS Read: 4251 HDFS 
> Write: 51 SUCCESS
> Total MapReduce CPU Time Spent: 2 seconds 980 msec
> OK
> 1 newline
> NULL  NULL
> 2 carriage return
> NULL  NULL
> 3 both
> NULL  NULL
> Time taken: 25.131 seconds, Fetched: 6 row(s)
> hive>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11735) Different results when multiple if() functions are used

2015-10-17 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962009#comment-14962009
 ] 

Ashutosh Chauhan commented on HIVE-11735:
-

I think the problem here stems from 
{code}
aggregations.put(expressionTree.toStringTree().toLowerCase(), expressionTree);
{code}

I think for your particular query, removing {{toLowerCase()}} would solve 
your problem. Do you really need the other changes for column aliases and such 
in the RR?

The intent of this map is to detect duplicate functions in aggregations, so that 
we are not computing them twice. However, it blindly calls {{toLowerCase()}} on 
the full expression tree, ignoring the fact that there might be constant 
literals in there. There are two possible solutions here: 

* Eliminate this logic altogether from this phase. Don't bother about 
duplicates in phase 1 analysis. Instead, write a rule on either the Calcite 
operator tree or the Hive operator tree which walks the expressions, detects 
duplicates, and fixes up the operator tree to refer to a single expression tree.
* Write a utility function which takes an expression tree as an argument and 
returns a lower-case version of its string tree, while leaving constant string 
literals in their original case. Then use this string representation as the key 
in that map.

IMHO, Option 1 is the cleaner approach. However, it might be a big change 
touching various pieces of planning.
Option 2 is a much more local and contained change, but kinda inelegant.
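As a rough sketch of option 2 (a hypothetical helper, assuming single-quoted string literals in the AST's string form; the real implementation would need to handle escapes and other details):

{code}
// Sketch of option 2: lower-case an expression's string form while leaving
// single-quoted string literals untouched. Hypothetical helper, not Hive code;
// escape sequences inside literals are ignored for brevity.
public static String toLowerCaseOutsideLiterals(String exprString) {
  StringBuilder sb = new StringBuilder(exprString.length());
  boolean inLiteral = false;
  for (int i = 0; i < exprString.length(); i++) {
    char c = exprString.charAt(i);
    if (c == '\'') {
      inLiteral = !inLiteral;                 // toggle at literal boundaries
      sb.append(c);
    } else {
      sb.append(inLiteral ? c : Character.toLowerCase(c));
    }
  }
  return sb.toString();
}
{code}

The map key would then be something like toLowerCaseOutsideLiterals(expressionTree.toStringTree()) instead of expressionTree.toStringTree().toLowerCase().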

cc: [~jpullokkaran] if he has other ideas. 

> Different results when multiple if() functions are used 
> 
>
> Key: HIVE-11735
> URL: https://issues.apache.org/jira/browse/HIVE-11735
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.0.0, 1.1.1, 1.2.1
>Reporter: Chetna Chaudhari
>Assignee: Chetna Chaudhari
> Attachments: HIVE-11735.patch
>
>
> The Hive if() UDF returns different results when string equality is used as 
> the condition and only the case of the literal changes. 
> Observation:
>1) if( name = 'chetna' , 3, 4) and if( name = 'Chetna', 3, 4) are both 
> treated as equal.
>2) The rightmost UDF's result is pushed to the predicates on the left side, 
> leading to the same result for both UDFs.
> How to reproduce the issue:
> 1) CREATE TABLE `sample`(
>   `name` string)
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> TBLPROPERTIES (
>   'transient_lastDdlTime'='1425075745');
> 2) insert into table sample values ('chetna');
> 3) select min(if(name = 'chetna', 4, 3)) , min(if(name='Chetna', 4, 3))  from 
> sample; 
> This will give the result: 
> 3  3
> Expected result:
> 4  3
> 4) select min(if(name = 'Chetna', 4, 3)) , min(if(name='chetna', 4, 3))  from 
> sample; 
> This will give the result: 
> 4  4
> Expected result:
> 3  4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11895) CBO: Calcite Operator To Hive Operator (Calcite Return Path): fix udaf_percentile_approx_23.q

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961828#comment-14961828
 ] 

Hive QA commented on HIVE-11895:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766920/HIVE-11895.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 9701 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5687/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5687/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5687/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766920 - PreCommit-HIVE-TRUNK-Build

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): fix 
> udaf_percentile_approx_23.q
> -
>
> Key: HIVE-11895
> URL: https://issues.apache.org/jira/browse/HIVE-11895
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-11895.01.patch, HIVE-11895.02.patch
>
>
> Due to a type conversion problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12026) Add test case to check permissions when truncating partition

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961797#comment-14961797
 ] 

Hive QA commented on HIVE-12026:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766883/HIVE-12026.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9687 tests executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-mapjoin_decimal.q-transform_ppr2.q-vector_groupby_reduce.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_stats_counter_partitioned
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5685/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5685/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5685/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766883 - PreCommit-HIVE-TRUNK-Build

> Add test case to check permissions when truncating partition
> 
>
> Key: HIVE-12026
> URL: https://issues.apache.org/jira/browse/HIVE-12026
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-12026.1.patch, HIVE-12026.2.patch
>
>
> Add to the tests added during HIVE-9474, for TRUNCATE PARTITION



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12203) CBO (Calcite Return Path): groupby_grouping_id2.q returns wrong results

2015-10-17 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961802#comment-14961802
 ] 

Jesus Camacho Rodriguez commented on HIVE-12203:


The results file in the q file was generated without the CBO return path on.

> CBO (Calcite Return Path): groupby_grouping_id2.q returns wrong results
> ---
>
> Key: HIVE-12203
> URL: https://issues.apache.org/jira/browse/HIVE-12203
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: 2.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-12203.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11985) don't store type names in metastore when metastore type names are not used

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961849#comment-14961849
 ] 

Hive QA commented on HIVE-11985:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766940/HIVE-11985.05.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 9700 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5688/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5688/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5688/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766940 - PreCommit-HIVE-TRUNK-Build

> don't store type names in metastore when metastore type names are not used
> --
>
> Key: HIVE-11985
> URL: https://issues.apache.org/jira/browse/HIVE-11985
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11985.01.patch, HIVE-11985.02.patch, 
> HIVE-11985.03.patch, HIVE-11985.05.patch, HIVE-11985.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12203) CBO (Calcite Return Path): groupby_grouping_id2.q returns wrong results

2015-10-17 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-12203:
---
Attachment: (was: HIVE-12203.patch)

> CBO (Calcite Return Path): groupby_grouping_id2.q returns wrong results
> ---
>
> Key: HIVE-12203
> URL: https://issues.apache.org/jira/browse/HIVE-12203
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: 2.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12203) CBO (Calcite Return Path): groupby_grouping_id2.q returns wrong results

2015-10-17 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-12203:
---
Attachment: HIVE-12203.patch

> CBO (Calcite Return Path): groupby_grouping_id2.q returns wrong results
> ---
>
> Key: HIVE-12203
> URL: https://issues.apache.org/jira/browse/HIVE-12203
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: 2.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-12203.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12203) CBO (Calcite Return Path): groupby_grouping_id2.q returns wrong results

2015-10-17 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-12203:
---
Attachment: HIVE-12203.patch

Initial patch with three test cases that contain {{GROUPING__ID}} and fail when 
the return path is on (wrong results).

After initial exploration, it seems that we are not handling {{GROUPING__ID}} 
properly in the translation of the GroupBy operator through the return path.

> CBO (Calcite Return Path): groupby_grouping_id2.q returns wrong results
> ---
>
> Key: HIVE-12203
> URL: https://issues.apache.org/jira/browse/HIVE-12203
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: 2.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-12203.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11735) Different results when multiple if() functions are used

2015-10-17 Thread Chetna Chaudhari (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961904#comment-14961904
 ] 

Chetna Chaudhari commented on HIVE-11735:
-

[~ashutoshc]: This issue will occur in any query that has predicates based on 
case-sensitive data. Any thoughts on whether I should proceed with fixing it 
for all of them? Changing the RowResolver class is causing test failures in 
other queries. Or is that by design?

> Different results when multiple if() functions are used 
> 
>
> Key: HIVE-11735
> URL: https://issues.apache.org/jira/browse/HIVE-11735
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.0.0, 1.1.1, 1.2.1
>Reporter: Chetna Chaudhari
>Assignee: Chetna Chaudhari
> Attachments: HIVE-11735.patch
>
>
> The Hive if() UDF returns different results when string equality is used as 
> the condition and only the case of the literal changes. 
> Observation:
>1) if( name = 'chetna' , 3, 4) and if( name = 'Chetna', 3, 4) are both 
> treated as equal.
>2) The rightmost UDF's result is pushed to the predicates on the left side, 
> leading to the same result for both UDFs.
> How to reproduce the issue:
> 1) CREATE TABLE `sample`(
>   `name` string)
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> TBLPROPERTIES (
>   'transient_lastDdlTime'='1425075745');
> 2) insert into table sample values ('chetna');
> 3) select min(if(name = 'chetna', 4, 3)) , min(if(name='Chetna', 4, 3))  from 
> sample; 
> This will give the result: 
> 3  3
> Expected result:
> 4  3
> 4) select min(if(name = 'Chetna', 4, 3)) , min(if(name='chetna', 4, 3))  from 
> sample; 
> This will give the result: 
> 4  4
> Expected result:
> 3  4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12164) Remove jdbc stats collection mechanism

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961894#comment-14961894
 ] 

Hive QA commented on HIVE-12164:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767138/HIVE-12164.2.patch

{color:green}SUCCESS:{color} +1 due to 10 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9697 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_show_conf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_fsstat
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_fsstat
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5689/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5689/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5689/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12767138 - PreCommit-HIVE-TRUNK-Build

> Remove jdbc stats collection mechanism
> --
>
> Key: HIVE-12164
> URL: https://issues.apache.org/jira/browse/HIVE-12164
> Project: Hive
>  Issue Type: Task
>  Components: Statistics
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12164.1.patch, HIVE-12164.2.patch, HIVE-12164.patch
>
>
> Though there are some deployments using it, it is usually painful to set up, 
> since a valid hive-site.xml (containing connection details) is needed on all 
> task nodes, and for large jobs (with thousands of tasks) it results in a 
> scalability issue with all of them hammering the DB at nearly the same time.
> Because of these pain points, alternative stats collection mechanisms were 
> added; the FS-based stats system has been the default for some time.
> We should remove the JDBC stats collection mechanism, as it needlessly adds 
> complexity in the TS and FS operators w.r.t. key handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-12164) Remove jdbc stats collection mechanism

2015-10-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-12164.
-
   Resolution: Fixed
Fix Version/s: 2.0.0

Pushed to master. Thanks, Pengcheng for review.

> Remove jdbc stats collection mechanism
> --
>
> Key: HIVE-12164
> URL: https://issues.apache.org/jira/browse/HIVE-12164
> Project: Hive
>  Issue Type: Task
>  Components: Statistics
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 2.0.0
>
> Attachments: HIVE-12164.1.patch, HIVE-12164.2.patch, 
> HIVE-12164.3.patch, HIVE-12164.patch
>
>
> Though there are some deployments using it, it is usually painful to set up, 
> since a valid hive-site.xml (containing connection details) is needed on all 
> task nodes, and for large jobs (with thousands of tasks) it results in a 
> scalability issue with all of them hammering the DB at nearly the same time.
> Because of these pain points, alternative stats collection mechanisms were 
> added; the FS-based stats system has been the default for some time.
> We should remove the JDBC stats collection mechanism, as it needlessly adds 
> complexity in the TS and FS operators w.r.t. key handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12164) Remove jdbc stats collection mechanism

2015-10-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-12164:

Affects Version/s: 2.0.0

> Remove jdbc stats collection mechanism
> --
>
> Key: HIVE-12164
> URL: https://issues.apache.org/jira/browse/HIVE-12164
> Project: Hive
>  Issue Type: Task
>  Components: Statistics
>Affects Versions: 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 2.0.0
>
> Attachments: HIVE-12164.1.patch, HIVE-12164.2.patch, 
> HIVE-12164.3.patch, HIVE-12164.patch
>
>
> Though there are some deployments using it, it is usually painful to set up, 
> since a valid hive-site.xml (containing connection details) is needed on all 
> task nodes, and for large jobs (with thousands of tasks) it results in a 
> scalability issue with all of them hammering the DB at nearly the same time.
> Because of these pain points, alternative stats collection mechanisms were 
> added; the FS-based stats system has been the default for some time.
> We should remove the JDBC stats collection mechanism, as it needlessly adds 
> complexity in the TS and FS operators w.r.t. key handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11895) CBO: Calcite Operator To Hive Operator (Calcite Return Path): fix udaf_percentile_approx_23.q

2015-10-17 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962012#comment-14962012
 ] 

Pengcheng Xiong commented on HIVE-11895:


The failed tests are unrelated.

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): fix 
> udaf_percentile_approx_23.q
> -
>
> Key: HIVE-11895
> URL: https://issues.apache.org/jira/browse/HIVE-11895
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-11895.01.patch, HIVE-11895.02.patch
>
>
> Due to a type conversion problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12164) Remove jdbc stats collection mechanism

2015-10-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-12164:

Attachment: HIVE-12164.3.patch

> Remove jdbc stats collection mechanism
> --
>
> Key: HIVE-12164
> URL: https://issues.apache.org/jira/browse/HIVE-12164
> Project: Hive
>  Issue Type: Task
>  Components: Statistics
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12164.1.patch, HIVE-12164.2.patch, 
> HIVE-12164.3.patch, HIVE-12164.patch
>
>
> Though there are some deployments using it, it is usually painful to set up, 
> since a valid hive-site.xml (containing connection details) is needed on all 
> task nodes, and for large jobs (with thousands of tasks) it results in a 
> scalability issue with all of them hammering the DB at nearly the same time.
> Because of these pain points, alternative stats collection mechanisms were 
> added; the FS-based stats system has been the default for some time.
> We should remove the JDBC stats collection mechanism, as it needlessly adds 
> complexity in the TS and FS operators w.r.t. key handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11895) CBO: Calcite Operator To Hive Operator (Calcite Return Path): fix udaf_percentile_approx_23.q

2015-10-17 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962013#comment-14962013
 ] 

Pengcheng Xiong commented on HIVE-11895:


[~ashutoshc], could you please take a look? Thanks!

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): fix 
> udaf_percentile_approx_23.q
> -
>
> Key: HIVE-11895
> URL: https://issues.apache.org/jira/browse/HIVE-11895
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-11895.01.patch, HIVE-11895.02.patch
>
>
> Due to a type conversion problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12189) The list in pushdownPreds of ppd.ExprWalkerInfo should not be allowed to grow very large

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962014#comment-14962014
 ] 

Hive QA commented on HIVE-12189:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767094/HIVE-12189.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9702 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5693/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5693/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5693/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12767094 - PreCommit-HIVE-TRUNK-Build

> The list in pushdownPreds of ppd.ExprWalkerInfo should not be allowed to grow 
> very large
> 
>
> Key: HIVE-12189
> URL: https://issues.apache.org/jira/browse/HIVE-12189
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 1.1.0, 2.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Attachments: HIVE-12189.1.patch
>
>
> Some queries are very slow in compile time, for example following query
> {noformat}
> select * from tt1 nf 
> join tt2 a1 on (nf.col1 = a1.col1 and nf.hdp_databaseid = a1.hdp_databaseid) 
> join tt3 a2 on(a2.col2 = a1.col2 and a2.col3 = nf.col3 and 
> a2.hdp_databaseid = nf.hdp_databaseid) 
> join tt4 a3 on  (a3.col4 = a2.col4 and a3.col3 = a2.col3) 
> join tt5 a4 on (a4.col4 = a2.col4 and a4.col5 = a2.col5 and a4.col3 = 
> a2.col3 and a4.hdp_databaseid = nf.hdp_databaseid) 
> join tt6 a5 on  (a5.col3 = a2.col3 and a5.col2 = a2.col2 and 
> a5.hdp_databaseid = nf.hdp_databaseid) 
> JOIN tt7 a6 ON (a2.col3 = a6.col3 and a2.col2 = a6.col2 and a6.hdp_databaseid 
> = nf.hdp_databaseid) 
> JOIN tt8 a7 ON (a2.col3 = a7.col3 and a2.col2 = a7.col2 and a7.hdp_databaseid 
> = nf.hdp_databaseid)
> where nf.hdp_databaseid = 102 limit 10;
> {noformat}
> takes around 120 seconds to compile in hive 1.1 when
> hive.mapred.mode=strict;
> hive.optimize.ppd=true;
> and hive is not in test mode.
> All the above tables are tables with one column as partition. But all the 
> tables are empty table. If the tables are not empty, it is reported that the 
> compile so slow that it looks like hive is hanging. 
> In hive 2.0, the compile is much faster, explain takes 6.6 seconds. But it is 
> still a lot of time. One of the problem slows ppd down is that list in 
> pushdownPreds can grow very large which makes extractPushdownPreds bad 
> performance:
> {noformat}
> public static ExprWalkerInfo extractPushdownPreds(OpWalkerInfo opContext,
> Operator op, List preds)
> {noformat}
> During run the query above, in the following break point preds  has size of 
> 12051, and most entry of the list is: 
> GenericUDFOPEqual(Column[hdp_databaseid], Const int 102), 
> GenericUDFOPEqual(Column[hdp_databaseid], Const int 102), 
> GenericUDFOPEqual(Column[hdp_databaseid], Const int 102), 
> GenericUDFOPEqual(Column[hdp_databaseid], Const int 102), 
> Following code in extractPushdownPreds will clone all the nodes in preds and 
> do the walk. Hive 2.0 is faster because HIVE-11652(and other jiras) makes 
> startWalking much faster, but we still clone thousands of nodes with same 
> expression. Should we store so many same predicates in the list or just one 
> is good enough?  
> {noformat}
> List startNodes = new ArrayList();
> List clonedPreds = new ArrayList();
> for (ExprNodeDesc node : preds) {
>   ExprNodeDesc clone = node.clone();
>   clonedPreds.add(clone);
>   exprContext.getNewToOldExprMap().put(clone, node);
> }
> startNodes.addAll(clonedPreds);
> egw.startWalking(startNodes, null);
> {noformat}
> Should we change java/org/apache/hadoop/hive/ql/ppd/ExprWalkerInfo.java
> method 
> public void addFinalCandidate(String alias, ExprNodeDesc expr) 
> and
> public void addPushDowns(String alias, List pushDowns) 
> 
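One possible direction, sketched here purely as an illustration (it is not taken from the attached patch): de-duplicate the predicate list before cloning, so the thousands of identical entries shown above collapse to a single cloned node each. Using ExprNodeDesc.getExprString() as the de-duplication key is an assumption about a convenient equality proxy, and the surrounding variables mirror the quoted snippet rather than forming a standalone program.
{noformat}
// Hypothetical variant of the cloning loop quoted above.
Map<String, ExprNodeDesc> unique = new LinkedHashMap<String, ExprNodeDesc>();
for (ExprNodeDesc node : preds) {
  // Keep only the first occurrence of each textually identical predicate.
  if (!unique.containsKey(node.getExprString())) {
    unique.put(node.getExprString(), node);
  }
}
List<ExprNodeDesc> startNodes = new ArrayList<ExprNodeDesc>();
for (ExprNodeDesc node : unique.values()) {
  ExprNodeDesc clone = node.clone();
  startNodes.add(clone);
  exprContext.getNewToOldExprMap().put(clone, node);
}
egw.startWalking(startNodes, null);
{noformat}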

[jira] [Commented] (HIVE-11499) Datanucleus leaks classloaders when used using embedded metastore with HiveServer2 with UDFs

2015-10-17 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962025#comment-14962025
 ] 

Ashutosh Chauhan commented on HIVE-11499:
-

I noticed this patch checked a jar file into the git repo. It is usually not 
considered good practice to check binary files into a source repo; we never 
know when we will need to update the source of that jar. Ideally, the source 
code should be checked in, compiled in the build phase, and then used in the 
test phase.
[~hsubramaniyan] spent a lot of time last year making our repo binary-file 
free. Please reconsider the decision to check in a jar file.

> Datanucleus leaks classloaders when used using embedded metastore with 
> HiveServer2 with UDFs
> 
>
> Key: HIVE-11499
> URL: https://issues.apache.org/jira/browse/HIVE-11499
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Metastore
>Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.1.1, 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-11499.1.patch, HIVE-11499.3.patch, 
> HIVE-11499.4.patch, HS2-NucleusCache-Leak.tiff
>
>
> When UDFs are used, we create a new classloader to add the UDF jar. Similar 
> to what hadoop's reflection utils does (HIVE-11408), datanucleus caches the 
> classloaders 
> (https://github.com/datanucleus/datanucleus-core/blob/3.2/src/java/org/datanucleus/NucleusContext.java#L161).
>  The JDOPersistenceManagerFactory (one per JVM) holds on to a NucleusContext 
> reference 
> (https://github.com/datanucleus/datanucleus-api-jdo/blob/3.2/src/java/org/datanucleus/api/jdo/JDOPersistenceManagerFactory.java#L115).
>  Until we call NucleusContext#close, the classloader cache is not cleared. 
> In the case of UDFs this can lead to a permgen leak, as shown in the attached 
> screenshot, where NucleusContext holds on to several URLClassLoader objects.
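To make the retention chain above concrete, here is a toy model only; it does not use DataNucleus itself, and the jar paths are hypothetical. A single long-lived object (standing in for the per-JVM JDOPersistenceManagerFactory and its NucleusContext) caches whatever context classloader it sees, so each UDF URLClassLoader stays reachable until that cache is explicitly cleared, the analogue of calling NucleusContext#close.
{noformat}
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClassLoaderLeakSketch {

  // Stands in for the classloader cache held (indirectly) by the per-JVM factory.
  static final Map<ClassLoader, Object> CLASSLOADER_CACHE =
      new ConcurrentHashMap<ClassLoader, Object>();

  static void accessMetastore() {
    // DataNucleus resolves classes via the thread context classloader and
    // caches it; modeled here as a simple put.
    ClassLoader current = Thread.currentThread().getContextClassLoader();
    if (!CLASSLOADER_CACHE.containsKey(current)) {
      CLASSLOADER_CACHE.put(current, new Object());
    }
  }

  public static void main(String[] args) throws Exception {
    for (int i = 0; i < 3; i++) {
      // Hive wraps each added UDF jar in a fresh URLClassLoader (path is made up).
      URLClassLoader udfLoader = new URLClassLoader(
          new URL[] { new URL("file:/tmp/udf-" + i + ".jar") },
          ClassLoaderLeakSketch.class.getClassLoader());
      Thread.currentThread().setContextClassLoader(udfLoader);
      accessMetastore();
    }
    // All three loaders (and the classes they loaded) remain reachable here,
    // which is the permgen leak; clearing the cache is the analogue of close().
    System.out.println("cached classloaders: " + CLASSLOADER_CACHE.size()); // 3
    CLASSLOADER_CACHE.clear();
  }
}
{noformat}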



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11981) ORC Schema Evolution Issues (Vectorized, ACID, and Non-Vectorized)

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961961#comment-14961961
 ] 

Hive QA commented on HIVE-11981:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767005/HIVE-11981.06.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5691/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5691/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5691/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-5691/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at ec07664 HIVE-12083 : HIVE-10965 introduces thrift error if 
partNames or colNames are empty (Sushanth Sowmyan, reviewed by Thejas Nair)
+ git clean -f -d
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at ec07664 HIVE-12083 : HIVE-10965 introduces thrift error if 
partNames or colNames are empty (Sushanth Sowmyan, reviewed by Thejas Nair)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
Going to apply patch with: patch -p0
patching file common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
patching file itests/src/test/resources/testconfiguration.properties
patching file 
llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapInputFormat.java
patching file ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
patching file ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
patching file ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
patching file ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkReduceRecordHandler.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/ReduceRecordProcessor.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/ReduceRecordSource.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExtractRow.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapJoinBaseOperator.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorSMBMapJoinOperator.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
Hunk #7 succeeded at 1043 (offset 1 line).
Hunk #8 succeeded at 2349 (offset 2 lines).
Hunk #9 succeeded at 2464 (offset 2 lines).
Hunk #10 succeeded at 2490 (offset 2 lines).
patching file 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedBatchUtil.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java
patching file 

[jira] [Commented] (HIVE-12201) Tez settings need to be shown in set -v output when execution engine is tez.

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961957#comment-14961957
 ] 

Hive QA commented on HIVE-12201:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767099/HIVE-12201.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 9064 tests 
executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.initializationError
org.apache.hadoop.hive.cli.TestSparkCliDriver.initializationError
org.apache.hadoop.hive.cli.TestSparkNegativeCliDriver.initializationError
org.apache.hive.beeline.cli.TestHiveCli.testSetHeaderValue
org.apache.hive.beeline.cli.TestHiveCli.testSetPromptValue
org.apache.hive.beeline.cli.TestHiveCli.testUseCurrentDB1
org.apache.hive.beeline.cli.TestHiveCli.testUseCurrentDB2
org.apache.hive.beeline.cli.TestHiveCli.testUseCurrentDB3
org.apache.hive.beeline.cli.TestHiveCli.testVariablesForSource
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
org.apache.hive.jdbc.TestSchedulerQueue.testFairSchedulerPrimaryQueueMapping
org.apache.hive.jdbc.TestSchedulerQueue.testFairSchedulerQueueMapping
org.apache.hive.jdbc.TestSchedulerQueue.testFairSchedulerSecondaryQueueMapping
org.apache.hive.jdbc.TestSchedulerQueue.testQueueMappingCheckDisabled
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testConnection
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testProxyAuth
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testTokenAuth
org.apache.hive.service.cli.session.TestSessionGlobalInitFile.testSessionGlobalInitDir
org.apache.hive.service.cli.session.TestSessionGlobalInitFile.testSessionGlobalInitFile
org.apache.hive.service.cli.session.TestSessionGlobalInitFile.testSessionGlobalInitFileAndConfOverlay
org.apache.hive.service.cli.session.TestSessionGlobalInitFile.testSessionGlobalInitFileWithUser
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5690/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5690/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5690/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12767099 - PreCommit-HIVE-TRUNK-Build

> Tez settings need to be shown in set -v output when execution engine is tez.
> 
>
> Key: HIVE-12201
> URL: https://issues.apache.org/jira/browse/HIVE-12201
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.0.1, 1.2.1
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
>Priority: Minor
> Attachments: HIVE-12201.1.patch, HIVE-12201.2.patch
>
>
> The set -v output currently shows configurations for yarn, hdfs etc. but does 
> not show tez settings when tez is set as the execution engine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12017) Do not disable CBO by default when number of joins in a query is equal or less than 1

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961962#comment-14961962
 ] 

Hive QA commented on HIVE-12017:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767062/HIVE-12017.05.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5692/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5692/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5692/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-5692/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at ec07664 HIVE-12083 : HIVE-10965 introduces thrift error if 
partNames or colNames are empty (Sushanth Sowmyan, reviewed by Thejas Nair)
+ git clean -f -d
Removing 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java.orig
Removing ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java.orig
Removing ql/src/java/org/apache/hadoop/hive/ql/io/orc/SchemaEvolution.java
Removing 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java.orig.rej
Removing 
ql/src/java/org/apache/hadoop/hive/ql/plan/VectorPartitionConversion.java
Removing ql/src/java/org/apache/hadoop/hive/ql/plan/VectorPartitionDesc.java
Removing ql/src/test/queries/clientpositive/schema_evol_orc_acid_mapwork_part.q
Removing 
ql/src/test/queries/clientpositive/schema_evol_orc_nonvec_fetchwork_part.q
Removing 
ql/src/test/queries/clientpositive/schema_evol_orc_nonvec_fetchwork_table.q
Removing 
ql/src/test/queries/clientpositive/schema_evol_orc_nonvec_mapwork_part.q
Removing 
ql/src/test/queries/clientpositive/schema_evol_orc_nonvec_mapwork_table.q
Removing ql/src/test/queries/clientpositive/schema_evol_orc_vec_mapwork_part.q
Removing ql/src/test/queries/clientpositive/schema_evol_orc_vec_mapwork_table.q
Removing 
ql/src/test/queries/clientpositive/schema_evol_text_nonvec_fetchwork_part.q
Removing 
ql/src/test/queries/clientpositive/schema_evol_text_nonvec_fetchwork_table.q
Removing 
ql/src/test/queries/clientpositive/schema_evol_text_nonvec_mapwork_part.q
Removing 
ql/src/test/queries/clientpositive/schema_evol_text_nonvec_mapwork_table.q
Removing 
ql/src/test/results/clientpositive/schema_evol_orc_acid_mapwork_part.q.out
Removing 
ql/src/test/results/clientpositive/schema_evol_orc_nonvec_fetchwork_part.q.out
Removing 
ql/src/test/results/clientpositive/schema_evol_orc_nonvec_fetchwork_table.q.out
Removing 
ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part.q.out
Removing 
ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_table.q.out
Removing 
ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part.q.out
Removing 
ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_table.q.out
Removing 
ql/src/test/results/clientpositive/schema_evol_text_nonvec_fetchwork_part.q.out
Removing 
ql/src/test/results/clientpositive/schema_evol_text_nonvec_fetchwork_table.q.out
Removing 
ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part.q.out
Removing 

[jira] [Commented] (HIVE-12014) re-enable most LLAP tests after merge

2015-10-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962086#comment-14962086
 ] 

Hive QA commented on HIVE-12014:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12767117/HIVE-12014.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 9664 tests executed
*Failed tests:*
{noformat}
TestMiniLlapCliDriver - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5694/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5694/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5694/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12767117 - PreCommit-HIVE-TRUNK-Build

> re-enable most LLAP tests after merge
> -
>
> Key: HIVE-12014
> URL: https://issues.apache.org/jira/browse/HIVE-12014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12014.patch
>
>
> see HIVE-12013



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)