[jira] [Resolved] (HIVE-15175) NOT IN condition is not handled correctly with predicate push down

2016-11-10 Thread Teruyoshi Zenmyo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teruyoshi Zenmyo resolved HIVE-15175.
-
Resolution: Duplicate

> NOT IN condition is not handled correctly with predicate push down
> --
>
> Key: HIVE-15175
> URL: https://issues.apache.org/jira/browse/HIVE-15175
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Teruyoshi Zenmyo
>
> With predicate pushdown enabled, NOT IN conditions on a partition key are 
> treated as FALSE.
> Example (pkey is a partition key):
> {code}
> hive> select * from test;
> OK
> test.keytest.valtest.pkey
> a   1   a
> b   2   a
> c   3   a
> a   1   b
> b   2   b
> c   3   b
> Time taken: 0.171 seconds, Fetched: 6 row(s)
> hive> set hive.optimize.ppd=false;
> hive> select * from test where not pkey in ('a');
> OK
> test.keytest.valtest.pkey
> a   1   b
> b   2   b
> c   3   b
> Time taken: 0.237 seconds, Fetched: 3 row(s)
> hive> set hive.optimize.ppd=true;
> hive> select * from test where not pkey in ('a');
> OK
> test.keytest.valtest.pkey
> {code}
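The expected semantics can be sketched outside Hive (class and method names here are hypothetical, not Hive internals): correct partition pruning for NOT IN keeps every partition whose key is not in the excluded set, rather than folding the whole predicate to FALSE.

```java
import java.util.*;

public class NotInPruning {
    // Hypothetical illustration of correct NOT IN pruning: keep every
    // partition whose key is NOT a member of the excluded set.
    static List<String> pruneNotIn(List<String> partitionKeys, Set<String> excluded) {
        List<String> kept = new ArrayList<>();
        for (String key : partitionKeys) {
            if (!excluded.contains(key)) {  // NOT (key IN excluded)
                kept.add(key);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // Partitions a and b; `where not pkey in ('a')` should keep b,
        // not reduce to an always-false predicate that drops everything.
        System.out.println(pruneNotIn(Arrays.asList("a", "b"),
                new HashSet<>(Collections.singletonList("a"))));
    }
}
```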



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-14948) properly handle special characters in identifiers

2016-11-10 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman resolved HIVE-14948.
---
Resolution: Duplicate

included in patch 11 of HIVE-14943

> properly handle special characters in identifiers
> -
>
> Key: HIVE-14948
> URL: https://issues.apache.org/jira/browse/HIVE-14948
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> The treatment of quoted identifiers in HIVE-14943 is inconsistent. Need to 
> clean this up and, if possible, only quote those identifiers that need to be 
> quoted in the generated SQL statement.
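A minimal sketch of the intended behavior (names are hypothetical, and the quoting rule below is a simplification of Hive's actual identifier grammar): quote an identifier in generated SQL only when it actually needs quoting.

```java
public class IdentifierQuoting {
    // Hypothetical sketch, not Hive code: quote only when needed.
    static String maybeQuote(String id) {
        // Plain identifiers: letters, digits, underscore, not starting
        // with a digit (a simplification of the real grammar).
        if (id.matches("[A-Za-z_][A-Za-z0-9_]*")) {
            return id;
        }
        // Hive quotes identifiers with backticks; an embedded backtick
        // is escaped by doubling it.
        return "`" + id.replace("`", "``") + "`";
    }

    public static void main(String[] args) {
        System.out.println(maybeQuote("col1"));        // col1
        System.out.println(maybeQuote("strange col")); // `strange col`
    }
}
```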





[jira] [Updated] (HIVE-15178) ORC stripe merge may produce many MR jobs and no merge if split size is small

2016-11-10 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-15178:

Description: 
orc_createas1
logs the following:
{noformat}
2016-11-10T13:38:54,366  INFO [LocalJobRunner Map Task Executor #0] 
mapred.MapTask: Processing split: 
Paths:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/-ext-10004/01_0:2400+100InputFormatClass:
 org.apache.hadoop.hive.ql.io.orc.OrcFileStripeMergeInputFormat
2016-11-10T13:38:54,373  INFO [LocalJobRunner Map Task Executor #0] 
mapred.MapTask: Processing split: 
Paths:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/-ext-10004/01_0:2500+100InputFormatClass:
 org.apache.hadoop.hive.ql.io.orc.OrcFileStripeMergeInputFormat
2016-11-10T13:38:54,380  INFO [LocalJobRunner Map Task Executor #0] 
mapred.MapTask: Processing split: 
Paths:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/-ext-10004/01_0:2600+100InputFormatClass:
 org.apache.hadoop.hive.ql.io.orc.OrcFileStripeMergeInputFormat
2016-11-10T13:38:54,387  INFO [LocalJobRunner Map Task Executor #0] 
mapred.MapTask: Processing split: 
Paths:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/-ext-10004/01_0:2700+100InputFormatClass:
 org.apache.hadoop.hive.ql.io.orc.OrcFileStripeMergeInputFormat
...
{noformat}

It tries to merge 2 files, but instead ends up running tons of MR tasks, one for 
every 100 bytes, and produces 2 files again (I assume most tasks don't produce 
files because the stripes are invalid).
{noformat}
2016-11-10T13:38:53,985  INFO [LocalJobRunner Map Task Executor #0] 
OrcFileMergeOperator: Merged stripe from file 
pfile:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/-ext-10004/00_0
 [ offset : 3 length: 2770 row: 500 ]
2016-11-10T13:38:53,995  INFO [LocalJobRunner Map Task Executor #0] 
exec.AbstractFileMergeOperator: renamed path 
pfile:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/_task_tmp.-ext-10002/_tmp.02_0
 to 
pfile:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/_tmp.-ext-10002/02_0
 . File size is 2986
2016-11-10T13:38:54,206  INFO [LocalJobRunner Map Task Executor #0] 
OrcFileMergeOperator: Merged stripe from file 
pfile:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/-ext-10004/01_0
 [ offset : 3 length: 2770 row: 500 ]
2016-11-10T13:38:54,215  INFO [LocalJobRunner Map Task Executor #0] 
exec.AbstractFileMergeOperator: renamed path 
pfile:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/_task_tmp.-ext-10002/_tmp.30_0
 to 
pfile:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/_tmp.-ext-10002/30_0
 . File size is 2986
{noformat}

This is because the test sets the max split size to 100. The merge job is supposed 
to override that, but somehow that doesn't happen.
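For context, a hedged illustration of the settings involved (these are standard Hive/Hadoop properties, not a confirmed fix for this issue):

```
-- the test forces tiny input splits:
set mapred.max.split.size=100;
-- the ORC merge task is expected to size its work from the merge
-- settings instead of inheriting the tiny split size, e.g.:
set hive.merge.size.per.task=256000000;
```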

  was:
orc_createas1
logs the following:
{noformat}
2016-11-10T13:38:54,366  INFO [LocalJobRunner Map Task Executor #0] 
mapred.MapTask: Processing split: 
Paths:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/-ext-10004/01_0:2400+100InputFormatClass:
 org.apache.hadoop.hive.ql.io.orc.OrcFileStripeMergeInputFormat
2016-11-10T13:38:54,373  INFO [LocalJobRunner Map Task Executor #0] 
mapred.MapTask: Processing split: 
Paths:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/-ext-10004/01_0:2500+100InputFormatClass:
 org.apache.hadoop.hive.ql.io.orc.OrcFileStripeMergeInputFormat
2016-11-10T13:38:54,380  INFO [LocalJobRunner Map Task Executor #0] 
mapred.MapTask: Processing split: 
Paths:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/-ext-10004/01_0:2600+100InputFormatClass:
 org.apache.hadoop.hive.ql.io.orc.OrcFileStripeMergeInputFormat
2016-11-10T13:38:54,387  INFO [LocalJobRunner Map Task Executor #0] 
mapred.MapTask: Processing split: 
Paths:/Users/sergey/git/hivegit2/itests/qtest/target/warehouse/.hive-staging_hive_2016-11-10_13-38-52_334_1323113125332102866-1/-ext-10004/01_0:2700+100InputFormatClass:
 org.apache.hadoop.hive.ql.io.orc.OrcFileStripeMergeInputFormat
...
{noformat}

It tries to merge 2 files, but instead 

[jira] [Updated] (HIVE-15085) Reduce the memory used by unit tests, MiniCliDriver, MiniLlapLocal, MiniSpark

2016-11-10 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-15085:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

> Reduce the memory used by unit tests, MiniCliDriver, MiniLlapLocal, MiniSpark
> -
>
> Key: HIVE-15085
> URL: https://issues.apache.org/jira/browse/HIVE-15085
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.2.0
>
> Attachments: HIVE-15085.01.patch, HIVE-15085.02.patch
>
>






[jira] [Updated] (HIVE-15119) Support standard syntax for ROLLUP & CUBE

2016-11-10 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-15119:
---
Attachment: HIVE-15119.05.patch

> Support standard syntax for ROLLUP & CUBE
> -
>
> Key: HIVE-15119
> URL: https://issues.apache.org/jira/browse/HIVE-15119
> Project: Hive
>  Issue Type: Task
>  Components: Parser, SQL
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-15119.03.patch, HIVE-15119.05.patch, 
> HIVE-15119.2.patch, HIVE-15119.4.patch, HIVE-15119.patch
>
>
> Standard ROLLUP and CUBE syntax is GROUP BY ROLLUP (expression list) and 
> GROUP BY CUBE (expression list), respectively.
> Currently Hive only allows the GROUP BY ... WITH ROLLUP/CUBE syntax.
>  
> We would like Hive to support the standard ROLLUP/CUBE syntax.
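The two forms can be contrasted directly (illustrative queries, assuming a table t with columns a and b):

```
-- Syntax Hive currently accepts:
SELECT a, b, count(*) FROM t GROUP BY a, b WITH ROLLUP;

-- Standard syntax this issue adds:
SELECT a, b, count(*) FROM t GROUP BY ROLLUP (a, b);
```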





[jira] [Commented] (HIVE-15176) Small typo in hiveserver2 webui

2016-11-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654293#comment-15654293
 ] 

Hive QA commented on HIVE-15176:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12838359/HIVE-15176.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10637 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join_acid_non_acid]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=145)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_4] 
(batchId=91)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2062/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2062/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2062/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12838359 - PreCommit-HIVE-Build

> Small typo in hiveserver2 webui
> ---
>
> Key: HIVE-15176
> URL: https://issues.apache.org/jira/browse/HIVE-15176
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Web UI
>Reporter: Miklos Csanady
>Assignee: Miklos Csanady
>Priority: Trivial
> Attachments: HIVE-15176.patch
>
>
> I found a small typo in the web UI for HiveServer2:
> the label "Waited" is misspelled as "Wtaited".





[jira] [Updated] (HIVE-15177) Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST

2016-11-10 Thread Subrahmanya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subrahmanya updated HIVE-15177:
---
Description: 
Authentication with hive fails when kerberos auth type is set to fromSubject 
and principal contains _HOST.

When auth type is set to fromSubject, _HOST in the principal is not resolved to 
the actual host name even though the correct host name is available. This leads 
to connection failure. If auth type is not set to fromSubject, host resolution 
is done correctly.

The problem is in the getKerberosTransport method of the 
org.apache.hive.service.auth.KerberosSaslHelper class. When assumeSubject is 
true, the host name in the principal is not resolved. When it is false, the 
host name is passed on to HadoopThriftAuthBridge, which takes care of resolving 
it.
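The expected _HOST handling can be sketched as follows (a hypothetical class, not the actual KerberosSaslHelper code; the lower-casing mirrors Hadoop's usual convention for substituted host names):

```java
public class PrincipalResolver {
    // Hypothetical sketch: replace the _HOST placeholder in a Kerberos
    // service principal with the real host name before authentication.
    static String resolvePrincipal(String principal, String hostname) {
        // Hadoop conventionally lower-cases the substituted host name.
        return principal.replace("_HOST", hostname.toLowerCase());
    }

    public static void main(String[] args) {
        System.out.println(resolvePrincipal("hive/_HOST@EXAMPLE.COM",
                "node1.example.com"));
    }
}
```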

  was:
Authentication with hive fails when kerberos auth type is set to fromSubject 
and principal contains _HOST.

When auth type is set to fromSubject, _HOST in principal is not resolved to the 
actual host name even though the correct host name is available. This leads to 
connection failure. If auth type is not set to fromSubject host resolution is 
done correctly.


> Authentication with hive fails when kerberos auth type is set to fromSubject 
> and principal contains _HOST
> -
>
> Key: HIVE-15177
> URL: https://issues.apache.org/jira/browse/HIVE-15177
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Subrahmanya
>
> Authentication with hive fails when kerberos auth type is set to fromSubject 
> and principal contains _HOST.
> When auth type is set to fromSubject, _HOST in the principal is not resolved 
> to the actual host name even though the correct host name is available. This 
> leads to connection failure. If auth type is not set to fromSubject, host 
> resolution is done correctly.
> The problem is in the getKerberosTransport method of the 
> org.apache.hive.service.auth.KerberosSaslHelper class. When assumeSubject is 
> true, the host name in the principal is not resolved. When it is false, the 
> host name is passed on to HadoopThriftAuthBridge, which takes care of 
> resolving it.





[jira] [Updated] (HIVE-15176) Small typo in hiveserver2 webui

2016-11-10 Thread Miklos Csanady (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Csanady updated HIVE-15176:
--
Status: Patch Available  (was: Open)

> Small typo in hiveserver2 webui
> ---
>
> Key: HIVE-15176
> URL: https://issues.apache.org/jira/browse/HIVE-15176
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Web UI
>Reporter: Miklos Csanady
>Assignee: Miklos Csanady
>Priority: Trivial
> Attachments: HIVE-15176.patch
>
>
> I found a small typo in the web UI for HiveServer2:
> the label "Waited" is misspelled as "Wtaited".





[jira] [Updated] (HIVE-15176) Small typo in hiveserver2 webui

2016-11-10 Thread Miklos Csanady (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Csanady updated HIVE-15176:
--
Attachment: HIVE-15176.patch

> Small typo in hiveserver2 webui
> ---
>
> Key: HIVE-15176
> URL: https://issues.apache.org/jira/browse/HIVE-15176
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Web UI
>Reporter: Miklos Csanady
>Assignee: Miklos Csanady
>Priority: Trivial
> Attachments: HIVE-15176.patch
>
>
> I found a small typo in the web UI for HiveServer2:
> the label "Waited" is misspelled as "Wtaited".





[jira] [Resolved] (HIVE-14541) Beeline does not prompt for username and password properly

2016-11-10 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar resolved HIVE-14541.

Resolution: Not A Bug

Closing as per discussion above. Thanks [~mcsanady] for following up on this.

> Beeline does not prompt for username and password properly
> --
>
> Key: HIVE-14541
> URL: https://issues.apache.org/jira/browse/HIVE-14541
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>
> In the default mode, when we connect using !connect 
> jdbc:hive2://localhost:1 (without providing a user and password), beeline 
> prompts for them as expected.
> But when we use beeline -u "url" and do not provide the -n or -p arguments, 
> it does not prompt for the user/password.
> {noformat}
> $ ./beeline -u jdbc:hive2://localhost:1
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/vihang/work/src/upstream/hive/packaging/target/apache-hive-2.2.0-SNAPSHOT-bin/apache-hive-2.2.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to jdbc:hive2://localhost:1
> Connected to: Apache Hive (version 2.2.0-SNAPSHOT)
> Driver: Hive JDBC (version 2.2.0-SNAPSHOT)
> 16/08/15 18:09:15 [main]: WARN jdbc.HiveConnection: Request to set autoCommit 
> to false; Hive does not support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 2.2.0-SNAPSHOT by Apache Hive
> 0: jdbc:hive2://localhost:1> !quit
> Closing: 0: jdbc:hive2://localhost:1
> {noformat}
> {noformat}
> $ ./beeline
> Beeline version 2.2.0-SNAPSHOT by Apache Hive
> beeline> !connect "jdbc:hive2://localhost:1"
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/vihang/work/src/upstream/hive/packaging/target/apache-hive-2.2.0-SNAPSHOT-bin/apache-hive-2.2.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to jdbc:hive2://localhost:1
> Enter username for jdbc:hive2://localhost:1: hive
> Enter password for jdbc:hive2://localhost:1: 
> Connected to: Apache Hive (version 2.2.0-SNAPSHOT)
> Driver: Hive JDBC (version 2.2.0-SNAPSHOT)
> 16/08/15 18:09:03 [main]: WARN jdbc.HiveConnection: Request to set autoCommit 
> to false; Hive does not support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://localhost:1> !quit
> Closing: 0: jdbc:hive2://localhost:1
> {noformat}





[jira] [Assigned] (HIVE-14541) Beeline does not prompt for username and password properly

2016-11-10 Thread Miklos Csanady (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Csanady reassigned HIVE-14541:
-

Assignee: Vihang Karajgaonkar  (was: Miklos Csanady)

I have no access to close it.

> Beeline does not prompt for username and password properly
> --
>
> Key: HIVE-14541
> URL: https://issues.apache.org/jira/browse/HIVE-14541
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>
> In the default mode, when we connect using !connect 
> jdbc:hive2://localhost:1 (without providing a user and password), beeline 
> prompts for them as expected.
> But when we use beeline -u "url" and do not provide the -n or -p arguments, 
> it does not prompt for the user/password.
> {noformat}
> $ ./beeline -u jdbc:hive2://localhost:1
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/vihang/work/src/upstream/hive/packaging/target/apache-hive-2.2.0-SNAPSHOT-bin/apache-hive-2.2.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to jdbc:hive2://localhost:1
> Connected to: Apache Hive (version 2.2.0-SNAPSHOT)
> Driver: Hive JDBC (version 2.2.0-SNAPSHOT)
> 16/08/15 18:09:15 [main]: WARN jdbc.HiveConnection: Request to set autoCommit 
> to false; Hive does not support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 2.2.0-SNAPSHOT by Apache Hive
> 0: jdbc:hive2://localhost:1> !quit
> Closing: 0: jdbc:hive2://localhost:1
> {noformat}
> {noformat}
> $ ./beeline
> Beeline version 2.2.0-SNAPSHOT by Apache Hive
> beeline> !connect "jdbc:hive2://localhost:1"
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/vihang/work/src/upstream/hive/packaging/target/apache-hive-2.2.0-SNAPSHOT-bin/apache-hive-2.2.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to jdbc:hive2://localhost:1
> Enter username for jdbc:hive2://localhost:1: hive
> Enter password for jdbc:hive2://localhost:1: 
> Connected to: Apache Hive (version 2.2.0-SNAPSHOT)
> Driver: Hive JDBC (version 2.2.0-SNAPSHOT)
> 16/08/15 18:09:03 [main]: WARN jdbc.HiveConnection: Request to set autoCommit 
> to false; Hive does not support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://localhost:1> !quit
> Closing: 0: jdbc:hive2://localhost:1
> {noformat}





[jira] [Commented] (HIVE-14541) Beeline does not prompt for username and password properly

2016-11-10 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653885#comment-15653885
 ] 

Vihang Karajgaonkar commented on HIVE-14541:


I think this JIRA can be closed. Earlier I thought that when you use beeline -u 
to connect, Beeline should prompt for a username/password rather than assume 
that the username and password are empty strings. But it seems this is by 
design and should not be changed; otherwise it might break backwards 
compatibility, since existing scripts rely on this behavior. HIVE-13589 is a 
different use case, where the user can optionally provide the password at a 
console prompt (without it being echoed to the screen) for security reasons 
when using the beeline -u -n -p syntax.

> Beeline does not prompt for username and password properly
> --
>
> Key: HIVE-14541
> URL: https://issues.apache.org/jira/browse/HIVE-14541
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Miklos Csanady
>
> In the default mode, when we connect using !connect 
> jdbc:hive2://localhost:1 (without providing a user and password), beeline 
> prompts for them as expected.
> But when we use beeline -u "url" and do not provide the -n or -p arguments, 
> it does not prompt for the user/password.
> {noformat}
> $ ./beeline -u jdbc:hive2://localhost:1
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/vihang/work/src/upstream/hive/packaging/target/apache-hive-2.2.0-SNAPSHOT-bin/apache-hive-2.2.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to jdbc:hive2://localhost:1
> Connected to: Apache Hive (version 2.2.0-SNAPSHOT)
> Driver: Hive JDBC (version 2.2.0-SNAPSHOT)
> 16/08/15 18:09:15 [main]: WARN jdbc.HiveConnection: Request to set autoCommit 
> to false; Hive does not support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 2.2.0-SNAPSHOT by Apache Hive
> 0: jdbc:hive2://localhost:1> !quit
> Closing: 0: jdbc:hive2://localhost:1
> {noformat}
> {noformat}
> $ ./beeline
> Beeline version 2.2.0-SNAPSHOT by Apache Hive
> beeline> !connect "jdbc:hive2://localhost:1"
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/vihang/work/src/upstream/hive/packaging/target/apache-hive-2.2.0-SNAPSHOT-bin/apache-hive-2.2.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to jdbc:hive2://localhost:1
> Enter username for jdbc:hive2://localhost:1: hive
> Enter password for jdbc:hive2://localhost:1: 
> Connected to: Apache Hive (version 2.2.0-SNAPSHOT)
> Driver: Hive JDBC (version 2.2.0-SNAPSHOT)
> 16/08/15 18:09:03 [main]: WARN jdbc.HiveConnection: Request to set autoCommit 
> to false; Hive does not support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://localhost:1> !quit
> Closing: 0: jdbc:hive2://localhost:1
> {noformat}





[jira] [Commented] (HIVE-15168) Flaky test: TestSparkClient.testJobSubmission (still flaky)

2016-11-10 Thread Barna Zsombor Klara (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653721#comment-15653721
 ] 

Barna Zsombor Klara commented on HIVE-15168:


The problem is similar to the one fixed before for this test: we have a race 
condition between the listeners being registered (rpc.addListener and 
promise.addListener) and the submission of the message over RPC.
If you take a look at SparkClientImpl#ClientProtocol#submit you will see that 
currently driverRpc.call is invoked before the listeners are registered.
To reproduce the test failure, add for example a Thread.sleep after 
the driverRpc.call and before the listeners are registered.
This would probably never, or almost never, happen in real life, because the 
execution of the Spark job plus the network latency should easily take longer 
than the time needed for the listener-registration code to run. But in the unit 
test it is a cause of intermittent failures.
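The fix described above amounts to registering the listener before triggering the operation. A minimal stand-alone sketch (using CompletableFuture rather than the Netty futures in the real code; names are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public class ListenerRace {
    // Sketch of the fix: register the completion listener BEFORE
    // triggering the operation, so even an immediate completion
    // (the racy case) cannot be missed.
    static boolean submitWithListenerFirst() {
        AtomicBoolean observed = new AtomicBoolean(false);
        CompletableFuture<Void> rpc = new CompletableFuture<>();
        // 1. Register the listener first.
        rpc.thenRun(() -> observed.set(true));
        // 2. Only then "submit": completion here is immediate, mimicking
        //    an RPC that finishes before a later-registered listener
        //    would have been attached.
        rpc.complete(null);
        return observed.get();
    }

    public static void main(String[] args) {
        System.out.println(submitWithListenerFirst());
    }
}
```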

> Flaky test: TestSparkClient.testJobSubmission (still flaky)
> ---
>
> Key: HIVE-15168
> URL: https://issues.apache.org/jira/browse/HIVE-15168
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Barna Zsombor Klara
>Assignee: Barna Zsombor Klara
> Attachments: HIVE-15168.patch
>
>
> [HIVE-14910|https://issues.apache.org/jira/browse/HIVE-14910] already 
> addressed one source of flakiness, but sadly not all of them, it seems.
> In JobHandleImpl the listeners are registered after the job has been 
> submitted.
> This may result in a race condition.
> {code}
> // Link the RPC and the promise so that events from one are propagated to
> // the other as needed.
> rpc.addListener(new GenericFutureListener<io.netty.util.concurrent.Future<Void>>() {
>   @Override
>   public void operationComplete(io.netty.util.concurrent.Future<Void> f) {
>     if (f.isSuccess()) {
>       handle.changeState(JobHandle.State.QUEUED);
>     } else if (!promise.isDone()) {
>       promise.setFailure(f.cause());
>     }
>   }
> });
> promise.addListener(new GenericFutureListener<Promise<T>>() {
>   @Override
>   public void operationComplete(Promise<T> p) {
>     if (jobId != null) {
>       jobs.remove(jobId);
>     }
>     if (p.isCancelled() && !rpc.isDone()) {
>       rpc.cancel(true);
>     }
>   }
> });
> {code}





[jira] [Updated] (HIVE-15168) Flaky test: TestSparkClient.testJobSubmission (still flaky)

2016-11-10 Thread Barna Zsombor Klara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barna Zsombor Klara updated HIVE-15168:
---
Attachment: HIVE-15168.patch

> Flaky test: TestSparkClient.testJobSubmission (still flaky)
> ---
>
> Key: HIVE-15168
> URL: https://issues.apache.org/jira/browse/HIVE-15168
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Barna Zsombor Klara
>Assignee: Barna Zsombor Klara
> Attachments: HIVE-15168.patch
>
>
> [HIVE-14910|https://issues.apache.org/jira/browse/HIVE-14910] already 
> addressed one source of flakiness, but sadly not all of them, it seems.
> In JobHandleImpl the listeners are registered after the job has been 
> submitted.
> This may result in a race condition.
> {code}
> // Link the RPC and the promise so that events from one are propagated to
> // the other as needed.
> rpc.addListener(new GenericFutureListener<io.netty.util.concurrent.Future<Void>>() {
>   @Override
>   public void operationComplete(io.netty.util.concurrent.Future<Void> f) {
>     if (f.isSuccess()) {
>       handle.changeState(JobHandle.State.QUEUED);
>     } else if (!promise.isDone()) {
>       promise.setFailure(f.cause());
>     }
>   }
> });
> promise.addListener(new GenericFutureListener<Promise<T>>() {
>   @Override
>   public void operationComplete(Promise<T> p) {
>     if (jobId != null) {
>       jobs.remove(jobId);
>     }
>     if (p.isCancelled() && !rpc.isDone()) {
>       rpc.cancel(true);
>     }
>   }
> });
> {code}





[jira] [Comment Edited] (HIVE-15093) S3-to-S3 Renames: Files should be moved individually rather than at a directory level

2016-11-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653694#comment-15653694
 ] 

Steve Loughran edited comment on HIVE-15093 at 11/10/16 10:32 AM:
--

# I've just started HADOOP-13600, though being busy with preparation for and 
attendance at ApacheCon Big Data means no real progress should be expected for 
the next 10 days
# there's a recent discussion on common-dev about when the 2.8 RC comes out

as far as HDP goes, all the s3a phase II read pipeline work is in HDP-2.5; the 
HDP-cloud in AWS product adds the HADOOP-13560 write pipeline; with a faster 
update cycle it'd be out the door fairly rapidly too (disclaimer, no forward 
looking statements, etc etc). CDH hasn't shipped with any of the phase II 
changes in yet, that's something to discuss with your colleagues. Given the 
emphasis on Impala & S3, I'd expect it sooner rather than later

Here's [the work in 
progress|https://github.com/steveloughran/hadoop/blob/s3/HADOOOP-13600-rename/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L802];
 as I note in the code, I'm not doing it right. We should have the list and 
delete operations working in parallel too, because list is pretty slow too, and 
I want to eliminate all sequential points in the code.

I know it's complicated, but it shows why this routine is so much better down 
in the layers beneath: we can optimise every single HTTP request to S3A, order 
the copy calls for maximum overlapping operations, *and write functional tests 
against real S3 endpoints*. Object stores are so different from filesystems 
that testing against localfs is misleading.
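The per-file approach under discussion can be sketched as follows (ObjectStore is a hypothetical stand-in, not a real S3 client API): each object key is copied on a thread pool instead of relying on a single directory-level rename.

```java
import java.util.*;
import java.util.concurrent.*;

public class ParallelRename {
    // Hypothetical stand-in for an object-store client; not a real S3 API.
    interface ObjectStore { void copy(String src, String dst); }

    // "Rename" a directory on an object store by copying each object key
    // in parallel (source deletion is omitted for brevity).
    static void renameDir(ObjectStore store, List<String> keys,
                          String srcPrefix, String dstPrefix, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (String key : keys) {
            pool.submit(() -> store.copy(key, key.replaceFirst(srcPrefix, dstPrefix)));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        // Record destination keys instead of talking to a real store.
        Set<String> copied = ConcurrentHashMap.newKeySet();
        renameDir((src, dst) -> copied.add(dst),
                  Arrays.asList("src/a", "src/b"), "src/", "dst/", 2);
        System.out.println(new TreeSet<>(copied));
    }
}
```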


was (Author: ste...@apache.org):
#. I've just started HADOOP-13600, though busy with preparation and attendance 
at ApacheCon big data means expect no real progress for the next 10 days
# discussion on common dev about when 2.8 RC comes out

as far as HDP goes, all the s3a phase II read pipeline work is in HDP-2.5; the 
HDP-cloud in AWS product adds the HADOOP-13560 write pipeline; with a faster 
update cycle it'd be out the door fairly rapidly too (disclaimer, no forward 
looking statements, etc etc). CDH hasn't shipped with any of the phase II 
changes in yet, that's something to discuss with your colleagues. Given the 
emphasis on Impala & S3, I'd expect it sooner rather than later

Here's [the work in 
progress|https://github.com/steveloughran/hadoop/blob/s3/HADOOOP-13600-rename/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L802];
 as I note in the code, I'm not doing it right. We should have the list and 
delete operations working in parallel too, because list is pretty slow too, and 
I want to eliminate all sequential points in the code.

I know it's complicated, but it shows why this routine is so much better down 
in the layers beneath: we can optimise every single HTTP request to S3a, order 
the copy calls for maximum overlapping operations, *and write functional tests 
against real s3 endpoints*. object stores are so different from filesystems 
that testing against localfs is misleading.

> S3-to-S3 Renames: Files should be moved individually rather than at a 
> directory level
> -
>
> Key: HIVE-15093
> URL: https://issues.apache.org/jira/browse/HIVE-15093
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-15093.1.patch, HIVE-15093.2.patch, 
> HIVE-15093.3.patch, HIVE-15093.4.patch, HIVE-15093.5.patch, 
> HIVE-15093.6.patch, HIVE-15093.7.patch, HIVE-15093.8.patch, HIVE-15093.9.patch
>
>
> Hive's MoveTask uses the Hive.moveFile method to move data within a 
> distributed filesystem as well as blobstore filesystems.
> If the move is done within the same filesystem:
> 1: If the source path is a subdirectory of the destination path, files will 
> be moved one by one using a threadpool of workers
> 2: If the source path is not a subdirectory of the destination path, a single 
> rename operation is used to move the entire directory
> The second option may not work well on blobstores such as S3. Renames are not 
> metadata operations and require copying all the data. Client connectors to 
> blobstores may not efficiently rename directories. Worst case, the connector 
> will copy each file one by one, sequentially rather than using a threadpool 
> of workers to copy the data (e.g. HADOOP-13600).
> Hive already has code to rename files using a threadpool of workers, but this 
> only occurs in case number 1.
> This JIRA aims to modify the code so that case 1 is triggered when copying 
> within a blobstore. The focus is on copies within a blobstore because 
> needToCopy will return true if the 

[jira] [Commented] (HIVE-15093) S3-to-S3 Renames: Files should be moved individually rather than at a directory level

2016-11-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653694#comment-15653694
 ] 

Steve Loughran commented on HIVE-15093:
---

# I've just started HADOOP-13600, though busy with preparation and attendance 
at ApacheCon big data means expect no real progress for the next 10 days
# discussion on common dev about when 2.8 RC comes out

as far as HDP goes, all the s3a phase II read pipeline work is in HDP-2.5; the 
HDP-cloud in AWS product adds the HADOOP-13560 write pipeline; with a faster 
update cycle it'd be out the door fairly rapidly too (disclaimer, no forward 
looking statements, etc etc). CDH hasn't shipped with any of the phase II 
changes in yet, that's something to discuss with your colleagues. Given the 
emphasis on Impala & S3, I'd expect it sooner rather than later

Here's [the work in 
progress|https://github.com/steveloughran/hadoop/blob/s3/HADOOOP-13600-rename/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L802];
 as I note in the code, I'm not doing it right. We should have the list and 
delete operations working in parallel too, because list is pretty slow too, and 
I want to eliminate all sequential points in the code.

I know it's complicated, but it shows why this routine is so much better down 
in the layers beneath: we can optimise every single HTTP request to S3a, order 
the copy calls for maximum overlapping operations, *and write functional tests 
against real s3 endpoints*. object stores are so different from filesystems 
that testing against localfs is misleading.
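The idea above — copy, delete, and list all overlapping on a worker pool, with no sequential choke points — can be sketched in plain Java. This is a hypothetical illustration only: copyObject and deleteObject are stand-ins, not the real AWS SDK or S3AFileSystem calls, and the actual code linked above does considerably more.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class ParallelRenameSketch {
    // Stand-ins for the real S3 client operations; the actual SDK calls differ.
    static String copyObject(String srcKey, String dstPrefix) {
        return dstPrefix + "/" + srcKey.substring(srcKey.lastIndexOf('/') + 1);
    }
    static void deleteObject(String key) { /* would issue an HTTP DELETE */ }

    // Submit one copy per key; chain each delete behind its OWN copy, not
    // behind all copies, so no request waits on an unrelated one.
    public static List<String> rename(List<String> srcKeys, String dstPrefix,
                                      ExecutorService pool) {
        List<CompletableFuture<String>> futures = srcKeys.stream()
            .map(key -> CompletableFuture
                .supplyAsync(() -> copyObject(key, dstPrefix), pool)
                .thenApplyAsync(dst -> { deleteObject(key); return dst; }, pool))
            .collect(Collectors.toList());
        // Join in submission order to return a stable list of new keys.
        return futures.stream().map(CompletableFuture::join)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        System.out.println(rename(List.of("a/x", "a/y"), "b", pool));
        pool.shutdown();
    }
}
```

The same per-key chaining would let a listing page feed copies while later pages are still being fetched, which is the "eliminate all sequential points" goal.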

> S3-to-S3 Renames: Files should be moved individually rather than at a 
> directory level
> -
>
> Key: HIVE-15093
> URL: https://issues.apache.org/jira/browse/HIVE-15093
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-15093.1.patch, HIVE-15093.2.patch, 
> HIVE-15093.3.patch, HIVE-15093.4.patch, HIVE-15093.5.patch, 
> HIVE-15093.6.patch, HIVE-15093.7.patch, HIVE-15093.8.patch, HIVE-15093.9.patch
>
>
> Hive's MoveTask uses the Hive.moveFile method to move data within a 
> distributed filesystem as well as blobstore filesystems.
> If the move is done within the same filesystem:
> 1: If the source path is a subdirectory of the destination path, files will 
> be moved one by one using a threadpool of workers
> 2: If the source path is not a subdirectory of the destination path, a single 
> rename operation is used to move the entire directory
> The second option may not work well on blobstores such as S3. Renames are not 
> metadata operations and require copying all the data. Client connectors to 
> blobstores may not efficiently rename directories. Worst case, the connector 
> will copy each file one by one, sequentially rather than using a threadpool 
> of workers to copy the data (e.g. HADOOP-13600).
> Hive already has code to rename files using a threadpool of workers, but this 
> only occurs in case number 1.
> This JIRA aims to modify the code so that case 1 is triggered when copying 
> within a blobstore. The focus is on copies within a blobstore because 
> needToCopy will return true if the src and target filesystems are different, 
> in which case a different code path is triggered.
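The case split in the description can be reduced to a small decision function. This is a sketch of the logic only — the parameter names (sameFileSystem, srcIsSubdirOfDest, isBlobStore) are hypothetical; the real MoveTask/Hive.moveFile code derives these from FileSystem objects and needToCopy.

```java
public class MoveStrategySketch {
    // Returns true when files should be moved one by one on a worker pool
    // (case 1), false when a single directory rename is used (case 2).
    public static boolean moveFilesIndividually(boolean sameFileSystem,
                                                boolean srcIsSubdirOfDest,
                                                boolean isBlobStore) {
        if (!sameFileSystem) {
            // Cross-filesystem moves take a separate copy path entirely.
            return false;
        }
        // Case 1 as described above, plus the proposed change: also move
        // per-file on blobstores, where a directory "rename" degenerates
        // into sequential server-side copies of every object.
        return srcIsSubdirOfDest || isBlobStore;
    }

    public static void main(String[] args) {
        System.out.println(moveFilesIndividually(true, false, true));  // S3: per-file
        System.out.println(moveFilesIndividually(true, false, false)); // HDFS: rename
    }
}
```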



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15085) Reduce the memory used by unit tests, MiniCliDriver, MiniLlapLocal, MiniSpark

2016-11-10 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653687#comment-15653687
 ] 

Prasanth Jayachandran commented on HIVE-15085:
--

+1. Since we no longer run ptest on JDK 7, should we remove MaxPermSize to 
avoid the JDK 8 warning?

> Reduce the memory used by unit tests, MiniCliDriver, MiniLlapLocal, MiniSpark
> -
>
> Key: HIVE-15085
> URL: https://issues.apache.org/jira/browse/HIVE-15085
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15085.01.patch, HIVE-15085.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15164) Change default RPC port for llap to be a dynamic port

2016-11-10 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653683#comment-15653683
 ] 

Prasanth Jayachandran commented on HIVE-15164:
--

+1

> Change default RPC port for llap to be a dynamic port
> -
>
> Key: HIVE-15164
> URL: https://issues.apache.org/jira/browse/HIVE-15164
> Project: Hive
>  Issue Type: Task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-15164.01.patch, HIVE-15164.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15040) LATERAL VIEW + WHERE IN ...= WRONG RESULT

2016-11-10 Thread Teruyoshi Zenmyo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653654#comment-15653654
 ] 

Teruyoshi Zenmyo commented on HIVE-15040:
-

Hi [~fpin], I have found a similar (maybe the same) issue (HIVE-15175).
I tried the example query with hive.optimize.ppd=false and got 0 as the result.

Could you confirm this workaround?

> LATERAL VIEW + WHERE IN ...= WRONG RESULT
> -
>
> Key: HIVE-15040
> URL: https://issues.apache.org/jira/browse/HIVE-15040
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Furcy Pin
>Priority: Critical
>
> This query:
> {code}
> SELECT 
>   COUNT(1)
> FROM (
>   SELECT 1 as c1 , Array(1, 2, 3) as c2 
>   UNION ALL 
>   SELECT 2 as c1 , Array(2, 3, 4) as c2 
> ) T
> LATERAL VIEW explode(c2) LV AS c
> WHERE c = 42
> AND T.c1 NOT IN (SELECT 1 UNION ALL SELECT 3) 
> ;
> {code}
> returns {{3}} in Hive 1.1.0 and 2.0.0
> But obviously it should return 0, since {{c = 42}} is false.
> It seems that the clause is ignored.
> Spark-SQL does return {{0}}.
> P.S. The UNION ALL is not causing the bug; I just wanted to demonstrate it 
> with a standalone query. Using regular tables instead still causes the same 
> bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15057) Support other types of operators (other than SELECT)

2016-11-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653600#comment-15653600
 ] 

Hive QA commented on HIVE-15057:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12838321/HIVE-15057.wip.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 358 failed/errored test(s), 10639 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[allcolref_in_udf] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_change_col]
 (batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_cascade] 
(batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[archive_excludeHadoop20] 
(batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[archive_multi] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_view_1] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_view_3] 
(batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_view_disable_cbo_1]
 (batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_view_disable_cbo_3]
 (batchId=8)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_6] 
(batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_9] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=65)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join6] (batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join7] (batchId=24)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_reordering_values]
 (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_without_localtask]
 (batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_10] 
(batchId=65)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[binary_output_format] 
(batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_const] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_join0] 
(batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_join1] 
(batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_subq_exists] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_union] (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_subq_exists] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_union] (batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[char_join1] (batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[char_union1] (batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[combine2] (batchId=6)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[complex_alias] 
(batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer10] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer11] 
(batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer13] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer15] 
(batchId=24)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer8] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer9] 
(batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_2] (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_mat_4] (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[date_join1] (batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_3] (batchId=24)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_join2] 
(batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_join] (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_join_breaktask] 
(batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[gby_star] (batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_bigdata] 
(batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_1_23] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_skew_1_23] 
(batchId=8)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[index_auto_self_join] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort] 
(batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_convert_join]
 (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[innerjoin] (batchId=30)

[jira] [Updated] (HIVE-15057) Support other types of operators (other than SELECT)

2016-11-10 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HIVE-15057:

Attachment: HIVE-15057.wip.patch

> Support other types of operators (other than SELECT)
> 
>
> Key: HIVE-15057
> URL: https://issues.apache.org/jira/browse/HIVE-15057
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer, Physical Optimizer
>Reporter: Chao Sun
>Assignee: Chao Sun
> Attachments: HIVE-15057.wip.patch
>
>
> Currently only SELECT operators are supported for nested column pruning. We 
> should add support for other types of operators so the optimization can work 
> for complex queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15057) Support other types of operators (other than SELECT)

2016-11-10 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HIVE-15057:

Attachment: (was: HIVE-15057.wip.patch)

> Support other types of operators (other than SELECT)
> 
>
> Key: HIVE-15057
> URL: https://issues.apache.org/jira/browse/HIVE-15057
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer, Physical Optimizer
>Reporter: Chao Sun
>Assignee: Chao Sun
>
> Currently only SELECT operators are supported for nested column pruning. We 
> should add support for other types of operators so the optimization can work 
> for complex queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14541) Beeline does not prompt for username and password properly

2016-11-10 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653422#comment-15653422
 ] 

Lefty Leverenz commented on HIVE-14541:
---

Is this related to HIVE-13589?

> Beeline does not prompt for username and password properly
> --
>
> Key: HIVE-14541
> URL: https://issues.apache.org/jira/browse/HIVE-14541
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Miklos Csanady
>
> In the default mode, when we connect using !connect 
> jdbc:hive2://localhost:1 (without providing user and password), Beeline 
> prompts for them as expected.
> But when we use beeline -u "url" and do not provide -n or -p arguments, it 
> does not prompt for the user/password.
> {noformat}
> $ ./beeline -u jdbc:hive2://localhost:1
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/vihang/work/src/upstream/hive/packaging/target/apache-hive-2.2.0-SNAPSHOT-bin/apache-hive-2.2.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to jdbc:hive2://localhost:1
> Connected to: Apache Hive (version 2.2.0-SNAPSHOT)
> Driver: Hive JDBC (version 2.2.0-SNAPSHOT)
> 16/08/15 18:09:15 [main]: WARN jdbc.HiveConnection: Request to set autoCommit 
> to false; Hive does not support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 2.2.0-SNAPSHOT by Apache Hive
> 0: jdbc:hive2://localhost:1> !quit
> Closing: 0: jdbc:hive2://localhost:1
> {noformat}
> {noformat}
> $ ./beeline
> Beeline version 2.2.0-SNAPSHOT by Apache Hive
> beeline> !connect "jdbc:hive2://localhost:1"
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/vihang/work/src/upstream/hive/packaging/target/apache-hive-2.2.0-SNAPSHOT-bin/apache-hive-2.2.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to jdbc:hive2://localhost:1
> Enter username for jdbc:hive2://localhost:1: hive
> Enter password for jdbc:hive2://localhost:1: 
> Connected to: Apache Hive (version 2.2.0-SNAPSHOT)
> Driver: Hive JDBC (version 2.2.0-SNAPSHOT)
> 16/08/15 18:09:03 [main]: WARN jdbc.HiveConnection: Request to set autoCommit 
> to false; Hive does not support autoCommit=false.
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://localhost:1> !quit
> Closing: 0: jdbc:hive2://localhost:1
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15132) Docs on Wiki

2016-11-10 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653390#comment-15653390
 ] 

Lefty Leverenz commented on HIVE-15132:
---

[~ekoifman], would you please change the title (summary) of this issue to 
something less generic?  "Docs on Wiki for MERGE support" or some such.

> Docs on Wiki
> 
>
> Key: HIVE-15132
> URL: https://issues.apache.org/jira/browse/HIVE-15132
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15137) metastore add partitions background thread should use current username

2016-11-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653374#comment-15653374
 ] 

Hive QA commented on HIVE-15137:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12838303/HIVE-15137.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10637 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join_acid_non_acid]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=145)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_3] 
(batchId=90)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[external2] 
(batchId=83)
org.apache.hive.hcatalog.streaming.TestStreaming.testAddPartition (batchId=179)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2060/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2060/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2060/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12838303 - PreCommit-HIVE-Build

> metastore add partitions background thread should use current username
> --
>
> Key: HIVE-15137
> URL: https://issues.apache.org/jira/browse/HIVE-15137
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Thejas M Nair
>Assignee: Daniel Dai
> Attachments: HIVE-15137.1.patch
>
>
> The background thread used in HIVE-13901 for adding partitions needs to be 
> reinitialized with the current UGI for each invocation. Otherwise, the user 
> in whose context the thread was created would remain the effective UGI for 
> the actions performed in the thread.
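The fix described above — capturing the caller's identity at each invocation rather than once when the worker thread is created — can be illustrated with plain Java. A ThreadLocal String stands in for the Hadoop UGI here; the real patch would use UserGroupInformation.getCurrentUser() and run the work inside ugi.doAs(...).

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerCallIdentitySketch {
    // Stand-in for UserGroupInformation: a per-thread "current user".
    static final ThreadLocal<String> CURRENT_USER =
        ThreadLocal.withInitial(() -> "service");

    // Snapshot the caller's identity NOW (per invocation) and re-apply it on
    // the pool thread; relying on whatever identity the pool thread was
    // created under is exactly the bug this issue describes.
    public static Future<String> submitAsCaller(ExecutorService pool,
                                                Callable<String> work) {
        String caller = CURRENT_USER.get(); // captured at invocation time
        return pool.submit(() -> {
            CURRENT_USER.set(caller);       // re-initialize on the worker
            try {
                return work.call();
            } finally {
                CURRENT_USER.remove();
            }
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CURRENT_USER.set("alice");
        // The worker thread sees "alice", not its creation-time "service".
        System.out.println(submitAsCaller(pool, CURRENT_USER::get).get());
        pool.shutdown();
    }
}
```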



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)