[jira] [Commented] (HIVE-12965) Insert overwrite local directory should perserve the overwritten directory permission

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135209#comment-15135209
 ] 

Hive QA commented on HIVE-12965:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786182/HIVE-12965.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10051 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_llapdecider
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testThreadPoolMetrics
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6878/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6878/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6878/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786182 - PreCommit-HIVE-TRUNK-Build

> Insert overwrite local directory should perserve the overwritten directory 
> permission
> -
>
> Key: HIVE-12965
> URL: https://issues.apache.org/jira/browse/HIVE-12965
> Project: Hive
>  Issue Type: Bug
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-12965.patch
>
>
> In Hive, "insert overwrite local directory" first deletes the overwritten 
> directory if it exists, recreates a new one, and then copies the files from 
> the source directory to the new local directory. This process sometimes 
> changes the permissions of the overwritten local directory, causing some 
> applications to lose access to its contents.
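The preserve-and-restore idea can be sketched like this on a local POSIX filesystem: capture the directory's permissions before deleting it, then reapply them after recreating it. This is an illustrative sketch, not Hive's actual implementation; all names are hypothetical.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Comparator;
import java.util.Set;

public class PreservePermissions {
    /** Deletes dir recursively, recreates it, and restores its original POSIX permissions. */
    static void overwriteDirPreservingPerms(Path dir) throws IOException {
        // Capture the permissions before the directory is removed.
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(dir);
        // Recursive delete: reverse order deletes children before parents.
        try (var paths = Files.walk(dir)) {
            paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                try { Files.delete(p); } catch (IOException e) { throw new UncheckedIOException(e); }
            });
        }
        Files.createDirectories(dir);
        // Restore the captured permissions instead of the process umask default.
        Files.setPosixFilePermissions(dir, perms);
    }
}
```

Without the final restore step, the recreated directory gets permissions derived from the process umask, which is the symptom the issue describes.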



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12949) Fix description of hive.server2.logging.operation.level

2016-02-05 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135634#comment-15135634
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-12949:
--

[~mohitsabharwal] Thanks for the comments. If you look, LogDivertAppender has a 
Filter that decides which server-side messages get logged to the session, 
based on the logging level. The logic is here: 
https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/LogDivertAppender.java#L150.
As for the operation log, it depends on the parent session's HiveConf, which 
determines the logging level as initiated here: 
https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/Operation.java#L246
Every time you modify the parameter from the session, the HiveConf object 
corresponding to the session is updated to reflect the change, and hence the 
operation is initialized with this information.
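The mechanism described above (a filter admitting or rejecting a log event based on a per-session verbosity setting) can be sketched roughly as follows. The class and level names are illustrative stand-ins, not the actual LogDivertAppender code.

```java
public class OperationLogFilter {
    // Mirrors the idea of hive.server2.logging.operation.level:
    // NONE < EXECUTION < PERFORMANCE < VERBOSE.
    enum OpLogLevel { NONE, EXECUTION, PERFORMANCE, VERBOSE }

    private final OpLogLevel sessionLevel;

    OperationLogFilter(OpLogLevel sessionLevel) {
        this.sessionLevel = sessionLevel;
    }

    /** Accept an event only if its level is within the session's configured verbosity. */
    boolean accept(OpLogLevel eventLevel) {
        return sessionLevel != OpLogLevel.NONE
            && eventLevel != OpLogLevel.NONE
            && eventLevel.ordinal() <= sessionLevel.ordinal();
    }
}
```

Because the filter consults the session's configured level at filtering time, changing the session parameter changes which events are diverted, which is the behavior Hari describes.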

I tested this parameter with the current master and the feature works as 
intended. If you feel that this is not the case, please update the JIRA title 
to include the actual issue, along with a test case. I don't think a 
description change for hive.server2.logging.operation.level is the correct way 
to fix this problem if it is an actual bug.

Thanks
Hari

> Fix description of hive.server2.logging.operation.level
> ---
>
> Key: HIVE-12949
> URL: https://issues.apache.org/jira/browse/HIVE-12949
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Mohit Sabharwal
>Assignee: Mohit Sabharwal
> Attachments: HIVE-12949.patch
>
>
> Description of {{hive.server2.logging.operation.level}} states that this 
> property is available to set at the session level. This is not correct. The 
> property is currently only settable at the HS2 level.
> A log4j appender is currently created just once, based on the logging level 
> specified by this config, at OperationManager init time in 
> initOperationLogCapture(). And there is only one instance of OperationManager.





[jira] [Commented] (HIVE-12923) CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver groupby_grouping_sets4.q failure

2016-02-05 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135643#comment-15135643
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-12923:
--

[~jcamachorodriguez] Thanks for the review comments.
1. I totally agree that this can potentially cause regressions in some cases of 
join merging in the return path. As commented in the fix, this should be 
removed once CALCITE-1069 is fixed, but this fix is better than throwing an 
exception at the end user. One way to optimize this would be to check only the 
first AGGREGATE node below the current PROJECT instead of the entire 
subtree. Does that sound fine?
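The optimization proposed in point 1, inspecting only the nearest Aggregate below the Project rather than the whole subtree, can be sketched as a short tree walk. The node model here is a hypothetical stand-in, not Calcite's RelNode API.

```java
import java.util.List;

public class FirstAggregateCheck {
    // Minimal stand-in for a relational operator tree node.
    record Node(String kind, boolean hasGroupingId, List<Node> inputs) {}

    /** Walk down from the Project and test only the first Aggregate reached on each input path. */
    static boolean firstAggregateHasGroupingId(Node project) {
        for (Node input : project.inputs()) {
            Node n = input;
            // Skip intermediate operators (e.g. FILTER) until an Aggregate is found.
            while (n != null && !n.kind().equals("AGGREGATE")) {
                n = n.inputs().isEmpty() ? null : n.inputs().get(0);
            }
            if (n != null && n.hasGroupingId()) return true;
        }
        return false;
    }
}
```

The walk stops at the first Aggregate on each path, so deeper Aggregates with grouping ids no longer block the rule, which is the regression concern being addressed.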

2. I don't totally get your second point. There can be a filter between the 
Project and Aggregate, as in JOIN->PROJECT->FILTER->AGGREGATE (e.g. a NOT NULL 
filter), but I believe the PROJECT is always present somewhere above an AGGREGATE 
after CalcitePlanner.genLogicalPlan(QB qb, boolean outerMostQB) introduces a 
SELECT to remove the indicator columns from the GBY row schema. If you are 
referring to a FILTER above the PROJECT, then the HiveJoinProjectTranspose rule 
shouldn't kick in, right?

Thanks
Hari



> CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver 
> groupby_grouping_sets4.q failure
> 
>
> Key: HIVE-12923
> URL: https://issues.apache.org/jira/browse/HIVE-12923
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-12923.1.patch, HIVE-12923.2.patch
>
>
> {code}
> EXPLAIN
> SELECT * FROM
> (SELECT a, b, count(*) from T1 where a < 3 group by a, b with cube) subq1
> join
> (SELECT a, b, count(*) from T1 where a < 3 group by a, b with cube) subq2
> on subq1.a = subq2.a
> {code}
> Stack trace:
> {code}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory.pruneJoinOperator(ColumnPrunerProcFactory.java:1110)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory.access$400(ColumnPrunerProcFactory.java:85)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory$ColumnPrunerJoinProc.process(ColumnPrunerProcFactory.java:941)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPruner$ColumnPrunerWalker.walk(ColumnPruner.java:172)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPruner.transform(ColumnPruner.java:135)
> at 
> org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:237)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10176)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:229)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:472)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:312)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1168)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1256)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1094)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1082)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1129)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1103)
> at 
> org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:10444)
> at 
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets4(TestCliDriver.java:3313)
> {code}



[jira] [Commented] (HIVE-13008) WebHcat DDL commands in secure mode NPE when default FileSystem doesn't support delegation tokens

2016-02-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134432#comment-15134432
 ] 

Chris Nauroth commented on HIVE-13008:
--

+1 (non-binding), pending pre-commit run.  This is the right thing to do from 
the perspective of integration with [HDFS 
Federation|http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/Federation.html]
 and use of 
[ViewFs|http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/ViewFs.html]
 as an abstraction over multiple file systems.

> WebHcat DDL commands in secure mode NPE when default FileSystem doesn't 
> support delegation tokens
> -
>
> Key: HIVE-13008
> URL: https://issues.apache.org/jira/browse/HIVE-13008
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13008.patch
>
>
> {noformat}
> ERROR | 11 Jan 2016 20:19:02,781 | 
> org.apache.hive.hcatalog.templeton.CatchallExceptionMapper |
> java.lang.NullPointerException
> at 
> org.apache.hive.hcatalog.templeton.SecureProxySupport$2.run(SecureProxySupport.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hive.hcatalog.templeton.SecureProxySupport.writeProxyDelegationTokens(SecureProxySupport.java:168)
> at 
> org.apache.hive.hcatalog.templeton.SecureProxySupport.open(SecureProxySupport.java:95)
> at 
> org.apache.hive.hcatalog.templeton.HcatDelegator.run(HcatDelegator.java:63)
> at org.apache.hive.hcatalog.templeton.Server.ddl(Server.java:217)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1480)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1411)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1360)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1350)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1360)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:615)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:574)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:88)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1331)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:477)
> at 
> 

[jira] [Commented] (HIVE-10115) HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and Delegation token(DIGEST) when alternate authentication is enabled

2016-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134483#comment-15134483
 ] 

Sergio Peña commented on HIVE-10115:


The JDBC Spark tests that failed are not related to this patch. It seems that 
those tests are flaky. I looked at other jobs, and there are other JDBC Spark 
test classes that fail in the same way.
I ran the tests on my machine and they pass.

Another failed test, {{TestTxnCommands2}}, is not related to this patch either. 
It does not touch anything in HS2 authentication, and the failure is due to a 
NullPointerException that happens while getting transactions from the metastore.
I ran this test on my machine and it passes.

The other 2 tests failed in previous jobs as well.
I'll give a +1 to this patch.

[~xuefuz] Could you confirm that the Spark tests are not related?

> HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and 
> Delegation token(DIGEST) when alternate authentication is enabled
> ---
>
> Key: HIVE-10115
> URL: https://issues.apache.org/jira/browse/HIVE-10115
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication
>Affects Versions: 1.1.0
>Reporter: Mubashir Kazia
>Assignee: Mubashir Kazia
>  Labels: patch
> Attachments: HIVE-10115.0.patch, HIVE-10115.2.patch
>
>
> In a Kerberized cluster, when alternate authentication is enabled on HS2, it 
> should also accept Kerberos authentication. This is important because when we 
> enable LDAP authentication, HS2 stops accepting delegation token 
> authentication, so we are forced to enter usernames and passwords in the 
> Oozie configuration.
> The whole idea of SASL is that multiple authentication mechanisms can be 
> offered. Disabling Kerberos (GSSAPI) and delegation token (DIGEST) 
> authentication when LDAP authentication is enabled defeats the purpose of SASL.
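The negotiation principle being argued for, where the server advertises several SASL mechanisms and the client picks one it supports, can be sketched as a simple intersection. The mechanism names follow SASL conventions, but this is a hypothetical sketch, not HS2's actual code.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class SaslNegotiation {
    /** Pick the first client-preferred mechanism that the server also offers. */
    static Optional<String> negotiate(List<String> clientPrefs, Set<String> serverOffers) {
        return clientPrefs.stream().filter(serverOffers::contains).findFirst();
    }

    /** On a Kerberized cluster, keep offering GSSAPI and DIGEST even when
     *  password-based auth (e.g. PLAIN backed by LDAP) is enabled. */
    static Set<String> serverMechanisms(boolean kerberized, boolean ldapEnabled) {
        Set<String> mechs = new LinkedHashSet<>();
        if (kerberized) { mechs.add("GSSAPI"); mechs.add("DIGEST-MD5"); }
        if (ldapEnabled) { mechs.add("PLAIN"); }
        return mechs;
    }
}
```

If the server drops GSSAPI/DIGEST when PLAIN is enabled, a token-bearing client like Oozie has no matching mechanism, which is the failure mode the issue describes.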



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12918) LLAP should never create embedded metastore when localizing functions

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134384#comment-15134384
 ] 

Hive QA commented on HIVE-12918:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785926/HIVE-12918.02.patch

{color:green}SUCCESS:{color} +1 due to 21 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 79 failed/errored test(s), 10044 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_bucket_map_join_tez1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_bucket_map_join_tez2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_constprog_dpp
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_hybridgrace_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_hybridgrace_hashjoin_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_llapdecider
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_lvj_mapjoin
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_mapjoin_decimal
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_mrr
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_ppd_basic
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_bmj_schema_evolution
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_dml
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_dynpart_hashjoin_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_fsstat
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_insert_overwrite_local_directory_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_hash
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_result_complex
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_tests
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_joins_explain
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_multi_union
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_schema_evolution
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_self_join
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_smb_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_smb_main
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union_decimal
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union_dynamic_partition
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union_group_by
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union_multiinsert
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_vector_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_vector_dynpart_hashjoin_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_join_part_col_char
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vectorized_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInEmbed.org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInEmbed
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testAlterPartition
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testAlterTable
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testAlterViewParititon
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testColumnStatistics
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testComplexTable
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testComplexTypeApi
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testConcurrentMetastores
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testDBOwner
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testDBOwnerChange
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testDatabase
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testDatabaseLocation
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testDatabaseLocationWithPermissionProblems

[jira] [Commented] (HIVE-12165) wrong result when hive.optimize.sampling.orderby=true with some aggregate functions

2016-02-05 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134435#comment-15134435
 ] 

Aihua Xu commented on HIVE-12165:
-

Internally, when the query is compiled, an order-by is added, and in this case 
we try to do a parallel order-by. Ideally we could do another reduce step to 
merge the results, but that would probably be a big change. For now I will 
force it to use 1 reducer in this case, the same as with 
hive.optimize.sampling.orderby=false.
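The "another reduce to merge the result" idea amounts to combining the per-reducer partial aggregates, like the three rows in the wrong output below, into one final row. A sketch of that merge step (illustrative, not Hive's execution code):

```java
import java.util.List;

public class MergePartials {
    // One reducer's partial result: count, min, max. Note that count(distinct)
    // is NOT mergeable this way; it needs the raw values or a sketch structure.
    record Partial(long count, long min, long max) {}

    /** Combine per-reducer partials into a single global aggregate. */
    static Partial merge(List<Partial> parts) {
        long count = 0, min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (Partial p : parts) {
            count += p.count();                // counts add up
            min = Math.min(min, p.min());      // global min of mins
            max = Math.max(max, p.max());      // global max of maxes
        }
        return new Partial(count, min, max);
    }
}
```

The count(distinct) column is exactly why the extra merge pass is a big change: unlike count/min/max, distinct counts from different reducers cannot simply be added.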

> wrong result when hive.optimize.sampling.orderby=true with some aggregate 
> functions
> ---
>
> Key: HIVE-12165
> URL: https://issues.apache.org/jira/browse/HIVE-12165
> Project: Hive
>  Issue Type: Bug
> Environment: hortonworks  2.3
>Reporter: ErwanMAS
>Assignee: Aihua Xu
>Priority: Critical
>
> This simple query give wrong result , when , i use the parallel order .
> {noformat}
> select count(*) , count(distinct dummyint ) , min(dummyint),max(dummyint) 
> from foobar_1M ;
> {noformat}
> Current wrong result :
> {noformat}
> c0c1  c2  c3
> 32740 32740   0   163695
> 113172113172  163700  729555
> 54088 54088   729560  95
> {noformat}
> Right result :
> {noformat}
> c0c1  c2  c3
> 100   100 0   99
> {noformat}
> The sql script for my test 
> {noformat}
> drop table foobar_1 ;
> create table foobar_1 ( dummyint int  , dummystr string ) ;
> insert into table foobar_1 select count(*),'dummy 0'  from foobar_1 ;
> drop table foobar_1M ;
> create table foobar_1M ( dummyint bigint  , dummystr string ) ;
> insert overwrite table foobar_1M
>select val_int  , concat('dummy ',val_int) from
>  ( select (((((d_1*10)+d_2)*10+d_3)*10+d_4)*10+d_5)*10+d_6 as 
> val_int from foobar_1
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_1 as d_1
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_2 as d_2
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_3 as d_3
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_4 as d_4
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_5 as d_5
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_6 as d_6  ) as f ;
> set hive.optimize.sampling.orderby.number=1;
> set hive.optimize.sampling.orderby.percent=0.1f;
> set mapreduce.job.reduces=3 ;
> set hive.optimize.sampling.orderby=false;
> select count(*) , count(distinct dummyint ) , min(dummyint),max(dummyint) 
> from foobar_1M ;
> set hive.optimize.sampling.orderby=true;
> select count(*) , count(distinct dummyint ) , min(dummyint),max(dummyint) 
> from foobar_1M ;
> {noformat}





[jira] [Updated] (HIVE-13013) Further Improve concurrency in TxnHandler

2016-02-05 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13013:
--
Priority: Critical  (was: Major)

> Further Improve concurrency in TxnHandler
> -
>
> Key: HIVE-13013
> URL: https://issues.apache.org/jira/browse/HIVE-13013
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
>
> There are still a few operations in TxnHandler that run at SERIALIZABLE 
> isolation.
> Most or all of them can be dropped to READ_COMMITTED now that we have SELECT 
> ... FOR UPDATE support.  This will reduce the number of deadlocks in the DBs.
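The concurrency gain is that SELECT ... FOR UPDATE at READ_COMMITTED locks only the rows a transaction touches, instead of serializing whole operations. As a loose in-memory analog (hypothetical, not the TxnHandler code), per-key locks let operations on different rows proceed concurrently where a single global lock would serialize them:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class RowLocks {
    // One lock per logical row key, created on demand -- the analog of
    // "SELECT ... FOR UPDATE" locking just the selected row.
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    ReentrantLock lockFor(String rowKey) {
        return locks.computeIfAbsent(rowKey, k -> new ReentrantLock());
    }

    /** Run an action while holding only that row's lock; other rows stay unlocked. */
    void withRowLock(String rowKey, Runnable action) {
        ReentrantLock l = lockFor(rowKey);
        l.lock();
        try { action.run(); } finally { l.unlock(); }
    }
}
```

Under SERIALIZABLE the database may take broader predicate locks, so independent operations can deadlock against each other; row-scoped locking narrows the conflict surface.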





[jira] [Commented] (HIVE-12990) LLAP: ORC cache NPE without FileID support

2016-02-05 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134687#comment-15134687
 ] 

Prasanth Jayachandran commented on HIVE-12990:
--

LGTM, +1

> LLAP: ORC cache NPE without FileID support
> --
>
> Key: HIVE-12990
> URL: https://issues.apache.org/jira/browse/HIVE-12990
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12990.01.patch, HIVE-12990.patch
>
>
> {code}
>OrcBatchKey stripeKey = hasFileId ? new OrcBatchKey(fileId, -1, 0) : null;
>...
>   if (hasFileId && metadataCache != null) {
> stripeKey.stripeIx = stripeIx;
> stripeMetadata = metadataCache.getStripeMetadata(stripeKey);
>   }
> ...
>   public void setStripeMetadata(OrcStripeMetadata m) {
> assert stripes != null;
> stripes[m.getStripeIx()] = m;
>   }
> {code}
> {code}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.llap.io.metadata.OrcStripeMetadata.getStripeIx(OrcStripeMetadata.java:106)
> at 
> org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.setStripeMetadata(OrcEncodedDataConsumer.java:70)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.readStripesMetadata(OrcEncodedDataReader.java:685)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:283)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:215)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:212)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:212)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:93)
> ... 5 more
> {code}





[jira] [Updated] (HIVE-12994) Implement support for NULLS FIRST/NULLS LAST

2016-02-05 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-12994:
---
Attachment: (was: HIVE-12994.01.patch)

> Implement support for NULLS FIRST/NULLS LAST
> 
>
> Key: HIVE-12994
> URL: https://issues.apache.org/jira/browse/HIVE-12994
> Project: Hive
>  Issue Type: New Feature
>  Components: CBO, Metastore, Parser, Serializers/Deserializers
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-12994.01.patch, HIVE-12994.patch
>
>
> From SQL:2003, the NULLS FIRST and NULLS LAST options can be used to 
> determine whether nulls appear before or after non-null data values when the 
> ORDER BY clause is used.
> The SQL standard does not specify the default behavior. Currently in Hive, 
> null values sort as if lower than any non-null value; that is, NULLS FIRST is 
> the default for ASC order, and NULLS LAST for DESC order.
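The current default described above can be mirrored with Java's comparator combinators: for ascending order, a nulls-first comparator matches Hive's existing behavior, and nulls-last is the alternative the feature adds. Illustrative only, not Hive code.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class NullsOrdering {
    /** ASC with nulls sorted first -- Hive's current default for ASC. */
    static List<Integer> sortAscNullsFirst(List<Integer> in) {
        List<Integer> out = new ArrayList<>(in);
        out.sort(Comparator.nullsFirst(Comparator.<Integer>naturalOrder()));
        return out;
    }

    /** ASC with nulls sorted last -- what NULLS LAST would request. */
    static List<Integer> sortAscNullsLast(List<Integer> in) {
        List<Integer> out = new ArrayList<>(in);
        out.sort(Comparator.nullsLast(Comparator.<Integer>naturalOrder()));
        return out;
    }
}
```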





[jira] [Commented] (HIVE-12927) HBase metastore: sequences are not safe

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134617#comment-15134617
 ] 

Hive QA commented on HIVE-12927:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786166/HIVE-12927.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10052 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_table
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6876/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6876/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6876/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786166 - PreCommit-HIVE-TRUNK-Build

> HBase metastore: sequences are not safe
> ---
>
> Key: HIVE-12927
> URL: https://issues.apache.org/jira/browse/HIVE-12927
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Sergey Shelukhin
>Assignee: Alan Gates
>Priority: Critical
> Attachments: HIVE-12927.patch
>
>
> {noformat}
>   long getNextSequence(byte[] sequence) throws IOException {
> {noformat}
> Is not safe in presence of any concurrency. It should use HBase increment API.
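The hazard is the classic read-modify-write race: two clients read the same current value and both write back value+1, losing a sequence number. HBase's increment API performs the step atomically server-side. A plain-Java sketch of the distinction, with AtomicLong standing in for the atomic increment (illustrative, not the metastore code):

```java
import java.util.concurrent.atomic.AtomicLong;

public class Sequences {
    // UNSAFE pattern: read, then write, as two separate steps.
    static long unsafeCurrent = 0;
    static long unsafeNext() {
        long v = unsafeCurrent;   // two threads can both read the same v...
        unsafeCurrent = v + 1;    // ...and both write v + 1, losing a value
        return v;
    }

    // SAFE pattern: a single atomic read-modify-write, analogous to HBase increment.
    static final AtomicLong safeCurrent = new AtomicLong();
    static long safeNext() {
        return safeCurrent.getAndIncrement();
    }
}
```

Single-threaded, both produce the same sequence; only under concurrency does the unsafe version hand out duplicates, which is why the bug is easy to miss in tests.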





[jira] [Commented] (HIVE-12064) prevent transactional=false

2016-02-05 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134526#comment-15134526
 ] 

Eugene Koifman commented on HIVE-12064:
---

I would describe the invariants like this:

1) If "transactional=true", then the table must be bucketed and implement the 
Acid I/O format.
2) If running ALTER TABLE on an Acid table, then after the alter, 
"transactional=true" must still be present.
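A sketch of enforcing those two invariants at ALTER TABLE time. The property name comes from the discussion; the validation helper itself is hypothetical, not Hive's actual DDL code.

```java
import java.util.Map;

public class AcidPropertyCheck {
    /** Throws if an ALTER would violate the two invariants described above. */
    static void validateAlter(Map<String, String> oldProps, Map<String, String> newProps,
                              boolean bucketed, boolean acidIoFormat) {
        boolean wasAcid = "true".equalsIgnoreCase(oldProps.get("transactional"));
        boolean isAcid = "true".equalsIgnoreCase(newProps.get("transactional"));
        // Invariant 1: transactional=true requires bucketing and the Acid I/O format.
        if (isAcid && !(bucketed && acidIoFormat)) {
            throw new IllegalArgumentException(
                "transactional=true requires a bucketed table with Acid I/O format");
        }
        // Invariant 2: once a table is Acid, transactional=true must remain set.
        if (wasAcid && !isAcid) {
            throw new IllegalArgumentException(
                "cannot unset transactional=true on an Acid table");
        }
    }
}
```

Invariant 2 is what prevents the wrong-data scenario in the description: the on-disk layout is already Acid, so the property can never be allowed to flip back.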

> prevent transactional=false
> ---
>
> Key: HIVE-12064
> URL: https://issues.apache.org/jira/browse/HIVE-12064
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>Priority: Critical
> Attachments: HIVE-12064.2.patch, HIVE-12064.patch
>
>
> Currently the table property transactional=true must be set to make a table 
> behave in an ACID-compliant way.
> This is misleading in that it seems like changing it to transactional=false 
> makes the table non-ACID, but the on-disk layout of an Acid table is 
> different from that of plain tables, so changing this property may cause 
> wrong data to be returned.
> We should prevent transactional=false.





[jira] [Commented] (HIVE-12923) CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver groupby_grouping_sets4.q failure

2016-02-05 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134639#comment-15134639
 ] 

Jesus Camacho Rodriguez commented on HIVE-12923:


[~hsubramaniyan], I added a minor comment to the patch. But I do have other 
global concerns.

First, we might have regressions on join merging in the return path. Given the 
current patch, projects might not be pulled up if there is an Aggregate with 
grouping id in the subtree rooted at one of its inputs. Thus, project operators 
might stay between multiple joins.

Second, are we sure that in every case the Project operator that prunes the 
indicator columns of the Aggregate operator is going to stay just on top? For 
instance, could by any chance a Filter stay between them?

In general, this is a workaround to continue progressing on the return path, 
but we definitely need the right solution in place: reimplementing indicator 
columns as a single column in Calcite.

Thanks

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver 
> groupby_grouping_sets4.q failure
> 
>
> Key: HIVE-12923
> URL: https://issues.apache.org/jira/browse/HIVE-12923
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-12923.1.patch, HIVE-12923.2.patch
>
>
> {code}
> EXPLAIN
> SELECT * FROM
> (SELECT a, b, count(*) from T1 where a < 3 group by a, b with cube) subq1
> join
> (SELECT a, b, count(*) from T1 where a < 3 group by a, b with cube) subq2
> on subq1.a = subq2.a
> {code}
> Stack trace:
> {code}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory.pruneJoinOperator(ColumnPrunerProcFactory.java:1110)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory.access$400(ColumnPrunerProcFactory.java:85)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory$ColumnPrunerJoinProc.process(ColumnPrunerProcFactory.java:941)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPruner$ColumnPrunerWalker.walk(ColumnPruner.java:172)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPruner.transform(ColumnPruner.java:135)
> at 
> org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:237)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10176)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:229)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:472)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:312)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1168)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1256)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1094)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1082)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1129)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1103)
> at 
> org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:10444)
> at 
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets4(TestCliDriver.java:3313)
> {code}





[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-02-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134658#comment-15134658
 ] 

Ashutosh Chauhan commented on HIVE-12839:
-

[~pxiong] Can you describe changes for HiveRelFieldTrimmer in latest patch ?

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch, HIVE-12839.04.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.





[jira] [Commented] (HIVE-13012) NPE from simple nested ANSI Join

2016-02-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134657#comment-15134657
 ] 

Sergey Shelukhin commented on HIVE-13012:
-

HIVE-11960 added the parser support; that usually works, but apparently not in 
all cases.

> NPE from simple nested ANSI Join
> 
>
> Key: HIVE-13012
> URL: https://issues.apache.org/jira/browse/HIVE-13012
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Affects Versions: 1.2.1
>Reporter: Dave Nicodemus
>Assignee: Pengcheng Xiong
>
> Using Hive 1.2.1.2.3. Connecting using JDBC, issuing the following query: 
> SELECT COUNT(*) 
> FROM nation n 
> INNER JOIN (customer c
>  INNER JOIN orders o ON c.c_custkey = o.o_custkey)
>  ON n.n_nationkey = c.c_nationkey;
> Generates the NPE and stack below. Fields are integer data type.
> NOTE: Similar stack as https://issues.apache.org/jira/browse/HIVE-11433 
> Stack
> 
> Caused by: java.lang.NullPointerException: Remote 
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.isPresent(SemanticAnalyzer.java:2046)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2109)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2185)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2445)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2386)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:8192)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:8131)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9709)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9636)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10109)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:329)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10120)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:454)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:314)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1164)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1158)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:110)





[jira] [Commented] (HIVE-13010) partitions autogenerated predicates broken

2016-02-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134648#comment-15134648
 ] 

Sergey Shelukhin commented on HIVE-13010:
-

Use current_timestamp instead; it is deterministic (although I'm not sure 
whether it's available in 1.1)
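One way to sidestep the non-deterministic unix_timestamp() call entirely is to compute the date literals on the client before submitting the query, so the partition predicate is a plain constant that the pruner can use — as in the second query below. A sketch of that idea (illustrative only, not tied to any Hive client API):

```python
from datetime import date, timedelta

def date_window_predicate(days_back_start, days_back_end):
    """Build constant 'yyyy-MM-dd' date literals for a partition predicate,
    instead of computing them inside the query with unix_timestamp()."""
    today = date.today()
    start = (today - timedelta(days=days_back_start)).strftime("%Y-%m-%d")
    end = (today - timedelta(days=days_back_end)).strftime("%Y-%m-%d")
    return "dt between '%s' and '%s'" % (start, end)

predicate = date_window_predicate(3, 1)
```

The resulting string can be interpolated into the query text, giving the optimizer literal bounds to prune partitions against.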

> partitions autogenerated predicates broken
> --
>
> Key: HIVE-13010
> URL: https://issues.apache.org/jira/browse/HIVE-13010
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Stanilovsky Evgeny
>Priority: Trivial
>
> hi, I'm looking for a similar problem but found only:
> https://issues.apache.org/jira/browse/HIVE-9630
> It looks like the same issue, and I hope you can easily reproduce it in 
> testing.
> I have two similar requests; the difference is in the autogenerated date 
> predicates. In the first case, explain shows a full scan.
> '''
> set hive.optimize.constant.propagation=true;
> explain select * from logs.weather_forecasts where dt between
> from_unixtime(unix_timestamp() - 3600*24*3, 'yyyy-MM-dd') and
> from_unixtime(unix_timestamp() - 3600*24*1, 'yyyy-MM-dd') and
> provider_id = 100
> STAGE PLANS:
> 5   Stage: Stage-1
> 6 Map Reduce
> 7   Map Operator Tree:
> 8   TableScan
> 9 alias: weather_forecasts
> 10Statistics: Num rows: 36124837607 Data size: 47395787122046 
> Basic stats: PARTIAL Column stats: NONE
> 
> and
> 
> set hive.optimize.constant.propagation=true;
> explain select * from logs.redir_log where dt between
> '2016-02-02' and
> '2016-02-04' and
> pid = 100
> 0 STAGE DEPENDENCIES:
> 1   Stage-1 is a root stage
> 2   Stage-0 depends on stages: Stage-1
> 3 
> 4 STAGE PLANS:
> 5   Stage: Stage-1
> 6 Map Reduce
> 7   Map Operator Tree:
> 8   TableScan
> 9 alias: redir_log
> 10Statistics: Num rows: 2798358420 Data size: 5761819991150 
> Basic stats: COMPLETE Column stats: NONE
> 





[jira] [Commented] (HIVE-13008) WebHcat DDL commands in secure mode NPE when default FileSystem doesn't support delegation tokens

2016-02-05 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134684#comment-15134684
 ] 

Thejas M Nair commented on HIVE-13008:
--

+1 LGTM

Thanks for your guidance and review [~cnauroth]!


> WebHcat DDL commands in secure mode NPE when default FileSystem doesn't 
> support delegation tokens
> -
>
> Key: HIVE-13008
> URL: https://issues.apache.org/jira/browse/HIVE-13008
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13008.patch
>
>
> {noformat}
> ERROR | 11 Jan 2016 20:19:02,781 | 
> org.apache.hive.hcatalog.templeton.CatchallExceptionMapper |
> java.lang.NullPointerException
> at 
> org.apache.hive.hcatalog.templeton.SecureProxySupport$2.run(SecureProxySupport.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hive.hcatalog.templeton.SecureProxySupport.writeProxyDelegationTokens(SecureProxySupport.java:168)
> at 
> org.apache.hive.hcatalog.templeton.SecureProxySupport.open(SecureProxySupport.java:95)
> at 
> org.apache.hive.hcatalog.templeton.HcatDelegator.run(HcatDelegator.java:63)
> at org.apache.hive.hcatalog.templeton.Server.ddl(Server.java:217)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1480)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1411)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1360)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1350)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1360)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:615)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:574)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:88)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1331)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:477)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
> at 
> 

[jira] [Commented] (HIVE-12885) LDAP Authenticator improvements

2016-02-05 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134689#comment-15134689
 ] 

Chaoyu Tang commented on HIVE-12885:


Committed to 2.1.0. There were some conflicts when merging to branch-1, so a 
separate patch will be needed for 1.3.0.

> LDAP Authenticator improvements
> ---
>
> Key: HIVE-12885
> URL: https://issues.apache.org/jira/browse/HIVE-12885
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-12885.2.patch, HIVE-12885.3.patch, HIVE-12885.patch
>
>
> Currently Hive's LDAP Atn provider assumes certain defaults to keep its 
> configuration simple. 
> 1) One of the assumptions is the presence of an attribute 
> "distinguishedName". In certain non-standard LDAP implementations, this 
> attribute may not be available. Since getNameInNamespace() returns the same 
> value, that API should be used instead of basing all LDAP searches on this 
> attribute.
> 2) It also assumes that the "user" value being passed in will be able to 
> bind to LDAP. However, certain LDAP implementations by default only allow 
> the full DN to be used; short user names are not permitted. We need to 
> support short names too when the Hive configuration only has "baseDN" 
> specified (not userDNPatterns). So instead of hard-coding "uid" or "CN" as 
> keys for the short usernames, it is better to make this a configurable 
> parameter.
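The configurable-key idea for short user names can be sketched as follows; the function and the default key are hypothetical, chosen only to illustrate the shape of the configuration, and are not the patch's actual code:

```python
def build_user_dn(user, base_dn, guid_key="uid"):
    """Turn a short user name into a full DN for binding.

    If the caller already passed something that looks like a full DN,
    use it as-is; otherwise prefix with the configured attribute key
    (e.g. uid or CN) and append the base DN.
    """
    if "=" in user:          # already looks like a DN
        return user
    return "%s=%s,%s" % (guid_key, user, base_dn)

dn = build_user_dn("bob", "dc=example,dc=com")
```

A deployment whose directory keys users by CN rather than uid would simply set the key parameter instead of requiring a code change.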





[jira] [Updated] (HIVE-12987) Add metrics for HS2 active users and SQL operations

2016-02-05 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-12987:
---
Attachment: HIVE-12987.3.patch

Attached patch v3. Since the existing metrics are already in 2.0, to stay 
backward compatible, I changed the implementation so that the old metrics are 
unchanged and only some new metrics are added.

As for using CodahaleMetrics for the user counting, it's better not to do so; 
otherwise it would emit per-user metrics, which is too many, since we only 
care about the distinct active user count.

Since Szehon is not around, [~aihuaxu], [~ctang.ma], could you take a look?
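Counting distinct active users without emitting per-user metrics can be sketched as a reference-counted set behind a single gauge; this is only an illustration of the idea, not the patch's implementation:

```python
from collections import Counter

class ActiveUserGauge:
    """Exposes one gauge value: the number of distinct active users."""
    def __init__(self):
        self._sessions = Counter()   # user -> number of open sessions

    def session_opened(self, user):
        self._sessions[user] += 1

    def session_closed(self, user):
        self._sessions[user] -= 1
        if self._sessions[user] <= 0:
            del self._sessions[user]

    def active_user_count(self):
        # One metric regardless of how many users there are.
        return len(self._sessions)

gauge = ActiveUserGauge()
gauge.session_opened("alice")
gauge.session_opened("alice")   # second session, still one distinct user
gauge.session_opened("bob")
count = gauge.active_user_count()
```

A user with several concurrent sessions counts once, and the metric cardinality stays constant no matter how many users connect.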

> Add metrics for HS2 active users and SQL operations
> ---
>
> Key: HIVE-12987
> URL: https://issues.apache.org/jira/browse/HIVE-12987
> Project: Hive
>  Issue Type: Task
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: HIVE-12987.1.patch, HIVE-12987.2.patch, 
> HIVE-12987.2.patch, HIVE-12987.3.patch
>
>
> HIVE-12271 added metrics for all HS2 operations. Sometimes, users are also 
> interested in metrics just for SQL operations.
> It is useful to track active user count as well.





[jira] [Updated] (HIVE-12994) Implement support for NULLS FIRST/NULLS LAST

2016-02-05 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-12994:
---
Attachment: HIVE-12994.01.patch

> Implement support for NULLS FIRST/NULLS LAST
> 
>
> Key: HIVE-12994
> URL: https://issues.apache.org/jira/browse/HIVE-12994
> Project: Hive
>  Issue Type: New Feature
>  Components: CBO, Metastore, Parser, Serializers/Deserializers
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-12994.01.patch, HIVE-12994.patch
>
>
> From SQL:2003, the NULLS FIRST and NULLS LAST options can be used to 
> determine whether nulls appear before or after non-null data values when the 
> ORDER BY clause is used.
> SQL standard does not specify the behavior by default. Currently in Hive, 
> null values sort as if lower than any non-null value; that is, NULLS FIRST is 
> the default for ASC order, and NULLS LAST for DESC order.
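The default described above (NULLS FIRST for ASC, NULLS LAST for DESC) can be illustrated with a small sort helper; this is a sketch of the semantics, not Hive's implementation:

```python
def order_by(values, ascending=True, nulls_first=None):
    """Sort with explicit NULLS FIRST / NULLS LAST semantics.

    When nulls_first is None, apply the default described above:
    NULLS FIRST for ASC order, NULLS LAST for DESC order.
    """
    if nulls_first is None:
        nulls_first = ascending
    nulls = [v for v in values if v is None]
    non_nulls = sorted((v for v in values if v is not None),
                       reverse=not ascending)
    return nulls + non_nulls if nulls_first else non_nulls + nulls

asc = order_by([3, None, 1, 2])                     # default: NULLS FIRST
desc = order_by([3, None, 1, 2], ascending=False)   # default: NULLS LAST
```

The explicit nulls_first flag models the new syntax overriding either default.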





[jira] [Commented] (HIVE-12923) CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver groupby_grouping_sets4.q failure

2016-02-05 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134667#comment-15134667
 ] 

Jesus Camacho Rodriguez commented on HIVE-12923:


cc [~jpullokkaran]

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver 
> groupby_grouping_sets4.q failure
> 
>
> Key: HIVE-12923
> URL: https://issues.apache.org/jira/browse/HIVE-12923
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-12923.1.patch, HIVE-12923.2.patch
>
>
> {code}
> EXPLAIN
> SELECT * FROM
> (SELECT a, b, count(*) from T1 where a < 3 group by a, b with cube) subq1
> join
> (SELECT a, b, count(*) from T1 where a < 3 group by a, b with cube) subq2
> on subq1.a = subq2.a
> {code}
> Stack trace:
> {code}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory.pruneJoinOperator(ColumnPrunerProcFactory.java:1110)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory.access$400(ColumnPrunerProcFactory.java:85)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory$ColumnPrunerJoinProc.process(ColumnPrunerProcFactory.java:941)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPruner$ColumnPrunerWalker.walk(ColumnPruner.java:172)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPruner.transform(ColumnPruner.java:135)
> at 
> org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:237)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10176)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:229)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:472)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:312)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1168)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1256)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1094)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1082)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1129)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1103)
> at 
> org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:10444)
> at 
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets4(TestCliDriver.java:3313)
> {code}





[jira] [Commented] (HIVE-12976) MetaStoreDirectSql doesn't batch IN lists in all cases

2016-02-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134700#comment-15134700
 ] 

Ashutosh Chauhan commented on HIVE-12976:
-

+1 LGTM.
[~gopalv] If you happen to have the stack trace for the exception, can you 
post it here? I want to know which method was failing; I assume it was 
getAggrPartsStats(). It will also be useful for future reference.

> MetaStoreDirectSql doesn't batch IN lists in all cases
> --
>
> Key: HIVE-12976
> URL: https://issues.apache.org/jira/browse/HIVE-12976
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12976.01.patch, HIVE-12976.02.patch, 
> HIVE-12976.patch
>
>
> That means that some RDBMS products with arbitrary limits cannot run these 
> queries. I hope HBase metastore comes soon and delivers us from Oracle! For 
> now, though, we have to fix this.





[jira] [Commented] (HIVE-12923) CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver groupby_grouping_sets4.q failure

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15133814#comment-15133814
 ] 

Hive QA commented on HIVE-12923:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786107/HIVE-12923.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10051 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6870/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6870/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6870/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786107 - PreCommit-HIVE-TRUNK-Build

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver 
> groupby_grouping_sets4.q failure
> 
>
> Key: HIVE-12923
> URL: https://issues.apache.org/jira/browse/HIVE-12923
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-12923.1.patch, HIVE-12923.2.patch
>
>
> {code}
> EXPLAIN
> SELECT * FROM
> (SELECT a, b, count(*) from T1 where a < 3 group by a, b with cube) subq1
> join
> (SELECT a, b, count(*) from T1 where a < 3 group by a, b with cube) subq2
> on subq1.a = subq2.a
> {code}
> Stack trace:
> {code}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory.pruneJoinOperator(ColumnPrunerProcFactory.java:1110)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory.access$400(ColumnPrunerProcFactory.java:85)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory$ColumnPrunerJoinProc.process(ColumnPrunerProcFactory.java:941)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPruner$ColumnPrunerWalker.walk(ColumnPruner.java:172)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPruner.transform(ColumnPruner.java:135)
> at 
> org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:237)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10176)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:229)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:472)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:312)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1168)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1256)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1094)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1082)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1129)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1103)
> at 
> 

[jira] [Commented] (HIVE-12998) ORC FileDump.printJsonData() does not close RecordReader

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135022#comment-15135022
 ] 

Hive QA commented on HIVE-12998:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786183/HIVE-12998.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10036 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testSimpleTable
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6877/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6877/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6877/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786183 - PreCommit-HIVE-TRUNK-Build

> ORC FileDump.printJsonData() does not close RecordReader
> 
>
> Key: HIVE-12998
> URL: https://issues.apache.org/jira/browse/HIVE-12998
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-12998.1.patch, HIVE-12998.2.patch
>
>
> This causes TestFileDump to fail on Windows, because the test ORC file does 
> not get deleted between tests.
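The general fix for this class of bug is to close the reader in a finally block (try-with-resources in Java) so the file handle is released even when dumping fails mid-stream. A language-neutral sketch in Python, with a toy reader standing in for the ORC RecordReader:

```python
def print_rows(open_reader, emit):
    """Iterate a reader and always close it, even if emit() raises."""
    reader = open_reader()
    try:
        for row in reader.rows():
            emit(row)
    finally:
        reader.close()   # without this the file stays open, so the
                         # test file cannot be deleted on Windows

class FakeReader:
    """Minimal stand-in used to demonstrate the close guarantee."""
    def __init__(self):
        self.closed = False
    def rows(self):
        return iter([1, 2, 3])
    def close(self):
        self.closed = True

r = FakeReader()
print_rows(lambda: r, lambda row: None)
```

On Windows an open handle blocks deletion, which is exactly why the leaked reader makes TestFileDump fail there.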





[jira] [Updated] (HIVE-12976) MetaStoreDirectSql doesn't batch IN lists in all cases

2016-02-05 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-12976:
---
Description: 
That means that some RDBMS products with arbitrary limits cannot run these 
queries. I hope HBase metastore comes soon and delivers us from Oracle! For 
now, though, we have to fix this.

{code}
hive> select * from lineitem where l_orderkey = 121201;

Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The incoming 
request has too many parameters. The server supports a maximum of 2100 
parameters. Reduce the number of parameters and resend the request.
at 
com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:215)
 ~[sqljdbc41.jar:?]
at 
com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1635)
 ~[sqljdbc41.jar:?]
at 
com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:426)
 ~[sqljdbc41.jar:?]
{code}

{code}
at 
org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
 ~[datanucleus-api-jdo-4.2.1.jar:?]
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388) 
~[datanucleus-api-jdo-4.2.1.jar:?]
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:264) 
~[datanucleus-api-jdo-4.2.1.jar:?]
at 
org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1681)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.metastore.MetaStoreDirectSql.partsFoundForPartitions(MetaStoreDirectSql.java:1266)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.metastore.MetaStoreDirectSql.aggrColStatsForPartitions(MetaStoreDirectSql.java:1196)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.metastore.ObjectStore$9.getSqlResult(ObjectStore.java:6742)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.metastore.ObjectStore$9.getSqlResult(ObjectStore.java:6738)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2525)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.metastore.ObjectStore.get_aggr_stats_for(ObjectStore.java:6738)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
{code}
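Since SQL Server rejects prepared statements with more than 2100 parameters, the direct-SQL layer has to split large IN lists into bounded batches. A minimal sketch of that idea — not Hive's actual implementation; the class name, batch size, and helpers are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class InListBatcher {
    // Stay well under SQL Server's 2100-parameter limit.
    static final int MAX_PARAMS = 1000;

    // Split one large IN list into several smaller ones; each batch
    // becomes its own parameterized query (or an OR'ed clause).
    static <T> List<List<T>> batch(List<T> values, int maxPerBatch) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < values.size(); i += maxPerBatch) {
            batches.add(values.subList(i, Math.min(i + maxPerBatch, values.size())));
        }
        return batches;
    }

    // Build "col IN (?,?,...)" with one placeholder per value in a batch.
    static String inClause(String column, int paramCount) {
        StringBuilder sb = new StringBuilder(column).append(" IN (");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ",?");
        }
        return sb.append(")").toString();
    }
}
```

With 2500 partition IDs and a batch size of 1000, this issues three queries of at most 1000 parameters each instead of one query with 2500.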

  was:That means that some RDBMS products with arbitrary limits cannot run 
these queries. I hope HBase metastore comes soon and delivers us from Oracle! 
For now, though, we have to fix this.


> MetaStoreDirectSql doesn't batch IN lists in all cases
> --
>
> Key: HIVE-12976
> URL: https://issues.apache.org/jira/browse/HIVE-12976
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12976.01.patch, HIVE-12976.02.patch, 
> HIVE-12976.patch
>
>
> That means that some RDBMS products with arbitrary limits cannot run these 
> queries. I hope HBase metastore comes soon and delivers us from Oracle! For 
> now, though, we have to fix this.
> {code}
> hive> select * from lineitem where l_orderkey = 121201;
> Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The incoming 
> request has too many parameters. The server supports a maximum of 2100 
> parameters. Reduce the number of parameters and resend the request.
> at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:215)
>  ~[sqljdbc41.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1635)
>  ~[sqljdbc41.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:426)
>  ~[sqljdbc41.jar:?]
> {code}
> {code}
> at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
>  ~[datanucleus-api-jdo-4.2.1.jar:?]
> at 
> org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388) 
> ~[datanucleus-api-jdo-4.2.1.jar:?]
> at 
> org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:264) 
> ~[datanucleus-api-jdo-4.2.1.jar:?]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1681)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.partsFoundForPartitions(MetaStoreDirectSql.java:1266)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.aggrColStatsForPartitions(MetaStoreDirectSql.java:1196)
>  

[jira] [Updated] (HIVE-13016) ORC FileDump recovery utility fails in Windows

2016-02-05 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-13016:
-
Attachment: HIVE-13016.1.patch

[~jdere] Could you review the patch?

> ORC FileDump recovery utility fails in Windows
> --
>
> Key: HIVE-13016
> URL: https://issues.apache.org/jira/browse/HIVE-13016
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.3.0, 2.1.0
>Reporter: Jason Dere
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-13016.1.patch
>
>
> org.apache.hive.hcatalog.streaming.TestStreaming.testFileDumpCorruptDataFiles
> org.apache.hive.hcatalog.streaming.TestStreaming.testFileDumpCorruptSideFiles
> java.io.IOException: Unable to move 
> file:/E:/hive/hcatalog/streaming/target/tmp/junit4129594478393496260/testing3.db/dimensionTable/delta_001_002/bucket_0
>  to 
> E:/hive/hcatalog/streaming/target/tmp/E:/hive/hcatalog/streaming/target/tmp/junit4129594478393496260/testing3.db/dimensionTable/delta_001_002/bucket_0
> at 
> org.apache.hadoop.hive.ql.io.orc.FileDump.moveFiles(FileDump.java:546)^M
> at 
> org.apache.hadoop.hive.ql.io.orc.FileDump.recoverFile(FileDump.java:513)^M
> at 
> org.apache.hadoop.hive.ql.io.orc.FileDump.recoverFiles(FileDump.java:428)^M
> at org.apache.hadoop.hive.ql.io.orc.FileDump.main(FileDump.java:125)^M
> at 
> org.apache.hive.hcatalog.streaming.TestStreaming.testFileDumpCorruptSideFiles(TestStreaming.java:1523)^M
> Note that FileDump appends the full source path to the backup path when 
> trying to recover files (see "E:" in the middle of the destination path).
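The underlying bug is backup-path construction: the absolute source path, drive letter included, gets appended to the backup root instead of being relativized against its own root. A hedged sketch of the relativizing idea using java.nio — names are illustrative, not Hive's actual fix:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class BackupPathDemo {
    // Strip the source path's root (e.g. "E:\" on Windows, "/" on Unix)
    // before appending, so the destination stays inside backupRoot and
    // no drive letter ends up in the middle of the path.
    static Path backupDest(Path backupRoot, Path source) {
        Path abs = source.toAbsolutePath();
        Path relative = abs.getRoot() != null ? abs.getRoot().relativize(abs) : abs;
        return backupRoot.resolve(relative);
    }
}
```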



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12998) ORC FileDump.printJsonData() does not close RecordReader

2016-02-05 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135051#comment-15135051
 ] 

Gopal V commented on HIVE-12998:


LGTM - +1

> ORC FileDump.printJsonData() does not close RecordReader
> 
>
> Key: HIVE-12998
> URL: https://issues.apache.org/jira/browse/HIVE-12998
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-12998.1.patch, HIVE-12998.2.patch
>
>
> This causes TestFileDump to fail on Windows, because the test ORC file does 
> not get deleted between tests.
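The standard fix for a leaked reader is try-with-resources (or try/finally), so the handle is released even on an exception path and the file can be deleted afterwards. A sketch of the pattern, with ReaderLike standing in for ORC's RecordReader:

```java
public class CloseDemo {
    // Stand-in for ORC's RecordReader; the real type is
    // org.apache.hadoop.hive.ql.io.orc.RecordReader.
    interface ReaderLike extends AutoCloseable {
        boolean hasNext();
        Object next();
        @Override void close();  // narrowed: no checked exception
    }

    // try-with-resources guarantees close() even if printing throws,
    // releasing the file handle so the test file can be deleted
    // between tests on Windows.
    static int printAll(ReaderLike rows) {
        int n = 0;
        try (ReaderLike r = rows) {
            while (r.hasNext()) {
                System.out.println(r.next());
                n++;
            }
        }
        return n;
    }
}
```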



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13010) partitions autogenerated predicates broken

2016-02-05 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135092#comment-15135092
 ] 

Gopal V commented on HIVE-13010:


I have a backport to 1.0 

https://github.com/t3rmin4t0r/current-timestamp/blob/master/src/main/java/org/notmysock/hive/udf/CurrentTimeStampUDF.java

> partitions autogenerated predicates broken
> --
>
> Key: HIVE-13010
> URL: https://issues.apache.org/jira/browse/HIVE-13010
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Stanilovsky Evgeny
>Priority: Trivial
>
> Hi, I was looking for a similar problem but found only:
> https://issues.apache.org/jira/browse/HIVE-9630
> It looks like the same issue, and it should be easy to reproduce in testing.
> I have two similar queries; the only difference is the autogenerated date 
> predicates, yet in the first case EXPLAIN shows a full scan.
> '''
> set hive.optimize.constant.propagation=true;
> explain select * from logs.weather_forecasts where dt between
> from_unixtime(unix_timestamp() - 3600*24*3, '-MM-dd') and
> from_unixtime(unix_timestamp() - 3600*24*1, '-MM-dd') and
> provider_id = 100
> STAGE PLANS:
> 5   Stage: Stage-1
> 6 Map Reduce
> 7   Map Operator Tree:
> 8   TableScan
> 9 alias: weather_forecasts
> 10Statistics: Num rows: 36124837607 Data size: 47395787122046 
> Basic stats: PARTIAL Column stats: NONE
> 
> and
> 
> set hive.optimize.constant.propagation=true;
> explain select * from logs.redir_log where dt between
> '2016-02-02' and
> '2016-02-04' and
> pid = 100
> 0 STAGE DEPENDENCIES:
> 1   Stage-1 is a root stage
> 2   Stage-0 depends on stages: Stage-1
> 3 
> 4 STAGE PLANS:
> 5   Stage: Stage-1
> 6 Map Reduce
> 7   Map Operator Tree:
> 8   TableScan
> 9 alias: redir_log
> 10Statistics: Num rows: 2798358420 Data size: 5761819991150 
> Basic stats: COMPLETE Column stats: NONE
> 
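Until predicates built from unix_timestamp() fold to constants, a common workaround is to compute the date literals outside the query, so the planner sees plain strings (as in the second query above) and can prune partitions. A small illustrative sketch, not part of Hive:

```java
import java.time.LocalDate;

public class PartitionFilterDemo {
    // Build the BETWEEN bounds as plain literals; constant predicates
    // let the planner prune partitions instead of doing a full scan.
    static String pruneableFilter(LocalDate today) {
        String lo = today.minusDays(3).toString();  // yyyy-MM-dd
        String hi = today.minusDays(1).toString();
        return "dt BETWEEN '" + lo + "' AND '" + hi + "'";
    }
}
```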



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10115) HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and Delegation token(DIGEST) when alternate authentication is enabled

2016-02-05 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134866#comment-15134866
 ] 

Xuefu Zhang commented on HIVE-10115:


It doesn't seem the spark failures are related to the patch.

> HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and 
> Delegation token(DIGEST) when alternate authentication is enabled
> ---
>
> Key: HIVE-10115
> URL: https://issues.apache.org/jira/browse/HIVE-10115
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication
>Affects Versions: 1.1.0
>Reporter: Mubashir Kazia
>Assignee: Mubashir Kazia
>  Labels: patch
> Attachments: HIVE-10115.0.patch, HIVE-10115.2.patch
>
>
> In a Kerberized cluster, when alternate authentication is enabled on HS2 it 
> should also accept Kerberos authentication. This is important because when 
> we enable LDAP authentication, HS2 stops accepting delegation token 
> authentication, so we are forced to enter usernames and passwords in the 
> Oozie configuration.
> The whole idea of SASL is that multiple authentication mechanisms can be 
> offered. If we disable Kerberos (GSSAPI) and delegation token (DIGEST) 
> authentication when we enable LDAP authentication, we defeat SASL's purpose.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-12857) LLAP: modify the decider to allow using LLAP with whitelisted UDFs

2016-02-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-12857:
---

Assignee: Sergey Shelukhin

> LLAP: modify the decider to allow using LLAP with whitelisted UDFs
> --
>
> Key: HIVE-12857
> URL: https://issues.apache.org/jira/browse/HIVE-12857
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13015) Add dependency convergence for SLF4J

2016-02-05 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134805#comment-15134805
 ] 

Prasanth Jayachandran commented on HIVE-13015:
--

[~sershe] [~gopalv] fyi..

> Add dependency convergence for SLF4J
> 
>
> Key: HIVE-13015
> URL: https://issues.apache.org/jira/browse/HIVE-13015
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>
> In some of the recent test runs, we are seeing multiple bindings for SLF4J 
> that cause issues with the Log4j2 logger. 
> {code}
> SLF4J: Found binding in 
> [jar:file:/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1454694331819_0001/container_e06_1454694331819_0001_01_02/app/install/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> {code}
> We have added explicit exclusions for slf4j-log4j12, but some library is 
> pulling it in transitively and it is getting packaged with the Hive libs. Hive 
> also currently uses version 1.7.5 of slf4j. We should add dependency 
> convergence for slf4j and also stop packaging slf4j-log4j12.*.jar. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9422) LLAP: row-level vectorized SARGs

2016-02-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134844#comment-15134844
 ] 

Sergey Shelukhin commented on HIVE-9422:


[~yohabe] this is the JIRA for row-level SARGs. The JIRA to improve dictionary 
sarg would probably only make sense after this JIRA.

> LLAP: row-level vectorized SARGs
> 
>
> Key: HIVE-9422
> URL: https://issues.apache.org/jira/browse/HIVE-9422
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Sergey Shelukhin
>
> When VRBs are built from encoded data, sargs can be applied on low level to 
> reduce the number of rows to process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12987) Add metrics for HS2 active users and SQL operations

2016-02-05 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135034#comment-15135034
 ] 

Aihua Xu commented on HIVE-12987:
-

I'm wondering why we need to keep track of the counts in both Metrics and the 
newly created Map. It seems redundant to me, but I'm not exactly following the 
metrics change.

> Add metrics for HS2 active users and SQL operations
> ---
>
> Key: HIVE-12987
> URL: https://issues.apache.org/jira/browse/HIVE-12987
> Project: Hive
>  Issue Type: Task
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: HIVE-12987.1.patch, HIVE-12987.2.patch, 
> HIVE-12987.2.patch, HIVE-12987.3.patch
>
>
> HIVE-12271 added metrics for all HS2 operations. Sometimes, users are also 
> interested in metrics just for SQL operations.
> It is useful to track active user count as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12987) Add metrics for HS2 active users and SQL operations

2016-02-05 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134851#comment-15134851
 ] 

Aihua Xu commented on HIVE-12987:
-

Sure. I will take a look.

> Add metrics for HS2 active users and SQL operations
> ---
>
> Key: HIVE-12987
> URL: https://issues.apache.org/jira/browse/HIVE-12987
> Project: Hive
>  Issue Type: Task
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: HIVE-12987.1.patch, HIVE-12987.2.patch, 
> HIVE-12987.2.patch, HIVE-12987.3.patch
>
>
> HIVE-12271 added metrics for all HS2 operations. Sometimes, users are also 
> interested in metrics just for SQL operations.
> It is useful to track active user count as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13015) Add dependency convergence for SLF4J

2016-02-05 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134858#comment-15134858
 ] 

Gopal V commented on HIVE-13015:


[~prasanth_j]: I suspect we're pulling log4j12 from Hadoop (via Tez lib/).

> Add dependency convergence for SLF4J
> 
>
> Key: HIVE-13015
> URL: https://issues.apache.org/jira/browse/HIVE-13015
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>
> In some of the recent test runs, we are seeing multiple bindings for SLF4J 
> that cause issues with the Log4j2 logger. 
> {code}
> SLF4J: Found binding in 
> [jar:file:/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1454694331819_0001/container_e06_1454694331819_0001_01_02/app/install/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> {code}
> We have added explicit exclusions for slf4j-log4j12, but some library is 
> pulling it in transitively and it is getting packaged with the Hive libs. Hive 
> also currently uses version 1.7.5 of slf4j. We should add dependency 
> convergence for slf4j and also stop packaging slf4j-log4j12.*.jar. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-10115) HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and Delegation token(DIGEST) when alternate authentication is enabled

2016-02-05 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134866#comment-15134866
 ] 

Xuefu Zhang edited comment on HIVE-10115 at 2/5/16 8:02 PM:


It doesn't seem the spark failures are related to the patch. +1 on the patch as 
well.


was (Author: xuefuz):
It doesn't seem the spark failures are related to the patch.

> HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and 
> Delegation token(DIGEST) when alternate authentication is enabled
> ---
>
> Key: HIVE-10115
> URL: https://issues.apache.org/jira/browse/HIVE-10115
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication
>Affects Versions: 1.1.0
>Reporter: Mubashir Kazia
>Assignee: Mubashir Kazia
>  Labels: patch
> Attachments: HIVE-10115.0.patch, HIVE-10115.2.patch
>
>
> In a Kerberized cluster, when alternate authentication is enabled on HS2 it 
> should also accept Kerberos authentication. This is important because when 
> we enable LDAP authentication, HS2 stops accepting delegation token 
> authentication, so we are forced to enter usernames and passwords in the 
> Oozie configuration.
> The whole idea of SASL is that multiple authentication mechanisms can be 
> offered. If we disable Kerberos (GSSAPI) and delegation token (DIGEST) 
> authentication when we enable LDAP authentication, we defeat SASL's purpose.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12885) LDAP Authenticator improvements

2016-02-05 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-12885:
-
Attachment: HIVE-12885-branch1.patch

Attaching a patch for branch-1 because of conflicts from HIVE-12237

> LDAP Authenticator improvements
> ---
>
> Key: HIVE-12885
> URL: https://issues.apache.org/jira/browse/HIVE-12885
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-12885-branch1.patch, HIVE-12885.2.patch, 
> HIVE-12885.3.patch, HIVE-12885.patch
>
>
> Currently Hive's LDAP Atn provider assumes certain defaults to keep its 
> configuration simple. 
> 1) One of the assumptions is the presence of an attribute 
> "distinguishedName". In certain non-standard LDAP implementations, this 
> attribute may not be available. So instead of basing all LDAP searches on 
> this attribute, searches should use getNameInNamespace(), which returns the 
> same value.
> 2) It also assumes that the "user" value being passed in will be able to 
> bind to LDAP. However, certain LDAP implementations, by default, only allow 
> the full DN to be used; short user names are not permitted. We will need 
> to support short names too when the Hive configuration only has 
> "BaseDN" specified (not userDNPatterns). So instead of hard-coding "uid" or 
> "CN" as keys for the short usernames, it is probably better to make this a 
> configurable parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13015) Add dependency convergence for SLF4J

2016-02-05 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134961#comment-15134961
 ] 

Prasanth Jayachandran commented on HIVE-13015:
--

[~gopalv] Yeah. Just verified that Hive is not pulling slf4j-log4j12-1.7.10.jar; 
Tez pulls it. My guess is that HADOOP_USERCLASSPATH_FIRST, which is set by the 
hive script, is not propagated to Tez/LLAP.

> Add dependency convergence for SLF4J
> 
>
> Key: HIVE-13015
> URL: https://issues.apache.org/jira/browse/HIVE-13015
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>
> In some of the recent test runs, we are seeing multiple bindings for SLF4J 
> that cause issues with the Log4j2 logger. 
> {code}
> SLF4J: Found binding in 
> [jar:file:/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1454694331819_0001/container_e06_1454694331819_0001_01_02/app/install/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> {code}
> We have added explicit exclusions for slf4j-log4j12, but some library is 
> pulling it in transitively and it is getting packaged with the Hive libs. Hive 
> also currently uses version 1.7.5 of slf4j. We should add dependency 
> convergence for slf4j and also stop packaging slf4j-log4j12.*.jar. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12885) LDAP Authenticator improvements

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135023#comment-15135023
 ] 

Hive QA commented on HIVE-12885:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786552/HIVE-12885-branch1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BRANCH_1-Build/21/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BRANCH_1-Build/21/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-BRANCH_1-Build-21/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-BRANCH_1-Build-21/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z branch-1 ]]
+ [[ -d apache-github-branch1-source ]]
+ [[ ! -d apache-github-branch1-source/.git ]]
+ [[ ! -d apache-github-branch1-source ]]
+ cd apache-github-branch1-source
+ git fetch origin
From https://github.com/apache/hive
   aa35e21..f3df2dc  branch-1   -> origin/branch-1
   d649379..0b9b1d8  branch-1.2 -> origin/branch-1.2
   0ffef3f..1b31f40  branch-2.0 -> origin/branch-2.0
   165430b..9bb7850  master -> origin/master
   8e0a10c..e078260  spark  -> origin/spark
From https://github.com/apache/hive
 * [new tag] release-2.0.0-rc0 -> release-2.0.0-rc0
 * [new tag] release-2.0.0-rc1 -> release-2.0.0-rc1
+ git reset --hard HEAD
HEAD is now at aa35e21 HIVE-12353 When Compactor fails it calls 
CompactionTxnHandler.markedCleaned().  it should not. (Eugene Koifman, reviewed 
by Alan Gates)
+ git clean -f -d
+ git checkout branch-1
Already on 'branch-1'
Your branch is behind 'origin/branch-1' by 9 commits, and can be fast-forwarded.
+ git reset --hard origin/branch-1
HEAD is now at f3df2dc HIVE-12885: LDAP Authenticator improvements (Naveen 
Gangam via Chaoyu Tang)
+ git merge --ff-only origin/branch-1
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786552 - PreCommit-HIVE-BRANCH_1-Build

> LDAP Authenticator improvements
> ---
>
> Key: HIVE-12885
> URL: https://issues.apache.org/jira/browse/HIVE-12885
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-12885-branch1.patch, HIVE-12885.2.patch, 
> HIVE-12885.3.patch, HIVE-12885.patch
>
>
> Currently Hive's LDAP Atn provider assumes certain defaults to keep its 
> configuration simple. 
> 1) One of the assumptions is the presence of an attribute 
> "distinguishedName". In certain non-standard LDAP implementations, this 
> attribute may not be available. So instead of basing all LDAP searches on 
> this attribute, searches should use getNameInNamespace(), which returns the 
> same value.
> 2) It also assumes that the "user" value being passed in will be able to 
> bind to LDAP. However, certain LDAP implementations, by default, only allow 
> the full DN to be used; short user names are not permitted. We will 

[jira] [Updated] (HIVE-10115) HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and Delegation token(DIGEST) when alternate authentication is enabled

2016-02-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-10115:
---
Fix Version/s: 1.3.0

> HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and 
> Delegation token(DIGEST) when alternate authentication is enabled
> ---
>
> Key: HIVE-10115
> URL: https://issues.apache.org/jira/browse/HIVE-10115
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication
>Affects Versions: 1.1.0
>Reporter: Mubashir Kazia
>Assignee: Mubashir Kazia
>  Labels: patch
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-10115.0.patch, HIVE-10115.2.patch
>
>
> In a Kerberized cluster, when alternate authentication is enabled on HS2 it 
> should also accept Kerberos authentication. This is important because when 
> we enable LDAP authentication, HS2 stops accepting delegation token 
> authentication, so we are forced to enter usernames and passwords in the 
> Oozie configuration.
> The whole idea of SASL is that multiple authentication mechanisms can be 
> offered. If we disable Kerberos (GSSAPI) and delegation token (DIGEST) 
> authentication when we enable LDAP authentication, we defeat SASL's purpose.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12856) LLAP: update (add/remove) the UDFs available in LLAP when they are changed (refresh periodically)

2016-02-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12856:

Attachment: HIVE-12856.patch
HIVE-12856.nogen.patch

Attaching the patch and the no-gen patch for this JIRA, plus an epic patch with 
generated code including HIVE-12892 and HIVE-12918, for HiveQA.

> LLAP: update (add/remove) the UDFs available in LLAP when they are changed 
> (refresh periodically)
> -
>
> Key: HIVE-12856
> URL: https://issues.apache.org/jira/browse/HIVE-12856
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12856.nogen.patch, HIVE-12856.patch
>
>
> I don't think re-querying the functions is going to scale, and the sessions 
> obviously cannot notify all LLAP clusters of every change. We should add 
> global versioning to metastore functions to track changes, and then possibly 
> add a notification mechanism, potentially through ZK, to avoid overloading the 
> metastore itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13007) add an API to force reload UDFs to LLAP (either force reload everything, or force a regular refresh)

2016-02-05 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-13007:
---
Summary: add an API to force reload UDFs to LLAP (either force reload 
everything, or force a regular refresh)  (was: add and API to force reload UDFs 
to LLAP (either force reload everything, or force a regular refresh))

> add an API to force reload UDFs to LLAP (either force reload everything, or 
> force a regular refresh)
> 
>
> Key: HIVE-13007
> URL: https://issues.apache.org/jira/browse/HIVE-13007
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12990) LLAP: ORC cache NPE without FileID support

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15133919#comment-15133919
 ] 

Hive QA commented on HIVE-12990:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786316/HIVE-12990.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10051 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6871/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6871/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6871/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786316 - PreCommit-HIVE-TRUNK-Build

> LLAP: ORC cache NPE without FileID support
> --
>
> Key: HIVE-12990
> URL: https://issues.apache.org/jira/browse/HIVE-12990
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12990.01.patch, HIVE-12990.patch
>
>
> {code}
>OrcBatchKey stripeKey = hasFileId ? new OrcBatchKey(fileId, -1, 0) : null;
>...
>   if (hasFileId && metadataCache != null) {
> stripeKey.stripeIx = stripeIx;
> stripeMetadata = metadataCache.getStripeMetadata(stripeKey);
>   }
> ...
>   public void setStripeMetadata(OrcStripeMetadata m) {
> assert stripes != null;
> stripes[m.getStripeIx()] = m;
>   }
> {code}
> {code}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.llap.io.metadata.OrcStripeMetadata.getStripeIx(OrcStripeMetadata.java:106)
> at 
> org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.setStripeMetadata(OrcEncodedDataConsumer.java:70)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.readStripesMetadata(OrcEncodedDataReader.java:685)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:283)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:215)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:212)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:212)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:93)
> ... 5 more
> {code}
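The failure mode above can be sketched with plain Java (the names below are illustrative simplifications, not the actual LLAP classes): when no stable file ID exists, the stripe key is never constructed, so every consumer on that path must check the same condition before dereferencing it.

```java
// Hypothetical simplification of the LLAP stripe-metadata path (illustrative
// names, not the real Hive classes). When no file ID is available, stripeKey
// is null, so every use of it must be guarded consistently.
import java.util.HashMap;
import java.util.Map;

public class StripeMetadataGuard {
    static final Map<String, String> metadataCache = new HashMap<>();

    // Returns cached metadata, or null when the file has no stable ID.
    static String getStripeMetadata(Long fileId, int stripeIx) {
        boolean hasFileId = (fileId != null);
        String stripeKey = hasFileId ? fileId + ":" + stripeIx : null;
        // Guarding on hasFileId keeps stripeKey from being dereferenced
        // when it was never constructed.
        if (hasFileId) {
            return metadataCache.get(stripeKey);
        }
        return null; // caller must handle the cache-bypass path
    }

    public static void main(String[] args) {
        metadataCache.put("42:0", "stripe-0-meta");
        System.out.println(getStripeMetadata(42L, 0));   // stripe-0-meta
        System.out.println(getStripeMetadata(null, 0));  // null, not an NPE
    }
}
```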



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-02-05 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15133934#comment-15133934
 ] 

Jesus Camacho Rodriguez commented on HIVE-12839:


Awesome, thanks!

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch, HIVE-12839.04.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.





[jira] [Updated] (HIVE-11752) Pre-materializing complex CTE queries

2016-02-05 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11752:
---
Attachment: HIVE-11752.05.patch

> Pre-materializing complex CTE queries
> -
>
> Key: HIVE-11752
> URL: https://issues.apache.org/jira/browse/HIVE-11752
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Navis
>Assignee: Jesus Camacho Rodriguez
>Priority: Minor
> Attachments: HIVE-11752.03.patch, HIVE-11752.04.patch, 
> HIVE-11752.05.patch, HIVE-11752.1.patch.txt, HIVE-11752.2.patch.txt
>
>
> Currently, Hive regards CTE clauses as a simple alias to the query block, 
> which leads to redundant work if a CTE is used multiple times in a query. This 
> introduces a reference threshold for pre-materializing the CTE clause as a 
> volatile table (which does not exist in any form in the metastore and is 
> accessible only from the QB).
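A minimal sketch of the reference-threshold decision described above (hypothetical names, not Hive's planner API): count how often each CTE is referenced in the query and pre-materialize only those at or above the threshold.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the reference-count threshold idea: a CTE used at
// least `threshold` times is pre-materialized as a volatile table; below
// that it stays a plain alias. Names are illustrative, not Hive's API.
public class CteMaterializeDecision {
    static boolean shouldMaterialize(Map<String, Integer> refCounts,
                                     String cteName, int threshold) {
        return refCounts.getOrDefault(cteName, 0) >= threshold;
    }

    public static void main(String[] args) {
        Map<String, Integer> refs = new HashMap<>();
        refs.put("sales_cte", 3); // referenced three times in the query
        refs.put("dim_cte", 1);   // referenced once
        System.out.println(shouldMaterialize(refs, "sales_cte", 2)); // true
        System.out.println(shouldMaterialize(refs, "dim_cte", 2));   // false
    }
}
```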





[jira] [Updated] (HIVE-12994) Implement support for NULLS FIRST/NULLS LAST

2016-02-05 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-12994:
---
Attachment: HIVE-12994.01.patch

> Implement support for NULLS FIRST/NULLS LAST
> 
>
> Key: HIVE-12994
> URL: https://issues.apache.org/jira/browse/HIVE-12994
> Project: Hive
>  Issue Type: New Feature
>  Components: CBO, Metastore, Parser, Serializers/Deserializers
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-12994.01.patch, HIVE-12994.patch
>
>
> From SQL:2003, the NULLS FIRST and NULLS LAST options can be used to 
> determine whether nulls appear before or after non-null data values when the 
> ORDER BY clause is used.
> The SQL standard does not specify the default behavior. Currently in Hive, 
> null values sort as if lower than any non-null value; that is, NULLS FIRST is 
> the default for ASC order, and NULLS LAST for DESC order.
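The current default described above can be illustrated with the JDK's own comparators (a stdlib analogue, not Hive code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class NullOrdering {
    // Hive's current default for ASC: nulls sort first,
    // as if lower than any non-null value.
    static List<Integer> sortAsc(List<Integer> in) {
        List<Integer> out = new ArrayList<>(in);
        out.sort(Comparator.nullsFirst(Comparator.naturalOrder()));
        return out;
    }

    // Hive's current default for DESC: nulls sort last.
    static List<Integer> sortDesc(List<Integer> in) {
        List<Integer> out = new ArrayList<>(in);
        out.sort(Comparator.nullsLast(Comparator.reverseOrder()));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(sortAsc(Arrays.asList(3, null, 1)));  // [null, 1, 3]
        System.out.println(sortDesc(Arrays.asList(3, null, 1))); // [3, 1, null]
    }
}
```

NULLS FIRST/NULLS LAST would let a query override these per ORDER BY expression.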





[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-02-05 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15133973#comment-15133973
 ] 

Julian Hyde commented on HIVE-12839:


It's not essential, but I recommend storing RelMetadataQuery.instance() in a 
variable if you are going to use it more than once in a method or rule 
invocation. We plan (as part of CALCITE-604) to cache results in the 
RelMetadataQuery instance, so if you keep the same instance you'll have to do 
less work. You should *definitely* use the same instance when one metadata 
method calls another.

Of course we'd like to cache results BETWEEN calls but I don't know how much we 
can safely cache when the graph is mutating.

Also, very minor, but I have a convention of naming RelMetadataQuery variables 
{{mq}}. They crop up a lot, so you don't want a long name, and a consistent 
short name helps you find one if it is around in the same method.
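A stdlib analogue of the caching rationale (illustrative only, not Calcite's RelMetadataQuery API): if callers keep one instance, repeated metadata lookups for the same node hit that instance's cache instead of recomputing.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative stdlib analogue of the per-instance caching planned in
// CALCITE-604: reusing one query instance lets repeated lookups hit a cache.
public class MetadataQuery {
    private final Map<String, Double> rowCountCache = new HashMap<>();
    int misses = 0; // exposed for demonstration only

    double getRowCount(String relNode, Function<String, Double> compute) {
        return rowCountCache.computeIfAbsent(relNode, k -> {
            misses++; // only computed on a cache miss
            return compute.apply(k);
        });
    }

    public static void main(String[] args) {
        MetadataQuery mq = new MetadataQuery(); // short name, per convention
        Function<String, Double> expensive = k -> 1000.0;
        mq.getRowCount("Scan(t)", expensive);
        mq.getRowCount("Scan(t)", expensive); // served from the cache
        System.out.println(mq.misses); // 1
    }
}
```

Creating a fresh instance per call would defeat the cache, which is the point of storing it in a variable.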

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch, HIVE-12839.04.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.





[jira] [Commented] (HIVE-10632) Make sure TXN_COMPONENTS gets cleaned up if table is dropped before compaction.

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15133921#comment-15133921
 ] 

Hive QA commented on HIVE-10632:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786134/HIVE-10632.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6872/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6872/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6872/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-6872/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 2bc0aed HIVE-12951: Reduce Spark executor prewarm timeout to 5s 
(reviewed by Rui)
+ git clean -f -d
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 2bc0aed HIVE-12951: Reduce Spark executor prewarm timeout to 5s 
(reviewed by Rui)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786134 - PreCommit-HIVE-TRUNK-Build

> Make sure TXN_COMPONENTS gets cleaned up if table is dropped before 
> compaction.
> ---
>
> Key: HIVE-10632
> URL: https://issues.apache.org/jira/browse/HIVE-10632
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>Priority: Critical
> Attachments: HIVE-10632.1.patch
>
>
> The compaction process will clean up entries in TXNS, 
> COMPLETED_TXN_COMPONENTS, and TXN_COMPONENTS. If the table/partition is 
> dropped before compaction is complete, there will be data left in these 
> tables. We need to investigate whether there are other situations where this 
> may happen and address them.
> See HIVE-10595 for additional info.





[jira] [Commented] (HIVE-8188) ExprNodeGenericFuncEvaluator::_evaluate() loads class annotations in a tight loop

2016-02-05 Thread Sergey Zadoroshnyak (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134050#comment-15134050
 ] 

Sergey Zadoroshnyak commented on HIVE-8188:
---

[~hagleitn]
[~prasanth_j]
[~thejas]

GenericUDAFEvaluator.isEstimable(agg) always returns false, because the 
annotation AggregationType has the default RetentionPolicy.CLASS and is not 
retained by the VM at run time.

As a result, the estimate method will never be executed.

I am going to open a new JIRA issue if there are no objections.
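The retention issue is easy to demonstrate with plain Java: an annotation retained only at CLASS level is invisible to runtime reflection, which is why a reflection-based check like isEstimable would always return false.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class RetentionDemo {
    // Mirrors the reported problem: CLASS retention is kept in the class
    // file but discarded by the VM, so reflection cannot see it.
    @Retention(RetentionPolicy.CLASS)
    @interface ClassRetained {}

    // The fix direction: RUNTIME retention survives into the running VM.
    @Retention(RetentionPolicy.RUNTIME)
    @interface RuntimeRetained {}

    @ClassRetained
    @RuntimeRetained
    static class Agg {}

    public static void main(String[] args) {
        System.out.println(Agg.class.isAnnotationPresent(ClassRetained.class));   // false
        System.out.println(Agg.class.isAnnotationPresent(RuntimeRetained.class)); // true
    }
}
```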

> ExprNodeGenericFuncEvaluator::_evaluate() loads class annotations in a tight 
> loop
> -
>
> Key: HIVE-8188
> URL: https://issues.apache.org/jira/browse/HIVE-8188
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.14.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Performance
> Fix For: 0.14.0
>
> Attachments: HIVE-8188.1.patch, HIVE-8188.2.patch, 
> udf-deterministic.png
>
>
> When running a near-constant UDF, most of the CPU is burnt within the VM 
> trying to read the class annotations for every row.
> !udf-deterministic.png!





[jira] [Updated] (HIVE-11752) Pre-materializing complex CTE queries

2016-02-05 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11752:
---
Attachment: (was: HIVE-11752.05.patch)

> Pre-materializing complex CTE queries
> -
>
> Key: HIVE-11752
> URL: https://issues.apache.org/jira/browse/HIVE-11752
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Navis
>Assignee: Jesus Camacho Rodriguez
>Priority: Minor
> Attachments: HIVE-11752.03.patch, HIVE-11752.04.patch, 
> HIVE-11752.1.patch.txt, HIVE-11752.2.patch.txt
>
>
> Currently, Hive regards CTE clauses as a simple alias to the query block, 
> which leads to redundant work if a CTE is used multiple times in a query. This 
> introduces a reference threshold for pre-materializing the CTE clause as a 
> volatile table (which does not exist in any form in the metastore and is 
> accessible only from the QB).





[jira] [Commented] (HIVE-11752) Pre-materializing complex CTE queries

2016-02-05 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134060#comment-15134060
 ] 

Jesus Camacho Rodriguez commented on HIVE-11752:


[~leftylev], I also addressed the comment you left in RB.

> Pre-materializing complex CTE queries
> -
>
> Key: HIVE-11752
> URL: https://issues.apache.org/jira/browse/HIVE-11752
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Navis
>Assignee: Jesus Camacho Rodriguez
>Priority: Minor
> Attachments: HIVE-11752.03.patch, HIVE-11752.04.patch, 
> HIVE-11752.05.patch, HIVE-11752.1.patch.txt, HIVE-11752.2.patch.txt
>
>
> Currently, Hive regards CTE clauses as a simple alias to the query block, 
> which leads to redundant work if a CTE is used multiple times in a query. This 
> introduces a reference threshold for pre-materializing the CTE clause as a 
> volatile table (which does not exist in any form in the metastore and is 
> accessible only from the QB).





[jira] [Updated] (HIVE-11752) Pre-materializing complex CTE queries

2016-02-05 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11752:
---
Attachment: HIVE-11752.05.patch

[~jpullokkaran], the new version of the patch addresses your concerns about the 
CTE lifecycle: no collision with other temp tables in the same database, being 
able to correctly query tables with the same name in other databases (i.e., 
those tables are not shadowed by the CTE), and deleting the CTE data once the 
query has executed.

Further, I had to revise the logic that traverses the task tree and links 
tasks generated by CTE/non-CTE to each other.

Finally, I added the additional CTE tests to the MiniTez and MiniLLAP test 
suites, and everything is working fine.

> Pre-materializing complex CTE queries
> -
>
> Key: HIVE-11752
> URL: https://issues.apache.org/jira/browse/HIVE-11752
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Navis
>Assignee: Jesus Camacho Rodriguez
>Priority: Minor
> Attachments: HIVE-11752.03.patch, HIVE-11752.04.patch, 
> HIVE-11752.05.patch, HIVE-11752.1.patch.txt, HIVE-11752.2.patch.txt
>
>
> Currently, Hive regards CTE clauses as a simple alias to the query block, 
> which leads to redundant work if a CTE is used multiple times in a query. This 
> introduces a reference threshold for pre-materializing the CTE clause as a 
> volatile table (which does not exist in any form in the metastore and is 
> accessible only from the QB).





[jira] [Updated] (HIVE-13000) Hive returns useless parsing error

2016-02-05 Thread Alina Abramova (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alina Abramova updated HIVE-13000:
--
Attachment: HIVE-13000.1.patch

> Hive returns useless parsing error 
> ---
>
> Key: HIVE-13000
> URL: https://issues.apache.org/jira/browse/HIVE-13000
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0, 1.0.0, 1.2.1
>Reporter: Alina Abramova
>Assignee: Alina Abramova
>Priority: Minor
> Attachments: HIVE-13000.1.patch
>
>
> When I run queries like this, I receive an unclear exception:
> hive> SELECT record FROM ctest GROUP BY record.instance_id;
> FAILED: SemanticException Error in parsing 
> It would be clearer if it were:
> hive> SELECT record FROM ctest GROUP BY record.instance_id;
> FAILED: SemanticException  Expression not in GROUP BY key record





[jira] [Commented] (HIVE-11887) spark tests break the build on a shared machine

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134115#comment-15134115
 ] 

Hive QA commented on HIVE-11887:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785832/HIVE-11887.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10021 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-unionDistinct_1.q-insert_values_non_partitioned.q-selectDistinctStar.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_interval_2.q-vectorization_7.q-vectorization_14.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_char_simple
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestMultiSessionsHS2WithLocalClusterSpark.testSparkQuery
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6873/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6873/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6873/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12785832 - PreCommit-HIVE-TRUNK-Build

> spark tests break the build on a shared machine
> ---
>
> Key: HIVE-11887
> URL: https://issues.apache.org/jira/browse/HIVE-11887
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11887.patch
>
>
> The Spark download creates a UDFExampleAdd jar in /tmp; when building on a 
> shared machine, someone else's jar from a previous build prevents this jar 
> from being created (I have no permissions on the file because it was created 
> by a different user) and the build fails.





[jira] [Commented] (HIVE-12790) Metastore connection leaks in HiveServer2

2016-02-05 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134217#comment-15134217
 ] 

Naveen Gangam commented on HIVE-12790:
--

The test failures are not related to the patch. The failing qtest works fine 
locally.

> Metastore connection leaks in HiveServer2
> -
>
> Key: HIVE-12790
> URL: https://issues.apache.org/jira/browse/HIVE-12790
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-12790.2.patch, HIVE-12790.3.patch, 
> HIVE-12790.patch, snippedLog.txt
>
>
> HiveServer2 keeps opening new connections to HMS each time it launches a 
> task. These connections do not appear to be closed when the task completes, 
> thus causing an HMS connection leak. "lsof" for the HS2 process shows 
> connections to port 9083.
> {code}
> 2015-12-03 04:20:56,352 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 11 out of 41
> 2015-12-03 04:20:56,354 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:20:56,360 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14824
> 2015-12-03 04:20:56,360 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> 
> 2015-12-03 04:21:06,355 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 12 out of 41
> 2015-12-03 04:21:06,357 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:21:06,362 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14825
> 2015-12-03 04:21:06,362 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ...
> 2015-12-03 04:21:08,357 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 13 out of 41
> 2015-12-03 04:21:08,360 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:21:08,364 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14826
> 2015-12-03 04:21:08,365 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ... 
> {code}
> The TaskRunner thread starts a new SessionState each time, which creates a 
> new connection to the HMS (via Hive.get(conf).getMSC()) that is never closed.
> Even SessionState.close(), currently not being called by the TaskRunner 
> thread, does not close this connection.
> Attaching an anonymized log snippet where the number of HMS connections 
> reaches north of 25,000.
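A hypothetical sketch of the fix direction (illustrative names, not Hive's HiveMetaStoreClient): tie each per-task client to a try-with-resources scope so the connection is closed when the task completes and the count cannot grow without bound.

```java
// Hypothetical sketch of the leak pattern and its fix: a client opened per
// task must be closed when the task completes. MetastoreClient here is
// illustrative, not Hive's actual HiveMetaStoreClient.
public class ConnectionLifecycle {
    static int openConnections = 0;

    static class MetastoreClient implements AutoCloseable {
        MetastoreClient() { openConnections++; } // "Opened a connection..."
        public void close() { openConnections--; }
        void runTask() { /* launch a job against the metastore */ }
    }

    public static void main(String[] args) {
        // try-with-resources guarantees close() even if the task throws,
        // so repeated task launches leave no connections behind.
        for (int i = 0; i < 3; i++) {
            try (MetastoreClient client = new MetastoreClient()) {
                client.runTask();
            }
        }
        System.out.println(openConnections); // 0
    }
}
```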





[jira] [Commented] (HIVE-12967) Change LlapServiceDriver to read a properties file instead of llap-daemon-site

2016-02-05 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15133880#comment-15133880
 ] 

Gopal V commented on HIVE-12967:


The patch seems to be skipping some fairly important entries from 
hive-site.xml.

{code}
2016-02-05T04:13:04,842 WARN  [main[]]: impl.LlapDaemon 
(LlapDaemon.java:main(336)) - Failed to start LLAP Daemon with exception
java.lang.IllegalArgumentException: Work dirs must be specified
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:93) 
~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.(LlapDaemon.java:108) 
~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:324) 
[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
{code}

{code}
+  public LlapDaemonConfiguration() {
+super(false);
+addResource(LLAP_DAEMON_SITE);
+  }
{code}

is not sufficient to pick up the core-site.xml or yarn-site.xml shipped alongside.
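The layering point can be sketched with java.util.Properties as a stand-in for Hadoop's Configuration (an analogy, not the real API): a configuration built from llap-daemon-site alone lacks keys defined in the base core-site/yarn-site layers, which is consistent with the "Work dirs must be specified" failure.

```java
import java.util.Properties;

// Stdlib analogue of layered configuration resources: each addResource-style
// layer overlays the previous ones. Loading only the top layer drops
// everything defined in the base layers below it. Key names are hypothetical.
public class LayeredConfig {
    private final Properties merged = new Properties();

    void addResource(Properties layer) { merged.putAll(layer); }
    String get(String key) { return merged.getProperty(key); }

    public static void main(String[] args) {
        Properties coreSite = new Properties();
        coreSite.setProperty("work.dirs", "/grid/0,/grid/1");
        Properties llapSite = new Properties();
        llapSite.setProperty("llap.memory.mb", "4096");

        LayeredConfig onlyLlap = new LayeredConfig();
        onlyLlap.addResource(llapSite); // base layers never loaded
        System.out.println(onlyLlap.get("work.dirs")); // null -> startup fails

        LayeredConfig full = new LayeredConfig();
        full.addResource(coreSite);  // base layer first
        full.addResource(llapSite);  // daemon overrides on top
        System.out.println(full.get("work.dirs")); // /grid/0,/grid/1
    }
}
```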

> Change LlapServiceDriver to read a properties file instead of llap-daemon-site
> --
>
> Key: HIVE-12967
> URL: https://issues.apache.org/jira/browse/HIVE-12967
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-12967.01.patch, HIVE-12967.1.wip.txt
>
>
> Having a copy of llap-daemon-site on the client node can be quite confusing, 
> since LlapServiceDriver generates the actual llap-daemon-site used by daemons.
> Instead of this - base settings can be picked up from a properties file.
> Also add java_home as a parameter to the script.





[jira] [Commented] (HIVE-13010) partitions autogenerated predicates broken

2016-02-05 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15133994#comment-15133994
 ] 

Gopal V commented on HIVE-13010:


[~zstan]: This is expected, because unix_timestamp() is *NOT* 
constant-folded at compile time.
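A toy sketch of the rule involved (illustrative, not Hive's actual constant folder): folding is only legal for deterministic expressions, and unix_timestamp() depends on evaluation time, so the predicate stays symbolic at compile time and partition pruning cannot use it.

```java
// Illustrative sketch of why unix_timestamp() blocks partition pruning:
// a constant folder may only fold deterministic expressions. An expression
// whose value depends on when it runs must be left unfolded, so the planner
// never sees a literal date to prune partitions with.
public class ConstantFolding {
    public interface Expr {
        boolean deterministic();
        String fold();
    }

    static String tryFold(Expr e) {
        // Non-deterministic expressions are left as-is at compile time.
        return e.deterministic() ? e.fold() : "<unfolded>";
    }

    public static void main(String[] args) {
        Expr literal = new Expr() {
            public boolean deterministic() { return true; }
            public String fold() { return "2016-02-02"; }
        };
        Expr unixTimestamp = new Expr() {
            public boolean deterministic() { return false; }
            public String fold() { return String.valueOf(System.currentTimeMillis() / 1000); }
        };
        System.out.println(tryFold(literal));        // 2016-02-02
        System.out.println(tryFold(unixTimestamp));  // <unfolded>
    }
}
```

Rewriting the predicate with literal dates, as in the second query, is what lets the planner prune.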

> partitions autogenerated predicates broken
> --
>
> Key: HIVE-13010
> URL: https://issues.apache.org/jira/browse/HIVE-13010
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Stanilovsky Evgeny
>Priority: Trivial
>
> Hi, I was looking for a similar problem but found only:
> https://issues.apache.org/jira/browse/HIVE-9630
> It looks like the same issue, but I hope you can easily reproduce it in testing.
> I have two similar requests; the difference is in the autogenerated date 
> predicates. In the first case, EXPLAIN shows a full scan.
> '''
> set hive.optimize.constant.propagation=true;
> explain select * from logs.weather_forecasts where dt between
> from_unixtime(unix_timestamp() - 3600*24*3, 'yyyy-MM-dd') and
> from_unixtime(unix_timestamp() - 3600*24*1, 'yyyy-MM-dd') and
> provider_id = 100
> STAGE PLANS:
> 5   Stage: Stage-1
> 6 Map Reduce
> 7   Map Operator Tree:
> 8   TableScan
> 9 alias: weather_forecasts
> 10Statistics: Num rows: 36124837607 Data size: 47395787122046 
> Basic stats: PARTIAL Column stats: NONE
> 
> and
> 
> set hive.optimize.constant.propagation=true;
> explain select * from logs.redir_log where dt between
> '2016-02-02' and
> '2016-02-04' and
> pid = 100
> 0 STAGE DEPENDENCIES:
> 1   Stage-1 is a root stage
> 2   Stage-0 depends on stages: Stage-1
> 3 
> 4 STAGE PLANS:
> 5   Stage: Stage-1
> 6 Map Reduce
> 7   Map Operator Tree:
> 8   TableScan
> 9 alias: redir_log
> 10Statistics: Num rows: 2798358420 Data size: 5761819991150 
> Basic stats: COMPLETE Column stats: NONE
> 





[jira] [Commented] (HIVE-12790) Metastore connection leaks in HiveServer2

2016-02-05 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134238#comment-15134238
 ] 

Naveen Gangam commented on HIVE-12790:
--

Thanks for reviewing and pushing this through, [~aihuaxu] and [~thejas].

> Metastore connection leaks in HiveServer2
> -
>
> Key: HIVE-12790
> URL: https://issues.apache.org/jira/browse/HIVE-12790
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Fix For: 2.1.0
>
> Attachments: HIVE-12790.2.patch, HIVE-12790.3.patch, 
> HIVE-12790.patch, snippedLog.txt
>
>
> HiveServer2 keeps opening new connections to HMS each time it launches a 
> task. These connections do not appear to be closed when the task completes, 
> thus causing an HMS connection leak. "lsof" for the HS2 process shows 
> connections to port 9083.
> {code}
> 2015-12-03 04:20:56,352 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 11 out of 41
> 2015-12-03 04:20:56,354 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:20:56,360 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14824
> 2015-12-03 04:20:56,360 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> 
> 2015-12-03 04:21:06,355 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 12 out of 41
> 2015-12-03 04:21:06,357 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:21:06,362 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14825
> 2015-12-03 04:21:06,362 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ...
> 2015-12-03 04:21:08,357 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 13 out of 41
> 2015-12-03 04:21:08,360 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:21:08,364 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14826
> 2015-12-03 04:21:08,365 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ... 
> {code}
> The TaskRunner thread starts a new SessionState each time, which creates a 
> new connection to the HMS (via Hive.get(conf).getMSC()) that is never closed.
> Even SessionState.close(), currently not being called by the TaskRunner 
> thread, does not close this connection.
> Attaching an anonymized log snippet where the number of HMS connections 
> reaches north of 25,000.





[jira] [Commented] (HIVE-12790) Metastore connection leaks in HiveServer2

2016-02-05 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134240#comment-15134240
 ] 

Aihua Xu commented on HIVE-12790:
-

Pushed to branch-1 and branch-2.0 as well.

> Metastore connection leaks in HiveServer2
> -
>
> Key: HIVE-12790
> URL: https://issues.apache.org/jira/browse/HIVE-12790
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Fix For: 1.3.0, 2.0.0, 2.1.0
>
> Attachments: HIVE-12790.2.patch, HIVE-12790.3.patch, 
> HIVE-12790.patch, snippedLog.txt
>
>
> HiveServer2 keeps opening new connections to HMS each time it launches a 
> task. These connections do not appear to be closed when the task completes, 
> thus causing an HMS connection leak. "lsof" for the HS2 process shows 
> connections to port 9083.
> {code}
> 2015-12-03 04:20:56,352 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 11 out of 41
> 2015-12-03 04:20:56,354 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:20:56,360 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14824
> 2015-12-03 04:20:56,360 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> 
> 2015-12-03 04:21:06,355 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 12 out of 41
> 2015-12-03 04:21:06,357 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:21:06,362 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14825
> 2015-12-03 04:21:06,362 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ...
> 2015-12-03 04:21:08,357 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 13 out of 41
> 2015-12-03 04:21:08,360 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:21:08,364 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14826
> 2015-12-03 04:21:08,365 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ... 
> {code}
> The TaskRunner thread starts a new SessionState each time, which creates a 
> new connection to the HMS (via Hive.get(conf).getMSC()) that is never closed.
> Even SessionState.close(), currently not being called by the TaskRunner 
> thread, does not close this connection.
> Attaching an anonymized log snippet where the number of HMS connections 
> reaches more than 25,000.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12790) Metastore connection leaks in HiveServer2

2016-02-05 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-12790:

Fix Version/s: 2.0.0
   1.3.0

> Metastore connection leaks in HiveServer2
> -
>
> Key: HIVE-12790
> URL: https://issues.apache.org/jira/browse/HIVE-12790
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Fix For: 1.3.0, 2.0.0, 2.1.0
>
> Attachments: HIVE-12790.2.patch, HIVE-12790.3.patch, 
> HIVE-12790.patch, snippedLog.txt
>
>
> HiveServer2 keeps opening new connections to the HMS each time it launches a 
> task. These connections do not appear to be closed when the task completes, 
> thus causing an HMS connection leak. "lsof" for the HS2 process shows 
> connections to port 9083.
> {code}
> 2015-12-03 04:20:56,352 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 11 out of 41
> 2015-12-03 04:20:56,354 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:20:56,360 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14824
> 2015-12-03 04:20:56,360 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> 
> 2015-12-03 04:21:06,355 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 12 out of 41
> 2015-12-03 04:21:06,357 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:21:06,362 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14825
> 2015-12-03 04:21:06,362 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ...
> 2015-12-03 04:21:08,357 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 13 out of 41
> 2015-12-03 04:21:08,360 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:21:08,364 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14826
> 2015-12-03 04:21:08,365 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ... 
> {code}
> The TaskRunner thread starts a new SessionState each time, which creates a 
> new connection to the HMS (via Hive.get(conf).getMSC()) that is never closed.
> Even SessionState.close(), currently not being called by the TaskRunner 
> thread, does not close this connection.
> Attaching an anonymized log snippet where the number of HMS connections 
> reaches more than 25,000.
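The leak pattern described above, each task opening a metastore client via Hive.get(conf).getMSC() and never closing it, can be sketched with a plain AutoCloseable standing in for the real client. This is an illustrative model under that assumption, not the actual Hive fix; the class and method names below are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hedged sketch: OPEN counts live connections; open() stands in for
// Hive.get(conf).getMSC(), which opens a new HMS connection per call.
class MetaStoreConnections {
    static final AtomicInteger OPEN = new AtomicInteger();

    static AutoCloseable open() {
        OPEN.incrementAndGet();
        return OPEN::decrementAndGet;  // close() releases the connection
    }

    // Leaky pattern from the report: the task opens a client and never
    // closes it, so one connection leaks per launched task.
    static void leakyTask() {
        open();
    }

    // Fix direction: tie the client's lifetime to the task, e.g. with
    // try-with-resources (or by closing it in SessionState teardown).
    static void fixedTask() throws Exception {
        try (AutoCloseable msc = open()) {
            // run the task using msc; it is closed automatically afterwards
        }
    }
}
```

With the leaky pattern the counter grows by one per task; with the fixed pattern it returns to zero after each task completes.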



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12997) Fix WindowsPathUtil errors in HCat itests

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15134255#comment-15134255
 ] 

Hive QA commented on HIVE-12997:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786159/HIVE-12997.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10051 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6874/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6874/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6874/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786159 - PreCommit-HIVE-TRUNK-Build

> Fix WindowsPathUtil errors in HCat itests
> -
>
> Key: HIVE-12997
> URL: https://issues.apache.org/jira/browse/HIVE-12997
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, Windows
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-12997.1.patch
>
>
> Issues with the following tests on Windows:
> org.apache.hive.hcatalog.mapreduce.TestHCatHiveCompatibility.testPartedRead   
> ClassNotFound: WindowsPathUtil  
> org.apache.hive.hcatalog.mapreduce.TestHCatHiveCompatibility.testUnpartedReadWrite
> ClassNotFound: WindowsPathUtil  
> org.apache.hive.hcatalog.mapreduce.TestHCatHiveThriftCompatibility.testDynamicCols
> ClassNotFound: WindowsPathUtil
> Looks like itests/hcatalog-unit/pom.xml is missing a dependency



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12918) LLAP should never create embedded metastore when localizing functions

2016-02-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12918:

Attachment: HIVE-12918.03.patch

> LLAP should never create embedded metastore when localizing functions
> -
>
> Key: HIVE-12918
> URL: https://issues.apache.org/jira/browse/HIVE-12918
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Siddharth Seth
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12918.01.patch, HIVE-12918.02.patch, 
> HIVE-12918.03.patch, HIVE-12918.patch
>
>
> {code}
> 16/01/24 21:29:02 INFO service.AbstractService: Service LlapDaemon failed in 
> state INITED; cause: java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1552)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3110)
>   at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3130)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3355)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.FunctionLocalizer.startLocalizeAllFunctions(FunctionLocalizer.java:88)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.serviceInit(LlapDaemon.java:244)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:323)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1550)
>   ... 10 more
> Caused by: java.lang.NoClassDefFoundError: org/datanucleus/NucleusContext
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getClass(MetaStoreUtils.java:1517)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:61)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:568)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:533)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:595)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:387)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5935)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:221)
>   at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:67)
>   ... 15 more
> Caused by: java.lang.ClassNotFoundException: org.datanucleus.NucleusContext
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> {code}
> Looks like the DataNucleus jar is not added to the LLAP classpath. This 
> appears to be caused by HIVE-12853



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12892) Add global change versioning to permanent functions in metastore

2016-02-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12892:

Attachment: HIVE-12892.05.patch

Updated the HBase part. Is the rest of the patch all good? ;)
[~sushanth], [~alangates]

> Add global change versioning to permanent functions in metastore
> 
>
> Key: HIVE-12892
> URL: https://issues.apache.org/jira/browse/HIVE-12892
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12892.01.patch, HIVE-12892.02.patch, 
> HIVE-12892.03.patch, HIVE-12892.04.patch, HIVE-12892.05.nogen.patch, 
> HIVE-12892.05.patch, HIVE-12892.05.patch, HIVE-12892.nogen.patch, 
> HIVE-12892.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12918) LLAP should never create embedded metastore when localizing functions

2016-02-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12918:

Issue Type: Sub-task  (was: Bug)
Parent: HIVE-12852

> LLAP should never create embedded metastore when localizing functions
> -
>
> Key: HIVE-12918
> URL: https://issues.apache.org/jira/browse/HIVE-12918
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Siddharth Seth
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12918.01.patch, HIVE-12918.02.patch, 
> HIVE-12918.03.patch, HIVE-12918.patch
>
>
> {code}
> 16/01/24 21:29:02 INFO service.AbstractService: Service LlapDaemon failed in 
> state INITED; cause: java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1552)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3110)
>   at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3130)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3355)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.FunctionLocalizer.startLocalizeAllFunctions(FunctionLocalizer.java:88)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.serviceInit(LlapDaemon.java:244)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:323)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1550)
>   ... 10 more
> Caused by: java.lang.NoClassDefFoundError: org/datanucleus/NucleusContext
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getClass(MetaStoreUtils.java:1517)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:61)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:568)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:533)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:595)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:387)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5935)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:221)
>   at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:67)
>   ... 15 more
> Caused by: java.lang.ClassNotFoundException: org.datanucleus.NucleusContext
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> {code}
> Looks like the DataNucleus jar is not added to the LLAP classpath. This 
> appears to be caused by HIVE-12853



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11527) bypass HiveServer2 thrift interface for query results

2016-02-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135435#comment-15135435
 ] 

Sergey Shelukhin commented on HIVE-11527:
-

For multiple files, I think it should go back to the old path or error out, 
since we don't expect that. Or at least warn.

> bypass HiveServer2 thrift interface for query results
> -
>
> Key: HIVE-11527
> URL: https://issues.apache.org/jira/browse/HIVE-11527
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Sergey Shelukhin
>Assignee: Takanobu Asanuma
> Attachments: HIVE-11527.WIP.patch
>
>
> Right now, HS2 reads query results and returns them to the caller via its 
> thrift API.
> There should be an option for HS2 to return some pointer to results (an HDFS 
> link?) and for the user to read the results directly off HDFS inside the 
> cluster, or via something like WebHDFS outside the cluster
> Review board link: https://reviews.apache.org/r/40867



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12999) Tez: Vertex creation is slowed down when NN throttles IPCs

2016-02-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135454#comment-15135454
 ] 

Hive QA commented on HIVE-12999:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786185/HIVE-12999.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10037 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testSimpleTable
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6879/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6879/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6879/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786185 - PreCommit-HIVE-TRUNK-Build

> Tez: Vertex creation is slowed down when NN throttles IPCs
> --
>
> Key: HIVE-12999
> URL: https://issues.apache.org/jira/browse/HIVE-12999
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.2.0, 1.3.0, 2.0.0, 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-12999.1.patch
>
>
> Tez vertex building has a decidedly slow path in the code, which is not 
> related to the DAG plan at all.
> The total number of RPC calls is not related to the total number of 
> operators, due to a bug in the DagUtils inner loops.
> {code}
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3207)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3170)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:548)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:1151)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.build(TezTask.java:388)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:175)
> {code}
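The slow path above issues one NameNode RPC per inner-loop iteration in createTmpDirs, even for repeated paths. A hedged sketch of one possible fix direction (deduplicating the target directories before calling mkdirs; the actual HIVE-12999 patch may differ, and the counter below is a stand-in for FileSystem.mkdirs):

```java
import java.util.LinkedHashSet;
import java.util.List;

class TmpDirDedup {
    // Counts simulated NameNode RPCs issued by mkdirs().
    static int rpcCalls = 0;

    // Stand-in for FileSystem.mkdirs(): each invocation is one NN RPC.
    static void mkdirs(String path) {
        rpcCalls++;
    }

    // Naive inner loop: one mkdirs per operator, even for duplicate paths.
    static void createTmpDirsNaive(List<String> operatorTmpDirs) {
        for (String dir : operatorTmpDirs) {
            mkdirs(dir);
        }
    }

    // Deduplicated: at most one RPC per distinct path, preserving order.
    static void createTmpDirsDeduped(List<String> operatorTmpDirs) {
        for (String dir : new LinkedHashSet<>(operatorTmpDirs)) {
            mkdirs(dir);
        }
    }
}
```

When the NN throttles IPCs, cutting the per-vertex RPC count like this directly shortens vertex creation time.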



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13002) metastore call timing is not threadsafe

2016-02-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-13002:
---

Assignee: Sergey Shelukhin

> metastore call timing is not threadsafe
> ---
>
> Key: HIVE-13002
> URL: https://issues.apache.org/jira/browse/HIVE-13002
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> Discovered in some q test run:
> {noformat}
>  TestCliDriver.testCliDriver_insert_values_orig_table:123->runTest:199 
> Unexpected exception java.util.ConcurrentModificationException
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:966)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:964)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.dumpAndClearMetaCallTiming(Hive.java:3412)
>   at 
> org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:574)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1722)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1342)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1113)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1101)
> {noformat}
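The ConcurrentModificationException above comes from iterating a plain HashMap while another thread mutates it. One hedged sketch of a fix direction (the actual HIVE-13002 patch may differ) is to keep the timings in a ConcurrentHashMap, whose iterators tolerate concurrent updates:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of a thread-safe call-timing accumulator; the class and
// method names mirror dumpAndClearMetaCallTiming but are illustrative only.
class CallTiming {
    private final Map<String, Long> timings = new ConcurrentHashMap<>();

    // Accumulate elapsed time per metastore call name.
    void record(String call, long millis) {
        timings.merge(call, millis, Long::sum);
    }

    // Safe to iterate even if record() runs concurrently: ConcurrentHashMap
    // iterators are weakly consistent and never throw CME.
    String dumpAndClear() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Long> e : timings.entrySet()) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        timings.clear();
        return sb.toString();
    }
}
```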



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-02-05 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135469#comment-15135469
 ] 

Pengcheng Xiong commented on HIVE-12839:


[~julianhyde], thanks for your comments. I will address them once QA returns.

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch, HIVE-12839.04.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-02-05 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135471#comment-15135471
 ] 

Pengcheng Xiong commented on HIVE-12839:


[~ashutoshc], sure. The major change is in the HiveRelFieldTrimmer. I added a 
new function
{code}
public TrimResult trimFields(
    HiveSortLimit sort,
    ImmutableBitSet fieldsUsed,
    Set<RelDataTypeField> extraFields) {
{code}
which was copied and pasted from Calcite. It calls a function 
{code}
sortLimit(cluster, relBuilder, offset, fetch, fields);
{code}
And inside sortLimit, I removed the line 
{code}
return empty LogicalValues when offset == 0 && fetch == 0.
{code}
Thus, it will fall back to the processing logic in Calcite 1.5. I also added 
some "TODO" in the comments. Thanks.

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch, HIVE-12839.04.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12007) Hive LDAP Authenticator should allow just Domain without baseDN (for AD)

2016-02-05 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135479#comment-15135479
 ] 

Lefty Leverenz commented on HIVE-12007:
---

[~szehon] or [~ctang.ma], this was committed to master in October for release 
2.0.0 and to branch-1 today for release 1.3.0 so Fix Version/s needs to be 
updated.  Thanks.

> Hive LDAP Authenticator should allow just Domain without baseDN (for AD)
> 
>
> Key: HIVE-12007
> URL: https://issues.apache.org/jira/browse/HIVE-12007
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-12007.patch, HIVE-12007.patch
>
>
> When the baseDN is not configured but only the Domain has been set in 
> hive-site.xml, the LDAP Atn provider cannot locate the user in the directory, 
> and authentication fails in such cases. This is a change from the prior 
> implementation, where the auth request succeeded based on being able to bind 
> to the directory. This has been called out in the design doc in HIVE-7193, 
> but we should allow it for now for backward compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12965) Insert overwrite local directory should preserve the overwritten directory permission

2016-02-05 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135510#comment-15135510
 ] 

Chaoyu Tang commented on HIVE-12965:


The failed tests seem unrelated to the patch; I was not able to reproduce them 
on my local machine. [~xuefuz], [~ashutoshc], could you review the patch? Thanks

> Insert overwrite local directory should preserve the overwritten directory 
> permission
> -
>
> Key: HIVE-12965
> URL: https://issues.apache.org/jira/browse/HIVE-12965
> Project: Hive
>  Issue Type: Bug
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-12965.patch
>
>
> In Hive, "insert overwrite local directory" first deletes the overwritten 
> directory if it exists, recreates a new one, and then copies the files from 
> the src directory to the new local directory. This process sometimes changes 
> the permissions of the to-be-overwritten local directory, causing some 
> applications to no longer be able to access its content.
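The preservation idea in the description, capturing the directory's permissions before the delete and reapplying them after the recreate, can be sketched with java.nio. This is an illustrative sketch under that assumption, not the attached patch, and it assumes a POSIX filesystem:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

class PreservePerms {
    // Hedged sketch: remember the target directory's permissions before
    // deleting it, recreate it, then restore the saved permissions so that
    // other applications keep their access.
    static void overwriteDir(Path dir) throws IOException {
        Set<PosixFilePermission> perms = null;
        if (Files.exists(dir)) {
            perms = Files.getPosixFilePermissions(dir);
            Files.delete(dir);  // assumes empty; real code deletes recursively
        }
        Files.createDirectories(dir);  // created with default (umask) perms
        if (perms != null) {
            Files.setPosixFilePermissions(dir, perms);  // restore originals
        }
        // ... then copy the result files from the src directory into dir
    }
}
```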



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11527) bypass HiveServer2 thrift interface for query results

2016-02-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135436#comment-15135436
 ] 

Sergey Shelukhin commented on HIVE-11527:
-

[~jingzhao] ping? [~tasanuma0829] maybe it's better to have an example of the 
URI-based code, so Jing could review it :)

Also [~vgumashta] ping? :)

> bypass HiveServer2 thrift interface for query results
> -
>
> Key: HIVE-11527
> URL: https://issues.apache.org/jira/browse/HIVE-11527
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Sergey Shelukhin
>Assignee: Takanobu Asanuma
> Attachments: HIVE-11527.WIP.patch
>
>
> Right now, HS2 reads query results and returns them to the caller via its 
> thrift API.
> There should be an option for HS2 to return some pointer to results (an HDFS 
> link?) and for the user to read the results directly off HDFS inside the 
> cluster, or via something like WebHDFS outside the cluster
> Review board link: https://reviews.apache.org/r/40867



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11752) Pre-materializing complex CTE queries

2016-02-05 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135450#comment-15135450
 ] 

Lefty Leverenz commented on HIVE-11752:
---

Thanks Jesús, looks good.  +1 for the parameter description.

> Pre-materializing complex CTE queries
> -
>
> Key: HIVE-11752
> URL: https://issues.apache.org/jira/browse/HIVE-11752
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Navis
>Assignee: Jesus Camacho Rodriguez
>Priority: Minor
> Attachments: HIVE-11752.03.patch, HIVE-11752.04.patch, 
> HIVE-11752.05.patch, HIVE-11752.1.patch.txt, HIVE-11752.2.patch.txt
>
>
> Currently, Hive regards CTE clauses as simple aliases for the query block, 
> which causes redundant work if a CTE is used multiple times in a query. This 
> introduces a reference threshold for pre-materializing the CTE clause as a 
> volatile table (which does not exist in any form in the metastore and is 
> accessible only from the QB).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12949) Fix description of hive.server2.logging.operation.level

2016-02-05 Thread Mohit Sabharwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135493#comment-15135493
 ] 

Mohit Sabharwal commented on HIVE-12949:


Thanks, [~hsubramaniyan]. I tested this on version 1.2 by setting 
{{hive.server2.logging.operation.level}} via beeline and it did not work. I 
have not tested it on current master.

As I said in the description, from reading the code, it appears that an 
appender based on the log-level property is created just once, at 
OperationManager init time (in initOperationLogCapture), not each time an 
operation is issued, or even once for every new session. Effectively, the 
logging level gets fixed when HS2 first comes up and remains the same, unless 
I am missing something here.

> Fix description of hive.server2.logging.operation.level
> ---
>
> Key: HIVE-12949
> URL: https://issues.apache.org/jira/browse/HIVE-12949
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Mohit Sabharwal
>Assignee: Mohit Sabharwal
> Attachments: HIVE-12949.patch
>
>
> Description of {{hive.server2.logging.operation.level}} states that this 
> property is available to set at session level. This is not correct. The 
> property is only settable at HS2 level currently.
> A log4j appender is created just once currently - based on logging level 
> specified by this config, at OperationManager init time in 
> initOperationLogCapture(). And there is only one instance of OperationManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-1608) use sequencefile as the default for storing intermediate results

2016-02-05 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-1608:
--
Attachment: HIVE-1608.2.patch

Fix the test failures resulting from the change to use SequenceFile.

> use sequencefile as the default for storing intermediate results
> 
>
> Key: HIVE-1608
> URL: https://issues.apache.org/jira/browse/HIVE-1608
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.7.0
>Reporter: Namit Jain
>Assignee: Brock Noland
> Attachments: HIVE-1608.1.patch, HIVE-1608.2.patch, HIVE-1608.patch
>
>
> The only argument for having a text file for storing intermediate results 
> seems to be better debuggability.
> But tailing a sequence file is possible, and it should be more space 
> efficient.





[jira] [Updated] (HIVE-12007) Hive LDAP Authenticator should allow just Domain without baseDN (for AD)

2016-02-05 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-12007:
---
Fix Version/s: 2.0.0
   1.3.0

> Hive LDAP Authenticator should allow just Domain without baseDN (for AD)
> 
>
> Key: HIVE-12007
> URL: https://issues.apache.org/jira/browse/HIVE-12007
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-12007.patch, HIVE-12007.patch
>
>
> When the baseDN is not configured but only the Domain has been set in 
> hive-site.xml, LDAP Atn provider cannot locate the user in the directory. 
> Authentication fails in such cases. This is a change from the prior 
> implementation where the auth request succeeds based on being able to bind to 
> the directory. This has been called out in the design doc in HIVE-7193. 
> But we should allow this for now for backward compatibility.
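
For illustration only (hypothetical helpers, not the actual Hive LDAP provider 
code): when only a domain is configured, a bind name can still be formed, 
either as an AD-style userPrincipalName or as a baseDN guessed from the domain 
components.

```java
public class LdapNameSketch {
    // Active Directory accepts binds with a userPrincipalName: user@domain.
    static String userPrincipal(String user, String domain) {
        return user + "@" + domain;
    }

    // A baseDN can be guessed from domain components:
    // example.com -> dc=example,dc=com
    static String domainToBaseDn(String domain) {
        return "dc=" + domain.replace(".", ",dc=");
    }

    public static void main(String[] args) {
        System.out.println(userPrincipal("alice", "example.com")); // alice@example.com
        System.out.println(domainToBaseDn("example.com"));         // dc=example,dc=com
    }
}
```

The backward-compatible behavior discussed here corresponds to the first form: 
binding succeeds with just user@domain, without a search against a baseDN.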





[jira] [Commented] (HIVE-12007) Hive LDAP Authenticator should allow just Domain without baseDN (for AD)

2016-02-05 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135505#comment-15135505
 ] 

Chaoyu Tang commented on HIVE-12007:


Thanks, [~leftylev]. I updated the Fix Versions.

> Hive LDAP Authenticator should allow just Domain without baseDN (for AD)
> 
>
> Key: HIVE-12007
> URL: https://issues.apache.org/jira/browse/HIVE-12007
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-12007.patch, HIVE-12007.patch
>
>
> When the baseDN is not configured but only the Domain has been set in 
> hive-site.xml, LDAP Atn provider cannot locate the user in the directory. 
> Authentication fails in such cases. This is a change from the prior 
> implementation where the auth request succeeds based on being able to bind to 
> the directory. This has been called out in the design doc in HIVE-7193. 
> But we should allow this for now for backward compatibility.





[jira] [Commented] (HIVE-7847) query orc partitioned table fail when table column type change

2016-02-05 Thread Charles Pritchard (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135260#comment-15135260
 ] 

Charles Pritchard commented on HIVE-7847:
-

I think I'm seeing this on an INSERT OVERWRITE tied with a series of GROUP 
BY/SELECT and WITH clauses.

> query orc partitioned table fail when table column type change
> --
>
> Key: HIVE-7847
> URL: https://issues.apache.org/jira/browse/HIVE-7847
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Affects Versions: 0.11.0, 0.12.0, 0.13.0
>Reporter: Zhichun Wu
>Assignee: Zhichun Wu
> Attachments: HIVE-7847.1.patch, vector_alter_partition_change_col.q
>
>
> I use the following script to test orc column type change with partitioned 
> table on branch-0.13:
> {code}
> use test;
> DROP TABLE if exists orc_change_type_staging;
> DROP TABLE if exists orc_change_type;
> CREATE TABLE orc_change_type_staging (
> id int
> );
> CREATE TABLE orc_change_type (
> id int
> ) PARTITIONED BY (`dt` string)
> stored as orc;
> --- load staging table
> LOAD DATA LOCAL INPATH '../hive/examples/files/int.txt' OVERWRITE INTO TABLE 
> orc_change_type_staging;
> --- populate orc hive table
> INSERT OVERWRITE TABLE orc_change_type partition(dt='20140718') select * FROM 
> orc_change_type_staging limit 1;
> --- change column id from int to bigint
> ALTER TABLE orc_change_type CHANGE id id bigint;
> INSERT OVERWRITE TABLE orc_change_type partition(dt='20140719') select * FROM 
> orc_change_type_staging limit 1;
> SELECT id FROM orc_change_type where dt between '20140718' and '20140719';
> {code}
> It fails in the last query "SELECT id FROM orc_change_type where dt between 
> '20140718' and '20140719';" with the exception:
> {code}
> Error: java.io.IOException: java.io.IOException: 
> java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
> to org.apache.hadoop.io.LongWritable
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:256)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:171)
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hadoop.io.IntWritable cannot be cast to 
> org.apache.hadoop.io.LongWritable
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:344)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:122)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:254)
> ... 11 more
> Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
> cannot be cast to org.apache.hadoop.io.LongWritable
> at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$LongTreeReader.next(RecordReaderImpl.java:717)
> at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1788)
> at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2997)
> at 
> 

[jira] [Updated] (HIVE-12857) LLAP: modify the decider to allow using LLAP with whitelisted UDFs

2016-02-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12857:

Attachment: HIVE-12857.patch

The patch tests enabling the UDFs in the decider and running explain. Selects 
should be added to the same test after we enable UDFs by default (or maybe we 
can enable them by default in tests only).



> LLAP: modify the decider to allow using LLAP with whitelisted UDFs
> --
>
> Key: HIVE-12857
> URL: https://issues.apache.org/jira/browse/HIVE-12857
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12857.patch
>
>






[jira] [Updated] (HIVE-12892) Add global change versioning to permanent functions in metastore

2016-02-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12892:

Attachment: HIVE-12892.05.nogen.patch

> Add global change versioning to permanent functions in metastore
> 
>
> Key: HIVE-12892
> URL: https://issues.apache.org/jira/browse/HIVE-12892
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12892.01.patch, HIVE-12892.02.patch, 
> HIVE-12892.03.patch, HIVE-12892.04.patch, HIVE-12892.05.nogen.patch, 
> HIVE-12892.05.patch, HIVE-12892.nogen.patch, HIVE-12892.patch
>
>






[jira] [Commented] (HIVE-13015) Add dependency convergence for SLF4J

2016-02-05 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135393#comment-15135393
 ] 

Siddharth Seth commented on HIVE-13015:
---

User classpath first does not matter to slf4j. If it sees multiple versions of 
the bindings, this WARNING will show up.

If Tez jars are part of the classpath when launching Hive, the WARNING will 
show up. It will also come in from Hadoop itself.

Updating Hive to slf4j 1.7.10 would be good.

> Add dependency convergence for SLF4J
> 
>
> Key: HIVE-13015
> URL: https://issues.apache.org/jira/browse/HIVE-13015
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>
> In some of the recent test runs, we are seeing multiple bindings for SLF4j 
> that causes issues with LOG4j2 logger. 
> {code}
> SLF4J: Found binding in 
> [jar:file:/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1454694331819_0001/container_e06_1454694331819_0001_01_02/app/install/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> {code}
> We have added explicit exclusions for slf4j-log4j12 but some library is 
> pulling it transitively and it's getting packaged with hive libs. Also hive 
> currently uses version 1.7.5 for slf4j. We should add dependency convergence 
> for slf4j and also remove packaging of slf4j-log4j12.*.jar.
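
One common way to enforce this is the maven-enforcer-plugin's 
dependencyConvergence rule. The snippet below is a sketch of the standard 
Maven setup, not taken from the patch; the execution id is illustrative.

```xml
<!-- Fail the build when transitive dependencies disagree on versions
     (e.g. slf4j 1.7.5 vs 1.7.10 pulled in by different libraries). -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-dependency-convergence</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <dependencyConvergence/>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The stray slf4j-log4j12 binding itself still needs an `<exclusion>` on 
whichever dependency pulls it in transitively.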





[jira] [Commented] (HIVE-9862) Vectorized execution corrupts timestamp values

2016-02-05 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135407#comment-15135407
 ] 

Matt McCline commented on HIVE-9862:


Oh I'll need to re-run the tests -- Jenkins swept the reports into the garbage.

> Vectorized execution corrupts timestamp values
> --
>
> Key: HIVE-9862
> URL: https://issues.apache.org/jira/browse/HIVE-9862
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 1.0.0
>Reporter: Nathan Howell
>Assignee: Matt McCline
> Attachments: HIVE-9862.01.patch, HIVE-9862.02.patch, 
> HIVE-9862.03.patch, HIVE-9862.04.patch, HIVE-9862.05.patch, 
> HIVE-9862.06.patch, HIVE-9862.07.patch, HIVE-9862.08.patch
>
>
> Timestamps in the future (year 2250?) and before ~1700 are silently corrupted 
> in vectorized execution mode. Simple repro:
> {code}
> hive> DROP TABLE IF EXISTS test;
> hive> CREATE TABLE test(ts TIMESTAMP) STORED AS ORC;
> hive> INSERT INTO TABLE test VALUES ('-12-31 23:59:59');
> hive> SET hive.vectorized.execution.enabled = false;
> hive> SELECT MAX(ts) FROM test;
> -12-31 23:59:59
> hive> SET hive.vectorized.execution.enabled = true;
> hive> SELECT MAX(ts) FROM test;
> 1816-03-30 05:56:07.066277376
> {code}
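
One plausible explanation of the silent corruption quoted above (an assumption 
based on the symptom, not taken from the patch): if vectorized execution 
represents a timestamp as nanoseconds-since-epoch in a signed 64-bit long, 
dates far outside roughly 1677..2262 cannot be represented and silently wrap.

```java
public class TimestampOverflow {
    public static void main(String[] args) {
        long secondsTo9999 = 253_402_300_799L;       // ~9999-12-31 23:59:59 UTC
        long nanos = secondsTo9999 * 1_000_000_000L; // silently overflows a long
        System.out.println(nanos < 0);               // true: the value wrapped

        // Long.MAX_VALUE nanoseconds is only about 292 years past the epoch.
        System.out.println(Long.MAX_VALUE / 1_000_000_000L / 3600 / 24 / 365);
    }
}
```

This would match the symptom: the stored long wraps to an unrelated in-range 
instant, which then reads back as a valid-looking but wrong timestamp.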





[jira] [Commented] (HIVE-9862) Vectorized execution corrupts timestamp values

2016-02-05 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135418#comment-15135418
 ] 

Matt McCline commented on HIVE-9862:


As I recall, none were timestamp-related.

> Vectorized execution corrupts timestamp values
> --
>
> Key: HIVE-9862
> URL: https://issues.apache.org/jira/browse/HIVE-9862
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 1.0.0
>Reporter: Nathan Howell
>Assignee: Matt McCline
> Attachments: HIVE-9862.01.patch, HIVE-9862.02.patch, 
> HIVE-9862.03.patch, HIVE-9862.04.patch, HIVE-9862.05.patch, 
> HIVE-9862.06.patch, HIVE-9862.07.patch, HIVE-9862.08.patch
>
>
> Timestamps in the future (year 2250?) and before ~1700 are silently corrupted 
> in vectorized execution mode. Simple repro:
> {code}
> hive> DROP TABLE IF EXISTS test;
> hive> CREATE TABLE test(ts TIMESTAMP) STORED AS ORC;
> hive> INSERT INTO TABLE test VALUES ('-12-31 23:59:59');
> hive> SET hive.vectorized.execution.enabled = false;
> hive> SELECT MAX(ts) FROM test;
> -12-31 23:59:59
> hive> SET hive.vectorized.execution.enabled = true;
> hive> SELECT MAX(ts) FROM test;
> 1816-03-30 05:56:07.066277376
> {code}





[jira] [Commented] (HIVE-13018) On branch-1 "RuntimeException: Vectorization is not supported for datatype:LIST"

2016-02-05 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135431#comment-15135431
 ] 

Matt McCline commented on HIVE-13018:
-

Thanks!

> On branch-1 "RuntimeException: Vectorization is not supported for 
> datatype:LIST"
> 
>
> Key: HIVE-13018
> URL: https://issues.apache.org/jira/browse/HIVE-13018
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.3.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-13018.1-branch1.patch
>
>
> The Schema Evolution backport needs to include the new complex ColumnVector 
> types even though we don't actually allow using those types in vectorization, 
> except for an edge case for count(*), where we do vectorize the query with 
> complex data types since they are not referenced.





[jira] [Commented] (HIVE-10115) HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and Delegation token(DIGEST) when alternate authentication is enabled

2016-02-05 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135430#comment-15135430
 ] 

Lefty Leverenz commented on HIVE-10115:
---

Does this need to be documented in the wiki?  If so, please add a TODOC1.3 
label.

Perhaps it belongs in one of these sections:

* [HiveServer2 Clients -- JDBC Client Setup for a Secure Cluster | 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-JDBCClientSetupforaSecureCluster]
* [HiveServer2 Clients -- Advanced Features for Integration with Other Tools | 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-AdvancedFeaturesforIntegrationwithOtherTools]
* [Setting Up HiveServer2 -- Authentication/Security Configuration | 
https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-Authentication/SecurityConfiguration]

> HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and 
> Delegation token(DIGEST) when alternate authentication is enabled
> ---
>
> Key: HIVE-10115
> URL: https://issues.apache.org/jira/browse/HIVE-10115
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication
>Affects Versions: 1.1.0
>Reporter: Mubashir Kazia
>Assignee: Mubashir Kazia
>  Labels: patch
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-10115.0.patch, HIVE-10115.2.patch
>
>
> In a Kerberized cluster when alternate authentication is enabled on HS2, it 
> should also accept Kerberos Authentication. The reason this is important is 
> because when we enable LDAP authentication HS2 stops accepting delegation 
> token authentication. So we are forced to enter username passwords in the 
> oozie configuration.
> The whole idea of SASL is that multiple authentication mechanism can be 
> offered. If we disable Kerberos(GSSAPI) and delegation token (DIGEST) 
> authentication when we enable LDAP authentication, this defeats SASL purpose.





[jira] [Updated] (HIVE-13015) Add dependency convergence for SLF4J

2016-02-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13015:

Assignee: Prasanth Jayachandran

> Add dependency convergence for SLF4J
> 
>
> Key: HIVE-13015
> URL: https://issues.apache.org/jira/browse/HIVE-13015
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>
> In some of the recent test runs, we are seeing multiple bindings for SLF4j 
> that causes issues with LOG4j2 logger. 
> {code}
> SLF4J: Found binding in 
> [jar:file:/grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1454694331819_0001/container_e06_1454694331819_0001_01_02/app/install/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> {code}
> We have added explicit exclusions for slf4j-log4j12 but some library is 
> pulling it transitively and it's getting packaged with hive libs. Also hive 
> currently uses version 1.7.5 for slf4j. We should add dependency convergence 
> for slf4j and also remove packaging of slf4j-log4j12.*.jar.





[jira] [Commented] (HIVE-9862) Vectorized execution corrupts timestamp values

2016-02-05 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135379#comment-15135379
 ] 

Jason Dere commented on HIVE-9862:
--

Didn't see the test report in time. Were any of the failures related to this 
patch, or were they all previously occurring failures?

> Vectorized execution corrupts timestamp values
> --
>
> Key: HIVE-9862
> URL: https://issues.apache.org/jira/browse/HIVE-9862
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 1.0.0
>Reporter: Nathan Howell
>Assignee: Matt McCline
> Attachments: HIVE-9862.01.patch, HIVE-9862.02.patch, 
> HIVE-9862.03.patch, HIVE-9862.04.patch, HIVE-9862.05.patch, 
> HIVE-9862.06.patch, HIVE-9862.07.patch, HIVE-9862.08.patch
>
>
> Timestamps in the future (year 2250?) and before ~1700 are silently corrupted 
> in vectorized execution mode. Simple repro:
> {code}
> hive> DROP TABLE IF EXISTS test;
> hive> CREATE TABLE test(ts TIMESTAMP) STORED AS ORC;
> hive> INSERT INTO TABLE test VALUES ('-12-31 23:59:59');
> hive> SET hive.vectorized.execution.enabled = false;
> hive> SELECT MAX(ts) FROM test;
> -12-31 23:59:59
> hive> SET hive.vectorized.execution.enabled = true;
> hive> SELECT MAX(ts) FROM test;
> 1816-03-30 05:56:07.066277376
> {code}





[jira] [Updated] (HIVE-9862) Vectorized execution corrupts timestamp values

2016-02-05 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-9862:
---
Attachment: HIVE-9862.09.patch

Rebased and re-submitted.

> Vectorized execution corrupts timestamp values
> --
>
> Key: HIVE-9862
> URL: https://issues.apache.org/jira/browse/HIVE-9862
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 1.0.0
>Reporter: Nathan Howell
>Assignee: Matt McCline
> Attachments: HIVE-9862.01.patch, HIVE-9862.02.patch, 
> HIVE-9862.03.patch, HIVE-9862.04.patch, HIVE-9862.05.patch, 
> HIVE-9862.06.patch, HIVE-9862.07.patch, HIVE-9862.08.patch, HIVE-9862.09.patch
>
>
> Timestamps in the future (year 2250?) and before ~1700 are silently corrupted 
> in vectorized execution mode. Simple repro:
> {code}
> hive> DROP TABLE IF EXISTS test;
> hive> CREATE TABLE test(ts TIMESTAMP) STORED AS ORC;
> hive> INSERT INTO TABLE test VALUES ('-12-31 23:59:59');
> hive> SET hive.vectorized.execution.enabled = false;
> hive> SELECT MAX(ts) FROM test;
> -12-31 23:59:59
> hive> SET hive.vectorized.execution.enabled = true;
> hive> SELECT MAX(ts) FROM test;
> 1816-03-30 05:56:07.066277376
> {code}





[jira] [Commented] (HIVE-12918) LLAP should never create embedded metastore when localizing functions

2016-02-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135271#comment-15135271
 ] 

Sergey Shelukhin commented on HIVE-12918:
-

Heh, the patch is successful beyond the wildest expectations: it disallowed 
MiniLlap from using the metastore. Great.

> LLAP should never create embedded metastore when localizing functions
> -
>
> Key: HIVE-12918
> URL: https://issues.apache.org/jira/browse/HIVE-12918
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Siddharth Seth
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12918.01.patch, HIVE-12918.02.patch, 
> HIVE-12918.patch
>
>
> {code}
> 16/01/24 21:29:02 INFO service.AbstractService: Service LlapDaemon failed in 
> state INITED; cause: java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1552)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3110)
>   at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3130)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3355)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.FunctionLocalizer.startLocalizeAllFunctions(FunctionLocalizer.java:88)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.serviceInit(LlapDaemon.java:244)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:323)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1550)
>   ... 10 more
> Caused by: java.lang.NoClassDefFoundError: org/datanucleus/NucleusContext
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getClass(MetaStoreUtils.java:1517)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:61)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:568)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:533)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:595)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:387)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5935)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:221)
>   at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:67)
>   ... 15 more
> Caused by: java.lang.ClassNotFoundException: org.datanucleus.NucleusContext
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> {code}
> Looks like the DataNucleus jar is not added to the LLAP classpath. This 
> appears to be caused by HIVE-12853





[jira] [Updated] (HIVE-13017) A child process of MR job needs delegation token from AZURE file system

2016-02-05 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-13017:

Target Version/s: 1.2.2  (was: 1.2.1)

> A child process of MR job needs delegation token from AZURE file system
> ---
>
> Key: HIVE-13017
> URL: https://issues.apache.org/jira/browse/HIVE-13017
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 1.2.1
> Environment: Secure 
>Reporter: Takahiko Saito
>Assignee: Sushanth Sowmyan
>
> The following query fails:
> {noformat}
> >>>  create temporary table s10k stored as orc as select * from studenttab10k;
> >>>  create temporary table v10k as select * from votertab10k;
> >>>  select registration 
> from s10k s join v10k v 
> on (s.name = v.name) join studentparttab30k p 
> on (p.name = v.name) 
> where s.age < 25 and v.age < 25 and p.age < 25;
> ERROR : Execution failed with exit status: 2
> ERROR : Obtaining error information
> ERROR : 
> Task failed!
> Task ID:
>   Stage-5
> Logs:
> ERROR : /var/log/hive/hiveServer2.log
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask (state=08S01,code=2)
> Aborting command set because "force" is false and command failed: "select 
> registration 
> from s10k s join v10k v 
> on (s.name = v.name) join studentparttab30k p 
> on (p.name = v.name) 
> where s.age < 25 and v.age < 25 and p.age < 25;"
> Closing: 0: 
> jdbc:hive2://zk2-hs21-h.hdinsight.net:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_h...@hdinsight.net;transportMode=http;httpPath=cliservice
> hiveServer2.log shows:
> 2016-02-02 18:04:34,182 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,199 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,212 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(127)) - Could not 
> validate cookie sent, will try to generate a new cookie
> 2016-02-02 18:04:34,213 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:checkConcurrency(168)) - Concurrency mode is disabled, 
> not creating a lock manager
> 2016-02-02 18:04:34,219 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doKerberosAuth(352)) - 
> Failed to authenticate with http/_HOST kerberos principal, trying with 
> hive/_HOST kerberos principal
> 2016-02-02 18:04:34,219 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,225 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:execute(1390)) - Setting caller context to query id 
> hive_20160202180429_76ab-64d6-4c89-88b0-6355cc5acbd0
> 2016-02-02 18:04:34,226 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:execute(1393)) - Starting 
> command(queryId=hive_20160202180429_76ab-64d6-4c89-88b0-6355cc5acbd0): 
> select registration
> from s10k s join v10k v
> on (s.name = v.name) join studentparttab30k p
> on (p.name = v.name)
> where s.age < 25 and v.age < 25 and p.age < 25
> 2016-02-02 18:04:34,228 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> hooks.ATSHook (ATSHook.java:<init>(90)) - Created ATS Hook
> 2016-02-02 18:04:34,229 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=PreHook.org.apache.hadoop.hive.ql.hooks.ATSHook 
> from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,237 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(169)) - Cookie added 
> for clientUserName hrt_qa
> 2016-02-02 18:04:34,238 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogEnd(162)) - </PERFLOG method=PreHook.org.apache.hadoop.hive.ql.hooks.ATSHook start=1454436274229 
> end=1454436274238 duration=9 from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,239 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=PreHook.org.apache.hadoop.hive.ql.security.authorization.plugin.DisallowTransformHook
>  from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,240 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogEnd(162)) -  

[jira] [Updated] (HIVE-13017) A child process of MR job needs delegation token from AZURE file system

2016-02-05 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-13017:

Target Version/s: 1.2.2, 2.1.0  (was: 1.2.2)

> A child process of MR job needs delegation token from AZURE file system
> ---
>
> Key: HIVE-13017
> URL: https://issues.apache.org/jira/browse/HIVE-13017
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 1.2.1
> Environment: Secure 
>Reporter: Takahiko Saito
>Assignee: Sushanth Sowmyan
>
> The following query fails:
> {noformat}
> >>>  create temporary table s10k stored as orc as select * from studenttab10k;
> >>>  create temporary table v10k as select * from votertab10k;
> >>>  select registration 
> from s10k s join v10k v 
> on (s.name = v.name) join studentparttab30k p 
> on (p.name = v.name) 
> where s.age < 25 and v.age < 25 and p.age < 25;
> ERROR : Execution failed with exit status: 2
> ERROR : Obtaining error information
> ERROR : 
> Task failed!
> Task ID:
>   Stage-5
> Logs:
> ERROR : /var/log/hive/hiveServer2.log
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask (state=08S01,code=2)
> Aborting command set because "force" is false and command failed: "select 
> registration 
> from s10k s join v10k v 
> on (s.name = v.name) join studentparttab30k p 
> on (p.name = v.name) 
> where s.age < 25 and v.age < 25 and p.age < 25;"
> Closing: 0: 
> jdbc:hive2://zk2-hs21-h.hdinsight.net:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_h...@hdinsight.net;transportMode=http;httpPath=cliservice
> hiveServer2.log shows:
> 2016-02-02 18:04:34,182 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) -  method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,199 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) -  method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,212 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(127)) - Could not 
> validate cookie sent, will try to generate a new cookie
> 2016-02-02 18:04:34,213 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:checkConcurrency(168)) - Concurrency mode is disabled, 
> not creating a lock manager
> 2016-02-02 18:04:34,219 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doKerberosAuth(352)) - 
> Failed to authenticate with http/_HOST kerberos principal, trying with 
> hive/_HOST kerberos principal
> 2016-02-02 18:04:34,219 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) -  method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,225 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:execute(1390)) - Setting caller context to query id 
> hive_20160202180429_76ab-64d6-4c89-88b0-6355cc5acbd0
> 2016-02-02 18:04:34,226 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:execute(1393)) - Starting 
> command(queryId=hive_20160202180429_76ab-64d6-4c89-88b0-6355cc5acbd0): 
> select registration
> from s10k s join v10k v
> on (s.name = v.name) join studentparttab30k p
> on (p.name = v.name)
> where s.age < 25 and v.age < 25 and p.age < 25
> 2016-02-02 18:04:34,228 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> hooks.ATSHook (ATSHook.java:(90)) - Created ATS Hook
> 2016-02-02 18:04:34,229 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) -  method=PreHook.org.apache.hadoop.hive.ql.hooks.ATSHook 
> from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,237 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(169)) - Cookie added 
> for clientUserName hrt_qa
> 2016-02-02 18:04:34,238 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogEnd(162)) -  method=PreHook.org.apache.hadoop.hive.ql.hooks.ATSHook start=1454436274229 
> end=1454436274238 duration=9 from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,239 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) -  method=PreHook.org.apache.hadoop.hive.ql.security.authorization.plugin.DisallowTransformHook
>  from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,240 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogEnd(162)) -  

[jira] [Updated] (HIVE-13016) ORC FileDump recovery utility fails in Windows

2016-02-05 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-13016:
-
Attachment: HIVE-13016.2.patch

Also changes the default tmp directory.

> ORC FileDump recovery utility fails in Windows
> --
>
> Key: HIVE-13016
> URL: https://issues.apache.org/jira/browse/HIVE-13016
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.3.0, 2.1.0
>Reporter: Jason Dere
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-13016.1.patch, HIVE-13016.2.patch
>
>
> org.apache.hive.hcatalog.streaming.TestStreaming.testFileDumpCorruptDataFiles
> org.apache.hive.hcatalog.streaming.TestStreaming.testFileDumpCorruptSideFiles
> java.io.IOException: Unable to move 
> file:/E:/hive/hcatalog/streaming/target/tmp/junit4129594478393496260/testing3.db/dimensionTable/delta_001_002/bucket_0
>  to 
> E:/hive/hcatalog/streaming/target/tmp/E:/hive/hcatalog/streaming/target/tmp/junit4129594478393496260/testing3.db/dimensionTable/delta_001_002/bucket_0
> at org.apache.hadoop.hive.ql.io.orc.FileDump.moveFiles(FileDump.java:546)
> at org.apache.hadoop.hive.ql.io.orc.FileDump.recoverFile(FileDump.java:513)
> at org.apache.hadoop.hive.ql.io.orc.FileDump.recoverFiles(FileDump.java:428)
> at org.apache.hadoop.hive.ql.io.orc.FileDump.main(FileDump.java:125)
> at org.apache.hive.hcatalog.streaming.TestStreaming.testFileDumpCorruptSideFiles(TestStreaming.java:1523)
> Note that FileDump appends the full source path to the backup path when 
> trying to recover files (see "E:" in the middle of the destination path).
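The root cause is that the recovery code concatenates the full source path, root and all, onto the backup directory. A minimal sketch of the correct approach, relativizing the source against its filesystem root before resolving it under the backup root (names like {{BackupPathDemo}} and {{backupDest}} are hypothetical, not the actual HIVE-13016 patch):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class BackupPathDemo {
    // Naive concatenation keeps the source's root ("E:/" on Windows, "/" on
    // Unix), producing a malformed destination like "backup/E:/data/file".
    // Stripping the root before resolving avoids that.
    static Path backupDest(Path backupRoot, Path source) {
        Path relative = source.getRoot() == null
            ? source
            : source.getRoot().relativize(source); // drop "E:\" or "/"
        return backupRoot.resolve(relative);
    }

    public static void main(String[] args) {
        Path dest = backupDest(Paths.get("/tmp/backup"),
                               Paths.get("/data/warehouse/bucket_0"));
        System.out.println(dest); // /tmp/backup/data/warehouse/bucket_0
    }
}
```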



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12976) MetaStoreDirectSql doesn't batch IN lists in all cases

2016-02-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12976:

Fix Version/s: 2.1.0

> MetaStoreDirectSql doesn't batch IN lists in all cases
> --
>
> Key: HIVE-12976
> URL: https://issues.apache.org/jira/browse/HIVE-12976
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Fix For: 2.1.0
>
> Attachments: HIVE-12976.01.patch, HIVE-12976.02.patch, 
> HIVE-12976.patch
>
>
> Because IN lists are not batched in all cases, RDBMS products with parameter 
> limits cannot run the generated queries. I hope HBase metastore comes soon and 
> delivers us from Oracle! For now, though, we have to fix this.
> {code}
> hive> select * from lineitem where l_orderkey = 121201;
> Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The incoming 
> request has too many parameters. The server supports a maximum of 2100 
> parameters. Reduce the number of parameters and resend the request.
> at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:215)
>  ~[sqljdbc41.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1635)
>  ~[sqljdbc41.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:426)
>  ~[sqljdbc41.jar:?]
> {code}
> {code}
> at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
>  ~[datanucleus-api-jdo-4.2.1.jar:?]
> at 
> org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388) 
> ~[datanucleus-api-jdo-4.2.1.jar:?]
> at 
> org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:264) 
> ~[datanucleus-api-jdo-4.2.1.jar:?]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1681)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.partsFoundForPartitions(MetaStoreDirectSql.java:1266)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.aggrColStatsForPartitions(MetaStoreDirectSql.java:1196)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$9.getSqlResult(ObjectStore.java:6742)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$9.getSqlResult(ObjectStore.java:6738)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2525)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.get_aggr_stats_for(ObjectStore.java:6738)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> {code}
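The fix is to split long IN lists into chunks that stay under the driver's parameter limit and issue one query per chunk. A minimal sketch of that batching step (class and method names are hypothetical, not the actual MetaStoreDirectSql code):

```java
import java.util.ArrayList;
import java.util.List;

public class InListBatcher {
    // Split a parameter list into chunks of at most maxBatch elements, so
    // each generated "IN (?, ?, ...)" clause stays under the driver's limit
    // (e.g. SQL Server allows at most 2100 parameters per request).
    static <T> List<List<T>> batch(List<T> params, int maxBatch) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < params.size(); i += maxBatch) {
            batches.add(params.subList(i, Math.min(i + maxBatch, params.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 5000; i++) ids.add(i);
        List<List<Integer>> batches = batch(ids, 2000);
        // 5000 parameters split as 2000 + 2000 + 1000
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 1000
    }
}
```

The caller then runs the statement once per batch and unions the results in memory, keeping every individual request under the backend's limit.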



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13018) On branch-1 "RuntimeException: Vectorization is not supported for datatype:LIST"

2016-02-05 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13018:

Attachment: HIVE-13018.1-branch1.patch

> On branch-1 "RuntimeException: Vectorization is not supported for 
> datatype:LIST"
> 
>
> Key: HIVE-13018
> URL: https://issues.apache.org/jira/browse/HIVE-13018
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.3.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-13018.1-branch1.patch
>
>
> The Schema Evolution backport needs to include the new complex ColumnVector 
> types even though we don't actually allow using those types in vectorization, 
> except for one edge case: count(*) queries are still vectorized with complex 
> data types, since those columns are not referenced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13018) On branch-1 "RuntimeException: Vectorization is not supported for datatype:LIST"

2016-02-05 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135422#comment-15135422
 ] 

Prasanth Jayachandran commented on HIVE-13018:
--

LGTM, +1

> On branch-1 "RuntimeException: Vectorization is not supported for 
> datatype:LIST"
> 
>
> Key: HIVE-13018
> URL: https://issues.apache.org/jira/browse/HIVE-13018
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.3.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-13018.1-branch1.patch
>
>
> The Schema Evolution backport needs to include the new complex ColumnVector 
> types even though we don't actually allow using those types in vectorization, 
> except for one edge case: count(*) queries are still vectorized with complex 
> data types, since those columns are not referenced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11866) Add framework to enable testing using LDAPServer using LDAP protocol

2016-02-05 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-11866:
-
Attachment: HIVE-11866.3.patch

Attaching a patch that uses Apache Directory Service instead.

> Add framework to enable testing using LDAPServer using LDAP protocol
> 
>
> Key: HIVE-11866
> URL: https://issues.apache.org/jira/browse/HIVE-11866
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.3.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-11866.2.patch, HIVE-11866.3.patch, HIVE-11866.patch
>
>
> Currently there is no unit test coverage for HS2's LDAP Atn provider using an 
> LDAP server on the backend. This prevents testing of the LDAPAtnProvider with 
> some realistic use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11866) Add framework to enable testing using LDAPServer using LDAP protocol

2016-02-05 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-11866:
-
Attachment: HIVE-11866.3.patch

> Add framework to enable testing using LDAPServer using LDAP protocol
> 
>
> Key: HIVE-11866
> URL: https://issues.apache.org/jira/browse/HIVE-11866
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.3.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-11866.2.patch, HIVE-11866.3.patch, HIVE-11866.patch
>
>
> Currently there is no unit test coverage for HS2's LDAP Atn provider using an 
> LDAP server on the backend. This prevents testing of the LDAPAtnProvider with 
> some realistic use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11866) Add framework to enable testing using LDAPServer using LDAP protocol

2016-02-05 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-11866:
-
Attachment: (was: HIVE-11866.3.patch)

> Add framework to enable testing using LDAPServer using LDAP protocol
> 
>
> Key: HIVE-11866
> URL: https://issues.apache.org/jira/browse/HIVE-11866
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.3.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-11866.2.patch, HIVE-11866.3.patch, HIVE-11866.patch
>
>
> Currently there is no unit test coverage for HS2's LDAP Atn provider using an 
> LDAP server on the backend. This prevents testing of the LDAPAtnProvider with 
> some realistic use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12987) Add metrics for HS2 active users and SQL operations

2016-02-05 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15135580#comment-15135580
 ] 

Jimmy Xiang commented on HIVE-12987:


Do you mean the user-queries map? It is used to track unique active users. It is 
a metric emitted to JMX (or to a file) so that operators can find out how many 
distinct users are executing queries at a given time. The map is needed so that 
a user running several queries concurrently is not counted more than once.
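The idea above can be sketched as a per-user counter of open queries, where the reported gauge is the number of map entries (class and method names here are hypothetical, not the actual HIVE-12987 patch):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ActiveUserTracker {
    // user -> number of queries that user currently has running.
    // The gauge reported to JMX is the map's size, so a user running
    // several concurrent queries is still counted only once.
    private final ConcurrentHashMap<String, Integer> openQueries =
        new ConcurrentHashMap<>();

    void queryStarted(String user) {
        openQueries.merge(user, 1, Integer::sum);
    }

    void queryFinished(String user) {
        // Decrement; remove the entry entirely when the count reaches zero.
        openQueries.computeIfPresent(user, (u, n) -> n > 1 ? n - 1 : null);
    }

    int activeUsers() {
        return openQueries.size();
    }

    public static void main(String[] args) {
        ActiveUserTracker t = new ActiveUserTracker();
        t.queryStarted("alice");
        t.queryStarted("alice");
        t.queryStarted("bob");
        System.out.println(t.activeUsers()); // 2
        t.queryFinished("alice");
        System.out.println(t.activeUsers()); // 2 (alice still has one query)
        t.queryFinished("alice");
        System.out.println(t.activeUsers()); // 1
    }
}
```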

> Add metrics for HS2 active users and SQL operations
> ---
>
> Key: HIVE-12987
> URL: https://issues.apache.org/jira/browse/HIVE-12987
> Project: Hive
>  Issue Type: Task
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: HIVE-12987.1.patch, HIVE-12987.2.patch, 
> HIVE-12987.2.patch, HIVE-12987.3.patch
>
>
> HIVE-12271 added metrics for all HS2 operations. Sometimes, users are also 
> interested in metrics just for SQL operations.
> It is useful to track active user count as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

