[jira] [Commented] (HIVE-11636) NPE in stats conversion with HBase metastore

2015-08-26 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715283#comment-14715283
 ] 

Sergey Shelukhin commented on HIVE-11636:
-----------------------------------------

Hmm.. sure. It could fix some tests, who knows :)

 NPE in stats conversion with HBase metastore
 

 Key: HIVE-11636
 URL: https://issues.apache.org/jira/browse/HIVE-11636
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-11636.01.patch, HIVE-11636.patch


 NO PRECOMMIT TESTS
 {noformat}
 2015-08-24T20:37:22,285 ERROR [main]: ql.Driver 
 (SessionState.java:printError(963)) - FAILED: NullPointerException null
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.convertColStats(StatsUtils.java:740)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.getTableColumnStats(StatsUtils.java:731)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:186)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:139)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:127)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:110)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
 at 
 org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:56)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:78)
 at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsAnnotation(TezCompiler.java:249)
 at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:123)
 at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:102)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10219)
 at 
 org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:212)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:240)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:434)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:310)
 {noformat}
 Fails after importing some databases from regular metastore and running TPCDS 
 Q27.
 Simple select-where-limit query (not FetchTask) appears to run fine.
 With standalone HBase metastore (might be the same issue):
 {noformat}
 2015-08-25 14:41:04,793 ERROR [pool-6-thread-53] server.TThreadPoolServer: 
 Thrift error occurred during processing of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'colStats' is 
 unset! Struct:AggrStats(colStats:null, partsFound:0)
 at 
 org.apache.hadoop.hive.metastore.api.AggrStats.validate(AggrStats.java:393)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result.validate(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result$get_aggr_stats_for_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result$get_aggr_stats_for_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result.write(ThriftHiveMetastore.java)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1655)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 ...
 {noformat}
 

[jira] [Updated] (HIVE-11642) LLAP: make sure tests pass #3

2015-08-26 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-11642:

Attachment: HIVE-11642.02.patch

Again

 LLAP: make sure tests pass #3
 -----------------------------

 Key: HIVE-11642
 URL: https://issues.apache.org/jira/browse/HIVE-11642
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-11642.01.patch, HIVE-11642.02.patch, 
 HIVE-11642.patch


 Tests should pass against the most recent branch and Tez 0.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-08-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715436#comment-14715436
 ] 

Hive QA commented on HIVE-11383:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12752531/HIVE-11383.13.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 9377 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5077/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5077/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5077/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12752531 - PreCommit-HIVE-TRUNK-Build

 Upgrade Hive to Calcite 1.4
 ---------------------------

 Key: HIVE-11383
 URL: https://issues.apache.org/jira/browse/HIVE-11383
 Project: Hive
  Issue Type: Bug
Reporter: Julian Hyde
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
 HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.13.patch, 
 HIVE-11383.2.patch, HIVE-11383.3.patch, HIVE-11383.3.patch, 
 HIVE-11383.3.patch, HIVE-11383.4.patch, HIVE-11383.5.patch, 
 HIVE-11383.6.patch, HIVE-11383.7.patch, HIVE-11383.8.patch, 
 HIVE-11383.8.patch, HIVE-11383.9.patch


 CLEAR LIBRARY CACHE
 Upgrade Hive to Calcite 1.4.0-incubating.
 There is currently a snapshot release, which is close to what will be in 1.4. 
 I have checked that Hive compiles against the new snapshot, fixing one issue. 
 The patch is attached.
 Next step is to validate that Hive runs against the new Calcite, and post any 
 issues to the Calcite list or log Calcite Jira cases. [~jcamachorodriguez], 
 can you please do that?
 [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
 the new Calcite version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11636) NPE in stats conversion with HBase metastore

2015-08-26 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715274#comment-14715274
 ] 

Alan Gates commented on HIVE-11636:
-----------------------------------

+1, patch looks fine.  I want to hold off on other checkins until we get 
HIVE-11654 resolved since we don't know what does and doesn't pass right now.

 NPE in stats conversion with HBase metastore
 

 Key: HIVE-11636
 URL: https://issues.apache.org/jira/browse/HIVE-11636
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-11636.01.patch, HIVE-11636.patch


 NO PRECOMMIT TESTS
 {noformat}
 2015-08-24T20:37:22,285 ERROR [main]: ql.Driver 
 (SessionState.java:printError(963)) - FAILED: NullPointerException null
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.convertColStats(StatsUtils.java:740)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.getTableColumnStats(StatsUtils.java:731)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:186)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:139)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:127)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:110)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
 at 
 org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:56)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:78)
 at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsAnnotation(TezCompiler.java:249)
 at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:123)
 at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:102)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10219)
 at 
 org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:212)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:240)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:434)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:310)
 {noformat}
 Fails after importing some databases from regular metastore and running TPCDS 
 Q27.
 Simple select-where-limit query (not FetchTask) appears to run fine.
 With standalone HBase metastore (might be the same issue):
 {noformat}
 2015-08-25 14:41:04,793 ERROR [pool-6-thread-53] server.TThreadPoolServer: 
 Thrift error occurred during processing of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'colStats' is 
 unset! Struct:AggrStats(colStats:null, partsFound:0)
 at 
 org.apache.hadoop.hive.metastore.api.AggrStats.validate(AggrStats.java:393)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result.validate(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result$get_aggr_stats_for_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result$get_aggr_stats_for_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result.write(ThriftHiveMetastore.java)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1655)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
 ...
 {noformat}
 

[jira] [Updated] (HIVE-10021) Alter index rebuild statements submitted through HiveServer2 fail when Sentry is enabled

2015-08-26 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-10021:

Attachment: HIVE-10021.patch

 Alter index rebuild statements submitted through HiveServer2 fail when 
 Sentry is enabled
 --------------------------------------------------------------------------

 Key: HIVE-10021
 URL: https://issues.apache.org/jira/browse/HIVE-10021
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, Indexing
Affects Versions: 0.13.1
 Environment: CDH 5.3.2
Reporter: Richard Williams
 Attachments: HIVE-10021.patch


 When HiveServer2 is configured to authorize submitted queries and statements 
 through Sentry, any attempt to issue an alter index rebuild statement fails 
 with a SemanticException caused by a NullPointerException. This occurs 
 regardless of whether the index is a compact or bitmap index. 
 The root cause of the problem appears to be the fact that the static 
 createRootTask function in org.apache.hadoop.hive.ql.optimizer.IndexUtils 
 creates a new 
 org.apache.hadoop.hive.ql.Driver object to compile the index builder query, 
 and this new Driver object, unlike the one used by HiveServer2 to compile the 
 submitted statement, is used without having its userName field initialized 
 with the submitting user's username. Adding null checks to the Sentry code is 
 insufficient to solve this problem, because Sentry needs the userName to 
 determine whether or not the submitting user should be able to execute the 
 index rebuild statement.
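
 The failure mode described above reduces to a small sketch: an inner compiler
 driver created without the session user, and an authorization hook that
 dereferences that user. All names here are hypothetical stand-ins for
 illustration, not Hive's or Sentry's actual API:

```java
public class DriverUserPropagation {

    // Minimal stand-in for a compiler driver whose userName an authorization
    // hook consults (hypothetical; not Hive's ql.Driver API).
    public static class Driver {
        private String userName; // stays null unless explicitly set

        public void setUserName(String userName) { this.userName = userName; }

        // A Sentry-like post-analyze hook resolves the submitting user's
        // groups; looking up a null user is what raises the NPE shown in the
        // example stack trace.
        public String resolveUserForAuthz() {
            if (userName == null) {
                throw new NullPointerException("userName not set on inner Driver");
            }
            return userName;
        }
    }

    // The fix pattern: when a task (here, an index rebuild) spawns its own
    // Driver to compile a helper query, propagate the outer session's user.
    public static Driver createRootTaskDriver(String sessionUser) {
        Driver inner = new Driver();
        inner.setUserName(sessionUser); // without this, resolveUserForAuthz() throws
        return inner;
    }

    public static void main(String[] args) {
        System.out.println(createRootTaskDriver("alice").resolveUserForAuthz());
    }
}
```

 As the description notes, null checks in the Sentry code alone cannot fix
 this: the hook genuinely needs the user name, so the propagation has to
 happen where the inner Driver is constructed.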
 Example stack trace from the HiveServer2 logs:
 {noformat}
 FAILED: NullPointerException null
 java.lang.NullPointerException
   at 
 java.util.concurrent.ConcurrentHashMap.hash(ConcurrentHashMap.java:333)
   at 
 java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:988)
   at org.apache.hadoop.security.Groups.getGroups(Groups.java:161)
   at 
 org.apache.sentry.provider.common.HadoopGroupMappingService.getGroups(HadoopGroupMappingService.java:46)
   at 
 org.apache.sentry.binding.hive.authz.HiveAuthzBinding.getGroups(HiveAuthzBinding.java:370)
   at 
 org.apache.sentry.binding.hive.HiveAuthzBindingHook.postAnalyze(HiveAuthzBindingHook.java:314)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:440)
   at 
 org.apache.hadoop.hive.ql.optimizer.IndexUtils.createRootTask(IndexUtils.java:258)
   at 
 org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler.getIndexBuilderMapRedTask(CompactIndexHandler.java:149)
   at 
 org.apache.hadoop.hive.ql.index.TableBasedIndexHandler.generateIndexBuildTaskList(TableBasedIndexHandler.java:67)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.getIndexBuilderMapRed(DDLSemanticAnalyzer.java:1171)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAlterIndexRebuild(DDLSemanticAnalyzer.java:1117)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:410)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:204)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:437)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1026)
   at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1019)
   at 
 org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:100)
   at 
 org.apache.hive.service.cli.operation.SQLOperation.run(SQLOperation.java:173)
   at 
 org.apache.hive.service.cli.session.HiveSessionImpl.runOperationWithLogCapture(HiveSessionImpl.java:715)
   at 
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:370)
   at 
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:357)
   at 
 org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:238)
   at 
 org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:393)
   at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1373)
   at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1358)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
   at org.apache.thrift.server.TServlet.doPost(TServlet.java:83)
   at 
 org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:99)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
  ...
 {noformat}
 

[jira] [Commented] (HIVE-11629) CBO: Calcite Operator To Hive Operator (Calcite Return Path) : fix the filter expressions for full outer join and right outer join

2015-08-26 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715266#comment-14715266
 ] 

Pengcheng Xiong commented on HIVE-11629:


[~jcamachorodriguez], as per [~jpullokkaran]'s request, here is one more that 
needs your review. Thanks.

 CBO: Calcite Operator To Hive Operator (Calcite Return Path) : fix the filter 
 expressions for full outer join and right outer join
 --------------------------------------------------------------------------

 Key: HIVE-11629
 URL: https://issues.apache.org/jira/browse/HIVE-11629
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Attachments: HIVE-11629.01.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11636) NPE in stats conversion with HBase metastore

2015-08-26 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-11636:

Description: 
NO PRECOMMIT TESTS

{noformat}
2015-08-24T20:37:22,285 ERROR [main]: ql.Driver 
(SessionState.java:printError(963)) - FAILED: NullPointerException null
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.convertColStats(StatsUtils.java:740)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.getTableColumnStats(StatsUtils.java:731)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:186)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:139)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:127)
at 
org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:110)
at 
org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
at 
org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:56)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
at 
org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:78)
at 
org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsAnnotation(TezCompiler.java:249)
at 
org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:123)
at 
org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:102)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10219)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:212)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:240)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:434)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:310)
{noformat}

Fails after importing some databases from regular metastore and running TPCDS 
Q27.
Simple select-where-limit query (not FetchTask) appears to run fine.

With standalone HBase metastore (might be the same issue):
{noformat}
2015-08-25 14:41:04,793 ERROR [pool-6-thread-53] server.TThreadPoolServer: 
Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Required field 'colStats' is 
unset! Struct:AggrStats(colStats:null, partsFound:0)
at 
org.apache.hadoop.hive.metastore.api.AggrStats.validate(AggrStats.java:393)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result.validate(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result$get_aggr_stats_for_resultStandardScheme.write(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result$get_aggr_stats_for_resultStandardScheme.write(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result.write(ThriftHiveMetastore.java)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
at 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1655)
at 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

I think I've reported this in the past for the regular metastore, and it was fixed 
somewhere.
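
The two traces above are consistent with a single cause: the HBase metastore
handing back null instead of an empty column-stats list. A minimal,
self-contained sketch of the defensive conversion follows; the class and
method names are hypothetical stand-ins for illustration, not Hive's actual
code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ColStatsGuard {

    // Hypothetical stand-in for a per-column statistics object.
    public static class ColStats {
        public final String colName;
        public ColStats(String colName) { this.colName = colName; }
    }

    // Normalize a possibly-null stats list to a non-null one. Thrift declares
    // 'colStats' as a required field, so returning null (rather than an empty
    // list) fails AggrStats validation on the server side, and a client that
    // iterates the list without a null check hits a NullPointerException, as
    // in the convertColStats trace above.
    public static List<ColStats> convertColStats(List<ColStats> raw) {
        if (raw == null) {
            return Collections.emptyList();
        }
        return new ArrayList<>(raw);
    }

    public static void main(String[] args) {
        System.out.println(convertColStats(null).size()); // prints 0
    }
}
```

Whether the guard belongs on the store side (never return null) or the client
side (tolerate null) is a design choice for the patch; the sketch only
illustrates the pattern.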

  was:
{noformat}
2015-08-24T20:37:22,285 ERROR [main]: ql.Driver 
(SessionState.java:printError(963)) - FAILED: NullPointerException null
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.convertColStats(StatsUtils.java:740)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.getTableColumnStats(StatsUtils.java:731)
...
{noformat}

[jira] [Assigned] (HIVE-10021) Alter index rebuild statements submitted through HiveServer2 fail when Sentry is enabled

2015-08-26 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu reassigned HIVE-10021:
-------------------------------

Assignee: Aihua Xu

 Alter index rebuild statements submitted through HiveServer2 fail when 
 Sentry is enabled
 --------------------------------------------------------------------------

 Key: HIVE-10021
 URL: https://issues.apache.org/jira/browse/HIVE-10021
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, Indexing
Affects Versions: 0.13.1, 2.0.0
 Environment: CDH 5.3.2
Reporter: Richard Williams
Assignee: Aihua Xu
 Attachments: HIVE-10021.patch


 When HiveServer2 is configured to authorize submitted queries and statements 
 through Sentry, any attempt to issue an alter index rebuild statement fails 
 with a SemanticException caused by a NullPointerException. This occurs 
 regardless of whether the index is a compact or bitmap index. 
 The root cause of the problem appears to be the fact that the static 
 createRootTask function in org.apache.hadoop.hive.ql.optimizer.IndexUtils 
 creates a new 
 org.apache.hadoop.hive.ql.Driver object to compile the index builder query, 
 and this new Driver object, unlike the one used by HiveServer2 to compile the 
 submitted statement, is used without having its userName field initialized 
 with the submitting user's username. Adding null checks to the Sentry code is 
 insufficient to solve this problem, because Sentry needs the userName to 
 determine whether or not the submitting user should be able to execute the 
 index rebuild statement.
 Example stack trace from the HiveServer2 logs:
 {noformat}
 FAILED: NullPointerException null
 java.lang.NullPointerException
   at 
 java.util.concurrent.ConcurrentHashMap.hash(ConcurrentHashMap.java:333)
   at 
 java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:988)
   at org.apache.hadoop.security.Groups.getGroups(Groups.java:161)
   at 
 org.apache.sentry.provider.common.HadoopGroupMappingService.getGroups(HadoopGroupMappingService.java:46)
   at 
 org.apache.sentry.binding.hive.authz.HiveAuthzBinding.getGroups(HiveAuthzBinding.java:370)
   at 
 org.apache.sentry.binding.hive.HiveAuthzBindingHook.postAnalyze(HiveAuthzBindingHook.java:314)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:440)
   at 
 org.apache.hadoop.hive.ql.optimizer.IndexUtils.createRootTask(IndexUtils.java:258)
   at 
 org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler.getIndexBuilderMapRedTask(CompactIndexHandler.java:149)
   at 
 org.apache.hadoop.hive.ql.index.TableBasedIndexHandler.generateIndexBuildTaskList(TableBasedIndexHandler.java:67)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.getIndexBuilderMapRed(DDLSemanticAnalyzer.java:1171)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAlterIndexRebuild(DDLSemanticAnalyzer.java:1117)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:410)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:204)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:437)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1026)
   at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1019)
   at 
 org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:100)
   at 
 org.apache.hive.service.cli.operation.SQLOperation.run(SQLOperation.java:173)
   at 
 org.apache.hive.service.cli.session.HiveSessionImpl.runOperationWithLogCapture(HiveSessionImpl.java:715)
   at 
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:370)
   at 
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:357)
   at 
 org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:238)
   at 
 org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:393)
   at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1373)
   at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1358)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
   at org.apache.thrift.server.TServlet.doPost(TServlet.java:83)
   at 
 org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:99)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
  ...
 {noformat}

[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715177#comment-14715177
 ] 

Andrew Purtell commented on HIVE-10990:
---------------------------------------

bq. I think you meant major release versions Nick Dimiduk. 
Never mind; I reviewed that section of the book, and yes, the matrix marks minor 
release versions as potentially incompatible as well. Note that this is the degree of 
freedom the HBase developers have decided to advertise as possible, not a 
guarantee that such breakage will happen. I would be interested in helping you 
address post-1.0 ABI issues if they arise. 

 Compatibility Hive-1.2 an hbase-1.0.1.1
 ---------------------------------------

 Key: HIVE-10990
 URL: https://issues.apache.org/jira/browse/HIVE-10990
 Project: Hive
  Issue Type: Bug
  Components: Beeline, HBase Handler, HiveServer2
Affects Versions: 1.2.0
Reporter: gurmukh singh
Assignee: Swarnim Kulkarni

 Hive external table works fine with Hbase.
 Hive-1.2 and hbase-1.0.1.1, hadoop-2.5.2
 Not able to create a table from hive in hbase.
 1: jdbc:hive2://edge1.dilithium.com:1/def> CREATE TABLE hbase_table_1(key int, value string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES 
 ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 Error: Error while processing statement: FAILED: Execution Error, return code 
 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
  (state=08S01,code=1)
 [hdfs@edge1 cluster]$ hive
 2015-06-12 17:56:49,952 WARN  [main] conf.HiveConf: HiveConf of name 
 hive.metastore.local does not exist
 Logging initialized using configuration in 
 jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/auxlib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 hive> CREATE TABLE hbase_table_1(key int, value string)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
  TBLPROPERTIES ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 ===
 scan complete in 1535ms
 14 driver classes found
 Compliant  Version  Driver Class
 no         5.1      com.mysql.jdbc.Driver
 no         5.1      com.mysql.jdbc.NonRegisteringDriver
 no         5.1      com.mysql.jdbc.NonRegisteringReplicationDriver
 no         5.1      com.mysql.jdbc.ReplicationDriver
 yes        1.2      org.apache.calcite.avatica.remote.Driver
 yes        1.2      org.apache.calcite.jdbc.Driver
 yes        1.0      org.apache.commons.dbcp.PoolingDriver
 yes        10.11    org.apache.derby.jdbc.AutoloadedDriver
 yes        10.11    org.apache.derby.jdbc.Driver42
 yes        10.11    org.apache.derby.jdbc.EmbeddedDriver
 yes        10.11    org.apache.derby.jdbc.InternalDriver
 no         1.2      org.apache.hive.jdbc.HiveDriver
 yes        1.0      org.datanucleus.store.rdbms.datasource.dbcp.PoolingDriver
 no         5.1      org.gjt.mm.mysql.Driver



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11654) After latest merge, HBase metastore tests failing

2015-08-26 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715252#comment-14715252
 ] 

Alan Gates commented on HIVE-11654:
-----------------------------------

After more searching around, it isn't the latest merge but HIVE-10289 that 
broke these tests.

The broken tests are: TestHBaseStore, TestHBaseAggregateStatsCache, 
TestHBaseAggrStatsCacheIntegration, TestHBaseMetastoreSql, and 
TestHBaseStoreIntegration

 After latest merge, HBase metastore tests failing
 -

 Key: HIVE-11654
 URL: https://issues.apache.org/jira/browse/HIVE-11654
 Project: Hive
  Issue Type: Bug
  Components: HBase Metastore
Reporter: Alan Gates
Assignee: Alan Gates
Priority: Blocker

 After the latest merge from trunk a number of the HBase unit tests are 
 failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11654) After HIVE-10289, HBase metastore tests failing

2015-08-26 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-11654:
--
Assignee: Daniel Dai  (was: Alan Gates)

 After HIVE-10289, HBase metastore tests failing
 ---

 Key: HIVE-11654
 URL: https://issues.apache.org/jira/browse/HIVE-11654
 Project: Hive
  Issue Type: Bug
  Components: HBase Metastore
Reporter: Alan Gates
Assignee: Daniel Dai
Priority: Blocker

 After the latest merge from trunk a number of the HBase unit tests are 
 failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11654) After HIVE-10289, HBase metastore tests failing

2015-08-26 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-11654:
--
Summary: After HIVE-10289, HBase metastore tests failing  (was: After 
latest merge, HBase metastore tests failing)

 After HIVE-10289, HBase metastore tests failing
 ---

 Key: HIVE-11654
 URL: https://issues.apache.org/jira/browse/HIVE-11654
 Project: Hive
  Issue Type: Bug
  Components: HBase Metastore
Reporter: Alan Gates
Assignee: Alan Gates
Priority: Blocker

 After the latest merge from trunk a number of the HBase unit tests are 
 failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-11656) LLAP: merge master into branch

2015-08-26 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-11656.
-
Resolution: Fixed

 LLAP: merge master into branch
 --

 Key: HIVE-11656
 URL: https://issues.apache.org/jira/browse/HIVE-11656
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: llap






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11634) Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)

2015-08-26 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-11634:
-
Attachment: HIVE-11634.3.patch

Added a test case. Improved patch #2 to support the optimization when the 
expression contains only constants, virtual columns, or partition columns.

Thanks
Hari

 Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)
 --

 Key: HIVE-11634
 URL: https://issues.apache.org/jira/browse/HIVE-11634
 Project: Hive
  Issue Type: Bug
  Components: CBO
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-11634.1.patch, HIVE-11634.2.patch, 
 HIVE-11634.3.patch


 Currently, we do not support partition pruning for the following scenario
 {code}
 create table pcr_t1 (key int, value string) partitioned by (ds string);
 insert overwrite table pcr_t1 partition (ds='2000-04-08') select * from src 
 where key < 20 order by key;
 insert overwrite table pcr_t1 partition (ds='2000-04-09') select * from src 
 where key < 20 order by key;
 insert overwrite table pcr_t1 partition (ds='2000-04-10') select * from src 
 where key < 20 order by key;
 explain extended select ds from pcr_t1 where struct(ds, key) in 
 (struct('2000-04-08',1), struct('2000-04-09',2));
 {code}
 If we run the above query, we see that all the partitions of table pcr_t1 are 
 present in the filter predicate, whereas partition (ds='2000-04-10') could be 
 pruned. 
 The optimization is to rewrite the above query into the following.
 {code}
 explain extended select ds from pcr_t1 where  (struct(ds)) IN 
 (struct('2000-04-08'), struct('2000-04-09')) and  struct(ds, key) in 
 (struct('2000-04-08',1), struct('2000-04-09',2));
 {code}
 The predicate (struct(ds)) IN (struct('2000-04-08'), struct('2000-04-09')) 
 is used by the partition pruner to prune partitions which otherwise would not 
 be pruned.
 This is an extension of the idea presented in HIVE-11573.
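 The projection step behind this rewrite can be sketched outside Hive. The 
 helper below is a hypothetical illustration (plain Python, not Hive's planner 
 code; `derive_partition_predicate` and its parameters are invented names) of 
 deriving the partition-column-only IN list from the struct constants:

```python
# Hypothetical sketch of the rewrite's projection step: project each
# struct constant in the IN list onto the partition columns only.
def derive_partition_predicate(columns, part_cols, in_tuples):
    part_idx = [i for i, c in enumerate(columns) if c in part_cols]
    # Distinct projections become the extra struct(ds) IN (...) predicate.
    projected = {tuple(t[i] for i in part_idx) for t in in_tuples}
    return sorted(projected)

# struct(ds, key) IN (struct('2000-04-08', 1), struct('2000-04-09', 2))
preds = derive_partition_predicate(
    ["ds", "key"], {"ds"},
    [("2000-04-08", 1), ("2000-04-09", 2)])
print(preds)  # [('2000-04-08',), ('2000-04-09',)]
# ds='2000-04-10' appears in neither projection, so the pruner can drop it.
```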



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11636) NPE in stats conversion with HBase metastore

2015-08-26 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715269#comment-14715269
 ] 

Sergey Shelukhin commented on HIVE-11636:
-

[~thejas] maybe you can take a look :) small patch

 NPE in stats conversion with HBase metastore
 

 Key: HIVE-11636
 URL: https://issues.apache.org/jira/browse/HIVE-11636
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-11636.01.patch, HIVE-11636.patch


 NO PRECOMMIT TESTS
 {noformat}
 2015-08-24T20:37:22,285 ERROR [main]: ql.Driver 
 (SessionState.java:printError(963)) - FAILED: NullPointerException null
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.convertColStats(StatsUtils.java:740)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.getTableColumnStats(StatsUtils.java:731)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:186)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:139)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:127)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:110)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
 at 
 org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:56)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:78)
 at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsAnnotation(TezCompiler.java:249)
 at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:123)
 at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:102)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10219)
 at 
 org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:212)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:240)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:434)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:310)
 {noformat}
 Fails after importing some databases from regular metastore and running TPCDS 
 Q27.
 Simple select-where-limit query (not FetchTask) appears to run fine.
 With standalone Hbase metastore (might be the same issue):
 {noformat}
 2015-08-25 14:41:04,793 ERROR [pool-6-thread-53] server.TThreadPoolServer: 
 Thrift error occurred during processing of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'colStats' is 
 unset! Struct:AggrStats(colStats:null, partsFound:0)
 at 
 org.apache.hadoop.hive.metastore.api.AggrStats.validate(AggrStats.java:393)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result.validate(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result$get_aggr_stats_for_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result$get_aggr_stats_for_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_aggr_stats_for_result.write(ThriftHiveMetastore.java)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1655)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
 

[jira] [Resolved] (HIVE-11655) clean build on the branch appears to be broken

2015-08-26 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-11655.
-
   Resolution: Fixed
Fix Version/s: llap

 clean build on the branch appears to be broken
 --

 Key: HIVE-11655
 URL: https://issues.apache.org/jira/browse/HIVE-11655
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: llap






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11531) Add mysql-style LIMIT support to Hive, or improve ROW_NUMBER performance-wise

2015-08-26 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715263#comment-14715263
 ] 

Sergey Shelukhin commented on HIVE-11531:
-

I think it's already assigned to you :) Thanks for the interest and let me know 
if you have questions.

 Add mysql-style LIMIT support to Hive, or improve ROW_NUMBER performance-wise
 -

 Key: HIVE-11531
 URL: https://issues.apache.org/jira/browse/HIVE-11531
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Hui Zheng

 For any UIs that involve pagination, it is useful to issue queries in the 
 form SELECT ... LIMIT X,Y where X,Y are coordinates inside the result to be 
 paginated (which can be extremely large by itself). At present, ROW_NUMBER 
 can be used to achieve this effect, but optimizations for LIMIT such as TopN 
 in ReduceSink do not apply to ROW_NUMBER. We can add first-class support for 
 skip to the existing LIMIT, or improve ROW_NUMBER for better performance.
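 The two pagination semantics being compared can be illustrated with a small 
 sketch (plain Python, illustrative only; Hive's TopN optimization additionally 
 lets LIMIT stop early, while a ROW_NUMBER plan must number every row):

```python
# Plain-Python illustration of the two pagination semantics (not Hive code).
def limit_offset(rows, x, y):
    """MySQL-style LIMIT X,Y: skip X rows, return the next Y."""
    return rows[x:x + y]

def row_number_page(rows, x, y):
    """ROW_NUMBER() emulation: number every row, keep X < rn <= X+Y."""
    numbered = list(enumerate(rows, start=1))
    return [r for rn, r in numbered if x < rn <= x + y]

data = list(range(100, 110))
assert limit_offset(data, 3, 4) == row_number_page(data, 3, 4)
print(limit_offset(data, 3, 4))  # [103, 104, 105, 106]
```

 The results agree, but the ROW_NUMBER form as written touches every row, which 
 is why the TopN-style early termination available to LIMIT matters.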



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11649) Hive UPDATE,INSERT,DELETE issue

2015-08-26 Thread Veerendra Nath Jasthi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Veerendra Nath Jasthi updated HIVE-11649:
-
Description: 
I have been trying to implement the UPDATE, INSERT, and DELETE operations in a 
hive table as per the link: 

https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-
 

but whenever I try to include the properties which enable these operations, i.e. 

Configuration Values to Set for INSERT, UPDATE, DELETE: 

hive.support.concurrency            true (default is false) 
hive.enforce.bucketing              true (default is false) 
hive.exec.dynamic.partition.mode    nonstrict (default is strict) 

after that, if I run the show tables command on the hive shell it takes 65.15 
seconds, whereas it normally runs in 0.18 seconds without the above properties. 

Apart from show tables, the rest of the commands give no output, i.e. they keep 
running until the process is killed.

Could you tell me the reason for this?

  was:
 have been trying to implement the UPDATE,INSERT,DELETE operations in hive 
table as per link: 

https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-
 

but whenever I was trying to include the properties which will do our work i.e. 

Configuration Values to Set for INSERT, UPDATE, DELETE 


hive.support.concurrencytrue (default is false) 
hive.enforce.bucketing  true (default is false) 
hive.exec.dynamic.partition.modenonstrict (default is strict) 

after that if I run show tables command on hive shell its taking 65.15 seconds 
which normally runs at 0.18 seconds without the above properties. 

Could you tell me reason for this?


 Hive UPDATE,INSERT,DELETE issue
 ---

 Key: HIVE-11649
 URL: https://issues.apache.org/jira/browse/HIVE-11649
 Project: Hive
  Issue Type: Bug
 Environment: Hadoop-2.2.0 , hive-1.2.0 ,operating system 
 ubuntu14.04lts (64-bit) 
Reporter: Veerendra Nath Jasthi

  I have been trying to implement the UPDATE, INSERT, and DELETE operations in a 
 hive table as per the link: 
 https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-
  
 but whenever I try to include the properties which enable these operations, 
 i.e. 
 Configuration Values to Set for INSERT, UPDATE, DELETE: 
 hive.support.concurrency            true (default is false) 
 hive.enforce.bucketing              true (default is false) 
 hive.exec.dynamic.partition.mode    nonstrict (default is strict) 
 after that, if I run the show tables command on the hive shell it takes 65.15 
 seconds, whereas it normally runs in 0.18 seconds without the above properties. 
 Apart from show tables, the rest of the commands give no output, i.e. they 
 keep running until the process is killed.
 Could you tell me the reason for this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11602) Support Struct with different field types in query

2015-08-26 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712580#comment-14712580
 ] 

Lefty Leverenz commented on HIVE-11602:
---

Should this be documented, or is it just a bug fix?

If it needs doc, please add a TODOC2.0 label.

 Support Struct with different field types in query
 --

 Key: HIVE-11602
 URL: https://issues.apache.org/jira/browse/HIVE-11602
 Project: Hive
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 2.0.0

 Attachments: HIVE-11602.01.patch, HIVE-11602.01.patch, 
 HIVE-11602.patch


 Table:
 {code}
 create table journal(`journal id1` string) partitioned by (`journal id2` 
 string);
 {code}
 Query:
 {code}
 explain select * from journal where struct(`journal id1`, `journal id2`) IN 
 (struct('2013-1000-0133878664', '3'), struct('2013-1000-0133878695', 1));
 {code}
 Exception:
 {code}
 15/08/18 14:52:55 [main]: ERROR ql.Driver: FAILED: SemanticException [Error 
 10014]: Line 1:108 Wrong arguments '1': The arguments for IN should be the 
 same type! Types are: {struct<col1:string,col2:string> IN 
 (struct<col1:string,col2:string>, struct<col1:string,col2:int>)}
 org.apache.hadoop.hive.ql.parse.SemanticException: Line 1:108 Wrong arguments 
 '1': The arguments for IN should be the same type! Types are: 
 {struct<col1:string,col2:string> IN (struct<col1:string,col2:string>, 
 struct<col1:string,col2:int>)}
 at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1196)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
 at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:195)
 at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:148)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:10595)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:10551)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:10519)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFilterPlan(SemanticAnalyzer.java:2681)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFilterPlan(SemanticAnalyzer.java:2662)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8841)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9713)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9607)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10093)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
 at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:417)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:305)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1069)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1131)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1006)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:996)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
 at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
 {code}
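 A minimal sketch of the common-type resolution such a fix needs: each field 
 position of the struct constants must converge on a single type. This is 
 hypothetical logic with an assumed coercion rule, not Hive's actual 
 TypeCheckProcFactory implementation:

```python
# Hypothetical common-type resolution over struct IN operands: every field
# position must converge on one type, otherwise the IN is rejected.
def common_field_types(structs):
    per_field = zip(*[[type(v).__name__ for v in s] for s in structs])
    resolved = []
    for types in per_field:
        uniq = set(types)
        if uniq == {"str"}:
            resolved.append("string")
        elif uniq <= {"int", "str"}:
            resolved.append("string")  # assumed rule: mixed int/string -> string
        else:
            raise TypeError("no common type for field types: %s" % sorted(uniq))
    return resolved

# struct('2013-1000-0133878664', '3') vs struct('2013-1000-0133878695', 1)
print(common_field_types([("2013-1000-0133878664", "3"),
                          ("2013-1000-0133878695", 1)]))  # ['string', 'string']
```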



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11649) Hive UPDATE,INSERT,DELETE issue

2015-08-26 Thread Veerendra Nath Jasthi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Veerendra Nath Jasthi updated HIVE-11649:
-
Assignee: Hive QA  (was: Devraj Jaiman)

 Hive UPDATE,INSERT,DELETE issue
 ---

 Key: HIVE-11649
 URL: https://issues.apache.org/jira/browse/HIVE-11649
 Project: Hive
  Issue Type: Bug
 Environment: Hadoop-2.2.0 , hive-1.2.0 ,operating system 
 ubuntu14.04lts (64-bit) 
Reporter: Veerendra Nath Jasthi
Assignee: Hive QA

  I have been trying to implement the UPDATE, INSERT, and DELETE operations in a 
 hive table as per the link: 
 https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-
  
 but whenever I try to include the properties which enable these operations, 
 i.e. 
 Configuration Values to Set for INSERT, UPDATE, DELETE: 
 hive.support.concurrency            true (default is false) 
 hive.enforce.bucketing              true (default is false) 
 hive.exec.dynamic.partition.mode    nonstrict (default is strict) 
 after that, if I run the show tables command on the hive shell it takes 65.15 
 seconds, whereas it normally runs in 0.18 seconds without the above properties. 
 Apart from show tables, the rest of the commands give no output, i.e. they 
 keep running until the process is killed.
 Could you tell me the reason for this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11614) CBO: Calcite Operator To Hive Operator (Calcite Return Path): ctas after order by has problem

2015-08-26 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-11614:
---
Attachment: HIVE-11614.02.patch

 CBO: Calcite Operator To Hive Operator (Calcite Return Path): ctas after 
 order by has problem
 -

 Key: HIVE-11614
 URL: https://issues.apache.org/jira/browse/HIVE-11614
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Attachments: HIVE-11614.01.patch, HIVE-11614.02.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11649) Hive UPDATE,INSERT,DELETE issue

2015-08-26 Thread Veerendra Nath Jasthi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Veerendra Nath Jasthi updated HIVE-11649:
-
Assignee: Devraj Jaiman

 Hive UPDATE,INSERT,DELETE issue
 ---

 Key: HIVE-11649
 URL: https://issues.apache.org/jira/browse/HIVE-11649
 Project: Hive
  Issue Type: Bug
 Environment: Hadoop-2.2.0 , hive-1.2.0 ,operating system 
 ubuntu14.04lts (64-bit) 
Reporter: Veerendra Nath Jasthi
Assignee: Devraj Jaiman

  I have been trying to implement the UPDATE, INSERT, and DELETE operations in a 
 hive table as per the link: 
 https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-
  
 but whenever I try to include the properties which enable these operations, 
 i.e. 
 Configuration Values to Set for INSERT, UPDATE, DELETE: 
 hive.support.concurrency            true (default is false) 
 hive.enforce.bucketing              true (default is false) 
 hive.exec.dynamic.partition.mode    nonstrict (default is strict) 
 after that, if I run the show tables command on the hive shell it takes 65.15 
 seconds, whereas it normally runs in 0.18 seconds without the above properties. 
 Apart from show tables, the rest of the commands give no output, i.e. they 
 keep running until the process is killed.
 Could you tell me the reason for this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9544) Error dropping fully qualified partitioned table - Internal error processing get_partition_names

2015-08-26 Thread Hari Sekhon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14714424#comment-14714424
 ] 

Hari Sekhon commented on HIVE-9544:
---

[~rnpridgeon] No it was on Hortonworks so no Sentry.

 Error dropping fully qualified partitioned table - Internal error processing 
 get_partition_names
 

 Key: HIVE-9544
 URL: https://issues.apache.org/jira/browse/HIVE-9544
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Priority: Minor

 When attempting to drop a partitioned table using a fully qualified name I 
 get this error:
 {code}
 hive -e 'drop table myDB.my_table_name;'
 Logging initialized using configuration in 
 file:/etc/hive/conf/hive-log4j.properties
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/usr/hdp/2.2.0.0-2041/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/usr/hdp/2.2.0.0-2041/hive/lib/hive-jdbc-0.14.0.2.2.0.0-2041-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.thrift.TApplicationException: Internal error processing 
 get_partition_names
 {code}
 It succeeds if I instead do:
 {code}hive -e 'use myDB; drop table my_table_name;'{code}
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread Swarnim Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715187#comment-14715187
 ] 

Swarnim Kulkarni commented on HIVE-10990:
-

{quote}
Thus, you just need to recompile hive vs. the correct runtime hbase version; no 
source change required.
{quote}

Yeah, that's what I was referring to with my comment here [1]. Unfortunately, 
though, as I mentioned, I'm not sure we can do this in general and then release, 
as that would break passivity for consumers on hbase < 1.0. That's primarily the 
reason why we are choosing to leave the hive 1.x stream on hbase 0.98.x, as that 
branch is currently maintaining backwards compatibility, and then bump hive 2.x 
to hbase 1.x.

[1] 
https://issues.apache.org/jira/browse/HIVE-10990?focusedCommentId=14713591page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14713591

 Compatibility Hive-1.2 an hbase-1.0.1.1
 ---

 Key: HIVE-10990
 URL: https://issues.apache.org/jira/browse/HIVE-10990
 Project: Hive
  Issue Type: Bug
  Components: Beeline, HBase Handler, HiveServer2
Affects Versions: 1.2.0
Reporter: gurmukh singh
Assignee: Swarnim Kulkarni

 Hive external table works fine with Hbase.
 Hive-1.2 and hbase-1.0.1.1, hadoop-2.5.2
 Not able to create a table from hive in hbase.
 1: jdbc:hive2://edge1.dilithium.com:1/def> TBLPROPERTIES 
 ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 Error: Error while processing statement: FAILED: Execution Error, return code 
 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
  (state=08S01,code=1)
 [hdfs@edge1 cluster]$ hive
 2015-06-12 17:56:49,952 WARN  [main] conf.HiveConf: HiveConf of name 
 hive.metastore.local does not exist
 Logging initialized using configuration in 
 jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/auxlib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 hive> CREATE TABLE hbase_table_1(key int, value string)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
  TBLPROPERTIES ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 ===
 scan complete in 1535ms
 14 driver classes found
 Compliant  Version  Driver Class
 no         5.1      com.mysql.jdbc.Driver
 no         5.1      com.mysql.jdbc.NonRegisteringDriver
 no         5.1      com.mysql.jdbc.NonRegisteringReplicationDriver
 no         5.1      com.mysql.jdbc.ReplicationDriver
 yes        1.2      org.apache.calcite.avatica.remote.Driver
 yes        1.2      org.apache.calcite.jdbc.Driver
 yes        1.0      org.apache.commons.dbcp.PoolingDriver
 yes        10.11    org.apache.derby.jdbc.AutoloadedDriver
 yes        10.11    org.apache.derby.jdbc.Driver42
 yes        10.11    org.apache.derby.jdbc.EmbeddedDriver
 yes        10.11    org.apache.derby.jdbc.InternalDriver
 no         1.2      org.apache.hive.jdbc.HiveDriver
 yes        1.0      org.datanucleus.store.rdbms.datasource.dbcp.PoolingDriver
 no         5.1      org.gjt.mm.mysql.Driver



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11653) Beeline asks for password even when connecting with Kerberos

2015-08-26 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14714510#comment-14714510
 ] 

Lars Francke commented on HIVE-11653:
-

I think I'd like to take a stab at this if it's alright with you? I've been 
annoyed by this ever since beeline was introduced :)

We need to make a decision on what we want to do though.

We could just ignore it if a user doesn't provide a username and password; if 
Hive is configured to require them, it'll fail with an authentication error 
anyway. On the other hand, the prompt provides a way to pass in the password 
without it ending up in the shell history or on screen.

Any ideas on how to best handle this?

 Beeline asks for password even when connecting with Kerberos
 

 Key: HIVE-11653
 URL: https://issues.apache.org/jira/browse/HIVE-11653
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 0.14.0
 Environment: Kerberos and Hive
Reporter: Loïc C. Chanel

 When connecting to HiveServer via Beeline, Beeline asks for a password even 
 if Kerberos is enabled and there is a ticket in cache (kinit have been 
 successfully executed, as klist shows the ticket is in cache).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10289) Support filter on non-first partition key and non-string partition key

2015-08-26 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715195#comment-14715195
 ] 

Daniel Dai commented on HIVE-10289:
---

There is no additional user-facing doc. The deployment of hive jars to hbase was 
already required before the patch, and the deployment process is captured in 
https://cwiki.apache.org/confluence/display/Hive/HBaseMetastoreDevelopmentGuide.

 Support filter on non-first partition key and non-string partition key
 --

 Key: HIVE-10289
 URL: https://issues.apache.org/jira/browse/HIVE-10289
 Project: Hive
  Issue Type: Sub-task
  Components: HBase Metastore, Metastore
Affects Versions: hbase-metastore-branch
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: hbase-metastore-branch

 Attachments: HIVE-10289.1.patch, HIVE-10289.2.patch, 
 HIVE-10289.3.patch


 Currently, partition filtering only handles the first partition key, and the 
 type of this partition key must be string. In order to remove this 
 limitation, several improvements are required:
 1. Change the serialization format for partition keys. Currently partition keys 
 are serialized into a delimited string, which sorts in string order without 
 regard to the actual type of the partition key. We use BinarySortableSerDe 
 for this purpose.
 2. For filter conditions not on the initial partition keys, push them into an 
 HBase RowFilter. The RowFilter will deserialize the partition key and evaluate 
 the filter condition.
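 The ordering problem that motivates BinarySortableSerDe can be shown in a few 
 lines (plain Python demo; the actual SerDe encodes many types, this only 
 illustrates non-negative integers via fixed-width padding):

```python
# Lexicographic byte order disagrees with numeric order, which is why a
# delimited-string encoding mis-sorts non-string partition keys.
keys = [2, 10, 9]
print(sorted(str(k) for k in keys))  # ['10', '2', '9'] -- wrong numeric order
print(sorted(keys))                  # [2, 9, 10]       -- desired order

# A binary-sortable encoding makes byte order match value order; fixed-width
# zero padding is the simplest demonstration for non-negative integers.
encoded = sorted(format(k, "010d") for k in keys)
print([int(e) for e in encoded])     # [2, 9, 10]
```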



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-08-26 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11383:
---
Attachment: HIVE-11383.13.patch

 Upgrade Hive to Calcite 1.4
 ---

 Key: HIVE-11383
 URL: https://issues.apache.org/jira/browse/HIVE-11383
 Project: Hive
  Issue Type: Bug
Reporter: Julian Hyde
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
 HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.13.patch, 
 HIVE-11383.2.patch, HIVE-11383.3.patch, HIVE-11383.3.patch, 
 HIVE-11383.3.patch, HIVE-11383.4.patch, HIVE-11383.5.patch, 
 HIVE-11383.6.patch, HIVE-11383.7.patch, HIVE-11383.8.patch, 
 HIVE-11383.8.patch, HIVE-11383.9.patch


 CLEAR LIBRARY CACHE
 Upgrade Hive to Calcite 1.4.0-incubating.
 There is currently a snapshot release, which is close to what will be in 1.4. 
 I have checked that Hive compiles against the new snapshot, fixing one issue. 
 The patch is attached.
 Next step is to validate that Hive runs against the new Calcite, and post any 
 issues to the Calcite list or log Calcite Jira cases. [~jcamachorodriguez], 
 can you please do that.
 [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
 the new Calcite version.





[jira] [Commented] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-08-26 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715181#comment-14715181
 ] 

Ashutosh Chauhan commented on HIVE-11383:
-

+1






[jira] [Commented] (HIVE-11538) Add an option to skip init script while running tests

2015-08-26 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715193#comment-14715193
 ] 

Ashutosh Chauhan commented on HIVE-11538:
-

[~leftylev] Yes. Please add to wikidoc.

 Add an option to skip init script while running tests
 -

 Key: HIVE-11538
 URL: https://issues.apache.org/jira/browse/HIVE-11538
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 2.0.0

 Attachments: HIVE-11538.2.patch, HIVE-11538.3.patch, HIVE-11538.patch


 {{q_test_init.sql}} has grown over time. Now it takes a substantial amount of 
 time. When debugging a particular query that doesn't need such 
 initialization, this delay is an annoyance.





[jira] [Commented] (HIVE-7483) hive insert overwrite table select from self dead lock

2015-08-26 Thread Furcy Pin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14714475#comment-14714475
 ] 

Furcy Pin commented on HIVE-7483:
-

I can confirm: we are having the same issue on Hive 1.1.0, and I reproduced it on 
1.2.1:

CREATE TABLE test_db.test_table (id INT) 
PARTITIONED BY (part STRING)
STORED AS ORC ;

INSERT INTO TABLE test_db.test_table PARTITION (part='test')
VALUES (1), (2), (3), (4) 
;

SET hive.exec.dynamic.partition.mode=nonstrict ;
SET hive.support.concurrency=true ;

INSERT OVERWRITE TABLE test_db.test_table PARTITION (part)
SELECT * FROM test_db.test_table ;

Nothing happens, and when doing a SHOW LOCKS in another shell we get:

+--+---+++-+--+-+-++
| lockid   | database  | table  | partition  | lock_state  | lock_type| 
transaction_id  | last_heartbeat  |  acquired_at   |
+--+---+++-+--+-+-++
| 3765 | test_db   | test_table | NULL   | WAITING | EXCLUSIVE| 
NULL| 1440603633148   | NULL   |
| 3765 | test_db   | test_table | part=test  | WAITING | SHARED_READ  | 
NULL| 1440603633148   | NULL   |
+--+---+++-+--+-+-++

From looking at the source of 
org.apache.hadoop.hive.ql.lockmgr.EmbeddedLockManager, I would say it is stuck
in the for loop of the method lock(List<HiveLockObj> objs, int 
numRetriesForLock, long sleepTime), 
where lockPrimitive() keeps failing on the second lock.
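The self-deadlock can be modeled as a lock-compatibility problem: the one statement requests an EXCLUSIVE lock on the table (the overwrite target) and a SHARED_READ on the same table (the read source), and an all-or-nothing acquire loop over two mutually incompatible requests can never complete. A toy model of that conflict (simplified, not Hive's actual lock manager):

```java
// Toy model of the deadlock described above: EXCLUSIVE conflicts with every
// other mode, so once the first lock is held, the second request in the same
// transaction can never be granted and the retry loop spins forever.
public class SelfDeadlockDemo {
    enum Mode { SHARED_READ, EXCLUSIVE }

    // Only two shared readers are compatible with each other.
    static boolean compatible(Mode held, Mode wanted) {
        return held == Mode.SHARED_READ && wanted == Mode.SHARED_READ;
    }

    public static void main(String[] args) {
        Mode first = Mode.EXCLUSIVE;     // insert overwrite target (whole table)
        Mode second = Mode.SHARED_READ;  // select source (same table)
        // lockPrimitive() for the second lock fails as long as the first is held:
        System.out.println(compatible(first, second)); // false -> retry forever
    }
}
```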







 hive insert overwrite table select from self dead lock
 --

 Key: HIVE-7483
 URL: https://issues.apache.org/jira/browse/HIVE-7483
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.1
Reporter: Xiaoyu Wang

 CREATE TABLE test(
   id int, 
   msg string)
 PARTITIONED BY ( 
   continent string, 
   country string)
 CLUSTERED BY (id) 
 INTO 10 BUCKETS
 STORED AS ORC;
 alter table test add partition(continent='Asia',country='India');
 in hive-site.xml:
 hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
 hive.support.concurrency=true;
 in hive shell:
 set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
 insert some records into the test table first.
 then execute the SQL:
 insert overwrite table test partition(continent='Asia',country='India') 
 select id,msg from test;
 the log stops at:
 INFO log.PerfLogger: PERFLOG method=acquireReadWriteLocks 
 from=org.apache.hadoop.hive.ql.Driver
 I think it deadlocks when doing insert overwrite on a table from itself.





[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715155#comment-14715155
 ] 

Nick Dimiduk commented on HIVE-10990:
-

FYI [~enis], [~apurtell].

 Compatibility Hive-1.2 an hbase-1.0.1.1
 ---

 Key: HIVE-10990
 URL: https://issues.apache.org/jira/browse/HIVE-10990
 Project: Hive
  Issue Type: Bug
  Components: Beeline, HBase Handler, HiveServer2
Affects Versions: 1.2.0
Reporter: gurmukh singh
Assignee: Swarnim Kulkarni

 A Hive external table works fine with HBase.
 Hive-1.2 and hbase-1.0.1.1, hadoop-2.5.2.
 Not able to create a table in HBase from Hive.
 1: jdbc:hive2://edge1.dilithium.com:1/def TBLPROPERTIES 
 (hbase.table.name = xyz);
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 Error: Error while processing statement: FAILED: Execution Error, return code 
 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
  (state=08S01,code=1)
 [hdfs@edge1 cluster]$ hive
 2015-06-12 17:56:49,952 WARN  [main] conf.HiveConf: HiveConf of name 
 hive.metastore.local does not exist
 Logging initialized using configuration in 
 jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/auxlib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 hive> CREATE TABLE hbase_table_1(key int, value string)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
  TBLPROPERTIES ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 ===
 scan complete in 1535ms
 14 driver classes found
 Compliant  Version  Driver Class
 no         5.1      com.mysql.jdbc.Driver
 no         5.1      com.mysql.jdbc.NonRegisteringDriver
 no         5.1      com.mysql.jdbc.NonRegisteringReplicationDriver
 no         5.1      com.mysql.jdbc.ReplicationDriver
 yes        1.2      org.apache.calcite.avatica.remote.Driver
 yes        1.2      org.apache.calcite.jdbc.Driver
 yes        1.0      org.apache.commons.dbcp.PoolingDriver
 yes        10.11    org.apache.derby.jdbc.AutoloadedDriver
 yes        10.11    org.apache.derby.jdbc.Driver42
 yes        10.11    org.apache.derby.jdbc.EmbeddedDriver
 yes        10.11    org.apache.derby.jdbc.InternalDriver
 no         1.2      org.apache.hive.jdbc.HiveDriver
 yes        1.0      org.datanucleus.store.rdbms.datasource.dbcp.PoolingDriver
 no         5.1      org.gjt.mm.mysql.Driver





[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread Swarnim Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715202#comment-14715202
 ] 

Swarnim Kulkarni commented on HIVE-10990:
-

{quote}
Please don't use any 0.99. This was a developer preview of 1.0 and is not meant 
for use by anyone other than HBase developers, and at this point is an artifact 
of historical interest at best.
{quote}

Good call on this. I wasn't aware of that.






[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715152#comment-14715152
 ] 

Nick Dimiduk commented on HIVE-10990:
-

Different people are posting different stack traces; I see two errors in folks' 
comments:

# {{java.lang.IllegalArgumentException: Family name can not be empty}}
# {{java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V}}

The first looks like invalid DDL: somehow you're trying to create a table 
without specifying a column family.

The second is a subtle ABI incompatibility around 
{{HTableDescriptor#addFamily}} introduced after HBASE-12046, a change present 
in HBase 1.0+. See [javadoc from 
master|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html#addFamily(org.apache.hadoop.hbase.HColumnDescriptor)]
 vs. [javadoc from 0.94 
branch|http://hbase.apache.org/0.94/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html#addFamily(org.apache.hadoop.hbase.HColumnDescriptor)].
 Bottom line: you cannot use a client compiled against 0.98 with a runtime using 
1.0+. In this case the difference is only in the return type, and the [hive 
code|https://github.com/apache/hive/blob/master/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java#L214]
 in question ignores it. Thus, you just need to recompile Hive against the 
correct runtime HBase version; no source change required.

To create a binary that supports either version, you'll need to use reflection 
to invoke the {{addFamily}} method. That will fix this one API change, but 
there are probably others lurking.
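A minimal sketch of that reflection approach, using a hypothetical stand-in Descriptor class rather than the real HTableDescriptor: reflective method lookup matches on name and parameter types only, never on return type, so the same lookup and invoke works whether the method returns void (as in 0.98) or the descriptor itself (as in 1.0+).

```java
import java.lang.reflect.Method;

// Sketch of the reflection workaround suggested above. Descriptor is a
// stand-in for HTableDescriptor; a direct bytecode call embeds the full
// method descriptor (including return type), but a reflective lookup does not.
public class AddFamilyViaReflection {
    static class Descriptor {
        // In HBase 1.0+ this returns the descriptor; in 0.98 it was void.
        Descriptor addFamily(String family) {
            System.out.println("added family: " + family);
            return this;
        }
    }

    public static void main(String[] args) throws Exception {
        Descriptor desc = new Descriptor();
        // Lookup by name + parameter types only; return type is irrelevant.
        Method addFamily = Descriptor.class.getDeclaredMethod("addFamily", String.class);
        addFamily.invoke(desc, "cf1"); // return value ignored, ABI-safe
    }
}
```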

Let me also point out that HBase has never guaranteed ABI compatibility between 
minor release versions. For the post-1.0 world, we're calling this out 
explicitly in the [compatibility 
promise|http://hbase.apache.org/book.html#hbase.versioning.post10] (see table 
3, the compatibility matrix; there's a row for client binary compatibility). For 
the pre-1.0 releases, we always suggest clients recompile their applications 
against the newest hbase version jars after an upgrade.


[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715164#comment-14715164
 ] 

Andrew Purtell commented on HIVE-10990:
---

I think you meant major release versions [~ndimiduk]. If so, that's right, 
across major releases we might introduce changes that affect ABI compatibility. 
_Post_ 1.0 we'd take a harder look than in the past if the breaking change is 
necessary, but leading up to the 1.0 release we made API changes to help with 
long term maintainability once at 1.0.

bq. one thing we could possibly do is bump the version up to 0.99 in this branch
Please don't use any 0.99. This was a developer preview of 1.0 and is not meant 
for use by anyone other than HBase developers, and at this point is an artifact 
of historical interest at best. The next release after 0.98 is 1.0.






[jira] [Commented] (HIVE-10924) add support for MERGE statement

2015-08-26 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715522#comment-14715522
 ] 

Eugene Koifman commented on HIVE-10924:
---

HIVE-11030 is a key building block for this but the remainder of this is 
stalled at the moment.

 add support for MERGE statement
 ---

 Key: HIVE-10924
 URL: https://issues.apache.org/jira/browse/HIVE-10924
 Project: Hive
  Issue Type: New Feature
  Components: Query Planning, Query Processor
Affects Versions: 1.2.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman

 add support for 
 MERGE INTO tbl USING src ON … WHEN MATCHED THEN ... WHEN NOT MATCHED THEN ...





[jira] [Commented] (HIVE-11657) HIVE-2573 introduces some issues

2015-08-26 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715601#comment-14715601
 ] 

Sergey Shelukhin commented on HIVE-11657:
-

[~gopalv] fyi... I might take a look at (1) at some point, will probably fix 
(2) with it, unless something happens earlier

 HIVE-2573 introduces some issues
 

 Key: HIVE-11657
 URL: https://issues.apache.org/jira/browse/HIVE-11657
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Priority: Critical

 HIVE-2573 introduced a static reload-functions call.
 It has a few problems:
 1) When the metastore client is initialized using an externally supplied config, 
 it still gets called during static init using the main service config. In my 
 case, even though I have uris in the supplied config to connect to a remote MS 
 (which eventually happens), the static call creates an objectstore, which is 
 undesirable.
 2) It breaks compat - old metastores do not support this call, so new clients 
 will fail, and there's no workaround (like not using the new feature) because the 
 static call is always made
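Problem (1) is the classic static-initializer timing issue: anything resolved during static init is fixed at class-load time, before an externally supplied config can be seen. A toy illustration under assumed names (not Hive's actual code):

```java
// Work done at class-load time cannot see a config supplied afterwards:
// USED_AT_INIT is resolved in static init, before main() supplies anything.
public class StaticInitDemo {
    static String suppliedConfig = null;                 // set later: too late
    static final String USED_AT_INIT = currentConfig();  // resolved at class load

    static String currentConfig() {
        return suppliedConfig != null ? suppliedConfig : "main-service-config";
    }

    public static void main(String[] args) {
        suppliedConfig = "externally-supplied-config";   // has no effect above
        System.out.println("static init used: " + USED_AT_INIT);
    }
}
```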





[jira] [Updated] (HIVE-11657) HIVE-2573 introduces some issues

2015-08-26 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-11657:

Description: 
HIVE-2573 introduced static reload functions call.
It has a few problems:
1) When metastore client is initialized using an externally supplied config 
(i.e. Hive.get(HiveConf)), it still gets called during static init using the 
main service config. In my case, even though I have uris in the supplied config 
to connect to remote MS (which eventually happens), the static call creates 
objectstore, which is undesirable.
2) It breaks compat - old metastores do not support this call so new clients 
will fail, and there's no workaround like not using a new feature because the 
static call is always made

  was:
HIVE-2573 introduced static reload functions call.
It has a few problems:
1) When metastore client is initialized using an externally supplied config, it 
still gets called during static init using the main service config. In my case, 
even though I have uris in the supplied config to connect to remote MS (which 
eventually happens), the static call creates objectstore, which is undesirable.
2) It breaks compat - old metastores do not support this call so new clients 
will fail, and there's no workaround like not using a new feature because the 
static call is always made







[jira] [Commented] (HIVE-11634) Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)

2015-08-26 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715471#comment-14715471
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-11634:
--

[~ashutoshc] Yes, I believe so. I just left it in to be on the safe side until 
[~jcamachorodriguez] can confirm it.

Thanks
Hari

 Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)
 --

 Key: HIVE-11634
 URL: https://issues.apache.org/jira/browse/HIVE-11634
 Project: Hive
  Issue Type: Bug
  Components: CBO
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-11634.1.patch, HIVE-11634.2.patch, 
 HIVE-11634.3.patch


 Currently, we do not support partition pruning for the following scenario:
 {code}
 create table pcr_t1 (key int, value string) partitioned by (ds string);
 insert overwrite table pcr_t1 partition (ds='2000-04-08') select * from src 
 where key < 20 order by key;
 insert overwrite table pcr_t1 partition (ds='2000-04-09') select * from src 
 where key < 20 order by key;
 insert overwrite table pcr_t1 partition (ds='2000-04-10') select * from src 
 where key < 20 order by key;
 explain extended select ds from pcr_t1 where struct(ds, key) in 
 (struct('2000-04-08',1), struct('2000-04-09',2));
 {code}
 If we run the above query, we see that all the partitions of table pcr_t1 are 
 present in the filter predicate, whereas partition (ds='2000-04-10') could be 
 pruned. 
 The optimization is to rewrite the above query into the following:
 {code}
 explain extended select ds from pcr_t1 where (struct(ds)) IN 
 (struct('2000-04-08'), struct('2000-04-09')) and struct(ds, key) in 
 (struct('2000-04-08',1), struct('2000-04-09',2));
 {code}
 The predicate (struct(ds)) IN (struct('2000-04-08'), struct('2000-04-09')) 
 is used by the partition pruner to prune partitions that would otherwise not be 
 pruned.
 This is an extension of the idea presented in HIVE-11573.
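The core of the rewrite can be sketched as projecting the partition-column values out of the struct IN-list to build the extra pruner-friendly predicate (illustrative Java with made-up names, not Hive's optimizer code):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of the rewrite described above: from an IN-list of
// struct(partcol, nonpartcol) literals, collect just the partition-column
// values; these feed the derived predicate struct(ds) IN (...).
public class StructInPruningDemo {
    // Index 0 of each tuple is assumed to be the partition column (ds).
    static Set<Object> projectPartitionColumn(List<Object[]> inList) {
        Set<Object> values = new LinkedHashSet<>();
        for (Object[] tuple : inList) {
            values.add(tuple[0]);
        }
        return values;
    }

    public static void main(String[] args) {
        List<Object[]> inList = List.of(
            new Object[]{"2000-04-08", 1},   // struct('2000-04-08', 1)
            new Object[]{"2000-04-09", 2});  // struct('2000-04-09', 2)
        System.out.println(projectPartitionColumn(inList)); // [2000-04-08, 2000-04-09]
    }
}
```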





[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715469#comment-14715469
 ] 

Lefty Leverenz commented on HIVE-10990:
---

Three places spring to mind:  the Hive HBase Integration doc (definitely) and 
two places that discuss version requirements, Getting Started and the 
Installation doc.  Here are their links:

* [Hive HBase Integration | 
https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration] -- add this 
information to the Version information box right after the table of contents
* [Getting Started -- Requirements | 
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-Requirements]
* [Installing Hive | 
https://cwiki.apache.org/confluence/display/Hive/AdminManual+Installation#AdminManualInstallation-InstallingHive]

Another section of the wiki, Hive Versions and Branches, doesn't seem 
appropriate for this information but here's the link in case you disagree:

* [Home -- Hive Versions and Branches | 
https://cwiki.apache.org/confluence/display/Hive/Home#Home-HiveVersionsandBranches]




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11652) Avoid expensive call to removeAll in DefaultGraphWalker

2015-08-26 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715528#comment-14715528
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-11652:
--

[~jcamachorodriguez] You may want to look at the latest patch of HIVE-11341, 
where I am looking into a similar issue.

{code}
// While all its descendants have not been dispatched,
// we do not move forward
while (!childrenDispatched) {
  for (Node childNode : nd.getChildren()) {
    walk(childNode);
  }
  childrenDispatched = getDispatchedList().containsAll(nd.getChildren());
}
{code}

In the above change, I see that you are using the system stack here instead of 
the toWalk list to traverse the children nodes. 
However, a couple of points:
1. Why do you need the 'while (!childrenDispatched)' condition at all? Won't 
the code below work?
{code}
 if (!getDispatchedList().contains(childNode)) {
   walk(childNode);
 }
{code}

2. We should make sure that walk() is not called again for a node that has 
already been dispatched; this logic should be present in startWalking as 
well:
{code}
  public void startWalking(Collection<Node> startNodes,
      HashMap<Node, Object> nodeOutput) throws SemanticException {
    for (Node nd : startNodes) {
      // If nd is already in the dispatched list, continue.
      if (getDispatchedList().contains(nd)) {
        continue;
      }
      walk(nd);
      if (nodeOutput != null && getDispatchedList().contains(nd)) {
        nodeOutput.put(nd, retMap.get(nd));
      }
    }
  }
{code}

In short, if you are using the system stack via recursion, you can eliminate 
the use of the toWalk data structure altogether in DefaultGraphWalker.
Otherwise, you can make toWalk a stack instead of an ArrayList.
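For concreteness, the recursion-based walk being discussed can be sketched as below. This is a minimal, self-contained illustration: Node and the "dispatch" step are simplified stand-ins, not Hive's actual DefaultGraphWalker types.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: a node is dispatched only after all of its children, the system
// stack replaces the toWalk list, and a dispatched-set guard keeps walk()
// from processing a shared node twice.
public class RecursiveWalker {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    final Set<Node> dispatched = new HashSet<>();
    final List<String> order = new ArrayList<>();

    void walk(Node nd) {
        if (dispatched.contains(nd)) {
            return; // already dispatched: never process a node twice
        }
        for (Node child : nd.children) {
            walk(child); // recursion: children are fully handled first
        }
        dispatched.add(nd);
        order.add(nd.name); // "dispatch" happens post-order
    }
}
```

With this shape, no separate toWalk list is needed; the guard at the top also covers the startWalking case of a node that was already reached through another start node.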

Thanks
Hari

 Avoid expensive call to removeAll in DefaultGraphWalker
 ---

 Key: HIVE-11652
 URL: https://issues.apache.org/jira/browse/HIVE-11652
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer, Physical Optimizer
Affects Versions: 1.3.0, 2.0.0
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-11652.patch


 When the plan is too large, the removeAll call in DefaultGraphWalker (line 
 140) will take very long as it will have to go through the list looking for 
 each of the nodes. We try to get rid of this call by rewriting the logic in 
 the walker.



--


[jira] [Updated] (HIVE-11544) LazyInteger should avoid throwing NumberFormatException

2015-08-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-11544:
---
Attachment: HIVE-11544.3.patch

Handle NaN, NULL and null

 LazyInteger should avoid throwing NumberFormatException
 ---

 Key: HIVE-11544
 URL: https://issues.apache.org/jira/browse/HIVE-11544
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Affects Versions: 0.14.0, 1.2.0, 1.3.0, 2.0.0
Reporter: William Slacum
Assignee: Gopal V
Priority: Minor
  Labels: Performance
 Attachments: HIVE-11544.1.patch, HIVE-11544.2.patch, 
 HIVE-11544.3.patch


 {{LazyInteger#parseInt}} will throw a {{NumberFormatException}} under these 
 conditions:
 # bytes are null
 # radix is invalid
 # length is 0
 # the string is '+' or '-'
 # {{LazyInteger#parse}} throws a {{NumberFormatException}}
 Most of the time, such as in {{LazyInteger#init}} and {{LazyByte#init}}, the 
 exception is caught, swallowed, and {{isNull}} is set to {{true}}.
 This is generally a bad workflow, as exception creation is a performance 
 bottleneck, and potentially repeating for many rows in a query can have a 
 drastic performance consequence.
 It would be better if this method returned an {{OptionalInteger}}, which 
 would provide similar functionality with a higher throughput rate.
 I've tested against 0.14.0, and saw that the logic is unchanged in 1.2.0, so 
 I've marked those as affected. Any version in between would also suffer from 
 this.
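As a rough illustration of the exception-free approach the description asks for, here is a sketch of a parse that returns the JDK's java.util.OptionalInt (the JDK has no "OptionalInteger"; OptionalInt plays that role) instead of throwing. The method name and signature are illustrative, not Hive's LazyInteger API.

```java
import java.util.OptionalInt;

// Sketch: every failure mode listed above (null bytes, zero length, a lone
// '+' or '-', non-digit bytes, overflow) yields OptionalInt.empty() instead
// of a NumberFormatException, so the hot path allocates no exceptions.
public class SafeParse {
    static OptionalInt parseInt(byte[] bytes, int start, int length) {
        if (bytes == null || length == 0) {
            return OptionalInt.empty(); // null input or empty field
        }
        int i = start, end = start + length;
        boolean negative = false;
        if (bytes[i] == '+' || bytes[i] == '-') {
            negative = bytes[i] == '-';
            if (++i == end) {
                return OptionalInt.empty(); // the string is just '+' or '-'
            }
        }
        long value = 0; // long accumulator so int overflow is detectable
        for (; i < end; i++) {
            int digit = bytes[i] - '0';
            if (digit < 0 || digit > 9) {
                return OptionalInt.empty(); // non-digit byte
            }
            value = value * 10 + digit;
            if (value > (long) Integer.MAX_VALUE + 1) {
                return OptionalInt.empty(); // out of int range
            }
        }
        long signed = negative ? -value : value;
        if (signed < Integer.MIN_VALUE || signed > Integer.MAX_VALUE) {
            return OptionalInt.empty();
        }
        return OptionalInt.of((int) signed);
    }
}
```

A caller such as an init() method would then set isNull from isPresent() rather than from a caught exception.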



--


[jira] [Commented] (HIVE-11634) Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)

2015-08-26 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715560#comment-14715560
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-11634:
--

[~ashutoshc] If you look at the change in 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/OpProcFactory.java, you can 
see that I use fop.getConf().getPartitionpruningPredicate() as the predicate 
used by the partition pruner.
The test cases added in pcs.q guarantee that this patch works, i.e. if you 
run the 'EXPLAIN PLAN EXTENDED ... IN(STRUCT(..)' queries in pcs.q without 
the patch, the plan will be different.

However, as you mentioned, we should combine origPredicate and 
partitionPruningPredicate into one field before this patch can finally be 
merged. I will do that in the next upload.

Thanks
Hari

 Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)
 --

 Key: HIVE-11634
 URL: https://issues.apache.org/jira/browse/HIVE-11634
 Project: Hive
  Issue Type: Bug
  Components: CBO
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-11634.1.patch, HIVE-11634.2.patch, 
 HIVE-11634.3.patch


 Currently, we do not support partition pruning for the following scenario
 {code}
 create table pcr_t1 (key int, value string) partitioned by (ds string);
 insert overwrite table pcr_t1 partition (ds='2000-04-08') select * from src 
 where key < 20 order by key;
 insert overwrite table pcr_t1 partition (ds='2000-04-09') select * from src 
 where key < 20 order by key;
 insert overwrite table pcr_t1 partition (ds='2000-04-10') select * from src 
 where key < 20 order by key;
 explain extended select ds from pcr_t1 where struct(ds, key) in 
 (struct('2000-04-08',1), struct('2000-04-09',2));
 {code}
 If we run the above query, we see that all the partitions of table pcr_t1 are 
 present in the filter predicate, whereas we can prune partition 
 (ds='2000-04-10').
 The optimization is to rewrite the above query into the following.
 {code}
 explain extended select ds from pcr_t1 where  (struct(ds)) IN 
 (struct('2000-04-08'), struct('2000-04-09')) and  struct(ds, key) in 
 (struct('2000-04-08',1), struct('2000-04-09',2));
 {code}
 The predicate (struct(ds)) IN (struct('2000-04-08'), struct('2000-04-09')) 
 is used by the partition pruner to prune partitions which otherwise would 
 not be pruned.
 This is an extension of the idea presented in HIVE-11573.



--


[jira] [Commented] (HIVE-11650) Create LLAP Monitor Daemon class and launch scripts

2015-08-26 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715640#comment-14715640
 ] 

Gopal V commented on HIVE-11650:


[~kaisasak]: you're on the right track, but you don't need to add to the 
existing webservices.

You can have your own WebApps instance kicked off so that you can serve static 
files (jquery/bootstrap/d3) etc required for the UI from the running monitoring 
instance itself.
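As a stand-in sketch of that suggestion, the snippet below serves static UI files (jquery/bootstrap/d3, etc.) from the monitoring process itself, using the JDK's built-in com.sun.net.httpserver.HttpServer rather than the Hadoop WebApps class the comment refers to.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: a self-contained static-file endpoint, not Hive/Hadoop
// code. The class and method names are made up for this sketch.
public class StaticFileServer {
    // Serve files under webRoot on the given port (0 picks an ephemeral one).
    public static HttpServer start(Path webRoot, int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            // Map the request path onto webRoot; reject paths escaping it.
            String rel = exchange.getRequestURI().getPath().substring(1);
            Path file = webRoot.resolve(rel).normalize();
            if (file.startsWith(webRoot) && Files.isRegularFile(file)) {
                byte[] body = Files.readAllBytes(file);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            } else {
                exchange.sendResponseHeaders(404, -1); // -1 means no body
                exchange.close();
            }
        });
        server.start();
        return server;
    }
}
```

The monitor daemon would kick off one such instance at startup and point it at the directory bundled with the slider package.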

 Create LLAP Monitor Daemon class and launch scripts
 ---

 Key: HIVE-11650
 URL: https://issues.apache.org/jira/browse/HIVE-11650
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Kai Sasaki
Assignee: Kai Sasaki
 Attachments: HIVE-11650-llap.00.patch, Screen Shot 2015-08-26 at 
 16.54.35.png


 This JIRA is for creating the LLAP Monitor Daemon class and related launch 
 scripts for the slider package.



--


[jira] [Commented] (HIVE-11634) Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)

2015-08-26 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715511#comment-14715511
 ] 

Ashutosh Chauhan commented on HIVE-11634:
-

We need to remove that; otherwise it can't be guaranteed that this patch 
*really* works.

 Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)
 --

 Key: HIVE-11634
 URL: https://issues.apache.org/jira/browse/HIVE-11634
 Project: Hive
  Issue Type: Bug
  Components: CBO
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-11634.1.patch, HIVE-11634.2.patch, 
 HIVE-11634.3.patch


 Currently, we do not support partition pruning for the following scenario
 {code}
 create table pcr_t1 (key int, value string) partitioned by (ds string);
 insert overwrite table pcr_t1 partition (ds='2000-04-08') select * from src 
 where key < 20 order by key;
 insert overwrite table pcr_t1 partition (ds='2000-04-09') select * from src 
 where key < 20 order by key;
 insert overwrite table pcr_t1 partition (ds='2000-04-10') select * from src 
 where key < 20 order by key;
 explain extended select ds from pcr_t1 where struct(ds, key) in 
 (struct('2000-04-08',1), struct('2000-04-09',2));
 {code}
 If we run the above query, we see that all the partitions of table pcr_t1 are 
 present in the filter predicate, whereas we can prune partition 
 (ds='2000-04-10').
 The optimization is to rewrite the above query into the following.
 {code}
 explain extended select ds from pcr_t1 where  (struct(ds)) IN 
 (struct('2000-04-08'), struct('2000-04-09')) and  struct(ds, key) in 
 (struct('2000-04-08',1), struct('2000-04-09',2));
 {code}
 The predicate (struct(ds)) IN (struct('2000-04-08'), struct('2000-04-09')) 
 is used by the partition pruner to prune partitions which otherwise would 
 not be pruned.
 This is an extension of the idea presented in HIVE-11573.



--


[jira] [Updated] (HIVE-11657) HIVE-2573 introduces some issues

2015-08-26 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-11657:

Description: 
HIVE-2573 introduced static reload functions call.
It has a few problems:
1) When metastore client is initialized using an externally supplied config, it 
still gets called during static init using the main service config. In my case, 
even though I have uris in the supplied config to connect to remote MS (which 
eventually happens), the static call creates objectstore, which is undesirable.
2) It breaks compat - old metastores do not support this call so new clients 
will fail, and there's no workaround like not using a new feature because the 
static call is always made

  was:
HIVE-2573 introduced static reload functions call.
It has a few problems:
1) When metastore client is initialized using an externally supplied config, it 
still gets called during static init using the main service config. In my case, 
even though I have uris in the supplied config to connect to remote MS (which 
eventually happens), the static call creates objectstore, which is undesirable.
2) It breaks compat - old metastores do not support this call so new clients 
will fail


 HIVE-2573 introduces some issues
 

 Key: HIVE-11657
 URL: https://issues.apache.org/jira/browse/HIVE-11657
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Priority: Critical

 HIVE-2573 introduced static reload functions call.
 It has a few problems:
 1) When metastore client is initialized using an externally supplied config, 
 it still gets called during static init using the main service config. In my 
 case, even though I have uris in the supplied config to connect to remote MS 
 (which eventually happens), the static call creates objectstore, which is 
 undesirable.
 2) It breaks compat - old metastores do not support this call so new clients 
 will fail, and there's no workaround like not using a new feature because the 
 static call is always made



--


[jira] [Updated] (HIVE-11657) HIVE-2573 introduces some issues

2015-08-26 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-11657:

Description: 
HIVE-2573 introduced static reload functions call.
It has a few problems:
1) When metastore client is initialized using an externally supplied config, it 
still gets called during static init using the main service config. In my case, 
even though I have uris in the supplied config to connect to remote MS (which 
eventually happens), the static call creates objectstore, which is undesirable.
2) It breaks compat - old metastores do not support this call so new clients 
will fail

  was:
HIVE-2573 introduced static reload functions call.
It has a few problems:
1) When metastore client is initialized using an externally supplied config, it 
still gets called during static init using the main service config. In my case, 
even though I have uris in the supplied config to connect to remote MS, the 
static call creates objectstore, which is undesirable.
2) It breaks compat - old metastores do not support this call so new clients 
will fail


 HIVE-2573 introduces some issues
 

 Key: HIVE-11657
 URL: https://issues.apache.org/jira/browse/HIVE-11657
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Priority: Critical

 HIVE-2573 introduced static reload functions call.
 It has a few problems:
 1) When metastore client is initialized using an externally supplied config, 
 it still gets called during static init using the main service config. In my 
 case, even though I have uris in the supplied config to connect to remote MS 
 (which eventually happens), the static call creates objectstore, which is 
 undesirable.
 2) It breaks compat - old metastores do not support this call so new clients 
 will fail



--


[jira] [Commented] (HIVE-11658) Load data file format validation does not work with directories

2015-08-26 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715625#comment-14715625
 ] 

Prasanth Jayachandran commented on HIVE-11658:
--

[~hagleitn]/[~ashutoshc] Can someone please review this patch?

 Load data file format validation does not work with directories
 ---

 Key: HIVE-11658
 URL: https://issues.apache.org/jira/browse/HIVE-11658
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.3.0, 2.0.0
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran
 Attachments: HIVE-11658.1.patch


 HIVE-8 added file format validation to the load statement for ORC tables. 
 It does not work when the path is a directory.



--


[jira] [Updated] (HIVE-11553) use basic file metadata cache in ETLSplitStrategy-related paths

2015-08-26 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-11553:

Attachment: HIVE-11553.01.patch

 use basic file metadata cache in ETLSplitStrategy-related paths
 ---

 Key: HIVE-11553
 URL: https://issues.apache.org/jira/browse/HIVE-11553
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: hbase-metastore-branch

 Attachments: HIVE-11553.01.patch, HIVE-11553.patch


 NO PRECOMMIT TESTS



--


[jira] [Commented] (HIVE-11634) Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)

2015-08-26 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715447#comment-14715447
 ] 

Ashutosh Chauhan commented on HIVE-11634:
-

As part of this change we should be able to get rid of {{origPredicate}} in 
{{FilterDesc}}, no ?

 Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)
 --

 Key: HIVE-11634
 URL: https://issues.apache.org/jira/browse/HIVE-11634
 Project: Hive
  Issue Type: Bug
  Components: CBO
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-11634.1.patch, HIVE-11634.2.patch, 
 HIVE-11634.3.patch


 Currently, we do not support partition pruning for the following scenario
 {code}
 create table pcr_t1 (key int, value string) partitioned by (ds string);
 insert overwrite table pcr_t1 partition (ds='2000-04-08') select * from src 
 where key < 20 order by key;
 insert overwrite table pcr_t1 partition (ds='2000-04-09') select * from src 
 where key < 20 order by key;
 insert overwrite table pcr_t1 partition (ds='2000-04-10') select * from src 
 where key < 20 order by key;
 explain extended select ds from pcr_t1 where struct(ds, key) in 
 (struct('2000-04-08',1), struct('2000-04-09',2));
 {code}
 If we run the above query, we see that all the partitions of table pcr_t1 are 
 present in the filter predicate, whereas we can prune partition 
 (ds='2000-04-10').
 The optimization is to rewrite the above query into the following.
 {code}
 explain extended select ds from pcr_t1 where  (struct(ds)) IN 
 (struct('2000-04-08'), struct('2000-04-09')) and  struct(ds, key) in 
 (struct('2000-04-08',1), struct('2000-04-09',2));
 {code}
 The predicate (struct(ds)) IN (struct('2000-04-08'), struct('2000-04-09')) 
 is used by the partition pruner to prune partitions which otherwise would 
 not be pruned.
 This is an extension of the idea presented in HIVE-11573.



--


[jira] [Updated] (HIVE-11553) use basic file metadata cache in ETLSplitStrategy-related paths

2015-08-26 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-11553:

Attachment: (was: HIVE-11553.01.patch)

 use basic file metadata cache in ETLSplitStrategy-related paths
 ---

 Key: HIVE-11553
 URL: https://issues.apache.org/jira/browse/HIVE-11553
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: hbase-metastore-branch

 Attachments: HIVE-11553.01.patch, HIVE-11553.patch


 NO PRECOMMIT TESTS



--


[jira] [Commented] (HIVE-11544) LazyInteger should avoid throwing NumberFormatException

2015-08-26 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715546#comment-14715546
 ] 

Gopal V commented on HIVE-11544:


Full benchmark results are rather interesting: BIGINT is faster than INT.

{code}
Run result: 184347.00 ns/op (= 2 samples)


# Run complete. Total time: 00:01:46

Benchmark                                              Mode  Samples           Score   Error  Units
o.a.h.b.s.LazySimpleSerDeBench.GoodLazyByte.bench      avgt        2  1433448000.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.GoodLazyDouble.bench    avgt        2   161712500.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.GoodLazyFloat.bench     avgt        2   256967500.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.GoodLazyInteger.bench   avgt        2   649704000.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.GoodLazyLong.bench      avgt        2    34384500.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.GoodLazyShort.bench     avgt        2  1464437000.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.RandomLazyByte.bench    avgt        2  1403027000.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.RandomLazyDouble.bench  avgt        2  1708506000.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.RandomLazyFloat.bench   avgt        2  1788844000.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.RandomLazyInteger.bench avgt        2  1352076000.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.RandomLazyLong.bench    avgt        2  1446379500.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.RandomLazyShort.bench   avgt        2  1562499500.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.WorstLazyByte.bench     avgt        2  1650849500.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.WorstLazyDouble.bench   avgt        2  2032039000.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.WorstLazyFloat.bench    avgt        2  1948020500.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.WorstLazyInteger.bench  avgt        2  1862427500.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.WorstLazyLong.bench     avgt        2  1709320500.000 ±   NaN  ns/op
o.a.h.b.s.LazySimpleSerDeBench.WorstLazyShort.bench    avgt        2      184347.000 ±   NaN  ns/op

$ expelliarmus:hive-jmh 
{code}

 LazyInteger should avoid throwing NumberFormatException
 ---

 Key: HIVE-11544
 URL: https://issues.apache.org/jira/browse/HIVE-11544
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Affects Versions: 0.14.0, 1.2.0, 1.3.0, 2.0.0
Reporter: William Slacum
Assignee: Gopal V
Priority: Minor
  Labels: Performance
 Attachments: HIVE-11544.1.patch, HIVE-11544.2.patch, 
 HIVE-11544.3.patch


 {{LazyInteger#parseInt}} will throw a {{NumberFormatException}} under these 
 conditions:
 # bytes are null
 # radix is invalid
 # length is 0
 # the string is '+' or '-'
 # {{LazyInteger#parse}} throws a {{NumberFormatException}}
 Most of the time, such as in {{LazyInteger#init}} and {{LazyByte#init}}, the 
 exception is caught, swallowed, and {{isNull}} is set to {{true}}.
 This is generally a bad workflow, as exception creation is a performance 
 bottleneck, and potentially repeating for many rows in a query can have a 
 drastic performance consequence.
 It would be better if this method returned an {{OptionalInteger}}, which 
 would provide similar functionality with a higher throughput rate.
 I've tested against 0.14.0, and saw that the logic is unchanged in 1.2.0, so 
 I've marked those as affected. Any version in between would also suffer from 
 this.



--


[jira] [Commented] (HIVE-11642) LLAP: make sure tests pass #3

2015-08-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715592#comment-14715592
 ] 

Hive QA commented on HIVE-11642:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12752547/HIVE-11642.02.patch

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 9416 tests 
executed
*Failed tests:*
{noformat}
TestCliDriver-fileformat_sequencefile.q-repair.q-fouter_join_ppr.q-and-12-more 
- did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_char_mapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_varchar_mapjoin1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_vector_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_vector_dynpart_hashjoin_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_count_distinct
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_decimal_mapjoin
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_left_outer_join
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_mapjoin_reduce
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorized_mapjoin
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorized_nested_mapjoin
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorized_ptf
org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testWaitQueuePreemption
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5078/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5078/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5078/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12752547 - PreCommit-HIVE-TRUNK-Build

 LLAP: make sure tests pass #3
 -

 Key: HIVE-11642
 URL: https://issues.apache.org/jira/browse/HIVE-11642
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-11642.01.patch, HIVE-11642.02.patch, 
 HIVE-11642.patch


 Tests should pass against the most recent branch and Tez 0.8.



--


[jira] [Updated] (HIVE-11658) Load data file format validation does not work with directories

2015-08-26 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-11658:
-
Attachment: HIVE-11658.1.patch

 Load data file format validation does not work with directories
 ---

 Key: HIVE-11658
 URL: https://issues.apache.org/jira/browse/HIVE-11658
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.3.0, 2.0.0
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran
 Attachments: HIVE-11658.1.patch


 HIVE-8 added file format validation to the load statement for ORC tables. 
 It does not work when the path is a directory.



--


[jira] [Commented] (HIVE-11658) Load data file format validation does not work with directories

2015-08-26 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715638#comment-14715638
 ] 

Gunther Hagleitner commented on HIVE-11658:
---

LGTM +1

 Load data file format validation does not work with directories
 ---

 Key: HIVE-11658
 URL: https://issues.apache.org/jira/browse/HIVE-11658
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.3.0, 2.0.0
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran
 Attachments: HIVE-11658.1.patch


 HIVE-8 added file format validation to the load statement for ORC tables. 
 It does not work when the path is a directory.



--


[jira] [Commented] (HIVE-11657) HIVE-2573 introduces some issues

2015-08-26 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715668#comment-14715668
 ] 

Sergey Shelukhin commented on HIVE-11657:
-

[~navis] fyi if you have some easy fixes in mind :)

Example of logging I get 
{noformat}
2015-08-26 17:52:16,695 INFO [InputInitializer [Map 1] #0] 
metastore.HiveMetaStore: 0: Opening raw store with implemenation 
class:org.apache.hadoop.hive.metastore.ObjectStore
2015-08-26 17:52:16,730 INFO [InputInitializer [Map 1] #0] 
metastore.ObjectStore: ObjectStore, initialize called
[snip]
{noformat}

The call stack for the above is:
{noformat}
at 
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3054)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3074)
at 
org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3299)
at 
org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:175)
at org.apache.hadoop.hive.ql.metadata.Hive.clinit(Hive.java:167)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$MetastoreCache.getHive(OrcInputFormat.java:1544)
{noformat}
[snip]
{noformat}
2015-08-26 17:52:16,761 INFO [InputInitializer [Map 1] #0] hive.metastore: 
Trying to connect to metastore with URI thrift://[snip]:9085
2015-08-26 17:52:16,782 INFO [InputInitializer [Map 1] #0] hive.metastore: 
Opened a connection to metastore, current connections: 1
{noformat}


 HIVE-2573 introduces some issues
 

 Key: HIVE-11657
 URL: https://issues.apache.org/jira/browse/HIVE-11657
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Priority: Critical

 HIVE-2573 introduced static reload functions call.
 It has a few problems:
 1) When metastore client is initialized using an externally supplied config 
 (i.e. Hive.get(HiveConf)), it still gets called during static init using the 
 main service config. In my case, even though I have uris in the supplied 
 config to connect to remote MS (which eventually happens), the static call 
 creates objectstore, which is undesirable.
 2) It breaks compat - old metastores do not support this call so new clients 
 will fail, and there's no workaround like not using a new feature because the 
 static call is always made



--


[jira] [Comment Edited] (HIVE-11657) HIVE-2573 introduces some issues

2015-08-26 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715668#comment-14715668
 ] 

Sergey Shelukhin edited comment on HIVE-11657 at 8/26/15 10:50 PM:
---

[~navis] fyi if you have some easy fixes in mind :)

Example of logging I get 
{noformat}
2015-08-26 17:52:16,695 INFO [InputInitializer [Map 1] #0] 
metastore.HiveMetaStore: 0: Opening raw store with implemenation 
class:org.apache.hadoop.hive.metastore.ObjectStore
2015-08-26 17:52:16,730 INFO [InputInitializer [Map 1] #0] 
metastore.ObjectStore: ObjectStore, initialize called
[snip]
{noformat}

The call stack for the above is:
{noformat}
at 
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3054)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3074)
at 
org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3299)
at 
org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:175)
at org.apache.hadoop.hive.ql.metadata.Hive.clinit(Hive.java:167)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$MetastoreCache.getHive(OrcInputFormat.java:1544)
{noformat}
Then
{noformat}
2015-08-26 17:52:16,761 INFO [InputInitializer [Map 1] #0] hive.metastore: 
Trying to connect to metastore with URI thrift://[snip]:9085
2015-08-26 17:52:16,782 INFO [InputInitializer [Map 1] #0] hive.metastore: 
Opened a connection to metastore, current connections: 1
{noformat}



was (Author: sershe):
[~navis] fyi if you have some easy fixes in mind :)

Example of logging I get 
{noformat}
2015-08-26 17:52:16,695 INFO [InputInitializer [Map 1] #0] 
metastore.HiveMetaStore: 0: Opening raw store with implemenation 
class:org.apache.hadoop.hive.metastore.ObjectStore
2015-08-26 17:52:16,730 INFO [InputInitializer [Map 1] #0] 
metastore.ObjectStore: ObjectStore, initialize called
[snip]
{noformat}

The call stack for the above is:
{noformat}
at 
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3054)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3074)
at 
org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3299)
at 
org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:175)
at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:167)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$MetastoreCache.getHive(OrcInputFormat.java:1544)
{noformat}
[snip]
{noformat}
2015-08-26 17:52:16,761 INFO [InputInitializer [Map 1] #0] hive.metastore: 
Trying to connect to metastore with URI thrift://[snip]:9085
2015-08-26 17:52:16,782 INFO [InputInitializer [Map 1] #0] hive.metastore: 
Opened a connection to metastore, current connections: 1
{noformat}


 HIVE-2573 introduces some issues
 

 Key: HIVE-11657
 URL: https://issues.apache.org/jira/browse/HIVE-11657
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Priority: Critical

 HIVE-2573 introduced a static reloadFunctions call.
 It has a few problems:
 1) When the metastore client is initialized using an externally supplied 
 config (i.e. Hive.get(HiveConf)), reloadFunctions still gets called during 
 static init using the main service config. In my case, even though I have 
 uris in the supplied config to connect to a remote metastore (which 
 eventually happens), the static call creates an ObjectStore, which is 
 undesirable.
 2) It breaks compatibility - old metastores do not support this call, so new 
 clients will fail, and there is no workaround such as not using the new 
 feature, because the static call is always made.
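Problem (1) above can be sketched in plain Java. This is an illustrative sketch, not Hive's actual classes: a static initializer that does its work with the default config ignores any config the caller supplies later, while deferring the work into the factory method uses the right config.

```java
// Hypothetical sketch of the static-initialization problem described above
// (class and config names are illustrative, not Hive's).
import java.util.ArrayList;
import java.util.List;

public class StaticInitSketch {
    public static final List<String> CONNECTIONS = new ArrayList<>();

    // Anti-pattern: work done in a static initializer always runs with the
    // default service config, even when the caller supplies its own config.
    public static class Eager {
        static { CONNECTIONS.add("connect(defaultConf)"); }
        public static Eager get(String conf) { return new Eager(); }
    }

    // Fix: defer the work until a config is actually available.
    public static class Lazy {
        public static Lazy get(String conf) {
            CONNECTIONS.add("connect(" + conf + ")");
            return new Lazy();
        }
    }

    public static void main(String[] args) {
        Eager.get("remoteConf"); // static block already connected with defaultConf
        Lazy.get("remoteConf");  // connects only with the supplied conf
        System.out.println(CONNECTIONS);
    }
}
```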



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread gurmukh singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712733#comment-14712733
 ] 

gurmukh singh commented on HIVE-10990:
--

It works fine with Hive 1.1 and HBase 1.0, but not with Hive 1.2 and 
HBase 1.0.1.1.

 Compatibility Hive-1.2 an hbase-1.0.1.1
 ---

 Key: HIVE-10990
 URL: https://issues.apache.org/jira/browse/HIVE-10990
 Project: Hive
  Issue Type: Bug
  Components: Beeline, HBase Handler, HiveServer2
Affects Versions: 1.2.0
Reporter: gurmukh singh
Assignee: Swarnim Kulkarni

 Hive external tables work fine with HBase 
 (Hive 1.2 and HBase 1.0.1.1, Hadoop 2.5.2), but I am not able to create a 
 table in HBase from Hive.
 1: jdbc:hive2://edge1.dilithium.com:1/def TBLPROPERTIES 
 ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 Error: Error while processing statement: FAILED: Execution Error, return code 
 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
  (state=08S01,code=1)
 [hdfs@edge1 cluster]$ hive
 2015-06-12 17:56:49,952 WARN  [main] conf.HiveConf: HiveConf of name 
 hive.metastore.local does not exist
 Logging initialized using configuration in 
 jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/auxlib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 hive> CREATE TABLE hbase_table_1(key int, value string)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
  TBLPROPERTIES ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 ===
 scan complete in 1535ms
 14 driver classes found
 Compliant  Version  Driver Class
 no         5.1      com.mysql.jdbc.Driver
 no         5.1      com.mysql.jdbc.NonRegisteringDriver
 no         5.1      com.mysql.jdbc.NonRegisteringReplicationDriver
 no         5.1      com.mysql.jdbc.ReplicationDriver
 yes        1.2      org.apache.calcite.avatica.remote.Driver
 yes        1.2      org.apache.calcite.jdbc.Driver
 yes        1.0      org.apache.commons.dbcp.PoolingDriver
 yes        10.11    org.apache.derby.jdbc.AutoloadedDriver
 yes        10.11    org.apache.derby.jdbc.Driver42
 yes        10.11    org.apache.derby.jdbc.EmbeddedDriver
 yes        10.11    org.apache.derby.jdbc.InternalDriver
 no         1.2      org.apache.hive.jdbc.HiveDriver
 yes        1.0      org.datanucleus.store.rdbms.datasource.dbcp.PoolingDriver
 no         5.1      org.gjt.mm.mysql.Driver





[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread Kevin Ludwig (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712787#comment-14712787
 ] 

Kevin Ludwig commented on HIVE-10990:
-

I have a hard time believing this works with HBase 1.0, given that the method's 
return type changed in HBase 0.99.2:
https://github.com/apache/hbase/blob/0.99.2/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java#L789
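The point about the return-type change can be illustrated with a small sketch (illustrative classes, not the actual HBase API): the JVM links a call site by the full method descriptor, return type included, so bytecode compiled against the old `void addFamily(...)` throws NoSuchMethodError when run against a jar where the method now returns the descriptor for chaining, even though source code compiles fine against either.

```java
// Sketch: a return-type-only change is source-compatible but not
// binary-compatible, because method descriptors include the return type.
import java.lang.reflect.Method;

public class ReturnTypeDemo {
    public static class OldApi { public void addFamily(String col) {} }
    public static class NewApi { public NewApi addFamily(String col) { return this; } }

    // Render the part of the descriptor that changed between versions.
    public static String descriptor(Class<?> c) {
        try {
            Method m = c.getMethod("addFamily", String.class);
            return m.getReturnType().getSimpleName() + " addFamily(String)";
        } catch (NoSuchMethodException e) {
            return "missing";
        }
    }

    public static void main(String[] args) {
        System.out.println(descriptor(OldApi.class)); // old descriptor
        System.out.println(descriptor(NewApi.class)); // new descriptor
    }
}
```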



 Compatibility Hive-1.2 an hbase-1.0.1.1
 ---

 Key: HIVE-10990
 URL: https://issues.apache.org/jira/browse/HIVE-10990
 Project: Hive
  Issue Type: Bug
  Components: Beeline, HBase Handler, HiveServer2
Affects Versions: 1.2.0
Reporter: gurmukh singh
Assignee: Swarnim Kulkarni






[jira] [Commented] (HIVE-11651) Index creation is failing from beeline when execution engine is set to Tez

2015-08-26 Thread Venkata Srinivasa Rao Kolla (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712892#comment-14712892
 ] 

Venkata Srinivasa Rao Kolla  commented on HIVE-11651:
-

I also tried to run the same set of scripts on Hive 1.2 + Tez 0.7 and saw some 
errors in the job log.

2015-08-26 09:48:31,178 INFO [Dispatcher thread: Central] 
history.HistoryEventHandler: 
[HISTORY][DAG:dag_1440569395691_0214_1][Event:DAG_FINISHED]: 
dagId=dag_1440569395691_0214_1, startTime=1440582480900, 
finishTime=1440582511086, timeTaken=30186, status=FAILED, diagnostics=Vertex 
failed, vertexName=Map 1, vertexId=vertex_1440569395691_0214_1_00, 
diagnostics=[Task failed, taskId=task_1440569395691_0214_1_00_00, 
diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running 
task:java.lang.RuntimeException: java.lang.RuntimeException: 
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:171)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:337)
at 
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
at 
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at 
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
at 
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:192)
at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.init(TezGroupedSplitsInputFormat.java:131)
at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:97)
at 
org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:149)
at 
org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:80)
at 
org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:614)
at 
org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:593)
at 
org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:141)
at 
org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109)
at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:370)
at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:127)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:147)
... 14 more
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:270)
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:234)
at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:189)
... 25 more
], TaskAttempt 1 failed, info=[Error: Failure while running 
task:java.lang.RuntimeException: java.lang.RuntimeException: 
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:171)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:337)
at 
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
at 
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 

[jira] [Commented] (HIVE-11623) CBO: Calcite Operator To Hive Operator (Calcite Return Path): fix the tableAlias for ReduceSink operator

2015-08-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712653#comment-14712653
 ] 

Hive QA commented on HIVE-11623:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12752364/HIVE-11623.03.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 9375 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5071/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5071/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5071/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12752364 - PreCommit-HIVE-TRUNK-Build

 CBO: Calcite Operator To Hive Operator (Calcite Return Path): fix the 
 tableAlias for ReduceSink operator
 

 Key: HIVE-11623
 URL: https://issues.apache.org/jira/browse/HIVE-11623
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Attachments: HIVE-11623.01.patch, HIVE-11623.02.patch, 
 HIVE-11623.03.patch








[jira] [Updated] (HIVE-11175) create function using jar does not work with sql std authorization

2015-08-26 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol updated HIVE-11175:

Description: 
{code:sql}create function xxx as 'xxx' using jar 'file://foo.jar' {code} 

fails with an error saying that access to the local foo.jar resource requires 
ADMIN privileges. The same happens for HDFS (DFS_URI).

The problem is that the semantic analysis enforces the ADMIN privilege for a 
write, but the jar is clearly an input, not an output.

Patch and test case appended.

  was:
{{create function xxx as 'xxx' using jar 'file://foo.jar' }} 

fails with an error saying that access to the local foo.jar resource requires 
ADMIN privileges. The same happens for HDFS (DFS_URI).

The problem is that the semantic analysis enforces the ADMIN privilege for a 
write, but the jar is clearly an input, not an output.

Patch and test case appended.
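The input-vs-output distinction the report makes can be sketched as follows. This is a hypothetical model, not Hive's actual authorizer API: SQL standard authorization demands ADMIN for writes to URI entities, so registering the jar as a read entity is what makes CREATE FUNCTION work for non-admins.

```java
// Hypothetical sketch of the authorization issue described above
// (names are illustrative, not Hive's SemanticAnalyzer/authorizer classes).
public class FunctionAuthSketch {
    public enum Access { READ, WRITE }

    // Simplified stand-in for SQL standard authorization: WRITE access to a
    // LOCAL_URI/DFS_URI entity requires ADMIN; READ access does not.
    public static boolean allowed(Access access, boolean isAdmin) {
        return access == Access.READ || isAdmin;
    }

    // Reported behavior: the jar URI is registered as a WRITE (output)
    // entity, so non-admin users are rejected.
    public static boolean createFunctionBuggy(String jarUri, boolean isAdmin) {
        return allowed(Access.WRITE, isAdmin);
    }

    // Suggested fix: the jar is only read, so register it as an input.
    public static boolean createFunctionFixed(String jarUri, boolean isAdmin) {
        return allowed(Access.READ, isAdmin);
    }

    public static void main(String[] args) {
        System.out.println(createFunctionBuggy("file://foo.jar", false)); // rejected
        System.out.println(createFunctionFixed("file://foo.jar", false)); // allowed
    }
}
```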


 create function using jar does not work with sql std authorization
 --

 Key: HIVE-11175
 URL: https://issues.apache.org/jira/browse/HIVE-11175
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 1.2.0
Reporter: Olaf Flebbe
 Fix For: 2.0.0

 Attachments: HIVE-11175.1.patch


 {code:sql}create function xxx as 'xxx' using jar 'file://foo.jar' {code} 
 fails with an error saying that access to the local foo.jar resource requires 
 ADMIN privileges. The same happens for HDFS (DFS_URI).
 The problem is that the semantic analysis enforces the ADMIN privilege for a 
 write, but the jar is clearly an input, not an output.
 Patch and test case appended.





[jira] [Commented] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-08-26 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712809#comment-14712809
 ] 

Jesus Camacho Rodriguez commented on HIVE-11383:


I had some problems in my environment regenerating the q files for Spark; it 
is done now.

FYI, we cannot commit this yet; the release process for Calcite 1.4 is not 
finished (the patch uses RC0).

 Upgrade Hive to Calcite 1.4
 ---

 Key: HIVE-11383
 URL: https://issues.apache.org/jira/browse/HIVE-11383
 Project: Hive
  Issue Type: Bug
Reporter: Julian Hyde
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
 HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.2.patch, 
 HIVE-11383.3.patch, HIVE-11383.3.patch, HIVE-11383.3.patch, 
 HIVE-11383.4.patch, HIVE-11383.5.patch, HIVE-11383.6.patch, 
 HIVE-11383.7.patch, HIVE-11383.8.patch, HIVE-11383.8.patch, HIVE-11383.9.patch


 CLEAR LIBRARY CACHE
 Upgrade Hive to Calcite 1.4.0-incubating.
 There is currently a snapshot release, which is close to what will be in 1.4. 
 I have checked that Hive compiles against the new snapshot, fixing one issue. 
 The patch is attached.
 Next step is to validate that Hive runs against the new Calcite, and post any 
 issues to the Calcite list or log Calcite JIRA cases. [~jcamachorodriguez], 
 can you please do that?
 [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
 the new Calcite version.





[jira] [Updated] (HIVE-11650) Create LLAP Monitor Daemon class and launch scripts

2015-08-26 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HIVE-11650:
--
Attachment: HIVE-11650-llap.00.patch

 Create LLAP Monitor Daemon class and launch scripts
 ---

 Key: HIVE-11650
 URL: https://issues.apache.org/jira/browse/HIVE-11650
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Kai Sasaki
Assignee: Kai Sasaki
 Attachments: HIVE-11650-llap.00.patch


 This JIRA is for creating the LLAP Monitor Daemon class and related launch 
 scripts for the Slider package.





[jira] [Updated] (HIVE-11650) Create LLAP Monitor Daemon class and launch scripts

2015-08-26 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HIVE-11650:
--
Summary: Create LLAP Monitor Daemon class and launch scripts  (was: Create 
LLAP Monitor Daemon and launch scripts)

 Create LLAP Monitor Daemon class and launch scripts
 ---

 Key: HIVE-11650
 URL: https://issues.apache.org/jira/browse/HIVE-11650
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Kai Sasaki
Assignee: Kai Sasaki
 Attachments: HIVE-11650-llap.00.patch


 This JIRA is for creating the LLAP Monitor Daemon class and related launch 
 scripts for the Slider package.





[jira] [Updated] (HIVE-11650) Create LLAP Monitor Daemon class and launch scripts

2015-08-26 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HIVE-11650:
--
Attachment: Screen Shot 2015-08-26 at 16.54.35.png

 Create LLAP Monitor Daemon class and launch scripts
 ---

 Key: HIVE-11650
 URL: https://issues.apache.org/jira/browse/HIVE-11650
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Kai Sasaki
Assignee: Kai Sasaki
 Attachments: HIVE-11650-llap.00.patch, Screen Shot 2015-08-26 at 
 16.54.35.png


 This JIRA is for creating the LLAP Monitor Daemon class and related launch 
 scripts for the Slider package.





[jira] [Updated] (HIVE-11649) Hive UPDATE,INSERT,DELETE issue

2015-08-26 Thread Veerendra Nath Jasthi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Veerendra Nath Jasthi updated HIVE-11649:
-
Environment: Hadoop-2.2.0 , hive-1.2.0 ,operating system ubuntu14.04lts 
(64-bit)  Java 1.7  (was: Hadoop-2.2.0 , hive-1.2.0 ,operating system 
ubuntu14.04lts (64-bit) )

 Hive UPDATE,INSERT,DELETE issue
 ---

 Key: HIVE-11649
 URL: https://issues.apache.org/jira/browse/HIVE-11649
 Project: Hive
  Issue Type: Bug
 Environment: Hadoop-2.2.0 , hive-1.2.0 ,operating system 
 ubuntu14.04lts (64-bit)  Java 1.7
Reporter: Veerendra Nath Jasthi
Assignee: Hive QA

  I have been trying to implement the UPDATE, INSERT, DELETE operations in a 
 Hive table as per the link: 
 https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-
  
 but whenever I try to include the properties that enable them, i.e. the 
 configuration values to set for INSERT, UPDATE, DELETE: 
 hive.support.concurrency  true (default is false) 
 hive.enforce.bucketing    true (default is false) 
 hive.exec.dynamic.partition.mode  nonstrict (default is strict) 
 then running the show tables command on the Hive shell takes 65.15 
 seconds, where it normally takes 0.18 seconds without the above properties. 
 Apart from show tables, the rest of the commands give no output, i.e. they 
 keep running until the process is killed.
 Could you tell me the reason for this?
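For context, the Hive Transactions page the reporter links lists more settings than the three quoted above; in particular a transaction manager and metastore compactor must be configured, and hangs or extreme slowness on simple commands are commonly seen when concurrency support is enabled without them. A hedged checklist (setting names are from the Hive 1.2 transactions documentation; verify against your version rather than treating this as a guaranteed fix):

```sql
-- Client-side settings for Hive ACID, per the Hive Transactions wiki page.
SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

-- Server-side (metastore) settings, set in hive-site.xml:
-- hive.compactor.initiator.on = true
-- hive.compactor.worker.threads = 1   (must be > 0 on at least one instance)
```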





[jira] [Commented] (HIVE-11614) CBO: Calcite Operator To Hive Operator (Calcite Return Path): ctas after order by has problem

2015-08-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712776#comment-14712776
 ] 

Hive QA commented on HIVE-11614:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12752400/HIVE-11614.02.patch

{color:red}ERROR:{color} -1 due to 122 failed/errored test(s), 9377 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_explain
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join33
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cross_product_check_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cross_product_check_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas_colname
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynamic_rdd_cache
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_fouter_join_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_self_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_innerjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join18
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join18_multi_distinct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join29
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join32
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join32_lessSize
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join33
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join34
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join40
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_alt_syntax
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_cond_pushdown_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_cond_pushdown_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_cond_pushdown_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_cond_pushdown_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_merging
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_nulls
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_limit_pushdown
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_limit_pushdown_negative
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lineage2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lineage3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_louter_join_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mergejoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mergejoins
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nonmr_fetch
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_optional_outer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_outer_join_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_gby_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_outer_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_outer_join2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_outer_join3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_outer_join4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_udf_case
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_print_header
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_regex_col
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_router_join_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_unionDistinct_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cross_product_check_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cross_product_check_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_2

[jira] [Updated] (HIVE-11650) Create LLAP Monitor Daemon and launch scripts

2015-08-26 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HIVE-11650:
--
Attachment: HIVE-11526.00.patch

 Create LLAP Monitor Daemon and launch scripts
 -

 Key: HIVE-11650
 URL: https://issues.apache.org/jira/browse/HIVE-11650
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Kai Sasaki
Assignee: Kai Sasaki

 This JIRA is for creating the LLAP Monitor Daemon class and related launch 
 scripts for the Slider package.





[jira] [Updated] (HIVE-11650) Create LLAP Monitor Daemon and launch scripts

2015-08-26 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HIVE-11650:
--
Attachment: (was: HIVE-11526.00.patch)

 Create LLAP Monitor Daemon and launch scripts
 -

 Key: HIVE-11650
 URL: https://issues.apache.org/jira/browse/HIVE-11650
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Kai Sasaki
Assignee: Kai Sasaki

 This JIRA is for creating the LLAP Monitor Daemon class and related launch 
 scripts for the Slider package.





[jira] [Updated] (HIVE-11640) Shell command doesn't work for new CLI[Beeline-cli branch]

2015-08-26 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-11640:

Attachment: HIVE-11640.2-beeline-cli.patch

Hi [~xuefuz], this patch has one impact on Beeline: commands beginning 
with ! (like '!sh ls') must now be terminated with ;, which was not required 
in the previous Beeline.
For example,
{code}
beeline> show tables;!ls;show tables;
{code}
This is not supported in the old Beeline, since it fails to parse a command 
with the ! prefix (it treats ls;show tables as a single command).
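The behavior change can be sketched with an illustrative tokenizer (not Beeline's actual parser): under the new behavior every command on a line, '!' shell escapes included, ends at ';', so several commands can share one line.

```java
// Illustrative sketch of the command splitting described above
// (not Beeline's real parsing code).
import java.util.ArrayList;
import java.util.List;

public class CliSplitSketch {
    // New behavior: split on ';' regardless of the '!' prefix.
    public static List<String> split(String line) {
        List<String> out = new ArrayList<>();
        for (String part : line.split(";")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) out.add(trimmed);
        }
        return out;
    }

    public static void main(String[] args) {
        // Yields three separate commands: "show tables", "!ls", "show tables".
        System.out.println(split("show tables;!ls;show tables;"));
    }
}
```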

 Shell command doesn't work for new CLI[Beeline-cli branch]
 --

 Key: HIVE-11640
 URL: https://issues.apache.org/jira/browse/HIVE-11640
 Project: Hive
  Issue Type: Sub-task
  Components: CLI
Reporter: Ferdinand Xu
Assignee: Ferdinand Xu
 Attachments: HIVE-11640.1-beeline-cli.patch, 
 HIVE-11640.2-beeline-cli.patch


 The shell command doesn't work for the new CLI; Error: Method not supported 
 (state=,code=0) is thrown during execution for the -f and -e options.





[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread gurmukh singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712744#comment-14712744
 ] 

gurmukh singh commented on HIVE-10990:
--

I have tested this on Hadoop 2.6.0 as well, with Hive 1.2.0 and HBase 1.0.1.1, 
and get the same error.



 Compatibility Hive-1.2 an hbase-1.0.1.1
 ---

 Key: HIVE-10990
 URL: https://issues.apache.org/jira/browse/HIVE-10990
 Project: Hive
  Issue Type: Bug
  Components: Beeline, HBase Handler, HiveServer2
Affects Versions: 1.2.0
Reporter: gurmukh singh
Assignee: Swarnim Kulkarni

 Hive external table works fine with Hbase.
 Hive-1.2 and hbase-1.0.1.1, hadoop-2.5.2
 Not able to create a table from hive in hbase.
 1: jdbc:hive2://edge1.dilithium.com:1/def TBLPROPERTIES 
 (hbase.table.name = xyz);
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 Error: Error while processing statement: FAILED: Execution Error, return code 
 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
  (state=08S01,code=1)
 [hdfs@edge1 cluster]$ hive
 2015-06-12 17:56:49,952 WARN  [main] conf.HiveConf: HiveConf of name 
 hive.metastore.local does not exist
 Logging initialized using configuration in 
 jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/auxlib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 hive> CREATE TABLE hbase_table_1(key int, value string)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
  TBLPROPERTIES ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 ===
 scan complete in 1535ms
 14 driver classes found
 Compliant  Version  Driver Class
 no         5.1      com.mysql.jdbc.Driver
 no         5.1      com.mysql.jdbc.NonRegisteringDriver
 no         5.1      com.mysql.jdbc.NonRegisteringReplicationDriver
 no         5.1      com.mysql.jdbc.ReplicationDriver
 yes        1.2      org.apache.calcite.avatica.remote.Driver
 yes        1.2      org.apache.calcite.jdbc.Driver
 yes        1.0      org.apache.commons.dbcp.PoolingDriver
 yes        10.11    org.apache.derby.jdbc.AutoloadedDriver
 yes        10.11    org.apache.derby.jdbc.Driver42
 yes        10.11    org.apache.derby.jdbc.EmbeddedDriver
 yes        10.11    org.apache.derby.jdbc.InternalDriver
 no         1.2      org.apache.hive.jdbc.HiveDriver
 yes        1.0      org.datanucleus.store.rdbms.datasource.dbcp.PoolingDriver
 no         5.1      org.gjt.mm.mysql.Driver



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11651) Index creation is failing from beeline when execution engine is set to Tez

2015-08-26 Thread Venkata Srinivasa Rao Kolla (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712891#comment-14712891
 ] 

Venkata Srinivasa Rao Kolla commented on HIVE-11651:
-

I have tried this on Hive 0.13 and Tez 0.4.

 Index creation is failing from beeline when execution engine is set to Tez
 --

 Key: HIVE-11651
 URL: https://issues.apache.org/jira/browse/HIVE-11651
 Project: Hive
  Issue Type: Bug
  Components: Beeline, Hive, Tez
Reporter: Venkata Srinivasa Rao Kolla 

 In Hive, with the execution engine set to Tez, when we tried to rebuild an 
 index on a non-empty table using Beeline, the index was not created. No error 
 was shown on the console either, but the corresponding Tez job logs contain 
 the error below:
 2015-08-24 18:08:43,999 WARN [AMShutdownThread] 
 org.apache.tez.dag.history.recovery.RecoveryService: Error when closing 
 summary stream 
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
  No lease on 
 /tmp/hive-alti-test-01/_tez_session_dir/573df26e-4a22-4bf5-ad5c-a0d72cdacec6/application_1439396922313_1732/recovery/1/application_1439396922313_1732.summary:
  File does not exist. Holder DFSClient_NONMAPREDUCE_-117450461_1 does not 
 have any open files. at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2956)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3027)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3007)
  at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:641)
  at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:484)
  at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 javax.security.auth.Subject.doAs(Subject.java:415) at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
 Below are the commands we used:
 CREATE TABLE table02(column1 string, column2 string, column3 int, column4 string)
   ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
 LOAD DATA LOCAL INPATH 'posts_us' OVERWRITE INTO TABLE table02;
 CREATE INDEX table02_index ON TABLE table02 (column3)
   AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' WITH DEFERRED REBUILD;
 ALTER INDEX table02_index ON table02 REBUILD;
 Has anyone seen this problem?
 Regards, Srinivas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11652) Avoid expensive call to removeAll in DefaultGraphWalker

2015-08-26 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11652:
---
Attachment: HIVE-11652.patch

 Avoid expensive call to removeAll in DefaultGraphWalker
 ---

 Key: HIVE-11652
 URL: https://issues.apache.org/jira/browse/HIVE-11652
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer, Physical Optimizer
Affects Versions: 1.3.0, 2.0.0
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-11652.patch


 When the plan is too large, the removeAll call in DefaultGraphWalker (line 
 140) takes very long, because it has to scan the list once for each of the 
 nodes being removed. We get rid of this call by rewriting the logic in 
 the walker.
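The quadratic cost here is the generic List.removeAll contract, not anything Hive-specific: for list arguments it is one scan of the receiver per lookup. A minimal sketch of the pattern and the usual hash-based fix (illustrative names only, not the actual DefaultGraphWalker code):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RemoveAllSketch {
    // Quadratic: for list arguments, removeAll does a linear scan of
    // 'visited' for each element of 'pending' -> O(pending * visited).
    static List<String> removeAllLinear(List<String> pending, List<String> visited) {
        List<String> copy = new ArrayList<>(pending);
        copy.removeAll(visited);
        return copy;
    }

    // Near-linear alternative: hash the visited nodes once, then filter.
    static List<String> removeAllHashed(List<String> pending, List<String> visited) {
        Set<String> seen = new HashSet<>(visited);   // O(visited)
        List<String> remaining = new ArrayList<>();
        for (String node : pending) {                // O(pending)
            if (!seen.contains(node)) {
                remaining.add(node);
            }
        }
        return remaining;
    }

    public static void main(String[] args) {
        List<String> pending = List.of("TS", "FIL", "SEL", "GBY");
        List<String> visited = List.of("TS", "FIL");
        System.out.println(removeAllHashed(pending, visited)); // prints [SEL, GBY]
    }
}
```

For a plan with tens of thousands of operator nodes, the difference between the two variants is the difference between milliseconds and minutes.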



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10924) add support for MERGE statement

2015-08-26 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712986#comment-14712986
 ] 

Damien Carol commented on HIVE-10924:
-

[~ekoifman] Any progress on this one?

 add support for MERGE statement
 ---

 Key: HIVE-10924
 URL: https://issues.apache.org/jira/browse/HIVE-10924
 Project: Hive
  Issue Type: New Feature
  Components: Query Planning, Query Processor
Affects Versions: 1.2.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman

 add support for 
 MERGE INTO tbl USING src ON … WHEN MATCHED THEN ... WHEN NOT MATCHED THEN ...
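For reference, the requested statement in ANSI SQL MERGE form might look like the following (the join condition and the id/value columns are hypothetical; Hive does not support this syntax yet):

```sql
MERGE INTO tbl AS t
USING src AS s
  ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET value = s.value
WHEN NOT MATCHED THEN
  INSERT VALUES (s.id, s.value);
```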



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11531) Add mysql-style LIMIT support to Hive, or improve ROW_NUMBER performance-wise

2015-08-26 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713004#comment-14713004
 ] 

Hui Zheng commented on HIVE-11531:
--

Hi [~sershe]
I'd like to work on this jira.

 Add mysql-style LIMIT support to Hive, or improve ROW_NUMBER performance-wise
 -

 Key: HIVE-11531
 URL: https://issues.apache.org/jira/browse/HIVE-11531
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Hui Zheng

 For any UI that involves pagination, it is useful to issue queries of the 
 form SELECT ... LIMIT X,Y, where X,Y are coordinates inside the result to be 
 paginated (which can be extremely large by itself). At present, ROW_NUMBER 
 can be used to achieve this effect, but optimizations for LIMIT, such as TopN 
 in ReduceSink, do not apply to ROW_NUMBER. We can add first-class support for 
 a skip offset to the existing LIMIT, or improve ROW_NUMBER performance.
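Concretely, these are the two forms being compared (table t and column id are hypothetical; the offset form of LIMIT is the MySQL-style extension being requested, not current HiveQL):

```sql
-- Requested MySQL-style form: skip 100 rows, return the next 20.
SELECT * FROM t ORDER BY id LIMIT 100, 20;

-- What works today: ROW_NUMBER windowing, which the TopN optimization
-- in ReduceSink cannot currently exploit.
SELECT *
FROM (SELECT t.*, ROW_NUMBER() OVER (ORDER BY id) AS rn FROM t) ranked
WHERE rn > 100 AND rn <= 120;
```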



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread gurmukh singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712933#comment-14712933
 ] 

gurmukh singh commented on HIVE-10990:
--

hive> CREATE TABLE hbase_table_1(key int, value string)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES
 ("hbase.columns.mapping" = ":key,cf1:val")
 TBLPROPERTIES ("hbase.table.name" = "xyz");
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. 
org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V




2015-08-26 19:58:58,095 ERROR [main]: exec.DDLTask (DDLTask.java:failed(520)) - 
java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
at 
org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:214)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:664)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:657)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
at com.sun.proxy.$Proxy9.createTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:714)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4135)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1650)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1409)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1192)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

2015-08-26 19:58:58,096 ERROR [main]: ql.Driver 
(SessionState.java:printError(957)) - FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. 
org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
2015-08-26 19:58:58,096 INFO  [main]: log.PerfLogger 
(PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=Driver.execute 
start=1440599336948 end=1440599338096 duration=1148 
from=org.apache.hadoop.hive.ql.Driver
2015-08-26 19:58:58,096 INFO  [main]: log.PerfLogger 
(PerfLogger.java:PerfLogBegin(121)) - PERFLOG method=releaseLocks 
from=org.apache.hadoop.hive.ql.Driver
2015-08-26 19:58:58,097 INFO  [main]: log.PerfLogger 
(PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=releaseLocks 
start=1440599338096 end=1440599338097 duration=1 
from=org.apache.hadoop.hive.ql.Driver
2015-08-26 19:58:58,098 INFO  [main]: log.PerfLogger 
(PerfLogger.java:PerfLogBegin(121)) - PERFLOG method=releaseLocks 
from=org.apache.hadoop.hive.ql.Driver
2015-08-26 19:58:58,098 INFO  [main]: log.PerfLogger 
(PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=releaseLocks 
start=1440599338098 end=1440599338098 duration=0 
from=org.apache.hadoop.hive.ql.Driver

 Compatibility Hive-1.2 an hbase-1.0.1.1
 ---

 Key: HIVE-10990
 URL: https://issues.apache.org/jira/browse/HIVE-10990
 Project: Hive
  Issue Type: Bug
  Components: Beeline, HBase Handler, HiveServer2
Affects Versions: 1.2.0
Reporter: gurmukh singh
Assignee: Swarnim 

[jira] [Commented] (HIVE-11640) Shell command doesn't work for new CLI[Beeline-cli branch]

2015-08-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712992#comment-14712992
 ] 

Hive QA commented on HIVE-11640:




{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12752416/HIVE-11640.2-beeline-cli.patch

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 9242 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_join0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_mult_tables_compact
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hive.beeline.TestBeeLineWithArgs.testNullEmpty
org.apache.hive.beeline.TestBeeLineWithArgs.testNullNonEmpty
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BEELINE-Build/25/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BEELINE-Build/25/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-BEELINE-Build-25/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12752416 - PreCommit-HIVE-BEELINE-Build

 Shell command doesn't work for new CLI[Beeline-cli branch]
 --

 Key: HIVE-11640
 URL: https://issues.apache.org/jira/browse/HIVE-11640
 Project: Hive
  Issue Type: Sub-task
  Components: CLI
Reporter: Ferdinand Xu
Assignee: Ferdinand Xu
 Attachments: HIVE-11640.1-beeline-cli.patch, 
 HIVE-11640.2-beeline-cli.patch


 The shell command doesn't work for the new CLI, and "Error: Method not 
 supported (state=,code=0)" is thrown during execution for the -f and -e options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread gurmukh singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713483#comment-14713483
 ] 

gurmukh singh commented on HIVE-10990:
--

I will have to test that again, as it was a long time back. I might be missing something.

Will test and update.

 Compatibility Hive-1.2 an hbase-1.0.1.1
 ---

 Key: HIVE-10990
 URL: https://issues.apache.org/jira/browse/HIVE-10990
 Project: Hive
  Issue Type: Bug
  Components: Beeline, HBase Handler, HiveServer2
Affects Versions: 1.2.0
Reporter: gurmukh singh
Assignee: Swarnim Kulkarni

 Hive external tables work fine with HBase (Hive-1.2, hbase-1.0.1.1, hadoop-2.5.2),
 but we are not able to create a table in HBase from Hive.
 1: jdbc:hive2://edge1.dilithium.com:1/def TBLPROPERTIES 
 ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 Error: Error while processing statement: FAILED: Execution Error, return code 
 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
  (state=08S01,code=1)
 [hdfs@edge1 cluster]$ hive
 2015-06-12 17:56:49,952 WARN  [main] conf.HiveConf: HiveConf of name 
 hive.metastore.local does not exist
 Logging initialized using configuration in 
 jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/auxlib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 hive> CREATE TABLE hbase_table_1(key int, value string)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
  TBLPROPERTIES ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 ===
 scan complete in 1535ms
 14 driver classes found
 Compliant  Version  Driver Class
 no         5.1      com.mysql.jdbc.Driver
 no         5.1      com.mysql.jdbc.NonRegisteringDriver
 no         5.1      com.mysql.jdbc.NonRegisteringReplicationDriver
 no         5.1      com.mysql.jdbc.ReplicationDriver
 yes        1.2      org.apache.calcite.avatica.remote.Driver
 yes        1.2      org.apache.calcite.jdbc.Driver
 yes        1.0      org.apache.commons.dbcp.PoolingDriver
 yes        10.11    org.apache.derby.jdbc.AutoloadedDriver
 yes        10.11    org.apache.derby.jdbc.Driver42
 yes        10.11    org.apache.derby.jdbc.EmbeddedDriver
 yes        10.11    org.apache.derby.jdbc.InternalDriver
 no         1.2      org.apache.hive.jdbc.HiveDriver
 yes        1.0      org.datanucleus.store.rdbms.datasource.dbcp.PoolingDriver
 no         5.1      org.gjt.mm.mysql.Driver



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread Kevin Ludwig (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713354#comment-14713354
 ] 

Kevin Ludwig commented on HIVE-10990:
-

 However it should be noted that the branch 1.x of hive is going to stay on  
 hbase 1.0 still to maintain passivity with older versions of hbase

It seems unfortunate to have ended up in a situation where the latest stable 
releases of HBase and Hive are incompatible. 

 Compatibility Hive-1.2 an hbase-1.0.1.1
 ---

 Key: HIVE-10990
 URL: https://issues.apache.org/jira/browse/HIVE-10990
 Project: Hive
  Issue Type: Bug
  Components: Beeline, HBase Handler, HiveServer2
Affects Versions: 1.2.0
Reporter: gurmukh singh
Assignee: Swarnim Kulkarni

 Hive external tables work fine with HBase (Hive-1.2, hbase-1.0.1.1, hadoop-2.5.2),
 but we are not able to create a table in HBase from Hive.
 1: jdbc:hive2://edge1.dilithium.com:1/def TBLPROPERTIES 
 ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 Error: Error while processing statement: FAILED: Execution Error, return code 
 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
  (state=08S01,code=1)
 [hdfs@edge1 cluster]$ hive
 2015-06-12 17:56:49,952 WARN  [main] conf.HiveConf: HiveConf of name 
 hive.metastore.local does not exist
 Logging initialized using configuration in 
 jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/auxlib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 hive> CREATE TABLE hbase_table_1(key int, value string)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
  TBLPROPERTIES ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 ===
 scan complete in 1535ms
 14 driver classes found
 Compliant  Version  Driver Class
 no         5.1      com.mysql.jdbc.Driver
 no         5.1      com.mysql.jdbc.NonRegisteringDriver
 no         5.1      com.mysql.jdbc.NonRegisteringReplicationDriver
 no         5.1      com.mysql.jdbc.ReplicationDriver
 yes        1.2      org.apache.calcite.avatica.remote.Driver
 yes        1.2      org.apache.calcite.jdbc.Driver
 yes        1.0      org.apache.commons.dbcp.PoolingDriver
 yes        10.11    org.apache.derby.jdbc.AutoloadedDriver
 yes        10.11    org.apache.derby.jdbc.Driver42
 yes        10.11    org.apache.derby.jdbc.EmbeddedDriver
 yes        10.11    org.apache.derby.jdbc.InternalDriver
 no         1.2      org.apache.hive.jdbc.HiveDriver
 yes        1.0      org.datanucleus.store.rdbms.datasource.dbcp.PoolingDriver
 no         5.1      org.gjt.mm.mysql.Driver



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-08-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713435#comment-14713435
 ] 

Hive QA commented on HIVE-11383:




{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12752431/HIVE-11383.12.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9376 tests executed
*Failed tests:*
{noformat}
TestCustomAuthentication - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_constprog_partitioner
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5074/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5074/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5074/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12752431 - PreCommit-HIVE-TRUNK-Build

 Upgrade Hive to Calcite 1.4
 ---

 Key: HIVE-11383
 URL: https://issues.apache.org/jira/browse/HIVE-11383
 Project: Hive
  Issue Type: Bug
Reporter: Julian Hyde
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
 HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.2.patch, 
 HIVE-11383.3.patch, HIVE-11383.3.patch, HIVE-11383.3.patch, 
 HIVE-11383.4.patch, HIVE-11383.5.patch, HIVE-11383.6.patch, 
 HIVE-11383.7.patch, HIVE-11383.8.patch, HIVE-11383.8.patch, HIVE-11383.9.patch


 CLEAR LIBRARY CACHE
 Upgrade Hive to Calcite 1.4.0-incubating.
 There is currently a snapshot release, which is close to what will be in 1.4. 
 I have checked that Hive compiles against the new snapshot, fixing one issue. 
 The patch is attached.
 Next step is to validate that Hive runs against the new Calcite, and post any 
 issues to the Calcite list or log Calcite Jira cases. [~jcamachorodriguez], 
 can you please do that?
 [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
 the new Calcite version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11642) LLAP: make sure tests pass #3

2015-08-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712658#comment-14712658
 ] 

Hive QA commented on HIVE-11642:




{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12752363/HIVE-11642.01.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5072/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5072/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5072/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/shims/scheduler/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ 
hive-shims-scheduler ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
hive-shims-scheduler ---
[INFO] Compiling 1 source file to 
/data/hive-ptest/working/apache-github-source-source/shims/scheduler/target/classes
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-shims-scheduler ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/shims/scheduler/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-shims-scheduler 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/scheduler/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/scheduler/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/scheduler/target/tmp/conf
 [copy] Copying 10 files to 
/data/hive-ptest/working/apache-github-source-source/shims/scheduler/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hive-shims-scheduler ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ 
hive-shims-scheduler ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-shims-scheduler ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-github-source-source/shims/scheduler/target/hive-shims-scheduler-2.0.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hive-shims-scheduler ---
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ 
hive-shims-scheduler ---
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/shims/scheduler/target/hive-shims-scheduler-2.0.0-SNAPSHOT.jar
 to 
/home/hiveptest/.m2/repository/org/apache/hive/shims/hive-shims-scheduler/2.0.0-SNAPSHOT/hive-shims-scheduler-2.0.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/shims/scheduler/pom.xml to 
/home/hiveptest/.m2/repository/org/apache/hive/shims/hive-shims-scheduler/2.0.0-SNAPSHOT/hive-shims-scheduler-2.0.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Hive Shims 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-shims ---
[INFO] Deleting 
/data/hive-ptest/working/apache-github-source-source/shims/aggregator/target
[INFO] Deleting 
/data/hive-ptest/working/apache-github-source-source/shims/aggregator (includes 
= [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-shims ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-shims ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hive-shims ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/shims/aggregator/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-shims ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-shims ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-shims ---
[INFO] Using 

[jira] [Updated] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-08-26 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11383:
---
Attachment: HIVE-11383.12.patch

 Upgrade Hive to Calcite 1.4
 ---

 Key: HIVE-11383
 URL: https://issues.apache.org/jira/browse/HIVE-11383
 Project: Hive
  Issue Type: Bug
Reporter: Julian Hyde
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
 HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.2.patch, 
 HIVE-11383.3.patch, HIVE-11383.3.patch, HIVE-11383.3.patch, 
 HIVE-11383.4.patch, HIVE-11383.5.patch, HIVE-11383.6.patch, 
 HIVE-11383.7.patch, HIVE-11383.8.patch, HIVE-11383.8.patch, HIVE-11383.9.patch


 CLEAR LIBRARY CACHE
 Upgrade Hive to Calcite 1.4.0-incubating.
 There is currently a snapshot release, which is close to what will be in 1.4. 
 I have checked that Hive compiles against the new snapshot, fixing one issue. 
 The patch is attached.
 Next step is to validate that Hive runs against the new Calcite, and post any 
 issues to the Calcite list or log Calcite Jira cases. [~jcamachorodriguez], 
 can you please do that?
 [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
 the new Calcite version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-08-26 Thread Swarnim Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713477#comment-14713477
 ] 

Swarnim Kulkarni commented on HIVE-10990:
-

[~gurmukhd] Did you mention before that it was working on hive 1.1 and hbase 
1.0?

 Compatibility Hive-1.2 an hbase-1.0.1.1
 ---

 Key: HIVE-10990
 URL: https://issues.apache.org/jira/browse/HIVE-10990
 Project: Hive
  Issue Type: Bug
  Components: Beeline, HBase Handler, HiveServer2
Affects Versions: 1.2.0
Reporter: gurmukh singh
Assignee: Swarnim Kulkarni

 Hive external tables work fine with HBase (Hive-1.2, hbase-1.0.1.1, hadoop-2.5.2),
 but we are not able to create a table in HBase from Hive.
 1: jdbc:hive2://edge1.dilithium.com:1/def TBLPROPERTIES 
 ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 Error: Error while processing statement: FAILED: Execution Error, return code 
 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
  (state=08S01,code=1)
 [hdfs@edge1 cluster]$ hive
 2015-06-12 17:56:49,952 WARN  [main] conf.HiveConf: HiveConf of name 
 hive.metastore.local does not exist
 Logging initialized using configuration in 
 jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/auxlib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/usr/local/cluster/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 hive> CREATE TABLE hbase_table_1(key int, value string)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
  TBLPROPERTIES ("hbase.table.name" = "xyz");
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
 ===
 scan complete in 1535ms
 14 driver classes found
 Compliant  Version  Driver Class
 no         5.1      com.mysql.jdbc.Driver
 no         5.1      com.mysql.jdbc.NonRegisteringDriver
 no         5.1      com.mysql.jdbc.NonRegisteringReplicationDriver
 no         5.1      com.mysql.jdbc.ReplicationDriver
 yes        1.2      org.apache.calcite.avatica.remote.Driver
 yes        1.2      org.apache.calcite.jdbc.Driver
 yes        1.0      org.apache.commons.dbcp.PoolingDriver
 yes        10.11    org.apache.derby.jdbc.AutoloadedDriver
 yes        10.11    org.apache.derby.jdbc.Driver42
 yes        10.11    org.apache.derby.jdbc.EmbeddedDriver
 yes        10.11    org.apache.derby.jdbc.InternalDriver
 no         1.2      org.apache.hive.jdbc.HiveDriver
 yes        1.0      org.datanucleus.store.rdbms.datasource.dbcp.PoolingDriver
 no         5.1      org.gjt.mm.mysql.Driver



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11652) Avoid expensive call to removeAll in DefaultGraphWalker

2015-08-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713645#comment-14713645
 ] 

Hive QA commented on HIVE-11652:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12752449/HIVE-11652.patch

{color:red}ERROR:{color} -1 due to 635 failed/errored test(s), 9376 tests 
executed
*Failed tests:*
{noformat}
TestCustomAuthentication - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization_project
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_char1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_2_orc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_varchar1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_view_as_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_create_temp_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_partitioned_native
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_schema_evolution_native
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_binary_table_bincolserde
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_binary_table_colserde
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cast_qualified_types
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_gby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_gby_empty
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_gby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_gby_empty
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_semijoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_subq_in
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_subq_not_in
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_udf_udaf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_windowing
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_semijoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_subq_in
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_subq_not_in
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_udf_udaf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_windowing
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_char_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_char_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_char_nested_types
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_combine3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cp_sel
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view_translate
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cross_product_check_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cteViews
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_database
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_date_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_6

[jira] [Updated] (HIVE-11659) Make Vectorization use the fast StringExpr everywhere

2015-08-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-11659:
---
Attachment: HIVE-11659.1.patch

 Make Vectorization use the fast StringExpr everywhere
 -

 Key: HIVE-11659
 URL: https://issues.apache.org/jira/browse/HIVE-11659
 Project: Hive
  Issue Type: Improvement
  Components: Vectorization
Affects Versions: 1.3.0, 2.0.0
Reporter: Gopal V
Assignee: Gopal V
 Attachments: HIVE-11659.1.patch


 StringExpr::equals() provides a faster path than the simple ::compare() 
 operator.
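As a rough illustration of the speedup claimed above (a sketch, not Hive's actual StringExpr code): an equality check can reject on a length mismatch in O(1) and never needs to compute an ordering, while a compare must walk bytes until it finds a difference.

```python
# Hypothetical sketch of why a dedicated equals() beats a full compare():
# equality short-circuits on length, ordering cannot.
def str_compare(a: bytes, b: bytes) -> int:
    # Ordering comparison: must walk bytes until a difference is found.
    for x, y in zip(a, b):
        if x != y:
            return x - y
    return len(a) - len(b)

def str_equals(a: bytes, b: bytes) -> bool:
    # Fast path: buffers of different lengths can never be equal.
    if len(a) != len(b):
        return False
    return a == b  # single memcmp-style scan

assert str_equals(b"hive", b"hive")
assert not str_equals(b"hive", b"hbase")   # rejected on length alone
assert str_compare(b"hive", b"hbase") > 0  # had to inspect bytes
```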



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11659) Make Vectorization use the fast StringExpr everywhere

2015-08-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-11659:
---
Labels: Performance  (was: )

 Make Vectorization use the fast StringExpr everywhere
 -

 Key: HIVE-11659
 URL: https://issues.apache.org/jira/browse/HIVE-11659
 Project: Hive
  Issue Type: Improvement
  Components: Vectorization
Affects Versions: 1.3.0, 2.0.0
Reporter: Gopal V
Assignee: Gopal V
  Labels: Performance
 Attachments: HIVE-11659.1.patch


 StringExpr::equals() provides a faster path than the simple ::compare() 
 operator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11604) HIVE return wrong results in some queries with PTF function

2015-08-26 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715810#comment-14715810
 ] 

Szehon Ho commented on HIVE-11604:
--

+1 from me unless there's a better alternative; the IdentityProjectRemover has 
caused a lot of issues and has had to be worked around in cases other than this one.

 HIVE return wrong results in some queries with PTF function
 ---

 Key: HIVE-11604
 URL: https://issues.apache.org/jira/browse/HIVE-11604
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer
Affects Versions: 1.2.0, 1.1.0, 2.0.0
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-11604.1.patch, HIVE-11604.2.patch


 Following query returns empty result which is not right:
 {noformat}
 select ddd.id, ddd.fkey, aaa.name
 from (
 select id, fkey, 
 row_number() over (partition by id, fkey) as rnum
 from tlb1 group by id, fkey
  ) ddd 
 inner join tlb2 aaa on aaa.fid = ddd.fkey;
 {noformat}
 After removing "row_number() over (partition by id, fkey) as rnum" from the 
 query, the right results are returned.
 Reproduce:
 {noformat}
 create table tlb1 (id int, fkey int, val string);
 create table tlb2 (fid int, name string);
 insert into table tlb1 values(100,1,'abc');
 insert into table tlb1 values(200,1,'efg');
 insert into table tlb2 values(1, 'key1');
 select ddd.id, ddd.fkey, aaa.name
 from (
 select id, fkey, 
 row_number() over (partition by id, fkey) as rnum
 from tlb1 group by id, fkey
  ) ddd 
 inner join tlb2 aaa on aaa.fid = ddd.fkey;
 
 INFO  : Ended Job = job_local1070163923_0017
 +---------+-----------+-----------+
 | ddd.id  | ddd.fkey  | aaa.name  |
 +---------+-----------+-----------+
 +---------+-----------+-----------+
 No rows selected (14.248 seconds)
 0: jdbc:hive2://localhost:1 select ddd.id, ddd.fkey, aaa.name
 from (
 select id, fkey 
 from tlb1 group by id, fkey
  ) ddd 
 inner join tlb2 aaa on aaa.fid = ddd.fkey;select ddd.id, ddd.fkey, aaa.name
 0: jdbc:hive2://localhost:1 from (
 0: jdbc:hive2://localhost:1 select id, fkey 
 0: jdbc:hive2://localhost:1 from tlb1 group by id, fkey
 0: jdbc:hive2://localhost:1  ) ddd 
 0: jdbc:hive2://localhost:1 
 inner join tlb2 aaa on aaa.fid = ddd.fkey;
 INFO  : Number of reduce tasks not specified. Estimated from input data size: 
 1
 ...
 INFO  : Ended Job = job_local672340505_0019
 +---------+-----------+-----------+
 | ddd.id  | ddd.fkey  | aaa.name  |
 +---------+-----------+-----------+
 | 100     | 1         | key1      |
 | 200     | 1         | key1      |
 +---------+-----------+-----------+
 2 rows selected (14.383 seconds)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11573) PointLookupOptimizer can be pessimistic at a low nDV

2015-08-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715870#comment-14715870
 ] 

Hive QA commented on HIVE-11573:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12752557/HIVE-11573.6.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 9379 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5081/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5081/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5081/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12752557 - PreCommit-HIVE-TRUNK-Build

 PointLookupOptimizer can be pessimistic at a low nDV
 

 Key: HIVE-11573
 URL: https://issues.apache.org/jira/browse/HIVE-11573
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.3.0, 2.0.0
Reporter: Gopal V
Assignee: Gopal V
  Labels: TODOC2.0
 Fix For: 1.3.0, 2.0.0

 Attachments: HIVE-11573.1.patch, HIVE-11573.2.patch, 
 HIVE-11573.3.patch, HIVE-11573.4.patch, HIVE-11573.5.patch, HIVE-11573.6.patch


 The PointLookupOptimizer can turn off some of the optimizations due to its 
 use of tuple IN() clauses.
 Limit the application of the optimizer for very low nDV cases and extract the 
 sub-clause as a pre-condition during runtime, to trigger the simple column 
 predicate index lookups.
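The nDV threshold described above can be sketched as follows (a hypothetical illustration, not the actual optimizer code; the threshold constant is an assumption, the real one is configurable):

```python
# Hypothetical sketch: only rewrite a chain of OR-equality predicates into
# a tuple IN() clause when the number of distinct values is high enough;
# otherwise keep simple column predicates so index lookups still fire.
MIN_NDV_FOR_IN = 2  # assumed threshold for illustration only

def maybe_rewrite_to_in(column: str, values: list) -> str:
    distinct = sorted(set(values))
    if len(distinct) < MIN_NDV_FOR_IN:
        # Low nDV: keep plain equality predicates.
        return " OR ".join(f"({column} = {v})" for v in distinct)
    return f"{column} IN ({', '.join(str(v) for v in distinct)})"

print(maybe_rewrite_to_in("id", [7]))        # (id = 7)
print(maybe_rewrite_to_in("id", [3, 1, 3]))  # id IN (1, 3)
```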



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11652) Avoid expensive call to removeAll in DefaultGraphWalker

2015-08-26 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11652:
---
Attachment: (was: HIVE-11652.01.patch)

 Avoid expensive call to removeAll in DefaultGraphWalker
 ---

 Key: HIVE-11652
 URL: https://issues.apache.org/jira/browse/HIVE-11652
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer, Physical Optimizer
Affects Versions: 1.3.0, 2.0.0
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-11652.patch


 When the plan is too large, the removeAll call in DefaultGraphWalker (line 
 140) will take very long as it will have to go through the list looking for 
 each of the nodes. We try to get rid of this call by rewriting the logic in 
 the walker.
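The cost described above can be sketched (a hypothetical illustration, not Hive code): removing N visited nodes from a list of M pending nodes scans the list per element, while a set-based membership test makes the same filtering linear.

```python
# Hypothetical sketch: why list-based removeAll is quadratic and how a
# set-based membership test avoids it.
def remove_all_list(to_walk: list, dispatched: list) -> list:
    # Mimics java.util.ArrayList.removeAll with a list argument:
    # each `in` test scans `dispatched`, so total cost is O(n * m).
    return [n for n in to_walk if n not in dispatched]

def remove_all_set(to_walk: list, dispatched: list) -> list:
    # With a set, each membership test is O(1), so total cost is O(n).
    done = set(dispatched)
    return [n for n in to_walk if n not in done]

nodes = list(range(10))
visited = [0, 2, 4]
assert remove_all_list(nodes, visited) == remove_all_set(nodes, visited)
```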



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11652) Avoid expensive call to removeAll in DefaultGraphWalker

2015-08-26 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11652:
---
Attachment: HIVE-11652.01.patch

 Avoid expensive call to removeAll in DefaultGraphWalker
 ---

 Key: HIVE-11652
 URL: https://issues.apache.org/jira/browse/HIVE-11652
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer, Physical Optimizer
Affects Versions: 1.3.0, 2.0.0
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-11652.01.patch, HIVE-11652.patch


 When the plan is too large, the removeAll call in DefaultGraphWalker (line 
 140) will take very long as it will have to go through the list looking for 
 each of the nodes. We try to get rid of this call by rewriting the logic in 
 the walker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10289) Support filter on non-first partition key and non-string partition key

2015-08-26 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715698#comment-14715698
 ] 

Lefty Leverenz commented on HIVE-10289:
---

Thanks Daniel.

 Support filter on non-first partition key and non-string partition key
 --

 Key: HIVE-10289
 URL: https://issues.apache.org/jira/browse/HIVE-10289
 Project: Hive
  Issue Type: Sub-task
  Components: HBase Metastore, Metastore
Affects Versions: hbase-metastore-branch
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: hbase-metastore-branch

 Attachments: HIVE-10289.1.patch, HIVE-10289.2.patch, 
 HIVE-10289.3.patch


 Currently, partition filtering only handles the first partition key and the 
 type for this partition key must be string. In order to break this 
 limitation, several improvements are required:
 1. Change serialization format for partition key. Currently partition keys 
 are serialized into delimited string, which sorted on string order not with 
 regard to the actual type of the partition key. We use BinarySortableSerDe 
 for this purpose.
 2. For filter condition not on the initial partition keys, push it into HBase 
 RowFilter. RowFilter will deserialize the partition key and evaluate the 
 filter condition.
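Point 1 above can be illustrated with a small sketch (hypothetical, not the actual BinarySortableSerDe encoding): a delimited-string serialization sorts numeric partition keys lexicographically, whereas a big-endian fixed-width encoding keeps byte order equal to numeric order, which is the property needed for HBase row-key range scans.

```python
# Hypothetical illustration of string-order vs binary-sortable order for
# an integer partition key.
keys = [2, 10, 1]

# Delimited-string serialization: lexicographic order, wrong for ints.
string_order = sorted(str(k) for k in keys)

# Big-endian fixed-width encoding: byte order matches numeric order.
binary_order = sorted(k.to_bytes(4, "big") for k in keys)

print(string_order)                                      # ['1', '10', '2']
print([int.from_bytes(b, "big") for b in binary_order])  # [1, 2, 10]
```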



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11659) Make Vectorization use the fast StringExpr everywhere

2015-08-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-11659:
---
Summary: Make Vectorization use the fast StringExpr everywhere  (was: Make 
VectorizedHashKeyWrapper use the fast StringExpr )

 Make Vectorization use the fast StringExpr everywhere
 -

 Key: HIVE-11659
 URL: https://issues.apache.org/jira/browse/HIVE-11659
 Project: Hive
  Issue Type: Improvement
  Components: Vectorization
Affects Versions: 1.3.0, 2.0.0
Reporter: Gopal V
Assignee: Gopal V

 VectorHashKeyWrapper::equals() provides a faster path than the simple compare 
 operator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11659) Make Vectorization use the fast StringExpr everywhere

2015-08-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-11659:
---
Description: StringExpr::equals() provides a faster path than the simple 
::compare() operator.  (was: VectorHashKeyWrapper::equals() provides a faster 
path than the simple compare operator.)

 Make Vectorization use the fast StringExpr everywhere
 -

 Key: HIVE-11659
 URL: https://issues.apache.org/jira/browse/HIVE-11659
 Project: Hive
  Issue Type: Improvement
  Components: Vectorization
Affects Versions: 1.3.0, 2.0.0
Reporter: Gopal V
Assignee: Gopal V

 StringExpr::equals() provides a faster path than the simple ::compare() 
 operator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11645) Add in-place updates for dynamic partitions loading

2015-08-26 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715746#comment-14715746
 ] 

Prasanth Jayachandran commented on HIVE-11645:
--

[~ashutoshc] The repositioning of the cursor seems to be missing. Please look 
at TezJobMonitor to see how repositioning of the cursor works. The general idea 
is to remember the number of lines that get added to the console, reposition 
the cursor to the start, and print the new set of log lines. For example, if 
you are adding the following line

Loaded: 0/5

there is only one log line added. Before updating this log line with 
"Loaded: 1/5", reposition the cursor to the start and redraw the log line. Make 
sure no new log lines get printed in between. 
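The scheme described in the comment above can be sketched with ANSI escape codes (a minimal sketch, not TezJobMonitor's actual implementation, which uses jline): track how many lines were printed, move the cursor up that many lines, and redraw.

```python
import sys
import time

def redraw(lines_printed: int, new_lines: list) -> int:
    """Reposition the cursor and redraw a block of status lines in place."""
    if lines_printed:
        sys.stdout.write(f"\033[{lines_printed}A")  # ANSI: cursor up N lines
    for line in new_lines:
        sys.stdout.write("\033[2K" + line + "\n")   # clear line, then rewrite
    sys.stdout.flush()
    return len(new_lines)  # remember for the next redraw

# Usage: "Loaded: 0/5" is updated in place instead of scrolling.
printed = 0
for done in range(6):
    printed = redraw(printed, [f"Loaded: {done}/5"])
    time.sleep(0.1)
```

Note that nothing else may write to the console between redraws, exactly as the comment warns; an interleaved log line would shift the block and corrupt the display.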

 Add in-place updates for dynamic partitions loading
 ---

 Key: HIVE-11645
 URL: https://issues.apache.org/jira/browse/HIVE-11645
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-11645.2.patch, HIVE-11645.patch


 Currently, updates go to log file and on console there is no visible progress.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10292) Add support for HS2 to use custom authentication class with kerberos environment

2015-08-26 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715837#comment-14715837
 ] 

Thejas M Nair commented on HIVE-10292:
--

[~heesoo] [~raviprak] What the description and title state seems similar to 
HIVE-7764. But from a quick look at the patch, this seems to be about 
supporting SSL without kerberos. Is that right?


 Add support for HS2 to use custom authentication class with kerberos 
 environment
 

 Key: HIVE-10292
 URL: https://issues.apache.org/jira/browse/HIVE-10292
 Project: Hive
  Issue Type: New Feature
  Components: HiveServer2
Affects Versions: 1.2.0
Reporter: Heesoo Kim
Assignee: HeeSoo Kim
 Attachments: HIVE-10292.patch


 In the kerberos environment, Hiveserver2 only supports GSSAPI and DIGEST-MD5 
 authentication mechanism. We would like to add the ability to use custom 
 authentication class in conjunction with Kerberos. 
 This is necessary to connect to HiveServer2 from a machine which cannot 
 authenticate with the KDC used inside the cluster environment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11652) Avoid expensive call to removeAll in DefaultGraphWalker

2015-08-26 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715763#comment-14715763
 ] 

Jesus Camacho Rodriguez commented on HIVE-11652:


[~hsubramaniyan], thanks for your comments.

I checked the patch in HIVE-11341, but my intention in this patch is a bit 
different: I'd like to understand fully the logic and dependencies, and get rid 
of the remove call completely.

The patch that I uploaded was WIP and indeed not ready: I just wanted to 
trigger a QA run to check tests where we would possibly hit different issues. I 
just uploaded a new one that I think that moves in the right direction. As part 
of this patch, I would like to get rid of the removeAll call in ForwardWalker, 
ColumnPruner, and ConstantPropagate too, so it is still WIP.

 Avoid expensive call to removeAll in DefaultGraphWalker
 ---

 Key: HIVE-11652
 URL: https://issues.apache.org/jira/browse/HIVE-11652
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer, Physical Optimizer
Affects Versions: 1.3.0, 2.0.0
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-11652.01.patch, HIVE-11652.patch


 When the plan is too large, the removeAll call in DefaultGraphWalker (line 
 140) will take very long as it will have to go through the list looking for 
 each of the nodes. We try to get rid of this call by rewriting the logic in 
 the walker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11604) HIVE return wrong results in some queries with PTF function

2015-08-26 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14715795#comment-14715795
 ] 

Yongzhi Chen commented on HIVE-11604:
-

[~csun], [~xuefuz], [~szehon], Could you review the patch? Thanks

 HIVE return wrong results in some queries with PTF function
 ---

 Key: HIVE-11604
 URL: https://issues.apache.org/jira/browse/HIVE-11604
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer
Affects Versions: 1.2.0, 1.1.0, 2.0.0
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-11604.1.patch, HIVE-11604.2.patch


 Following query returns empty result which is not right:
 {noformat}
 select ddd.id, ddd.fkey, aaa.name
 from (
 select id, fkey, 
 row_number() over (partition by id, fkey) as rnum
 from tlb1 group by id, fkey
  ) ddd 
 inner join tlb2 aaa on aaa.fid = ddd.fkey;
 {noformat}
 After removing "row_number() over (partition by id, fkey) as rnum" from the 
 query, the right results are returned.
 Reproduce:
 {noformat}
 create table tlb1 (id int, fkey int, val string);
 create table tlb2 (fid int, name string);
 insert into table tlb1 values(100,1,'abc');
 insert into table tlb1 values(200,1,'efg');
 insert into table tlb2 values(1, 'key1');
 select ddd.id, ddd.fkey, aaa.name
 from (
 select id, fkey, 
 row_number() over (partition by id, fkey) as rnum
 from tlb1 group by id, fkey
  ) ddd 
 inner join tlb2 aaa on aaa.fid = ddd.fkey;
 
 INFO  : Ended Job = job_local1070163923_0017
 +---------+-----------+-----------+
 | ddd.id  | ddd.fkey  | aaa.name  |
 +---------+-----------+-----------+
 +---------+-----------+-----------+
 No rows selected (14.248 seconds)
 0: jdbc:hive2://localhost:1 select ddd.id, ddd.fkey, aaa.name
 from (
 select id, fkey 
 from tlb1 group by id, fkey
  ) ddd 
 inner join tlb2 aaa on aaa.fid = ddd.fkey;select ddd.id, ddd.fkey, aaa.name
 0: jdbc:hive2://localhost:1 from (
 0: jdbc:hive2://localhost:1 select id, fkey 
 0: jdbc:hive2://localhost:1 from tlb1 group by id, fkey
 0: jdbc:hive2://localhost:1  ) ddd 
 0: jdbc:hive2://localhost:1 
 inner join tlb2 aaa on aaa.fid = ddd.fkey;
 INFO  : Number of reduce tasks not specified. Estimated from input data size: 
 1
 ...
 INFO  : Ended Job = job_local672340505_0019
 +---------+-----------+-----------+
 | ddd.id  | ddd.fkey  | aaa.name  |
 +---------+-----------+-----------+
 | 100     | 1         | key1      |
 | 200     | 1         | key1      |
 +---------+-----------+-----------+
 2 rows selected (14.383 seconds)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10175) DynamicPartitionPruning lacks a fast-path exit for large IN() queries

2015-08-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-10175:
---
Summary: DynamicPartitionPruning lacks a fast-path exit for large IN() 
queries  (was: PartitionPruning lacks a fast-path exit for large IN() queries)

 DynamicPartitionPruning lacks a fast-path exit for large IN() queries
 -

 Key: HIVE-10175
 URL: https://issues.apache.org/jira/browse/HIVE-10175
 Project: Hive
  Issue Type: Bug
  Components: Physical Optimizer, Tez
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Minor
 Attachments: HIVE-10175.profile.html


 TezCompiler::runDynamicPartitionPruning() & ppr.PartitionPruner() call the 
 graph walker even if all tables provided to the optimizer are unpartitioned 
 (or temporary) tables.
 This makes it extremely slow as it will walk & inspect a large/complex 
 FilterOperator later in the pipeline.
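The fast-path exit requested here can be sketched as a guard in front of the walker (a hypothetical illustration, not the actual patch; the dict shape and function names are assumptions):

```python
# Hypothetical sketch: skip the expensive operator-graph walk entirely
# when no table in the query is partitioned.
def run_partition_pruning(tables, walk_operator_graph):
    if not any(t.get("partitioned") for t in tables):
        return None  # fast path: nothing can be pruned
    return walk_operator_graph(tables)

calls = []
unpartitioned = [{"name": "tmp_a", "partitioned": False},
                 {"name": "tmp_b", "partitioned": False}]
run_partition_pruning(unpartitioned, lambda ts: calls.append(ts))
assert calls == []  # the walker was never invoked
```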



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11640) Shell command doesn't work for new CLI[Beeline-cli branch]

2015-08-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14716117#comment-14716117
 ] 

Hive QA commented on HIVE-11640:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12752637/HIVE-11640.3-beeline-cli.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9242 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_join0
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Json
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BEELINE-Build/26/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BEELINE-Build/26/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-BEELINE-Build-26/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12752637 - PreCommit-HIVE-BEELINE-Build

 Shell command doesn't work for new CLI[Beeline-cli branch]
 --

 Key: HIVE-11640
 URL: https://issues.apache.org/jira/browse/HIVE-11640
 Project: Hive
  Issue Type: Sub-task
  Components: CLI
Reporter: Ferdinand Xu
Assignee: Ferdinand Xu
 Attachments: HIVE-11640.1-beeline-cli.patch, 
 HIVE-11640.2-beeline-cli.patch, HIVE-11640.3-beeline-cli.patch


 The shell command doesn't work for the new CLI, and "Error: Method not 
 supported (state=,code=0)" was thrown during execution for options f and e.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10175) DynamicPartitionPruning lacks a fast-path exit for large IN() queries

2015-08-26 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14716123#comment-14716123
 ] 

Gopal V commented on HIVE-10175:


[~hagleitn]: Review please?

 DynamicPartitionPruning lacks a fast-path exit for large IN() queries
 -

 Key: HIVE-10175
 URL: https://issues.apache.org/jira/browse/HIVE-10175
 Project: Hive
  Issue Type: Bug
  Components: Physical Optimizer, Tez
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Minor
 Attachments: HIVE-10175.1.patch, HIVE-10175.profile.html


 TezCompiler::runDynamicPartitionPruning() & ppr.PartitionPruner() call the 
 graph walker even if all tables provided to the optimizer are unpartitioned 
 (or temporary) tables.
 This makes it extremely slow as it will walk & inspect a large/complex 
 FilterOperator later in the pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

