[jira] [Commented] (HIVE-13410) PerfLog metrics scopes not closed if there are exceptions on HS2
[ https://issues.apache.org/jira/browse/HIVE-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238646#comment-15238646 ] Szehon Ho commented on HIVE-13410: -- Finally the flaky tests are fixed. [~aihuaxu] can you please review one more time? > PerfLog metrics scopes not closed if there are exceptions on HS2 > > > Key: HIVE-13410 > URL: https://issues.apache.org/jira/browse/HIVE-13410 > Project: Hive > Issue Type: Bug > Components: Diagnosability >Affects Versions: 2.0.0 >Reporter: Szehon Ho >Assignee: Szehon Ho > Attachments: HIVE-13410.2.patch, HIVE-13410.3.patch, > HIVE-13410.4.patch, HIVE-13410.4.patch, HIVE-13410.5.patch, HIVE-13410.patch > > > If there are errors, the HS2 PerfLog api scopes are not closed. Then there > are sometimes messages like 'java.io.IOException: Scope named api_parse is > not closed, cannot be opened.' > I had simply forgotten to close the dangling scopes when there is an > exception. Doing so now. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
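The fix described above — closing whatever scopes are still dangling on the error path — can be sketched as follows. This is an illustrative model only: the class and method names (PerfScopes, openScope, closeAllDangling) are invented for the example and are not Hive's actual PerfLogger API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch, not Hive's actual PerfLogger API: track the
// scopes a query opens so that the exception path can close whatever
// is still dangling instead of leaving it open for the next query.
public class PerfScopes {
    private static final Deque<String> open = new ArrayDeque<>();

    static void openScope(String name) { open.push(name); }

    static void closeScope(String name) { open.remove(name); }

    // Called from the catch/finally path: close everything still open.
    static void closeAllDangling() {
        while (!open.isEmpty()) {
            open.pop();
        }
    }

    // Returns the number of scopes left open after the query finishes.
    static int runQuery(boolean fail) {
        try {
            openScope("api_parse");
            openScope("api_compile");
            if (fail) {
                throw new RuntimeException("query failed");
            }
            closeScope("api_compile");
            closeScope("api_parse");
        } catch (RuntimeException e) {
            closeAllDangling(); // the fix: no scope survives an exception
        }
        return open.size();
    }

    public static void main(String[] args) {
        System.out.println(runQuery(true)); // prints 0
    }
}
```

Without the catch-path cleanup, the next query opening api_parse would hit exactly the "Scope named api_parse is not closed, cannot be opened" error quoted above.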
[jira] [Updated] (HIVE-13458) Heartbeater doesn't fail query when heartbeat fails
[ https://issues.apache.org/jira/browse/HIVE-13458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13458: - Attachment: HIVE-13458.2.patch patch 2 for test > Heartbeater doesn't fail query when heartbeat fails > --- > > Key: HIVE-13458 > URL: https://issues.apache.org/jira/browse/HIVE-13458 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.1.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13458.1.patch, HIVE-13458.2.patch > > > When a heartbeat fails to locate a lock, it should fail the current query. > That doesn't happen, which is a bug. > Another thing is, we need to make sure stopHeartbeat really stops the > heartbeat, i.e. no additional heartbeat will be sent, since that will break > the assumption and cause the query to fail. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
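A hedged sketch of what "fail the query when heartbeat fails" could look like — the Heartbeater, sendHeartbeat, and checkFailure names here are invented for illustration and are not Hive's actual transaction-manager API. The heartbeat records its failure, and the query thread must check for it and rethrow rather than carry on.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch, not Hive's actual Heartbeater: a heartbeat
// failure is recorded so the query thread can surface it, and
// stopHeartbeat() guarantees no further heartbeats are sent.
public class Heartbeater {
    private final AtomicReference<Exception> failure = new AtomicReference<>();
    private volatile boolean stopped = false;

    void beat() {
        if (stopped) {
            return; // stopHeartbeat() really stops the heartbeat
        }
        try {
            sendHeartbeat();
        } catch (Exception e) {
            failure.compareAndSet(null, e); // remember the first failure
        }
    }

    // Simulates a heartbeat that cannot locate its lock.
    void sendHeartbeat() throws Exception {
        throw new Exception("lock not found");
    }

    void stopHeartbeat() {
        stopped = true;
    }

    // The query must call this and propagate the error; the bug was
    // that nothing checked, so the query kept running.
    void checkFailure() throws Exception {
        Exception e = failure.get();
        if (e != null) {
            throw e;
        }
    }

    static boolean queryFails() {
        Heartbeater hb = new Heartbeater();
        hb.beat(); // heartbeat loses the lock
        try {
            hb.checkFailure();
            return false; // buggy behavior: query succeeds anyway
        } catch (Exception e) {
            return true; // fixed behavior: query fails
        }
    }

    public static void main(String[] args) {
        System.out.println(queryFails()); // prints true
    }
}
```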
[jira] [Updated] (HIVE-12159) Create vectorized readers for the complex types
[ https://issues.apache.org/jira/browse/HIVE-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley updated HIVE-12159: - Attachment: HIVE-12159.patch Matthew figured out that I needed to clear the selectedInUse flag in the VectorRowBatch, so I've reverted the change to the tests. > Create vectorized readers for the complex types > --- > > Key: HIVE-12159 > URL: https://issues.apache.org/jira/browse/HIVE-12159 > Project: Hive > Issue Type: Sub-task >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HIVE-12159.patch, HIVE-12159.patch, HIVE-12159.patch, > HIVE-12159.patch > > > We need vectorized readers for the complex types. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
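The selectedInUse flag mentioned above tells a batch consumer whether selected[] names the live rows. A simplified model — not the real VectorizedRowBatch, just a sketch with invented fields — shows why the flag must be cleared when a batch is reused:

```java
// Simplified model of a vectorized row batch (not Hive's actual
// VectorizedRowBatch): when selectedInUse is true, only the row
// indices listed in selected[] are live.
public class RowBatch {
    static final int MAX = 1024;
    int size;
    int[] selected = new int[MAX];
    boolean selectedInUse;
    long[] values = new long[MAX];

    long sum() {
        long total = 0;
        for (int i = 0; i < size; i++) {
            int row = selectedInUse ? selected[i] : i;
            total += values[row];
        }
        return total;
    }

    // A reader refilling the batch must clear the flag; otherwise the
    // next consumer still sees the previous filter's row subset.
    void reset() {
        size = 0;
        selectedInUse = false;
    }

    static long demo() {
        RowBatch b = new RowBatch();
        // First use: a filter selected only row 2 (value 30).
        b.values[0] = 10; b.values[1] = 20; b.values[2] = 30;
        b.size = 1; b.selected[0] = 2; b.selectedInUse = true;
        long filtered = b.sum(); // 30
        // Reuse for three unfiltered rows; without reset() the stale
        // selected[] would make sum() read the wrong rows.
        b.reset();
        b.size = 3;
        return filtered + b.sum(); // 30 + (10 + 20 + 30) = 90
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 90
    }
}
```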
[jira] [Commented] (HIVE-13410) PerfLog metrics scopes not closed if there are exceptions on HS2
[ https://issues.apache.org/jira/browse/HIVE-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238617#comment-15238617 ] Hive QA commented on HIVE-13410: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12798103/HIVE-13410.5.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9976 tests executed *Failed tests:* {noformat} TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.metastore.hbase.TestHBaseImport.org.apache.hadoop.hive.metastore.hbase.TestHBaseImport {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7571/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7571/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7571/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12798103 - PreCommit-HIVE-TRUNK-Build
[jira] [Commented] (HIVE-13498) cleardanglingscratchdir does not work if scratchdir is not on defaultFs
[ https://issues.apache.org/jira/browse/HIVE-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238611#comment-15238611 ] Thejas M Nair commented on HIVE-13498: -- +1 > cleardanglingscratchdir does not work if scratchdir is not on defaultFs > --- > > Key: HIVE-13498 > URL: https://issues.apache.org/jira/browse/HIVE-13498 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai > Fix For: 1.3.0, 2.1.0 > > Attachments: HIVE-13498.1.patch > > > The cleardanglingscratchdir utility needs a fix to make it work if scratchdir > is not on defaultFs, such as on Azure. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13498) cleardanglingscratchdir does not work if scratchdir is not on defaultFs
[ https://issues.apache.org/jira/browse/HIVE-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-13498: -- Attachment: HIVE-13498.1.patch
[jira] [Updated] (HIVE-13498) cleardanglingscratchdir does not work if scratchdir is not on defaultFs
[ https://issues.apache.org/jira/browse/HIVE-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-13498: -- Attachment: (was: HIVE-13498.1.patch)
[jira] [Updated] (HIVE-13498) cleardanglingscratchdir does not work if scratchdir is not on defaultFs
[ https://issues.apache.org/jira/browse/HIVE-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-13498: -- Attachment: HIVE-13498.1.patch
[jira] [Updated] (HIVE-13498) cleardanglingscratchdir does not work if scratchdir is not on defaultFs
[ https://issues.apache.org/jira/browse/HIVE-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-13498: -- Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-11484) Fix ObjectInspector for Char and VarChar
[ https://issues.apache.org/jira/browse/HIVE-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu updated HIVE-11484: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Committed. Thanks [~deepak.barr] > Fix ObjectInspector for Char and VarChar > > > Key: HIVE-11484 > URL: https://issues.apache.org/jira/browse/HIVE-11484 > Project: Hive > Issue Type: Bug > Components: Serializers/Deserializers >Reporter: Amareshwari Sriramadasu >Assignee: Deepak Barr > Fix For: 2.1.0 > > Attachments: HIVE-11484.01.patch, HIVE-11484.02.patch > > > The creation of HiveChar and Varchar is not happening through ObjectInspector. > Here is the fix we pushed internally: > https://github.com/InMobi/hive/commit/fe95c7850e7130448209141155f28b25d3504216 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13293) Query occurs performance degradation after enabling parallel order by for Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238556#comment-15238556 ] Rui Li commented on HIVE-13293: --- Thanks [~xuefuz] for the review. I mean it can work with queries that have only one ShuffleMapStage. It will definitely work with queries that have multiple ShuffleMapStage too. But as I said in previous comment, what we care about here is just the last ShuffleMapStage because that's what gets re-computed in parallel order by. On the other hand, splitting task that has only one ShuffleMapStage seems weird and may be bad for performance. That's why I chose to cache the RDD. > Query occurs performance degradation after enabling parallel order by for > Hive on Spark > --- > > Key: HIVE-13293 > URL: https://issues.apache.org/jira/browse/HIVE-13293 > Project: Hive > Issue Type: Bug > Components: Spark >Affects Versions: 2.0.0 >Reporter: Lifeng Wang >Assignee: Rui Li > Attachments: HIVE-13293.1.patch > > > I use TPCx-BB to do some performance test on Hive on Spark engine. And found > query 10 has performance degradation when enabling parallel order by. > It seems that sampling cost much time before running the real query. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12064) prevent transactional=false
[ https://issues.apache.org/jira/browse/HIVE-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238456#comment-15238456 ] Qiuzhuang Lian commented on HIVE-12064: --- Yes, code seems to check this condition. It is incompatible with 1.X which seems OK. At least, this should be put into documentation. I did create the ORC table without transaction enabled from txt table and it works. > prevent transactional=false > --- > > Key: HIVE-12064 > URL: https://issues.apache.org/jira/browse/HIVE-12064 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Wei Zheng >Priority: Critical > Fix For: 1.3.0, 2.1.0, 2.0.1 > > Attachments: HIVE-12064.2.patch, HIVE-12064.3.patch, > HIVE-12064.4.patch, HIVE-12064.5.patch, HIVE-12064.6.patch, > HIVE-12064.7.patch, HIVE-12064.branch-1.patch, HIVE-12064.branch-2.0.patch, > HIVE-12064.patch > > > currently a tblproperty transactional=true must be set to make a table behave > in ACID compliant way. > This is misleading in that it seems like changing it to transactional=false > makes the table non-acid but on disk layout of acid table is different than > plain tables. So changing this property may cause wrong data to be returned. > Should prevent transactional=false. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13424) Refactoring the code to pass a QueryState object rather than HiveConf object
[ https://issues.apache.org/jira/browse/HIVE-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238394#comment-15238394 ] Hive QA commented on HIVE-13424: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12798280/HIVE-13424.4.patch {color:green}SUCCESS:{color} +1 due to 14 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9974 tests executed *Failed tests:* {noformat} TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hive.service.cli.operation.TestOperationLoggingAPIWithMr.testFetchResultsOfLogAsync {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7569/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7569/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7569/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12798280 - PreCommit-HIVE-TRUNK-Build > Refactoring the code to pass a QueryState object rather than HiveConf object > > > Key: HIVE-13424 > URL: https://issues.apache.org/jira/browse/HIVE-13424 > Project: Hive > Issue Type: Sub-task > Components: Query Processor >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13424.1.patch, HIVE-13424.2.patch, > HIVE-13424.3.patch, HIVE-13424.4.patch > > > Step 1: to refactor the code by creating the QueryState class and moving > query related info from SessionState. Then during compilation, execution > stages, pass a single QueryState object for each query. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13418) HiveServer2 HTTP mode should support X-Forwarded-Host header for authorization/audits
[ https://issues.apache.org/jira/browse/HIVE-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238377#comment-15238377 ] Vaibhav Gumashta commented on HIVE-13418: - +1. Looks like the test failures are unrelated. > HiveServer2 HTTP mode should support X-Forwarded-Host header for > authorization/audits > - > > Key: HIVE-13418 > URL: https://issues.apache.org/jira/browse/HIVE-13418 > Project: Hive > Issue Type: New Feature > Components: Authorization, HiveServer2 >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13418.1.patch > > > Apache Knox acts as a proxy for requests coming from the end users. In these > cases, the IP address that HiveServer2 passes to the authorization/audit > plugins via the HiveAuthzContext object only the IP address of the proxy, and > not the end user. > For auditing purposes, the IP address of the end user and any proxies in > between are useful. > HiveServer2 should pass the information from 'X-Forwarded-Host' header to > the HiveAuthorizer plugins. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
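As a rough illustration of the header's shape — class and method names here are invented, not HiveServer2 code: each proxy hop appends an address to the forwarding header, so the original end user comes first and the proxy chain follows, which is exactly what an audit record needs.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only, not HiveServer2's implementation: parse a
// comma-separated forwarding header into the end-user address plus
// the chain of intermediate proxies.
public class ForwardedAddrs {
    static List<String> parse(String header) {
        List<String> addrs = new ArrayList<>();
        for (String part : header.split(",")) {
            addrs.add(part.trim());
        }
        return addrs;
    }

    public static void main(String[] args) {
        // Knox-style hop list: end user first, then each proxy.
        List<String> addrs = parse("203.0.113.7, 10.0.0.5");
        System.out.println("end user: " + addrs.get(0));
        System.out.println("proxies: " + addrs.subList(1, addrs.size()));
    }
}
```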
[jira] [Updated] (HIVE-13492) TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 is failing on master
[ https://issues.apache.org/jira/browse/HIVE-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13492: Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Pushed to master. > TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 is failing on master > - > > Key: HIVE-13492 > URL: https://issues.apache.org/jira/browse/HIVE-13492 > Project: Hive > Issue Type: Bug > Components: Tests >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Fix For: 2.1.0 > > Attachments: HIVE-13492.patch > > > Failing for few weeks now. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13380) Decimal should have lower precedence than double in type hierarchy
[ https://issues.apache.org/jira/browse/HIVE-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13380: Fix Version/s: 2.1.0 > Decimal should have lower precedence than double in type hierarchy > - > > Key: HIVE-13380 > URL: https://issues.apache.org/jira/browse/HIVE-13380 > Project: Hive > Issue Type: Bug > Components: Types >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Fix For: 2.1.0 > > Attachments: HIVE-13380.2.patch, HIVE-13380.4.patch, > HIVE-13380.5.patch, HIVE-13380.patch > > > Currently it's the other way round. Also, decimal should be lower than float. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13380) Decimal should have lower precedence than double in type hierarchy
[ https://issues.apache.org/jira/browse/HIVE-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13380: Resolution: Fixed Hadoop Flags: Incompatible change Release Note: Hive now considers float & double higher in precedence than decimal. Status: Resolved (was: Patch Available) Yeah, added the release note for this. Pushed to master.
[jira] [Comment Edited] (HIVE-13149) Remove some unnecessary HMS connections from HS2
[ https://issues.apache.org/jira/browse/HIVE-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238342#comment-15238342 ] Thejas M Nair edited comment on HIVE-13149 at 4/13/16 12:51 AM: This is causing the timeouts seen in last few runs with TestJdbcWithMiniHS2. This seems to be one of the reasons for last few tests runs taking longer. I verified that reverting the patch gets the test working. The test results also complain about not being able to run TestJdbcWithMiniHS2. was (Author: thejas): This is causing the timeouts seen in last few says with TestJdbcWithMiniHS2. This seems to be one of the reasons for last few tests runs taking longer. I verified that reverting the patch gets the test working. The test results also complain about not being able to run TestJdbcWithMiniHS2. > Remove some unnecessary HMS connections from HS2 > - > > Key: HIVE-13149 > URL: https://issues.apache.org/jira/browse/HIVE-13149 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Fix For: 2.1.0 > > Attachments: HIVE-13149.1.patch, HIVE-13149.2.patch, > HIVE-13149.3.patch, HIVE-13149.4.patch, HIVE-13149.5.patch, > HIVE-13149.6.patch, HIVE-13149.7.patch > > > In SessionState class, currently we will always try to get a HMS connection > in {{start(SessionState startSs, boolean isAsync, LogHelper console)}} > regardless of if the connection will be used later or not. > When SessionState is accessed by the tasks in TaskRunner.java, although most > of the tasks other than some like StatsTask, don't need to access HMS. > Currently a new HMS connection will be established for each Task thread. If > HiveServer2 is configured to run in parallel and the query involves many > tasks, then the connections are created but unused. 
> {noformat} > @Override > public void run() { > runner = Thread.currentThread(); > try { > OperationLog.setCurrentOperationLog(operationLog); > SessionState.start(ss); > runSequential(); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13149) Remove some unnecessary HMS connections from HS2
[ https://issues.apache.org/jira/browse/HIVE-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238342#comment-15238342 ] Thejas M Nair commented on HIVE-13149: -- This is causing the timeouts seen in the last few runs with TestJdbcWithMiniHS2. This seems to be one of the reasons for test runs taking longer now. I verified that reverting the patch gets the test working. The test results also complain about not being able to run TestJdbcWithMiniHS2.
[jira] [Commented] (HIVE-13492) TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 is failing on master
[ https://issues.apache.org/jira/browse/HIVE-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238331#comment-15238331 ] Szehon Ho commented on HIVE-13492: -- +1
[jira] [Updated] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-13390: Status: Open (was: Patch Available) > HiveServer2: Add more test to ZK service discovery using MiniHS2 > > > Key: HIVE-13390 > URL: https://issues.apache.org/jira/browse/HIVE-13390 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Attachments: HIVE-13390.1.patch, keystore.jks, > keystore_exampledotcom.jks, truststore.jks > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-13390: Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-13492) TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 is failing on master
[ https://issues.apache.org/jira/browse/HIVE-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13492: Attachment: HIVE-13492.patch Regular golden file update.
[jira] [Updated] (HIVE-13492) TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 is failing on master
[ https://issues.apache.org/jira/browse/HIVE-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13492: Status: Patch Available (was: Open)
[jira] [Assigned] (HIVE-13492) TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 is failing on master
[ https://issues.apache.org/jira/browse/HIVE-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan reassigned HIVE-13492: --- Assignee: Ashutosh Chauhan
[jira] [Updated] (HIVE-13460) ANALYZE TABLE COMPUTE STATISTICS FAILED max key length is 1000 bytes
[ https://issues.apache.org/jira/browse/HIVE-13460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Vovchenko updated HIVE-13460: - Attachment: HIVE-13460-branch-1.0.3.patch > ANALYZE TABLE COMPUTE STATISTICS FAILED max key length is 1000 bytes > > > Key: HIVE-13460 > URL: https://issues.apache.org/jira/browse/HIVE-13460 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 1.0.1 >Reporter: Aleksey Vovchenko >Assignee: Aleksey Vovchenko > Fix For: 1.2.0 > > Attachments: HIVE-13460-branch-1.0.2.patch, > HIVE-13460-branch-1.0.3.patch, HIVE-13460-branch-1.0.patch > > > When Hive is configured to store statistics in MySQL, we get the following error: > {noformat} > 2016-04-08 15:53:28,047 ERROR [main]: jdbc.JDBCStatsPublisher > (JDBCStatsPublisher.java:init(316)) - Error during JDBC initialization. > com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was > too long; max key length is 767 bytes > {noformat} > If we set these MySQL properties: > {noformat} > set global innodb_large_prefix = ON; > set global innodb_file_format = BARRACUDA; > {noformat} > we now get the following error: > {noformat} > 2016-04-08 15:56:05,552 ERROR [main]: jdbc.JDBCStatsPublisher > (JDBCStatsPublisher.java:init(316)) - Error during JDBC initialization. > com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was > too long; max key length is 3072 bytes > {noformat} > As a result of my investigation I figured out that MySQL does not allow > creating a primary key larger than 3072 bytes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13496) Create initial test data once across multiple test runs
[ https://issues.apache.org/jira/browse/HIVE-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13496: -- Summary: Create initial test data once across multiple test runs (was: Create initial test data once across multiple runs) > Create initial test data once across multiple test runs > --- > > Key: HIVE-13496 > URL: https://issues.apache.org/jira/browse/HIVE-13496 > Project: Hive > Issue Type: Improvement > Components: Test >Reporter: Siddharth Seth >Assignee: Siddharth Seth > > All TestCliDriver, TezMiniTezCliDriver etc tests create a standard data set > when they start up. When running on a box with SSDs - this step takes over a > minute. > Running a single qtest cannot be faster than this. On the ptest framework - > all batches end up doing this which is a lot of wastage. > Instead, this data generation should be shared across runs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13496) Create initial test data once across multiple runs
[ https://issues.apache.org/jira/browse/HIVE-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238128#comment-15238128 ] Siddharth Seth commented on HIVE-13496: --- A couple of options: 1. Check in the derby file that is generated. (This would create another update step if anyone changes the generation scripts. This may not be a problem, given that q_test_init was last modified in November 2014) 2. [~ashutoshc] - was mentioning some other way to load derby which is cheaper. 3. Eventually - automate this, i.e. look for the existence of the data - and create it only if it does not exist.
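Option 3 above — create the data only if it does not already exist — can be sketched with a completion marker. Everything here (directory layout, the _DATA_READY marker name) is invented for illustration, not the actual ptest mechanism:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of option 3: generate the shared data set only
// when a completion marker is missing, so later runs skip the
// minute-plus initialization. Names and layout are made up.
public class TestDataInit {
    static int generations = 0;

    static void ensureTestData(Path dataDir) throws IOException {
        Path marker = dataDir.resolve("_DATA_READY");
        if (Files.exists(marker)) {
            return; // an earlier run already built the data
        }
        Files.createDirectories(dataDir);
        // ... run the q_test_init-style data generation here ...
        generations++;
        Files.createFile(marker); // write the marker last, as a commit
    }

    // Runs initialization twice against a fresh directory and reports
    // how many times the expensive generation actually happened.
    static int runTwice() {
        try {
            int before = generations;
            Path dir = Files.createTempDirectory("qtest-data");
            ensureTestData(dir);
            ensureTestData(dir); // no-op: marker already exists
            return generations - before;
        } catch (IOException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(runTwice()); // prints 1
    }
}
```

Writing the marker only after generation completes means a crashed run leaves no marker, so the next run regenerates rather than trusting half-built data.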
[jira] [Commented] (HIVE-11427) Location of temporary table for CREATE TABLE SELECT broken by HIVE-7079
[ https://issues.apache.org/jira/browse/HIVE-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238114#comment-15238114 ] Hive QA commented on HIVE-11427: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12798319/HIVE-11427.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9974 tests executed *Failed tests:* {noformat} TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_mult_tables org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7567/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7567/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7567/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12798319 - PreCommit-HIVE-TRUNK-Build > Location of temporary table for CREATE TABLE SELECT broken by HIVE-7079 > > > Key: HIVE-11427 > URL: https://issues.apache.org/jira/browse/HIVE-11427 > Project: Hive > Issue Type: Bug >Reporter: Grisha Trubetskoy >Assignee: Yongzhi Chen > Attachments: HIVE-11427.1.patch > > > If a user _does not_ have HDFS write permissions to the _default_ database, > and attempts to create a table in a _private_ database to which the user > _does_ have permissions using CREATE TABLE AS SELECT from a table in the > default database, the following happens: > {code} > use default; > create table grisha.blahblah as select * from some_table; > FAILED: SemanticException 0:0 Error creating temporary folder on: > hdfs://nn.example.com/user/hive/warehouse. Error encountered near token > 'TOK_TMP_FILE’ > {code} > I've edited this issue because my initial explanation was completely bogus. A > more likely explanation is in > https://github.com/apache/hive/commit/1614314ef7bd0c3b8527ee32a434ababf7711278 > {code} > -fname = ctx.getExternalTmpPath( > +fname = ctx.getExtTmpPathRelTo( > // and then something incorrect happens in getExtTmpPathRelTo() > {code} > In any event - the bug is that the location chosen for the temporary storage > is not in the same place as the target table. It should be same as the target > table (/user/hive/warehouse/grisha.db in the above example) because this is > where presumably the user running the query would have write permissions to. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
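The fix direction described above — placing the temporary storage under the target table's database location, where the user has write permissions, instead of the default warehouse root — can be illustrated with a hypothetical helper. The staging-directory naming and the `stagingDirFor` method are made up for illustration; they are not Hive's actual scheme:

```java
public class CtasTmpPath {
    // Derive the CTAS staging directory from the target database's location,
    // so the scratch dir lands where the querying user can write, rather than
    // under /user/hive/warehouse (the default warehouse root).
    static String stagingDirFor(String targetDbLocation, String queryId) {
        return targetDbLocation + "/.staging_" + queryId;
    }

    public static void main(String[] args) {
        // For the example in the report, staging would land inside grisha.db.
        String db = "hdfs://nn.example.com/user/hive/warehouse/grisha.db";
        System.out.println(stagingDirFor(db, "q1"));
    }
}
```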
[jira] [Commented] (HIVE-13473) upgrade Apache Directory Server version
[ https://issues.apache.org/jira/browse/HIVE-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238056#comment-15238056 ] Gabor Liptak commented on HIVE-13473: - I rebased and attached the patch (although there were no changes ...) > upgrade Apache Directory Server version > --- > > Key: HIVE-13473 > URL: https://issues.apache.org/jira/browse/HIVE-13473 > Project: Hive > Issue Type: Improvement >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-13473.2.patch, HIVE-13473.3.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13473) upgrade Apache Directory Server version
[ https://issues.apache.org/jira/browse/HIVE-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Liptak updated HIVE-13473: Attachment: HIVE-13473.3.patch > upgrade Apache Directory Server version > --- > > Key: HIVE-13473 > URL: https://issues.apache.org/jira/browse/HIVE-13473 > Project: Hive > Issue Type: Improvement >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-13473.2.patch, HIVE-13473.3.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10293) enabling travis-ci build?
[ https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238050#comment-15238050 ] Gabor Liptak commented on HIVE-10293: - The Travis build failed on a JUnit test at 35 minutes. https://travis-ci.org/gliptak/hive/builds/122405719 If it tops 50 minutes, we can continue filtering out tests (and even possibly move them into a parallel matrix build ...) > enabling travis-ci build? > - > > Key: HIVE-10293 > URL: https://issues.apache.org/jira/browse/HIVE-10293 > Project: Hive > Issue Type: Improvement > Components: Build Infrastructure >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-10293.1.patch, HIVE-10293.2.diff > > > I would like to contribute a .travis.yml for Hive. > In particular, this would allow contributors working through Github, to > validate their own commits on their own branches. > Please comment. > Thanks -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10293) enabling travis-ci build?
[ https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238043#comment-15238043 ] Gabor Liptak commented on HIVE-10293: - I uploaded the latest diff for your review (I couldn't create a format-patch from the two merged commits ... :( ) Without https://issues.apache.org/jira/browse/HIVE-13473, jars are not found in Maven repos, with the patch I'm seeing a JUnit fail around LDAP (likely related). Could we clean the workspace in Jenkins and run a build to validate whether all jars can be pulled from Maven repos? Thanks > enabling travis-ci build? > - > > Key: HIVE-10293 > URL: https://issues.apache.org/jira/browse/HIVE-10293 > Project: Hive > Issue Type: Improvement > Components: Build Infrastructure >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-10293.1.patch, HIVE-10293.2.diff > > > I would like to contribute a .travis.yml for Hive. > In particular, this would allow contributors working through Github, to > validate their own commits on their own branches. > Please comment. > Thanks -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13473) upgrade Apache Directory Server version
[ https://issues.apache.org/jira/browse/HIVE-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238029#comment-15238029 ] Gabor Liptak commented on HIVE-13473: - The patch changes a single line in pom.xml. https://issues.apache.org/jira/browse/HIVE-13472 didn't change pom.xml > upgrade Apache Directory Server version > --- > > Key: HIVE-13473 > URL: https://issues.apache.org/jira/browse/HIVE-13473 > Project: Hive > Issue Type: Improvement >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-13473.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13473) upgrade Apache Directory Server version
[ https://issues.apache.org/jira/browse/HIVE-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238024#comment-15238024 ] Gabor Liptak commented on HIVE-13473: - I'm not following these results. The build listed: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7557/testReport points to HIVE-13472 (the wrong JIRA issue/patch ...) > upgrade Apache Directory Server version > --- > > Key: HIVE-13473 > URL: https://issues.apache.org/jira/browse/HIVE-13473 > Project: Hive > Issue Type: Improvement >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-13473.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10293) enabling travis-ci build?
[ https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238011#comment-15238011 ] Ashutosh Chauhan commented on HIVE-10293: - [~gliptak] Will the build with the latest patch finish under 50 mins? > enabling travis-ci build? > - > > Key: HIVE-10293 > URL: https://issues.apache.org/jira/browse/HIVE-10293 > Project: Hive > Issue Type: Improvement > Components: Build Infrastructure >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-10293.1.patch, HIVE-10293.2.diff > > > I would like to contribute a .travis.yml for Hive. > In particular, this would allow contributors working through Github, to > validate their own commits on their own branches. > Please comment. > Thanks -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13473) upgrade Apache Directory Server version
[ https://issues.apache.org/jira/browse/HIVE-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238008#comment-15238008 ] Ashutosh Chauhan commented on HIVE-13473: - Seems like the patch didn't apply. Need to rebase? > upgrade Apache Directory Server version > --- > > Key: HIVE-13473 > URL: https://issues.apache.org/jira/browse/HIVE-13473 > Project: Hive > Issue Type: Improvement >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-13473.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10293) enabling travis-ci build?
[ https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Liptak updated HIVE-10293: Attachment: HIVE-10293.2.diff > enabling travis-ci build? > - > > Key: HIVE-10293 > URL: https://issues.apache.org/jira/browse/HIVE-10293 > Project: Hive > Issue Type: Improvement > Components: Build Infrastructure >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-10293.1.patch, HIVE-10293.2.diff > > > I would like to contribute a .travis.yml for Hive. > In particular, this would allow contributors working through Github, to > validate their own commits on their own branches. > Please comment. > Thanks -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13495) Add timeout for individual tests
[ https://issues.apache.org/jira/browse/HIVE-13495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13495: Status: Patch Available (was: Open) > Add timeout for individual tests > > > Key: HIVE-13495 > URL: https://issues.apache.org/jira/browse/HIVE-13495 > Project: Hive > Issue Type: Improvement > Components: Tests >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Attachments: HIVE-13495.patch > > > Some of the tests may get into hang state or may take long time to execute. > We shall make test infra robust to that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13495) Add timeout for individual tests
[ https://issues.apache.org/jira/browse/HIVE-13495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13495: Attachment: HIVE-13495.patch > Add timeout for individual tests > > > Key: HIVE-13495 > URL: https://issues.apache.org/jira/browse/HIVE-13495 > Project: Hive > Issue Type: Improvement > Components: Tests >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Attachments: HIVE-13495.patch > > > Some of the tests may get into hang state or may take long time to execute. > We shall make test infra robust to that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
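A per-test timeout of the kind proposed here is typically enforced by running the test body on a worker thread and cancelling it when it exceeds its budget. The sketch below shows that general mechanism using only the JDK; it is not the contents of HIVE-13495.patch:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TestTimeoutSketch {
    // Run a test body on a worker thread and report failure if it does not
    // finish within the budget — the basic mechanism behind per-test timeouts.
    static boolean runWithTimeout(Runnable testBody, long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> f = pool.submit(testBody);
        try {
            f.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;                   // test finished in time
        } catch (TimeoutException e) {
            f.cancel(true);                // interrupt the hung test
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;                  // test failed or was interrupted
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A test that finishes in time, then one that hangs past its budget.
        System.out.println(runWithTimeout(() -> {}, 1000));
        System.out.println(runWithTimeout(() -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
        }, 200));
    }
}
```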
[jira] [Commented] (HIVE-13349) Metastore Changes : API calls for retrieving primary keys and foreign keys information
[ https://issues.apache.org/jira/browse/HIVE-13349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237968#comment-15237968 ] Hive QA commented on HIVE-13349: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12798369/HIVE-13349.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 4 tests passed Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-METASTORE-Test/137/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-METASTORE-Test/137/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-METASTORE-Test-137/ Messages: {noformat} LXC derby found. LXC derby is not started. Starting container... Container started. Preparing derby container... Container prepared. Calling /hive/testutils/metastore/dbs/derby/prepare.sh ... Server prepared. Calling /hive/testutils/metastore/dbs/derby/execute.sh ... Tests executed. LXC mysql found. LXC mysql is not started. Starting container... Container started. Preparing mysql container... Container prepared. Calling /hive/testutils/metastore/dbs/mysql/prepare.sh ... Server prepared. Calling /hive/testutils/metastore/dbs/mysql/execute.sh ... Tests executed. LXC oracle found. LXC oracle is not started. Starting container... Container started. Preparing oracle container... Container prepared. Calling /hive/testutils/metastore/dbs/oracle/prepare.sh ... Server prepared. Calling /hive/testutils/metastore/dbs/oracle/execute.sh ... Tests executed. LXC postgres found. LXC postgres is not started. Starting container... Container started. Preparing postgres container... Container prepared. Calling /hive/testutils/metastore/dbs/postgres/prepare.sh ... Server prepared. Calling /hive/testutils/metastore/dbs/postgres/execute.sh ... Tests executed. 
{noformat} This message is automatically generated. ATTACHMENT ID: 12798369 - PreCommit-HIVE-METASTORE-Test > Metastore Changes : API calls for retrieving primary keys and foreign keys > information > -- > > Key: HIVE-13349 > URL: https://issues.apache.org/jira/browse/HIVE-13349 > Project: Hive > Issue Type: Sub-task > Components: CBO, Logical Optimizer >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: 13449.2.patch, HIVE-13349.1.patch, HIVE-13349.3.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13349) Metastore Changes : API calls for retrieving primary keys and foreign keys information
[ https://issues.apache.org/jira/browse/HIVE-13349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13349: - Attachment: HIVE-13349.3.patch > Metastore Changes : API calls for retrieving primary keys and foreign keys > information > -- > > Key: HIVE-13349 > URL: https://issues.apache.org/jira/browse/HIVE-13349 > Project: Hive > Issue Type: Sub-task > Components: CBO, Logical Optimizer >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: 13449.2.patch, HIVE-13349.1.patch, HIVE-13349.3.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13349) Metastore Changes : API calls for retrieving primary keys and foreign keys information
[ https://issues.apache.org/jira/browse/HIVE-13349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237957#comment-15237957 ] Hive QA commented on HIVE-13349: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12798366/HIVE-13349.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 4 tests passed Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-METASTORE-Test/136/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-METASTORE-Test/136/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-METASTORE-Test-136/ Messages: {noformat} LXC derby found. LXC derby is not started. Starting container... Container started. Preparing derby container... Container prepared. Calling /hive/testutils/metastore/dbs/derby/prepare.sh ... Server prepared. Calling /hive/testutils/metastore/dbs/derby/execute.sh ... Tests executed. LXC mysql found. LXC mysql is not started. Starting container... Container started. Preparing mysql container... Container prepared. Calling /hive/testutils/metastore/dbs/mysql/prepare.sh ... Server prepared. Calling /hive/testutils/metastore/dbs/mysql/execute.sh ... Tests executed. LXC oracle found. LXC oracle is not started. Starting container... Container started. Preparing oracle container... Container prepared. Calling /hive/testutils/metastore/dbs/oracle/prepare.sh ... Server prepared. Calling /hive/testutils/metastore/dbs/oracle/execute.sh ... Tests executed. LXC postgres found. LXC postgres is not started. Starting container... Container started. Preparing postgres container... Container prepared. Calling /hive/testutils/metastore/dbs/postgres/prepare.sh ... Server prepared. Calling /hive/testutils/metastore/dbs/postgres/execute.sh ... Tests executed. 
{noformat} This message is automatically generated. ATTACHMENT ID: 12798366 - PreCommit-HIVE-METASTORE-Test > Metastore Changes : API calls for retrieving primary keys and foreign keys > information > -- > > Key: HIVE-13349 > URL: https://issues.apache.org/jira/browse/HIVE-13349 > Project: Hive > Issue Type: Sub-task > Components: CBO, Logical Optimizer >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: 13449.2.patch, HIVE-13349.1.patch, HIVE-13349.3.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13349) Metastore Changes : API calls for retrieving primary keys and foreign keys information
[ https://issues.apache.org/jira/browse/HIVE-13349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13349: - Attachment: HIVE-13349.3.patch > Metastore Changes : API calls for retrieving primary keys and foreign keys > information > -- > > Key: HIVE-13349 > URL: https://issues.apache.org/jira/browse/HIVE-13349 > Project: Hive > Issue Type: Sub-task > Components: CBO, Logical Optimizer >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: 13449.2.patch, HIVE-13349.1.patch, HIVE-13349.3.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13400) Following up HIVE-12481, add retry for Zookeeper service discovery
[ https://issues.apache.org/jira/browse/HIVE-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237888#comment-15237888 ] Chaoyu Tang commented on HIVE-13400: +1. Yeah, that may be in a follow up JIRA if it is possible. > Following up HIVE-12481, add retry for Zookeeper service discovery > -- > > Key: HIVE-13400 > URL: https://issues.apache.org/jira/browse/HIVE-13400 > Project: Hive > Issue Type: Improvement > Components: JDBC >Affects Versions: 2.1.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13400.1.patch, HIVE-13400.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13400) Following up HIVE-12481, add retry for Zookeeper service discovery
[ https://issues.apache.org/jira/browse/HIVE-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237885#comment-15237885 ] Aihua Xu commented on HIVE-13400: - Right. I think we need some additional work to tell what should be considered unrecoverable or recoverable failures. > Following up HIVE-12481, add retry for Zookeeper service discovery > -- > > Key: HIVE-13400 > URL: https://issues.apache.org/jira/browse/HIVE-13400 > Project: Hive > Issue Type: Improvement > Components: JDBC >Affects Versions: 2.1.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13400.1.patch, HIVE-13400.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13400) Following up HIVE-12481, add retry for Zookeeper service discovery
[ https://issues.apache.org/jira/browse/HIVE-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237858#comment-15237858 ] Chaoyu Tang commented on HIVE-13400: LGTM, so with the change, in ZK SD cases, the total retries could be maxRetries * number of HS2 instances, and another retry round will still try all nodes in SD since we are not able to distinguish an intermittent failure from other unrecoverable ones (e.g. a dead node), right? > Following up HIVE-12481, add retry for Zookeeper service discovery > -- > > Key: HIVE-13400 > URL: https://issues.apache.org/jira/browse/HIVE-13400 > Project: Hive > Issue Type: Improvement > Components: JDBC >Affects Versions: 2.1.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13400.1.patch, HIVE-13400.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
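The retry arithmetic discussed in this thread — every retry round walks the full list of service-discovery nodes, so the worst case is maxRetries times the number of HS2 instances — can be sketched like this. The node names and the connect predicate are illustrative, not the JDBC driver's actual API:

```java
import java.util.List;
import java.util.function.Predicate;

public class RetrySketch {
    // Try every discovered node each round; because an intermittent failure
    // cannot be told apart from a dead node, no node is ever skipped, and the
    // worst-case attempt count is maxRetries * nodes.size().
    static String connect(List<String> nodes, Predicate<String> tryConnect, int maxRetries) {
        for (int round = 0; round < maxRetries; round++) {
            for (String node : nodes) {
                if (tryConnect.test(node)) {
                    return node;       // connected successfully
                }
            }
        }
        return null;                   // all rounds exhausted
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("hs2-a:10000", "hs2-b:10000");
        // In this toy run only the second node accepts connections.
        System.out.println(connect(nodes, n -> n.startsWith("hs2-b"), 3));
    }
}
```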
[jira] [Commented] (HIVE-11368) Hive Metastore process always shows alert in Ambari UI on machines with 64 CPU cores
[ https://issues.apache.org/jira/browse/HIVE-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237833#comment-15237833 ] Poojan Khanpara commented on HIVE-11368: I am getting the same alerts, especially when I try to drop a big schema with cascade. I have a six node cluster and it was working perfectly before I upgraded to the latest version. > Hive Metastore process always shows alert in Ambari UI on machines with 64 > CPU cores > > > Key: HIVE-11368 > URL: https://issues.apache.org/jira/browse/HIVE-11368 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 1.2.0 > Environment: 64 CPU Core. >Reporter: Ayappan > > I am running Ambari with hadoop full stack installed on a cluster setup with > machines having 64 CPU cores. All the services are up and running. But the > Hive Metastore process always shows an alert. Checking into the alert definition, > it says the Hive command was killed due to timeout after 30 seconds. > This is the command below. > /var/lib/ambari-agent/ambari-sudo.sh su ambari-qa -l -s /bin/bash -c export > PATH='/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/var/lib/ambari-agent:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/sbin/:/usr/iop/current/hive-metastore/bin' > ; ulimit -s 10240 ; export > HIVE_CONF_DIR='/usr/iop/current/hive-metastore/conf/conf.server' ; hive > --hiveconf hive.metastore.uris=thrift://birhel17.rtp.raleigh.ibm.com:9083 > --hiveconf hive.metastore.client.connect.retry.delay=1s > --hiveconf hive.metastore.failure.retries=1 --hiveconf > hive.metastore.connect.retries=1 --hiveconf > hive.metastore.client.socket.timeout=14s --hiveconf > hive.execution.engine=mr -e 'show databases;' > And the alert-metastore python script has a timeout of 30 seconds but the > above Hive command takes more than 30 seconds on a 64 core machine. So it > always shows the alert. 
> Even manually running the command from command line takes lot of time (around > 27 secs) in 64 core compared to 8 core machine (takes only 3 secs) > Do we need to change some hive parameters ( like worker.threads ) for 64 core > machines ? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13342) Improve logging in llap decider for llap
[ https://issues.apache.org/jira/browse/HIVE-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237832#comment-15237832 ] Siddharth Seth commented on HIVE-13342: --- +1 > Improve logging in llap decider for llap > > > Key: HIVE-13342 > URL: https://issues.apache.org/jira/browse/HIVE-13342 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-13342.1.patch, HIVE-13342.2.patch, > HIVE-13342.3.patch > > > Currently we do not log our decisions with respect to llap. Are we running > everything in llap mode or only parts of the plan. We need more logging. > Also, if llap mode is all but for some reason, we cannot run the work in llap > mode, fail and throw an exception advise the user to change the mode to auto. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-13491: - Resolution: Fixed Fix Version/s: 2.1.0 1.3.0 Status: Resolved (was: Patch Available) Patch committed to branch-1 and master. Thanks for the review [~szehon]! > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Fix For: 1.3.0, 2.1.0 > > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. > The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This 
message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237785#comment-15237785 ] Thejas M Nair commented on HIVE-13491: -- [~szehon] Yes, I agree, a restart is worth trying out. Meanwhile, I will go ahead and commit this. > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. > The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
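The kind of diagnostic this patch adds — printing all live thread stacks when the metastore fails to come up — can be sketched with the JDK's own thread-introspection API. This is a generic illustration, not the code in HIVE-13491.1.patch:

```java
import java.util.Map;

public class ThreadDumpSketch {
    // Build a text dump of every live thread's name, state, and stack,
    // similar to what jstack prints — useful when a startup loop hangs.
    static String dumpThreads() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            sb.append('"').append(e.getKey().getName()).append("\" state=")
              .append(e.getKey().getState()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Logging this on timeout shows where startup is stuck.
        System.out.print(dumpThreads());
    }
}
```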
[jira] [Commented] (HIVE-6090) Audit logs for HiveServer2
[ https://issues.apache.org/jira/browse/HIVE-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237768#comment-15237768 ] Thiruvel Thirumoolan commented on HIVE-6090: [~heesoo] - Lemme check, don't think the failure was related to my patch. > Audit logs for HiveServer2 > -- > > Key: HIVE-6090 > URL: https://issues.apache.org/jira/browse/HIVE-6090 > Project: Hive > Issue Type: Improvement > Components: Diagnosability, HiveServer2 >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Labels: audit, hiveserver > Attachments: HIVE-6090.1.WIP.patch, HIVE-6090.1.patch, > HIVE-6090.3.patch, HIVE-6090.4.patch, HIVE-6090.patch > > > HiveMetastore has audit logs and would like to audit all queries or requests > to HiveServer2 also. This will help in understanding how the APIs were used, > queries submitted, users etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11793) SHOW LOCKS with DbTxnManager ignores filter options
[ https://issues.apache.org/jira/browse/HIVE-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-11793: - Attachment: HIVE-11793.2.patch patch 2 for test > SHOW LOCKS with DbTxnManager ignores filter options > --- > > Key: HIVE-11793 > URL: https://issues.apache.org/jira/browse/HIVE-11793 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Wei Zheng >Priority: Minor > Attachments: HIVE-11793.1.patch, HIVE-11793.2.patch > > > https://cwiki.apache.org/confluence/display/Hive/Locking and > https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ShowLocks > list various options that can be used with SHOW LOCKS, e.g. > When ACID is enabled, all these options are ignored and a full list is > returned. > (also only ext lock id is shown, int lock id is not). > see DDLTask.showLocks() and TxnHandler.showLocks() > requires extending ShowLocksRequest which is a Thrift object -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13473) upgrade Apache Directory Server version
[ https://issues.apache.org/jira/browse/HIVE-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237714#comment-15237714 ] Hive QA commented on HIVE-13473: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12797930/HIVE-13473.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7557/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7557/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7557/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Tests exited with: InterruptedException: null {noformat} This message is automatically generated. ATTACHMENT ID: 12797930 - PreCommit-HIVE-TRUNK-Build > upgrade Apache Directory Server version > --- > > Key: HIVE-13473 > URL: https://issues.apache.org/jira/browse/HIVE-13473 > Project: Hive > Issue Type: Improvement >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-13473.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237675#comment-15237675 ] Szehon Ho commented on HIVE-13491: -- I am thinking of restarting the PTest server, which should trigger auto-generation of new test slaves fresh from the image. Does anyone mind me doing that? > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. > The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message 
was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237667#comment-15237667 ] Szehon Ho commented on HIVE-13491: -- I was also thinking the other day that maybe the machines are getting loaded or somewhat slow, hence HMS cannot start up in time. But this will tell us for certain. I will also take a look at that if I get a chance. > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. 
> The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237653#comment-15237653 ] Szehon Ho commented on HIVE-13491: -- Thanks +1 > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. > The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11427) Location of temporary table for CREATE TABLE SELECT broken by HIVE-7079
[ https://issues.apache.org/jira/browse/HIVE-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongzhi Chen updated HIVE-11427: Status: Patch Available (was: Open) Need code review. > Location of temporary table for CREATE TABLE SELECT broken by HIVE-7079 > > > Key: HIVE-11427 > URL: https://issues.apache.org/jira/browse/HIVE-11427 > Project: Hive > Issue Type: Bug >Reporter: Grisha Trubetskoy >Assignee: Yongzhi Chen > Attachments: HIVE-11427.1.patch > > > If a user _does not_ have HDFS write permissions to the _default_ database, > and attempts to create a table in a _private_ database to which the user > _does_ have permissions using CREATE TABLE AS SELECT from a table in the > default database, the following happens: > {code} > use default; > create table grisha.blahblah as select * from some_table; > FAILED: SemanticException 0:0 Error creating temporary folder on: > hdfs://nn.example.com/user/hive/warehouse. Error encountered near token > 'TOK_TMP_FILE’ > {code} > I've edited this issue because my initial explanation was completely bogus. A > more likely explanation is in > https://github.com/apache/hive/commit/1614314ef7bd0c3b8527ee32a434ababf7711278 > {code} > -fname = ctx.getExternalTmpPath( > +fname = ctx.getExtTmpPathRelTo( > // and then something incorrect happens in getExtTmpPathRelTo() > {code} > In any event - the bug is that the location chosen for the temporary storage > is not in the same place as the target table. It should be same as the target > table (/user/hive/warehouse/grisha.db in the above example) because this is > where presumably the user running the query would have write permissions to. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12064) prevent transactional=false
[ https://issues.apache.org/jira/browse/HIVE-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237626#comment-15237626 ] Eugene Koifman commented on HIVE-12064: --- Text tables do not support transactional=true. This is what the "message:The table must be bucketed and stored using an ACID compliant format (such as ORC))" is trying to convey. > prevent transactional=false > --- > > Key: HIVE-12064 > URL: https://issues.apache.org/jira/browse/HIVE-12064 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Wei Zheng >Priority: Critical > Fix For: 1.3.0, 2.1.0, 2.0.1 > > Attachments: HIVE-12064.2.patch, HIVE-12064.3.patch, > HIVE-12064.4.patch, HIVE-12064.5.patch, HIVE-12064.6.patch, > HIVE-12064.7.patch, HIVE-12064.branch-1.patch, HIVE-12064.branch-2.0.patch, > HIVE-12064.patch > > > currently a tblproperty transactional=true must be set to make a table behave > in ACID compliant way. > This is misleading in that it seems like changing it to transactional=false > makes the table non-acid but on disk layout of acid table is different than > plain tables. So changing this property may cause wrong data to be returned. > Should prevent transactional=false. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
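The rule behind the error message Eugene quotes can be sketched as follows. This is a hypothetical check with illustrative names, not Hive's actual DDL-validation code: transactional=true is only accepted for bucketed tables stored in an ACID-capable format such as ORC, so a text table is rejected.

```java
// Hypothetical sketch (not Hive's actual DDL validation) of the rule behind
// the quoted error: transactional=true requires a bucketed, ACID-capable
// (e.g. ORC) table, so TEXTFILE tables are rejected.
public class AcidTablePropertyCheck {
    static boolean supportsTransactional(boolean bucketed, String storageFormat) {
        return bucketed && "ORC".equalsIgnoreCase(storageFormat);
    }

    static void validate(boolean transactional, boolean bucketed, String storageFormat) {
        if (transactional && !supportsTransactional(bucketed, storageFormat)) {
            throw new IllegalArgumentException(
                "The table must be bucketed and stored using an ACID compliant format (such as ORC)");
        }
    }

    public static void main(String[] args) {
        validate(true, true, "ORC");           // fine: bucketed ORC table
        try {
            validate(true, false, "TEXTFILE"); // text table: rejected
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected as expected");
        }
    }
}
```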
[jira] [Commented] (HIVE-10249) ACID: show locks should show who the lock is waiting for
[ https://issues.apache.org/jira/browse/HIVE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237636#comment-15237636 ] Wei Zheng commented on HIVE-10249: -- HIVE-11793 will include the fix for the missing column issue > ACID: show locks should show who the lock is waiting for > > > Key: HIVE-10249 > URL: https://issues.apache.org/jira/browse/HIVE-10249 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Fix For: 1.3.0, 2.1.0 > > Attachments: HIVE-10249.2.patch, HIVE-10249.3.patch, HIVE-10249.patch > > > instead of just showing state WAITING, we should include what the lock is > waiting for. It will make diagnostics easier. > It would also be useful to add QueryPlan.getQueryId() so it's easy to see > which query the lock belongs to. > # need to store this in HIVE_LOCKS (additional field); this has a perf hit to > do another update on failed attempt and to clear filed on successful attempt. > (Actually on success, we update anyway). How exactly would this be > displayed? Each lock can block but we acquire all parts of external lock at > once. Since we stop at first one that blocked, we’d only update that one… > # This needs a matching Thrift change to pass to client: ShowLocksResponse > # Perhaps we can start updating this info after lock was in W state for some > time to reduce perf hit. > # This is mostly useful for “Why is my query stuck” -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11427) Location of temporary table for CREATE TABLE SELECT broken by HIVE-7079
[ https://issues.apache.org/jira/browse/HIVE-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongzhi Chen updated HIVE-11427: Attachment: HIVE-11427.1.patch > Location of temporary table for CREATE TABLE SELECT broken by HIVE-7079 > > > Key: HIVE-11427 > URL: https://issues.apache.org/jira/browse/HIVE-11427 > Project: Hive > Issue Type: Bug >Reporter: Grisha Trubetskoy >Assignee: Yongzhi Chen > Attachments: HIVE-11427.1.patch > > > If a user _does not_ have HDFS write permissions to the _default_ database, > and attempts to create a table in a _private_ database to which the user > _does_ have permissions using CREATE TABLE AS SELECT from a table in the > default database, the following happens: > {code} > use default; > create table grisha.blahblah as select * from some_table; > FAILED: SemanticException 0:0 Error creating temporary folder on: > hdfs://nn.example.com/user/hive/warehouse. Error encountered near token > 'TOK_TMP_FILE’ > {code} > I've edited this issue because my initial explanation was completely bogus. A > more likely explanation is in > https://github.com/apache/hive/commit/1614314ef7bd0c3b8527ee32a434ababf7711278 > {code} > -fname = ctx.getExternalTmpPath( > +fname = ctx.getExtTmpPathRelTo( > // and then something incorrect happens in getExtTmpPathRelTo() > {code} > In any event - the bug is that the location chosen for the temporary storage > is not in the same place as the target table. It should be same as the target > table (/user/hive/warehouse/grisha.db in the above example) because this is > where presumably the user running the query would have write permissions to. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11427) Location of temporary table for CREATE TABLE SELECT broken by HIVE-7079
[ https://issues.apache.org/jira/browse/HIVE-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237613#comment-15237613 ] Yongzhi Chen commented on HIVE-11427: - TOK_TMP_FILE does not have a database qualifier before it, so Utilities.getDbTableName returns the current working database as its database, which is not right for CTAS. It should use the database that the table to be created belongs to. Attaching a patch for the fix. > Location of temporary table for CREATE TABLE SELECT broken by HIVE-7079 > > > Key: HIVE-11427 > URL: https://issues.apache.org/jira/browse/HIVE-11427 > Project: Hive > Issue Type: Bug >Reporter: Grisha Trubetskoy >Assignee: Yongzhi Chen > > If a user _does not_ have HDFS write permissions to the _default_ database, > and attempts to create a table in a _private_ database to which the user > _does_ have permissions using CREATE TABLE AS SELECT from a table in the > default database, the following happens: > {code} > use default; > create table grisha.blahblah as select * from some_table; > FAILED: SemanticException 0:0 Error creating temporary folder on: > hdfs://nn.example.com/user/hive/warehouse. Error encountered near token > 'TOK_TMP_FILE’ > {code} > I've edited this issue because my initial explanation was completely bogus. A > more likely explanation is in > https://github.com/apache/hive/commit/1614314ef7bd0c3b8527ee32a434ababf7711278 > {code} > -fname = ctx.getExternalTmpPath( > +fname = ctx.getExtTmpPathRelTo( > // and then something incorrect happens in getExtTmpPathRelTo() > {code} > In any event - the bug is that the location chosen for the temporary storage > is not in the same place as the target table. It should be same as the target > table (/user/hive/warehouse/grisha.db in the above example) because this is > where presumably the user running the query would have write permissions to. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
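The database-resolution behavior described in this comment can be sketched as follows. This is a minimal illustration with hypothetical helper names, not Hive's actual SemanticAnalyzer/Context code: an unqualified table name falls back to the session's current database, which is wrong for CTAS, where the scratch directory must follow the target table's database.

```java
// Hypothetical sketch of the CTAS scratch-dir resolution discussed above.
// resolveDb() mirrors the behavior attributed to Utilities.getDbTableName:
// an unqualified name falls back to the session's current database, which
// is wrong for CTAS -- the temp dir must follow the *target* table's db.
public class CtasTmpPathSketch {
    static String resolveDb(String tableName, String currentDb) {
        int dot = tableName.indexOf('.');
        return dot >= 0 ? tableName.substring(0, dot) : currentDb;
    }

    static String tmpDirFor(String targetTable, String currentDb, String warehouse) {
        String db = resolveDb(targetTable, currentDb);
        // place the temp folder under the target database's directory,
        // where the user presumably has write permissions
        return warehouse + "/" + db + ".db/.hive-staging";
    }

    public static void main(String[] args) {
        // session is in 'default', target is 'grisha.blahblah'
        String tmp = tmpDirFor("grisha.blahblah", "default", "/user/hive/warehouse");
        System.out.println(tmp); // /user/hive/warehouse/grisha.db/.hive-staging
    }
}
```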
[jira] [Commented] (HIVE-12064) prevent transactional=false
[ https://issues.apache.org/jira/browse/HIVE-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237610#comment-15237610 ] Wei Zheng commented on HIVE-12064: -- Can you be more specific? It may be helpful if you provide the steps and ddl/dml statements here > prevent transactional=false > --- > > Key: HIVE-12064 > URL: https://issues.apache.org/jira/browse/HIVE-12064 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Wei Zheng >Priority: Critical > Fix For: 1.3.0, 2.1.0, 2.0.1 > > Attachments: HIVE-12064.2.patch, HIVE-12064.3.patch, > HIVE-12064.4.patch, HIVE-12064.5.patch, HIVE-12064.6.patch, > HIVE-12064.7.patch, HIVE-12064.branch-1.patch, HIVE-12064.branch-2.0.patch, > HIVE-12064.patch > > > currently a tblproperty transactional=true must be set to make a table behave > in ACID compliant way. > This is misleading in that it seems like changing it to transactional=false > makes the table non-acid but on disk layout of acid table is different than > plain tables. So changing this property may cause wrong data to be returned. > Should prevent transactional=false. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10293) enabling travis-ci build?
[ https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237536#comment-15237536 ] Hive QA commented on HIVE-10293: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12797925/HIVE-10293.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 9975 tests executed *Failed tests:* {noformat} TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynamic_partition_pruning_2 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_acidvec_mapwork_part org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_dyn_part_max {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7556/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7556/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7556/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 6 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12797925 - PreCommit-HIVE-TRUNK-Build > enabling travis-ci build? 
> - > > Key: HIVE-10293 > URL: https://issues.apache.org/jira/browse/HIVE-10293 > Project: Hive > Issue Type: Improvement > Components: Build Infrastructure >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-10293.1.patch > > > I would like to contribute a .travis.yml for Hive. > In particular, this would allow contributors working through Github, to > validate their own commits on their own branches. > Please comment. > Thanks -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237509#comment-15237509 ] Thejas M Nair commented on HIVE-13491: -- [~sershe] [~szehon] [~ashutoshc] [~sseth] Can someone please review this change ? It should help nail down the problem with metastore startup in large number of tests. This change impacts only the tests. We have 30+ patches in the queue, and many runs are taking 3+hrs to finish. Putting this in asap could help in reducing the number of failures in those tests and might also give more clues on why the runs are taking so long. > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. 
> The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15236763#comment-15236763 ] Thejas M Nair edited comment on HIVE-13491 at 4/12/16 4:31 PM: --- Also increased the frequency of checks for metastore startup from every 10 sec to every sec. 1 sec pause should be more than enough to not consume too much of cpu resources on the machine, and it will help shave off a few seconds from the test runtime. was (Author: thejas): Also increased the frequency of checks for metastore startup from every 10 sec to every sec. 1 sec pause should be more than enough to not consume too much of cpu resources on the machine. > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. 
> The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
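The diagnostic proposed in this issue can be sketched as follows. The names here are illustrative, not Hive's actual MetaStoreUtils code: poll the metastore every second instead of every ten, and if startup times out, dump all thread stacks so the root cause is visible in the log.

```java
// Hedged sketch of the diagnostic described above (illustrative names, not
// Hive's actual MetaStoreUtils code): poll readiness every second and dump
// all thread stacks when startup times out.
public class HmsStartupProbe {
    static String allThreadStacks() {
        StringBuilder sb = new StringBuilder();
        for (java.util.Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            sb.append(e.getKey().getName()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    static void loopUntilReady(java.util.function.BooleanSupplier isUp,
                               long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (isUp.getAsBoolean()) {
                return;
            }
            Thread.sleep(1000);  // check every second, not every 10 seconds
        }
        // startup failed: log the stacks so the root cause is visible
        System.err.println(allThreadStacks());
        throw new IllegalStateException("metastore did not start in time");
    }

    public static void main(String[] args) throws Exception {
        loopUntilReady(() -> true, 5000);  // trivially "up" for the demo
        System.out.println("ready");
    }
}
```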
[jira] [Updated] (HIVE-13472) Replace primitive wrapper's valueOf method with parse* method to avoid unnecessary boxing/unboxing
[ https://issues.apache.org/jira/browse/HIVE-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13472: Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, [~sarutak] > Replace primitive wrapper's valueOf method with parse* method to avoid > unnecessary boxing/unboxing > -- > > Key: HIVE-13472 > URL: https://issues.apache.org/jira/browse/HIVE-13472 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 2.1.0 >Reporter: Kousuke Saruta >Assignee: Kousuke Saruta > Fix For: 2.1.0 > > Attachments: HIVE-13472.0.patch > > > There are lots of primitive wrapper's valueOf method which should be replaced > with parseXX method. > For example, Integer.valueOf(String) returns Integer type but > Integer.parseInt(String) returns primitive int type so we can avoid > unnecessary boxing/unboxing by replacing some of them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
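The boxing overhead this change removes can be illustrated with a small sketch (hypothetical class name): both methods parse the same digits, but `Integer.valueOf` returns a boxed `Integer` that is immediately unboxed by the arithmetic, while `Integer.parseInt` returns a primitive `int` and stays allocation-free (outside the -128..127 cache).

```java
// Illustration of the boxing issue HIVE-13472 addresses: Integer.valueOf
// returns a boxed Integer, while Integer.parseInt returns a primitive int,
// so hot parsing paths avoid an object allocation per value.
public class ParseVsValueOf {
    static int sumParsed(String[] values) {
        int sum = 0;
        for (String v : values) {
            sum += Integer.parseInt(v);  // primitive: no boxing, no unboxing
        }
        return sum;
    }

    static int sumBoxed(String[] values) {
        int sum = 0;
        for (String v : values) {
            sum += Integer.valueOf(v);   // boxes, then immediately unboxes
        }
        return sum;
    }

    public static void main(String[] args) {
        String[] in = {"1", "200", "3000"};
        System.out.println(sumParsed(in)); // 3201
        System.out.println(sumBoxed(in));  // 3201 -- same result, extra allocations
    }
}
```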
[jira] [Commented] (HIVE-13475) Allow aggregate functions in over clause
[ https://issues.apache.org/jira/browse/HIVE-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237469#comment-15237469 ] Ashutosh Chauhan commented on HIVE-13475: - yeah, correct. +1 pending tests. > Allow aggregate functions in over clause > > > Key: HIVE-13475 > URL: https://issues.apache.org/jira/browse/HIVE-13475 > Project: Hive > Issue Type: New Feature > Components: Parser >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13475.patch > > > Support to reference aggregate functions within the over clause needs to be > added. For instance, currently the following query will fail: > {noformat} > select rank() over (order by sum(ws.c_int)) as return_rank > from cbo_t3 ws > group by ws.key; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12977) Pass credentials in the current UGI while creating Tez session
[ https://issues.apache.org/jira/browse/HIVE-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinoth Sathappan updated HIVE-12977: Attachment: HIVE-12977.3.patch > Pass credentials in the current UGI while creating Tez session > -- > > Key: HIVE-12977 > URL: https://issues.apache.org/jira/browse/HIVE-12977 > Project: Hive > Issue Type: Bug > Components: Tez >Reporter: Vinoth Sathappan >Assignee: Vinoth Sathappan > Attachments: HIVE-12977.1.patch, HIVE-12977.1.patch, > HIVE-12977.2.patch, HIVE-12977.3.patch > > > The credentials present in the current UGI i.e. > UserGroupInformation.getCurrentUser().getCredentials() isn't passed to the > Tez session. It is instantiated with null credentials. > session = TezClient.create("HIVE-" + sessionId, tezConfig, true, > commonLocalResources, null); > In this case, tokens added using hive execution hooks, aren't available to > Tez even if they are available in memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13293) Query occurs performance degradation after enabling parallel order by for Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237434#comment-15237434 ] Xuefu Zhang commented on HIVE-13293: [~lirui], thanks for the investigation and the patch, which seems simple and straightforward. One question: what do you mean by "only works queries that have only one ShuffleMapStage"? In your previous example, there are actually a few such stages. Isn't your patch supposed to help that as well? > Query occurs performance degradation after enabling parallel order by for > Hive on Spark > --- > > Key: HIVE-13293 > URL: https://issues.apache.org/jira/browse/HIVE-13293 > Project: Hive > Issue Type: Bug > Components: Spark >Affects Versions: 2.0.0 >Reporter: Lifeng Wang >Assignee: Rui Li > Attachments: HIVE-13293.1.patch > > > I use TPCx-BB to do some performance test on Hive on Spark engine. And found > query 10 has performance degradation when enabling parallel order by. > It seems that sampling cost much time before running the real query. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13424) Refactoring the code to pass a QueryState object rather than HiveConf object
[ https://issues.apache.org/jira/browse/HIVE-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13424: Attachment: HIVE-13424.4.patch Attached patch 4: a minor change that always initializes queryState in the test to solve the NPE issue. > Refactoring the code to pass a QueryState object rather than HiveConf object > > > Key: HIVE-13424 > URL: https://issues.apache.org/jira/browse/HIVE-13424 > Project: Hive > Issue Type: Sub-task > Components: Query Processor >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13424.1.patch, HIVE-13424.2.patch, > HIVE-13424.3.patch, HIVE-13424.4.patch > > > Step1: to refactor the code by creating the QueryState class and moving > query related info from SessionState. Then during compilation, execution > stages, pass single QueryState object for each query. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11351) Column Found in more than One Tables/Subqueries
[ https://issues.apache.org/jira/browse/HIVE-11351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237217#comment-15237217 ] Alina Abramova commented on HIVE-11351: --- After local testing I see that my fix was the cause of the regression, and I have created a new patch > Column Found in more than One Tables/Subqueries > --- > > Key: HIVE-11351 > URL: https://issues.apache.org/jira/browse/HIVE-11351 > Project: Hive > Issue Type: Bug > Environment: HIVE 1.1.0 >Reporter: MK >Assignee: Alina Abramova > Attachments: HIVE-11351-branch-1.0.patch, > HIVE-11351.2-branch-1.0.patch > > > when execute a script: > INSERT overwrite TABLE tmp.tmp_dim_cpttr_categ1 >SELECT DISTINCT cur.categ_id AS categ_id, >cur.categ_code AS categ_code, >cur.categ_name AS categ_name, >cur.categ_parnt_id AS categ_parnt_id, >par.categ_name AS categ_parnt_name, >cur.mc_site_id AS mc_site_id >FROM tmp.tmp_dim_cpttr_categ cur >LEFT OUTER JOIN tmp.tmp_dim_cpttr_categ par >ON cur.categ_parnt_id = par.categ_id; > error occur : SemanticException Column categ_name Found in more than One > Tables/Subqueries > when modify the alias categ_name to categ_name_cur, it will be execute > successfully. > INSERT overwrite TABLE tmp.tmp_dim_cpttr_categ1 >SELECT DISTINCT cur.categ_id AS categ_id, >cur.categ_code AS categ_code, >cur.categ_name AS categ_name_cur, >cur.categ_parnt_id AS categ_parnt_id, >par.categ_name AS categ_parnt_name, >cur.mc_site_id AS mc_site_id >FROM tmp.tmp_dim_cpttr_categ cur >LEFT OUTER JOIN tmp.tmp_dim_cpttr_categ par >ON cur.categ_parnt_id = par.categ_id; > this happen when we upgrade hive from 0.10 to 1.1.0 . -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11351) Column Found in more than One Tables/Subqueries
[ https://issues.apache.org/jira/browse/HIVE-11351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alina Abramova updated HIVE-11351: -- Attachment: HIVE-11351.2-branch-1.0.patch > Column Found in more than One Tables/Subqueries > --- > > Key: HIVE-11351 > URL: https://issues.apache.org/jira/browse/HIVE-11351 > Project: Hive > Issue Type: Bug > Environment: HIVE 1.1.0 >Reporter: MK >Assignee: Alina Abramova > Attachments: HIVE-11351-branch-1.0.patch, > HIVE-11351.2-branch-1.0.patch > > > when execute a script: > INSERT overwrite TABLE tmp.tmp_dim_cpttr_categ1 >SELECT DISTINCT cur.categ_id AS categ_id, >cur.categ_code AS categ_code, >cur.categ_name AS categ_name, >cur.categ_parnt_id AS categ_parnt_id, >par.categ_name AS categ_parnt_name, >cur.mc_site_id AS mc_site_id >FROM tmp.tmp_dim_cpttr_categ cur >LEFT OUTER JOIN tmp.tmp_dim_cpttr_categ par >ON cur.categ_parnt_id = par.categ_id; > error occur : SemanticException Column categ_name Found in more than One > Tables/Subqueries > when modify the alias categ_name to categ_name_cur, it will be execute > successfully. > INSERT overwrite TABLE tmp.tmp_dim_cpttr_categ1 >SELECT DISTINCT cur.categ_id AS categ_id, >cur.categ_code AS categ_code, >cur.categ_name AS categ_name_cur, >cur.categ_parnt_id AS categ_parnt_id, >par.categ_name AS categ_parnt_name, >cur.mc_site_id AS mc_site_id >FROM tmp.tmp_dim_cpttr_categ cur >LEFT OUTER JOIN tmp.tmp_dim_cpttr_categ par >ON cur.categ_parnt_id = par.categ_id; > this happen when we upgrade hive from 0.10 to 1.1.0 . -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-11427) Location of temporary table for CREATE TABLE SELECT broken by HIVE-7079
[ https://issues.apache.org/jira/browse/HIVE-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongzhi Chen reassigned HIVE-11427: --- Assignee: Yongzhi Chen > Location of temporary table for CREATE TABLE SELECT broken by HIVE-7079 > > > Key: HIVE-11427 > URL: https://issues.apache.org/jira/browse/HIVE-11427 > Project: Hive > Issue Type: Bug >Reporter: Grisha Trubetskoy >Assignee: Yongzhi Chen > > If a user _does not_ have HDFS write permissions to the _default_ database, > and attempts to create a table in a _private_ database to which the user > _does_ have permissions using CREATE TABLE AS SELECT from a table in the > default database, the following happens: > {code} > use default; > create table grisha.blahblah as select * from some_table; > FAILED: SemanticException 0:0 Error creating temporary folder on: > hdfs://nn.example.com/user/hive/warehouse. Error encountered near token > 'TOK_TMP_FILE’ > {code} > I've edited this issue because my initial explanation was completely bogus. A > more likely explanation is in > https://github.com/apache/hive/commit/1614314ef7bd0c3b8527ee32a434ababf7711278 > {code} > -fname = ctx.getExternalTmpPath( > +fname = ctx.getExtTmpPathRelTo( > // and then something incorrect happens in getExtTmpPathRelTo() > {code} > In any event - the bug is that the location chosen for the temporary storage > is not in the same place as the target table. It should be same as the target > table (/user/hive/warehouse/grisha.db in the above example) because this is > where presumably the user running the query would have write permissions to. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13425) Fix partition addition in MSCK REPAIR TABLE command
[ https://issues.apache.org/jira/browse/HIVE-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237167#comment-15237167 ] Hive QA commented on HIVE-13425: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12797912/HIVE-13425.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 214 failed/errored test(s), 9974 tests executed *Failed tests:* {noformat} TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_change_col org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_coltype org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_cascade org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_partition_drop org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_analyze_table_null_partition org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_part org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_partitioned org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_partitioned_native org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_schema_evolution_native org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_combine2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constprog_dp org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cp_sel org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dbtxnmgr_query4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_all_partitioned org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_where_partitioned org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_whole_partition org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynamic_partition_insert org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynamic_partition_skip_default org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_merge 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_opt_bucketing org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_opt_vectorization org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization_acid org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_escape1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_escape2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_extrapolate_part_stats_full org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_extrapolate_part_stats_partial org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_extrapolate_part_stats_partial_ndv org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_filter_numeric org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_implicit_cast_during_insert org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert0 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_acid_dynamic_partition org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into6 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into_with_schema org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into_with_schema2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_dynamic_partitioned org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_5 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_6 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_7 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_8 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_llap_acid org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_llap_partitioned org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part1 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part10 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part11 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part14 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part15 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part5 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part6 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part8 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part9
[jira] [Commented] (HIVE-10293) enabling travis-ci build?
[ https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237060#comment-15237060 ] Gabor Liptak commented on HIVE-10293: - [~spena] Thank you for the script, I implemented it. Here is the latest result: https://travis-ci.org/gliptak/hive/builds/122405719 So without the LDAP version update https://issues.apache.org/jira/browse/HIVE-13473 Maven repo pull fails, otherwise I see this error: Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.341 sec <<< FAILURE! - in org.apache.hive.service.auth.TestLdapAtnProviderWithMiniDS org.apache.hive.service.auth.TestLdapAtnProviderWithMiniDS Time elapsed: 1.341 sec <<< ERROR! java.lang.NoClassDefFoundError: org/apache/directory/shared/ldap/entry/ServerEntry at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:425) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at java.lang.ClassLoader.loadClass(ClassLoader.java:358) at org.apache.directory.server.core.integ.FrameworkRunner.run(FrameworkRunner.java:107) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) Could you validate/correct https://issues.apache.org/jira/browse/HIVE-13473 ? Thanks > enabling travis-ci build? > - > > Key: HIVE-10293 > URL: https://issues.apache.org/jira/browse/HIVE-10293 > Project: Hive > Issue Type: Improvement > Components: Build Infrastructure >Reporter: Gabor Liptak >Assignee: Gabor Liptak >Priority: Minor > Attachments: HIVE-10293.1.patch > > > I would like to contribute a .travis.yml for Hive. 
> In particular, this would allow contributors working through Github, to > validate their own commits on their own branches. > Please comment. > Thanks -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13183) More logs in operation logs
[ https://issues.apache.org/jira/browse/HIVE-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu updated HIVE-13183: --- Resolution: Fixed Status: Resolved (was: Patch Available) Committed. Thanks [~prongs] > More logs in operation logs > --- > > Key: HIVE-13183 > URL: https://issues.apache.org/jira/browse/HIVE-13183 > Project: Hive > Issue Type: Improvement >Reporter: Rajat Khandelwal >Assignee: Rajat Khandelwal > Fix For: 2.1.0 > > Attachments: HIVE-13183.02.patch, HIVE-13183.03.patch, > HIVE-13183.04.patch, HIVE-13183.05.patch, HIVE-13183.06.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12467) Add number of dynamic partitions to error message
[ https://issues.apache.org/jira/browse/HIVE-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Francke updated HIVE-12467: Attachment: HIVE-12467.2.patch Thanks for taking a look Prasanth. I have attached a new version with the suggested changes. And will commit in a couple of days if there are no more objections. > Add number of dynamic partitions to error message > - > > Key: HIVE-12467 > URL: https://issues.apache.org/jira/browse/HIVE-12467 > Project: Hive > Issue Type: Improvement >Reporter: Lars Francke >Assignee: Lars Francke >Priority: Minor > Attachments: HIVE-12467.2.patch, HIVE-12467.patch > > > Currently when using dynamic partition insert we get an error message saying > that the client tried to create too many dynamic partitions ("Maximum was set > to"). I'll extend the error message to specify the number of dynamic > partitions which can be helpful for debugging. > NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12467) Add number of dynamic partitions to error message
[ https://issues.apache.org/jira/browse/HIVE-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Francke updated HIVE-12467: Status: Open (was: Patch Available) > Add number of dynamic partitions to error message > - > > Key: HIVE-12467 > URL: https://issues.apache.org/jira/browse/HIVE-12467 > Project: Hive > Issue Type: Improvement >Reporter: Lars Francke >Assignee: Lars Francke >Priority: Minor > Attachments: HIVE-12467.patch > > > Currently when using dynamic partition insert we get an error message saying > that the client tried to create too many dynamic partitions ("Maximum was set > to"). I'll extend the error message to specify the number of dynamic > partitions which can be helpful for debugging. > NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12467) Add number of dynamic partitions to error message
[ https://issues.apache.org/jira/browse/HIVE-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Francke updated HIVE-12467: Description: Currently when using dynamic partition insert we get an error message saying that the client tried to create too many dynamic partitions ("Maximum was set to"). I'll extend the error message to specify the number of dynamic partitions which can be helpful for debugging. NO PRECOMMIT TESTS was:Currently when using dynamic partition insert we get an error message saying that the client tried to create too many dynamic partitions ("Maximum was set to"). I'll extend the error message to specify the number of dynamic partitions which can be helpful for debugging. > Add number of dynamic partitions to error message > - > > Key: HIVE-12467 > URL: https://issues.apache.org/jira/browse/HIVE-12467 > Project: Hive > Issue Type: Improvement >Reporter: Lars Francke >Assignee: Lars Francke >Priority: Minor > Attachments: HIVE-12467.patch > > > Currently when using dynamic partition insert we get an error message saying > that the client tried to create too many dynamic partitions ("Maximum was set > to"). I'll extend the error message to specify the number of dynamic > partitions which can be helpful for debugging. > NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
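The improvement is small but concrete: report the attempted partition count next to the configured maximum. A sketch of such a message follows; the wording is hypothetical, not the committed patch.

```java
// Hypothetical wording, not the committed patch: the error now carries both
// the configured limit and the number of dynamic partitions the query
// actually tried to create, which is what helps during debugging.
public class DynPartError {
    public static String tooManyPartitions(int created, int max) {
        return "Maximum was set to " + max
            + " partitions per node, number of dynamic partitions on this node: "
            + created;
    }
}
```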
[jira] [Commented] (HIVE-13472) Replace primitive wrapper's valueOf method with parse* method to avoid unnecessary boxing/unboxing
[ https://issues.apache.org/jira/browse/HIVE-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15236961#comment-15236961 ] Hive QA commented on HIVE-13472: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12797914/HIVE-13472.0.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9972 tests executed *Failed tests:* {noformat} TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_dyn_part_max org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7554/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7554/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7554/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12797914 - PreCommit-HIVE-TRUNK-Build > Replace primitive wrapper's valueOf method with parse* method to avoid > unnecessary boxing/unboxing > -- > > Key: HIVE-13472 > URL: https://issues.apache.org/jira/browse/HIVE-13472 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 2.1.0 >Reporter: Kousuke Saruta >Assignee: Kousuke Saruta > Attachments: HIVE-13472.0.patch > > > There are lots of primitive wrapper valueOf calls which should be replaced > with parse* methods. > For example, Integer.valueOf(String) returns the Integer type but > Integer.parseInt(String) returns the primitive int type, so we can avoid > unnecessary boxing/unboxing by replacing some of them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
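The two call shapes side by side: Integer.valueOf(String) produces a boxed Integer (freshly allocated or fetched from the small-value cache) that is immediately unboxed when an int is wanted, while Integer.parseInt(String) returns the primitive directly.

```java
// Integer.valueOf(String) returns a boxed Integer; assigning it to an int
// auto-unboxes, so when only the primitive is needed the box is wasted.
// Integer.parseInt(String) returns the primitive int directly.
public class ParseVsValueOf {
    public static int viaValueOf(String s) {
        return Integer.valueOf(s); // box, then auto-unbox
    }
    public static int viaParse(String s) {
        return Integer.parseInt(s); // primitive, no boxing
    }
}
```

Both produce identical values; only the intermediate allocation differs (and the JIT may elide it, which is why this is a cleanup rather than a guaranteed speedup).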
[jira] [Commented] (HIVE-4841) Add partition level hook to HiveMetaHook
[ https://issues.apache.org/jira/browse/HIVE-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15236881#comment-15236881 ] Amey Barve commented on HIVE-4841: -- So is the fix ready, and for which release? > Add partition level hook to HiveMetaHook > > > Key: HIVE-4841 > URL: https://issues.apache.org/jira/browse/HIVE-4841 > Project: Hive > Issue Type: Improvement > Components: StorageHandler >Reporter: Navis >Assignee: Navis >Priority: Minor > Attachments: HIVE-4841.4.patch.txt, HIVE-4841.D11673.1.patch, > HIVE-4841.D11673.2.patch, HIVE-4841.D11673.3.patch > > > Current HiveMetaHook provides hooks for tables only. With a partition level > hook, external storages could also be revised to exploit PPR. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13475) Allow aggregate functions in over clause
[ https://issues.apache.org/jira/browse/HIVE-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15236839#comment-15236839 ] Jesus Camacho Rodriguez commented on HIVE-13475: I just did; it is missing the gby expression right? > Allow aggregate functions in over clause > > > Key: HIVE-13475 > URL: https://issues.apache.org/jira/browse/HIVE-13475 > Project: Hive > Issue Type: New Feature > Components: Parser >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13475.patch > > > Support to reference aggregate functions within the over clause needs to be > added. For instance, currently the following query will fail: > {noformat} > select rank() over (order by sum(ws.c_int)) as return_rank > from cbo_t3 ws > group by ws.key; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13475) Allow aggregate functions in over clause
[ https://issues.apache.org/jira/browse/HIVE-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13475: --- Attachment: (was: HIVE-13475.patch) > Allow aggregate functions in over clause > > > Key: HIVE-13475 > URL: https://issues.apache.org/jira/browse/HIVE-13475 > Project: Hive > Issue Type: New Feature > Components: Parser >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13475.patch > > > Support to reference aggregate functions within the over clause needs to be > added. For instance, currently the following query will fail: > {noformat} > select rank() over (order by sum(ws.c_int)) as return_rank > from cbo_t3 ws > group by ws.key; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13475) Allow aggregate functions in over clause
[ https://issues.apache.org/jira/browse/HIVE-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13475: --- Attachment: HIVE-13475.patch > Allow aggregate functions in over clause > > > Key: HIVE-13475 > URL: https://issues.apache.org/jira/browse/HIVE-13475 > Project: Hive > Issue Type: New Feature > Components: Parser >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13475.patch > > > Support to reference aggregate functions within the over clause needs to be > added. For instance, currently the following query will fail: > {noformat} > select rank() over (order by sum(ws.c_int)) as return_rank > from cbo_t3 ws > group by ws.key; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15236763#comment-15236763 ] Thejas M Nair commented on HIVE-13491: -- Also increased the frequency of checks for metastore startup from every 10 sec to every 1 sec. A 1 sec pause should still be long enough not to consume too much CPU on the machine. > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. > The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > 
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
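The two changes discussed in this issue — poll faster and leave evidence on failure — can be sketched like this. The names here are hypothetical; this is not the actual code in MetaStoreUtils.

```java
import java.util.Map;
import java.util.function.BooleanSupplier;

// Sketch of the startup wait (hypothetical names, not MetaStoreUtils'
// loopUntilHMSReady): poll once per second instead of every ten seconds,
// and dump every live thread's stack before giving up, so a hung metastore
// startup leaves evidence in the log.
public class HmsWait {
    public static boolean waitUntilReady(BooleanSupplier ready,
                                         long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (ready.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(pollMs); // e.g. 1000 ms rather than 10000 ms
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                return false;
            }
        }
        dumpAllStacks(); // startup failed: record what every thread was doing
        return false;
    }

    static void dumpAllStacks() {
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            System.err.println("Thread: " + e.getKey().getName());
            for (StackTraceElement frame : e.getValue()) {
                System.err.println("    at " + frame);
            }
        }
    }
}
```

Thread.getAllStackTraces() is a standard JDK call and needs no special permissions in test code, which makes it a cheap way to diagnose a hang after the fact.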
[jira] [Commented] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15236757#comment-15236757 ] ASF GitHub Bot commented on HIVE-13491: --- GitHub user thejasmn opened a pull request: https://github.com/apache/hive/pull/71 HIVE-13491 - print thread dumps You can merge this pull request into a Git repository by running: $ git pull https://github.com/thejasmn/hive HIVE-13491 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hive/pull/71.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #71 commit 9648db8c3cdc97a9a6449a6e801c9670d796de86 Author: Thejas Nair Date: 2016-04-12T07:30:32Z HIVE-13491 - print thread dumps > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. 
> The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-13491: - Status: Patch Available (was: Open) > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. > The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-13491: - Attachment: HIVE-13491.1.patch > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-13491.1.patch > > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. > The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13491) Testing : log thread stacks when metastore fails to start
[ https://issues.apache.org/jira/browse/HIVE-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-13491: - Description: Many tests are failing in ptest2 because metastore fails to startup in the expected time. There is not enough information to figure out why the metastore startup failed/got hung in the hive.log file. Printing the thread dumps when that happens would be useful for finding the root cause. The stack in test failure looks like this - {code} java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) at org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) at org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) at org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) at org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) {code} was: Many tests are failing in ptest2 because metastore fails to startup in the expected time. There is not enough information to figure out why the metastore startup failed/got hung in the hive.log file. Printing the thread dumps when that happens would be useful. 
The stack in test failure looks like this - {code} java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) at org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) at org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) at org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) at org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) {code} > Testing : log thread stacks when metastore fails to start > -- > > Key: HIVE-13491 > URL: https://issues.apache.org/jira/browse/HIVE-13491 > Project: Hive > Issue Type: Bug > Components: Test, Testing Infrastructure >Reporter: Thejas M Nair >Assignee: Thejas M Nair > > Many tests are failing in ptest2 because metastore fails to startup in the > expected time. > There is not enough information to figure out why the metastore startup > failed/got hung in the hive.log file. Printing the thread dumps when that > happens would be useful for finding the root cause. 
> The stack in test failure looks like this - > {code} > java.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198) > at > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.loopUntilHMSReady(MetaStoreUtils.java:1208) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1195) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.startMetaStore(MetaStoreUtils.java:1177) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.setup(TestHadoopAuthBridge23.java:153) > at > org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser(TestHadoopAuthBridge23.java:241) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9422) LLAP: row-level vectorized SARGs
[ https://issues.apache.org/jira/browse/HIVE-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15236711#comment-15236711 ] Yohei Abe commented on HIVE-9422: - [~sershe] I uploaded a patch (and created an RB link); some unit tests are still incomplete. Would you comment on and review this patch? > LLAP: row-level vectorized SARGs > > > Key: HIVE-9422 > URL: https://issues.apache.org/jira/browse/HIVE-9422 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Sergey Shelukhin >Assignee: Yohei Abe > Attachments: HIVE-9422.2.patch, HIVE-9422.WIP1.patch > > > When VRBs are built from encoded data, sargs can be applied at a low level to > reduce the number of rows to process. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
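Row-level SARG application on vectorized data can be sketched as a per-row predicate that compacts surviving row indexes into a selected array. The types here are hypothetical, not Hive's VectorizedRowBatch API.

```java
// Hypothetical types, not Hive's VectorizedRowBatch API: a min/max SARG is
// evaluated per row against a long column, and the indexes of surviving
// rows are compacted to the front of 'selected' so downstream operators
// only touch those rows.
public class RowLevelSarg {
    /** Returns the surviving row count; their indexes fill selected[0..n). */
    public static int filterLongColumn(long[] column, int size,
                                       long min, long max, int[] selected) {
        int n = 0;
        for (int i = 0; i < size; i++) {
            if (column[i] >= min && column[i] <= max) {
                selected[n++] = i;
            }
        }
        return n;
    }
}
```

This is the same selected-array convention Hive's vectorized filter operators use, which is what lets a SARG shrink a batch without copying column data.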
[jira] [Commented] (HIVE-12342) Set default value of hive.optimize.index.filter to true
[ https://issues.apache.org/jira/browse/HIVE-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15236708#comment-15236708 ] Hive QA commented on HIVE-12342: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12797943/HIVE-12342.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 797 failed/errored test(s), 9973 tests executed *Failed tests:* {noformat} TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_allcolref_in_udf org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_coltype org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ambiguous_col org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_deep_filters org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_filter org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_join org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_join_pkfk org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_part org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_select org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join0 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join10 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join11 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join14 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join15 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join16 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join17 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join19 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join19_inclause org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join20 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join21 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join22 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join23 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join24 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join26 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join27 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join28 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join29 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join30 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join32 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join33 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join5 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join6 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join7 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join8 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join9 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_reordering_values org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_smb_mapjoin_14 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_10 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_11 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_5 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_6 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_7 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_8 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_9 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_groupby
[jira] [Updated] (HIVE-9422) LLAP: row-level vectorized SARGs
[ https://issues.apache.org/jira/browse/HIVE-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yohei Abe updated HIVE-9422: Attachment: HIVE-9422.2.patch > LLAP: row-level vectorized SARGs > > > Key: HIVE-9422 > URL: https://issues.apache.org/jira/browse/HIVE-9422 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Sergey Shelukhin >Assignee: Yohei Abe > Attachments: HIVE-9422.2.patch, HIVE-9422.WIP1.patch > > > When VRBs are built from encoded data, sargs can be applied on low level to > reduce the number of rows to process. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
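The HIVE-9422 description is terse, so here is a minimal, self-contained sketch of the row-level SARG idea: after a vectorized row batch is decoded, a predicate is applied per row and the batch is shrunk in place so downstream operators see fewer rows. Field names (`size`, `selected`, `selectedInUse`) mirror Hive's `VectorizedRowBatch`, but this is an illustrative standalone class, not Hive's actual implementation.

```java
// Simplified sketch of row-level SARG filtering on a vectorized batch.
// Field names mirror Hive's VectorizedRowBatch (size, selected,
// selectedInUse) but this is NOT the real Hive API.
public class RowLevelSargSketch {
    static class Batch {
        long[] col;            // one decoded long column
        int[] selected;        // indices of surviving rows
        boolean selectedInUse; // if true, only rows in selected[0..size) are live
        int size;              // number of live rows

        Batch(long[] col) {
            this.col = col;
            this.selected = new int[col.length];
            this.size = col.length;
        }
    }

    // Apply a "min <= v <= max" range predicate at the row level,
    // compacting the selection vector so downstream operators process
    // fewer rows without copying the column data.
    static void applyRangeSarg(Batch b, long min, long max) {
        int newSize = 0;
        for (int i = 0; i < b.size; i++) {
            int row = b.selectedInUse ? b.selected[i] : i;
            long v = b.col[row];
            if (v >= min && v <= max) {
                b.selected[newSize++] = row;
            }
        }
        b.size = newSize;
        b.selectedInUse = true;
    }

    public static void main(String[] args) {
        Batch b = new Batch(new long[] {5, 12, 7, 40, 3});
        applyRangeSarg(b, 5, 12); // keep rows 0 (5), 1 (12), 2 (7)
        System.out.println(b.size);          // 3
        System.out.println(b.selected[0] + "," + b.selected[1] + "," + b.selected[2]); // 0,1,2
    }
}
```

The selection-vector trick is what makes the filtering cheap: rows are dropped by rewriting an index array, never by moving column values.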
[jira] [Updated] (HIVE-11615) Create test for max thrift message setting
[ https://issues.apache.org/jira/browse/HIVE-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-11615: Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, [~jdere] > Create test for max thrift message setting > -- > > Key: HIVE-11615 > URL: https://issues.apache.org/jira/browse/HIVE-11615 > Project: Hive > Issue Type: Test > Components: Tests >Reporter: Jason Dere >Assignee: Jason Dere > Fix For: 2.1.0 > > Attachments: HIVE-11615.1.patch > > > Create a test case for HIVE-8680

[jira] [Updated] (HIVE-11806) Create test for HIVE-11174
[ https://issues.apache.org/jira/browse/HIVE-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-11806: Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, [~vikram.dixit] > Create test for HIVE-11174 > -- > > Key: HIVE-11806 > URL: https://issues.apache.org/jira/browse/HIVE-11806 > Project: Hive > Issue Type: Bug > Components: Tests >Affects Versions: 1.2.0 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K >Priority: Minor > Fix For: 2.1.0 > > Attachments: HIVE-11806.1.patch > > > We are lacking tests for HIVE-11174. Adding one.
[jira] [Updated] (HIVE-13029) NVDIMM support for LLAP Cache
[ https://issues.apache.org/jira/browse/HIVE-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-13029: --- Release Note: NVDIMM support for LLAP cache Status: Patch Available (was: Open) > NVDIMM support for LLAP Cache > - > > Key: HIVE-13029 > URL: https://issues.apache.org/jira/browse/HIVE-13029 > Project: Hive > Issue Type: New Feature > Components: llap >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Gopal V >Priority: Critical > Attachments: HIVE-13029.1.patch, HIVE-13029.2.patch > > > The LLAP cache has been designed so that it can be offloaded easily to a > pmem API without restart coherence. > The tricky part about NVDIMMs is restart coherence, but most of the cache > gains can be obtained without keeping state across restarts, since LLAP is > not the system of record; HDFS is.
[jira] [Updated] (HIVE-13029) NVDIMM support for LLAP Cache
[ https://issues.apache.org/jira/browse/HIVE-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-13029: --- Attachment: HIVE-13029.2.patch > NVDIMM support for LLAP Cache > - > > Key: HIVE-13029 > URL: https://issues.apache.org/jira/browse/HIVE-13029 > Project: Hive > Issue Type: New Feature > Components: llap >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Gopal V >Priority: Critical > Attachments: HIVE-13029.1.patch, HIVE-13029.2.patch > > > The LLAP cache has been designed so that it can be offloaded easily to a > pmem API without restart coherence. > The tricky part about NVDIMMs is restart coherence, but most of the cache > gains can be obtained without keeping state across restarts, since LLAP is > not the system of record; HDFS is.
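The HIVE-13029 description turns on one design point: because HDFS is the system of record, the cache's backing memory can be a pluggable detail with no restart-coherence obligations. A minimal sketch of that idea, assuming a hypothetical allocator interface (these names are illustrative, not Hive's actual LLAP allocator API):

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: the cache allocates through an interface, so a
// pmem/NVDIMM-backed implementation (e.g. one mapping a DAX file) could
// replace the DRAM one without touching cache logic. Names are
// illustrative, not Hive's actual LLAP allocator API.
public class CacheAllocatorSketch {
    interface BufferAllocator {
        ByteBuffer allocate(int bytes);
    }

    // DRAM-backed default. Because LLAP is not the system of record
    // (HDFS is), neither implementation needs its contents to survive
    // a restart, which sidesteps the hard restart-coherence problem.
    static class HeapAllocator implements BufferAllocator {
        public ByteBuffer allocate(int bytes) {
            return ByteBuffer.allocateDirect(bytes);
        }
    }

    public static void main(String[] args) {
        BufferAllocator alloc = new HeapAllocator();
        ByteBuffer buf = alloc.allocate(4096);
        buf.putLong(0, 42L);
        System.out.println(buf.capacity()); // 4096
        System.out.println(buf.getLong(0)); // 42
    }
}
```

The cheap part of the NVDIMM gain (more cache capacity) needs only this substitution; the expensive part (reusing the cache after a restart) is what the issue calls tricky and is deliberately out of scope here.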
[jira] [Updated] (HIVE-13488) Restore dag summary when tez exec print summary enabled and in-place updates disabled
[ https://issues.apache.org/jira/browse/HIVE-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13488: - Summary: Restore dag summary when tez exec print summary enabled and in-place updates disabled (was: Restore methods summary when tez exec print summary is enabled) > Restore dag summary when tez exec print summary enabled and in-place updates > disabled > - > > Key: HIVE-13488 > URL: https://issues.apache.org/jira/browse/HIVE-13488 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13488.1.patch > > > Restore the old way of printing the methods summary when file redirection is > enabled. The summary may be consumed by some tools, which would break because of the > change introduced by HIVE-13226 > NO PRECOMMIT TESTS
[jira] [Commented] (HIVE-13488) Restore methods summary when tez exec print summary is enabled
[ https://issues.apache.org/jira/browse/HIVE-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15236653#comment-15236653 ] Prasanth Jayachandran commented on HIVE-13488: -- [~sershe] Can you please review this patch? > Restore methods summary when tez exec print summary is enabled > -- > > Key: HIVE-13488 > URL: https://issues.apache.org/jira/browse/HIVE-13488 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13488.1.patch > > > Restore the old way of printing the methods summary when file redirection is > enabled. The summary may be consumed by some tools, which would break because of the > change introduced by HIVE-13226 > NO PRECOMMIT TESTS