[jira] [Commented] (HIVE-12964) TestOperationLoggingAPIWithMr,TestOperationLoggingAPIWithTez fail on branch-2.0 (with Java 7, at least)

2016-01-31 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125495#comment-15125495
 ] 

Prasanth Jayachandran commented on HIVE-12964:
--

[~sershe] Could you review the new patch? The changes in the initial patches 
are no longer required. If a user wants perf logs redirected, they can always 
do so from the log4j2.properties file. By default, perf logs go to hive.log 
at DEBUG level. 
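A redirection along those lines could look like the following log4j2.properties fragment. The appender name, file path, and layout here are illustrative assumptions, not Hive's shipped defaults; only the PerfLogger class name is taken from Hive.

```properties
# Hypothetical example: route PerfLogger output to its own file
# instead of hive.log. Names below are assumptions for illustration.
appender.perf.type = File
appender.perf.name = perf
appender.perf.fileName = ${sys:hive.log.dir}/perf.log
appender.perf.layout.type = PatternLayout
appender.perf.layout.pattern = %d{ISO8601} %-5p %c{2}: %m%n

logger.PerfLogger.name = org.apache.hadoop.hive.ql.log.PerfLogger
logger.PerfLogger.level = DEBUG
logger.PerfLogger.additivity = false
logger.PerfLogger.appenderRef.perf.ref = perf
```

With additivity disabled, perf messages go only to the dedicated file rather than also appearing in hive.log.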

> TestOperationLoggingAPIWithMr,TestOperationLoggingAPIWithTez fail on 
> branch-2.0 (with Java 7, at least)
> ---
>
> Key: HIVE-12964
> URL: https://issues.apache.org/jira/browse/HIVE-12964
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12964.2.patch, HIVE-12964.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8176) Close of FSDataOutputStream in OrcRecordUpdater ctor should be in finally clause

2016-01-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HIVE-8176:
-
Attachment: HIVE-8176.v1.patch

> Close of FSDataOutputStream in OrcRecordUpdater ctor should be in finally 
> clause
> 
>
> Key: HIVE-8176
> URL: https://issues.apache.org/jira/browse/HIVE-8176
> Project: Hive
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: SUYEON LEE
>Priority: Minor
> Attachments: HIVE-8176.patch, HIVE-8176.v1.patch
>
>
> {code}
> try {
>   FSDataOutputStream strm = fs.create(new Path(path, ACID_FORMAT), false);
>   strm.writeInt(ORC_ACID_VERSION);
>   strm.close();
> } catch (IOException ioe) {
> {code}
> If strm.writeInt() throws IOE, strm would be left unclosed.
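The fix is to guarantee close() runs on all paths, which on Java 7+ try-with-resources does for free. A self-contained sketch of the pattern, using java.io analogs (ByteArrayOutputStream stands in for the HDFS stream, and the ORC_ACID_VERSION value is illustrative, not taken from OrcRecordUpdater):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class AcidVersionWriter {
    static final int ORC_ACID_VERSION = 0; // illustrative value

    // try-with-resources closes the stream even if writeInt() throws,
    // which is the guarantee the missing finally clause was meant to give.
    static byte[] writeVersionFile() throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (DataOutputStream strm = new DataOutputStream(sink)) {
            strm.writeInt(ORC_ACID_VERSION);
        } // strm.close() runs here on both the normal and exceptional path
        return sink.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writeVersionFile().length); // 4 (one int)
    }
}
```

The same shape applies directly to the FSDataOutputStream in the ctor, since it also implements Closeable.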



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12970) Add total open connections in HiveServer2

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125486#comment-15125486
 ] 

Ashutosh Chauhan commented on HIVE-12970:
-

I think the metric is better termed {{CUMULATIVE_CONNECTION_COUNT}}. cc: 
[~szehon]

> Add total open connections in HiveServer2
> -
>
> Key: HIVE-12970
> URL: https://issues.apache.org/jira/browse/HIVE-12970
> Project: Hive
>  Issue Type: Improvement
>  Components: Diagnosability
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HIVE-12970.1.patch
>
>
> I added this metric to HiveServer2 in order to track the change in 
> connections per unit time. The information will be useful for monitoring.
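As a loose sketch of such a metric (class and method names here are hypothetical, not HiveServer2's actual metrics API), a monotonically increasing cumulative counter can sit alongside the existing open-connection gauge; sampling the cumulative value per unit time yields the connection rate:

```java
import java.util.concurrent.atomic.AtomicLong;

class ConnectionMetrics {
    // Count of all connections ever opened; never decremented.
    private final AtomicLong cumulativeConnectionCount = new AtomicLong();
    // Gauge of currently open connections.
    private final AtomicLong openConnectionCount = new AtomicLong();

    void onConnectionOpened() {
        cumulativeConnectionCount.incrementAndGet();
        openConnectionCount.incrementAndGet();
    }

    void onConnectionClosed() {
        openConnectionCount.decrementAndGet();
    }

    long cumulative() { return cumulativeConnectionCount.get(); }

    long open() { return openConnectionCount.get(); }
}
```

AtomicLong keeps the counters safe under HiveServer2's concurrent connection handling without locking.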



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-01-31 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125484#comment-15125484
 ] 

Jesus Camacho Rodriguez commented on HIVE-12839:


Yes, I completely agree that this physical optimization would be useful.

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125467#comment-15125467
 ] 

Ashutosh Chauhan commented on HIVE-12839:
-

With the upgrade to 1.6, the limit join transpose rule is kicking in. But for 
Hive it may generate a worse plan. See the changes in limit_join_transpose.q: 
because the limit was pushed through the join, we end up generating an extra MR 
job, since to enforce the limit we blindly force all rows through one reducer, 
thus requiring an extra job, which here is unnecessary. We need to change the 
Hive physical compiler not to force an extra MR stage in such cases. See 
related: HIVE-12963 
cc: [~jcamachorodriguez]

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-01-31 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125475#comment-15125475
 ] 

Jesus Camacho Rodriguez commented on HIVE-12839:


[~ashutoshc], [~pxiong], I am wondering what the reason is for the limit join 
transpose rule to kick in now.
The optimization is gated by an enabling boolean plus a minimum 
proportion/number of rows by which the input needs to be reduced; otherwise it 
does not trigger.
Hence, if it is kicking in now when it was not before, are the Calcite metadata 
providers working properly with this patch (I saw there were some changes in 
the way we instantiate them)? Otherwise I do not understand why this is 
happening.

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12893) Sorted dynamic partition does not work if subset of partition columns are constant folded

2016-01-31 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12893:
-
Attachment: HIVE-12893.3.patch

> Sorted dynamic partition does not work if subset of partition columns are 
> constant folded
> -
>
> Key: HIVE-12893
> URL: https://issues.apache.org/jira/browse/HIVE-12893
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Yi Zhang
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12893.1.patch, HIVE-12893.2.patch, 
> HIVE-12893.2.patch, HIVE-12893.3.patch
>
>
> If all partition columns are constant folded then sorted dynamic partitioning 
> should not be used, as it is similar to static partitioning. But if only a 
> subset of partition columns is constant folded, the sorted dynamic partition 
> optimization will still be helpful. Currently, this optimization is disabled 
> if at least one partition column is constant folded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125482#comment-15125482
 ] 

Ashutosh Chauhan commented on HIVE-12839:
-

[~pxiong] Can you take a look at why the rule got triggered? It seems like the 
test is written for a case where it shouldn't be triggered.

However, [~jcamachorodriguez], in cases when it does get triggered, we can 
generate a better plan by not adding an extra MR step to enforce the limit. For 
the query in the test, pushing the limit into the previous reduce stage will be 
sufficient, since we don't need the exact limit to be enforced at that stage. 
Depending on the data, this extra job may actually slow down the query. It 
seems like there is a possibility for improvement here.  

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12966) Change some ZooKeeperHiveLockManager logs to debug

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125490#comment-15125490
 ] 

Ashutosh Chauhan commented on HIVE-12966:
-

+1

> Change some ZooKeeperHiveLockManager logs to debug
> --
>
> Key: HIVE-12966
> URL: https://issues.apache.org/jira/browse/HIVE-12966
> Project: Hive
>  Issue Type: Bug
>Reporter: Mohit Sabharwal
>Assignee: Mohit Sabharwal
> Attachments: HIVE-12966.patch
>
>
> {{ZooKeeperHiveLockManager}} prints an info-level log line every time it 
> acquires or releases a lock. For a table with 10K partitions, that's 20K+ 
> log lines.
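The fix amounts to demoting those per-lock messages to debug level. A minimal illustration using java.util.logging (Hive itself logs through slf4j/log4j; the logger name and method below are illustrative, not Hive's actual code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

class LockLogDemo {
    private static final Logger LOG = Logger.getLogger("ZooKeeperHiveLockManager");

    // Per-lock messages logged at FINE (the JUL analog of debug) are
    // suppressed at the default INFO threshold, so a 10K-partition table
    // no longer emits 20K+ lines on every lock acquire/release.
    static void acquireLock(String path) {
        if (LOG.isLoggable(Level.FINE)) { // cheap guard avoids string building
            LOG.fine("Acquiring lock for " + path);
        }
    }

    public static void main(String[] args) {
        acquireLock("/hive/locks/tbl/part=1"); // silent at the default level
    }
}
```

Operators who need the per-lock detail can still enable it by raising that one logger to debug in their logging configuration.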



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125497#comment-15125497
 ] 

Hive QA commented on HIVE-12839:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785264/HIVE-12839.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 10031 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-unionDistinct_1.q-insert_values_non_partitioned.q-insert_update_delete.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_SortUnionTransposeRule
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_lineage2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_limit0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_limit_join_transpose
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_limit_pushdown
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lineage2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadataOnlyOptimizer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_offset_limit_ppd_optimizer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_limit
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_limit_pushdown
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorization_limit
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_limit_pushdown
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hive.jdbc.TestSSL.testSSLVersion
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementAsync
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6816/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6816/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6816/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 22 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12785264 - PreCommit-HIVE-TRUNK-Build

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12956) run CBO in tests with mapred.mode=strict

2016-01-31 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-12956:

Attachment: HIVE12956.3.patch

Golden file updates for a few tests where CBO is now triggered. Patch is ready 
for review. 

> run CBO in tests with mapred.mode=strict
> 
>
> Key: HIVE-12956
> URL: https://issues.apache.org/jira/browse/HIVE-12956
> Project: Hive
>  Issue Type: Test
>Reporter: Sergey Shelukhin
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12956.2.patch, HIVE-12956.patch, HIVE12956.3.patch
>
>
> There's a strange condition in CBO check that prevents CBO from running in 
> Hive tests (in tests specifically) when mapred.mode is set to strict. I 
> remember seeing it before, and noticed it again recently.
> It is both surprising that we wouldn't test CBO in strict mode, and also 
> problematic for some q files because strict mode is going to be deprecated in 
> HIVE-12727. This needs to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12877) Hive use index for queries will lose some data if the Query file is compressed.

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125499#comment-15125499
 ] 

Ashutosh Chauhan commented on HIVE-12877:
-

[~yangfang] Can you add .q test for this ?

> Hive use index for queries will lose some data if the Query file is 
> compressed.
> ---
>
> Key: HIVE-12877
> URL: https://issues.apache.org/jira/browse/HIVE-12877
> Project: Hive
>  Issue Type: Bug
>  Components: Indexing
>Affects Versions: 1.2.1
> Environment: This problem exists in all Hive versions, no matter what 
> platform
>Reporter: yangfang
> Attachments: HIVE-12877.patch
>
>
> Hive creates the index using the extracted (uncompressed) file length when 
> the file is compressed, but when splitting the data into pieces in MapReduce, 
> Hive compares the actual file length with the extracted file length. If it 
> finds that the two lengths do not match, it filters out the file, so the 
> query loses some data.
> I modified the source code so that the Hive index can be used when the files 
> are compressed; please test it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-12877) Hive use index for queries will lose some data if the Query file is compressed.

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125499#comment-15125499
 ] 

Ashutosh Chauhan edited comment on HIVE-12877 at 1/31/16 8:52 PM:
--

[~yangfang] Can you add test in the patch ?


was (Author: ashutoshc):
[~yangfang] Can you add .q test for this ?

> Hive use index for queries will lose some data if the Query file is 
> compressed.
> ---
>
> Key: HIVE-12877
> URL: https://issues.apache.org/jira/browse/HIVE-12877
> Project: Hive
>  Issue Type: Bug
>  Components: Indexing
>Affects Versions: 1.2.1
> Environment: This problem exists in all Hive versions, no matter what 
> platform
>Reporter: yangfang
> Attachments: HIVE-12877.patch
>
>
> Hive creates the index using the extracted (uncompressed) file length when 
> the file is compressed, but when splitting the data into pieces in MapReduce, 
> Hive compares the actual file length with the extracted file length. If it 
> finds that the two lengths do not match, it filters out the file, so the 
> query loses some data.
> I modified the source code so that the Hive index can be used when the files 
> are compressed; please test it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8177) Wrong parameter order in ExplainTask#getJSONLogicalPlan()

2016-01-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125561#comment-15125561
 ] 

Ted Yu commented on HIVE-8177:
--

I don't think the test failure was related to the patch.

> Wrong parameter order in ExplainTask#getJSONLogicalPlan()
> -
>
> Key: HIVE-8177
> URL: https://issues.apache.org/jira/browse/HIVE-8177
> Project: Hive
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: SUYEON LEE
>Priority: Minor
> Attachments: HIVE-8177.patch
>
>
> {code}
>   JSONObject jsonPlan = outputMap(work.getParseContext().getTopOps(), 
> true,
>   out, jsonOutput, work.getExtended(), 0);
> {code}
> The order of the 4th and 5th parameters is reversed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12946) alter table should also add default scheme and authority for the location similar to create table

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125424#comment-15125424
 ] 

Hive QA commented on HIVE-12946:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785245/HIVE-12946.1.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 77 failed/errored test(s), 10033 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testAlterViewParititon
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testAlterPartition
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testAlterViewParititon
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testColumnStatistics
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testComplexTable
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testComplexTypeApi
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testConcurrentMetastores
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testDBOwner
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testDropTable
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testFunctionWithResources
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testGetTableObjects
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testListPartitionNames
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testListPartitions
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testNameMethods
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testPartitionFilter
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testRenamePartition
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testSimpleFunction
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testStatsFastTrivial
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testTableDatabase
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testAlterPartition
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testAlterViewParititon
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testColumnStatistics
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testComplexTable
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testComplexTypeApi
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testConcurrentMetastores
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testDBOwner
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testDropTable
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testFunctionWithResources
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testGetTableObjects
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testListPartitionNames
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testListPartitions
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testNameMethods
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testPartitionFilter
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testRenamePartition
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testSimpleFunction
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testStatsFastTrivial
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testTableDatabase
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testAlterPartition
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testAlterViewParititon
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testColumnStatistics
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testComplexTable
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testComplexTypeApi
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testConcurrentMetastores
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testDBOwner
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testDropTable
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testFunctionWithResources
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testGetTableObjects
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testListPartitionNames
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testListPartitions
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testNameMethods
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testPartitionFilter
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testRenamePartition
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testSimpleFunction
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testStatsFastTrivial

[jira] [Commented] (HIVE-12834) Fix to accept the arrow keys in BeeLine CLI

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125500#comment-15125500
 ] 

Ashutosh Chauhan commented on HIVE-12834:
-

+1

> Fix to accept the arrow keys in BeeLine CLI
> ---
>
> Key: HIVE-12834
> URL: https://issues.apache.org/jira/browse/HIVE-12834
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
> Environment: CentOS 6.7
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HIVE-12834.1.patch
>
>
> BeeLine in the master doesn't accept the arrow keys as follows (e.g. ^[[A is 
> up arrow key).
> {code}
> [root@hadoop ~]# beeline
> which: no hbase in 
> (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/hadoop/bin:/usr/local/hive/bin:/usr/pgsql-9.4/bin:/root/bin)
> Beeline version 2.1.0-SNAPSHOT by Apache Hive
> beeline> ^[[A^[[B^[[C^[[D
> {code}
> Because UnsupportedTerminal is set, in the same way as when running in the 
> background. We can check this with the ps command.
> {code}
> [root@hadoop ~]# ps -ef | grep beeline
> root   5799   1433  1 12:05 pts/000:00:02 /usr/lib/jvm/java/bin/java 
> -Xmx256m (snip) -Djline.terminal=jline.UnsupportedTerminal (snip) 
> org.apache.hive.beeline.BeeLine
> {code}
> I think HIVE-6758 affected this behavior. I will fix BeeLine to accept the 
> arrow keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12375) ensure hive.compactor.check.interval cannot be set too low

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125538#comment-15125538
 ] 

Ashutosh Chauhan commented on HIVE-12375:
-

Are reported test failures relevant?

> ensure hive.compactor.check.interval cannot be set too low
> --
>
> Key: HIVE-12375
> URL: https://issues.apache.org/jira/browse/HIVE-12375
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-12375.2.patch, HIVE-12375.3.patch, HIVE-12375.patch
>
>
> hive.compactor.check.interval can currently be set as low as 0, which 
> makes the Initiator spin needlessly, filling up logs, etc.
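One straightforward guard is to clamp the configured value to a floor when it is read. The sketch below is illustrative only; the floor value and names are assumptions, not what the HIVE-12375 patch actually chooses:

```java
class IntervalFloor {
    // Assumed minimum; the actual floor chosen by the patch may differ.
    static final long MIN_CHECK_INTERVAL_MS = 1_000L;

    // Clamp a user-supplied interval so the Initiator cannot busy-spin
    // when the config is set to 0 (or a negative value).
    static long effectiveIntervalMs(long configuredMs) {
        return Math.max(configuredMs, MIN_CHECK_INTERVAL_MS);
    }

    public static void main(String[] args) {
        System.out.println(effectiveIntervalMs(0));     // 1000
        System.out.println(effectiveIntervalMs(5_000)); // 5000
    }
}
```

An alternative is to reject too-small values at config-validation time instead of silently clamping; either way the Initiator's sleep never drops to zero.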



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12880) spark-assembly causes Hive class version problems

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125546#comment-15125546
 ] 

Hive QA commented on HIVE-12880:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785296/HIVE-12880.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10046 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementAsync
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6817/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6817/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6817/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12785296 - PreCommit-HIVE-TRUNK-Build

> spark-assembly causes Hive class version problems
> -
>
> Key: HIVE-12880
> URL: https://issues.apache.org/jira/browse/HIVE-12880
> Project: Hive
>  Issue Type: Bug
>Reporter: Hui Zheng
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12880.01.patch, HIVE-12880.02.patch, 
> HIVE-12880.patch
>
>
> It looks like spark-assembly contains versions of Hive classes (e.g. 
> HiveConf), and these sometimes (always?) come from older versions of Hive.
> We've seen problems where, depending on classpath perturbations, 
> NoSuchFieldError may be thrown for recently added ConfVars because the 
> HiveConf class comes from spark-assembly.
> Would making sure spark-assembly comes last in the classpath solve the 
> problem?
> Otherwise, can we depend on something that does not package older Hive 
> classes?
> Currently, HIVE-12179 provides a workaround (in non-Spark use case, at least; 
> I am assuming this issue can also affect Hive-on-Spark).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12964) TestOperationLoggingAPIWithMr,TestOperationLoggingAPIWithTez fail on branch-2.0 (with Java 7, at least)

2016-01-31 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12964:
-
Affects Version/s: 2.1.0
   2.0.0

> TestOperationLoggingAPIWithMr,TestOperationLoggingAPIWithTez fail on 
> branch-2.0 (with Java 7, at least)
> ---
>
> Key: HIVE-12964
> URL: https://issues.apache.org/jira/browse/HIVE-12964
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12964.2.patch, HIVE-12964.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12964) TestOperationLoggingAPIWithMr,TestOperationLoggingAPIWithTez fail on branch-2.0 (with Java 7, at least)

2016-01-31 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12964:
-
Attachment: HIVE-12964.2.patch

The actual reason was not setting a state flag while filtering the messages, so 
one PerfLogger message got into the OperationLog, causing the failure. Whether 
the state flag gets flipped depends on the arrival order of the messages. On 
master, some message arrived early and flipped the flag, but on branch-2.0 the 
PerfLogger message arrived early and its first instance did not flip the 
excludeMatches flag. We should commit this patch to both master and branch-2.0.

> TestOperationLoggingAPIWithMr,TestOperationLoggingAPIWithTez fail on 
> branch-2.0 (with Java 7, at least)
> ---
>
> Key: HIVE-12964
> URL: https://issues.apache.org/jira/browse/HIVE-12964
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12964.2.patch, HIVE-12964.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12956) run CBO in tests with mapred.mode=strict

2016-01-31 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125520#comment-15125520
 ] 

Jesus Camacho Rodriguez commented on HIVE-12956:


+1

> run CBO in tests with mapred.mode=strict
> 
>
> Key: HIVE-12956
> URL: https://issues.apache.org/jira/browse/HIVE-12956
> Project: Hive
>  Issue Type: Test
>Reporter: Sergey Shelukhin
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12956.2.patch, HIVE-12956.patch, HIVE12956.3.patch
>
>
> There's a strange condition in CBO check that prevents CBO from running in 
> Hive tests (in tests specifically) when mapred.mode is set to strict. I 
> remember seeing it before, and noticed it again recently.
> It is both surprising that we wouldn't test CBO in strict mode, and also 
> problematic for some q files because strict mode is going to be deprecated in 
> HIVE-12727. This needs to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125471#comment-15125471
 ] 

Ashutosh Chauhan commented on HIVE-12839:
-

We can track the above issue as a separate task.
+1 for the rest of the patch.

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12950) get rid of the NullScan emptyFile madness (part 1 - at least for Tez and LLAP)

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125481#comment-15125481
 ] 

Ashutosh Chauhan commented on HIVE-12950:
-

Failures look relevant, so they need to be looked at. Other than that, the patch LGTM. 
Also, you may consider putting NullFileSystem in the package 
ql/src/java/org/apache/hadoop/hive/ql/io/

> get rid of the NullScan emptyFile madness (part 1 - at least for Tez and LLAP)
> --
>
> Key: HIVE-12950
> URL: https://issues.apache.org/jira/browse/HIVE-12950
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12950.01.patch, HIVE-12950.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12374) Improve setting of JVM configs for HS2 and Metastore shell scripts

2016-01-31 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125545#comment-15125545
 ] 

Ashutosh Chauhan commented on HIVE-12374:
-

Patch LGTM. 
However, I think the default Xmx value should be 8GB; 2GB looks too small for a 
daemon process on typical current hardware.

> Improve setting of JVM configs for HS2 and Metastore shell scripts 
> ---
>
> Key: HIVE-12374
> URL: https://issues.apache.org/jira/browse/HIVE-12374
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-12374.1.patch
>
>
> Adding {{HIVESERVER2_JVM_OPTS}} and {{METASTORE_JVM_OPTS}} env variables, 
> which will eventually set {{HADOOP_CLIENT_OPTS}} (since we start the 
> processes using hadoop jar ...). Also setting these defaults: {{-Xms128m 
> -Xmx2048m -XX:MaxPermSize=128m}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7862) close of InputStream in Utils#copyToZipStream() should be placed in finally block

2016-01-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125552#comment-15125552
 ] 

Ted Yu commented on HIVE-7862:
--

QA still didn't pick up the patch.

> close of InputStream in Utils#copyToZipStream() should be placed in finally 
> block
> -
>
> Key: HIVE-7862
> URL: https://issues.apache.org/jira/browse/HIVE-7862
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: patch
> Attachments: HIVE-7862.1.patch, HIVE-7862_001.txt
>
>
> In accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/Utils.java , 
> line 278 :
> {code}
>   private static void copyToZipStream(InputStream is, ZipEntry entry, 
> ZipOutputStream zos)
>   throws IOException {
> zos.putNextEntry(entry);
> byte[] arr = new byte[4096];
> int read = is.read(arr);
> while (read > -1) {
>   zos.write(arr, 0, read);
>   read = is.read(arr);
> }
> is.close();
> {code}
> If read() throws IOException, is would be left unclosed.
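A minimal sketch of the requested fix, using try-with-resources so the input stream is closed on every path, including when read() or write() throws. The class name and the round-trip harness in main are invented for illustration; this is not the attached patch.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class CopyToZipStreamSketch {
    // try-with-resources guarantees is.close() even when read()/write()
    // throws IOException, which is what the JIRA asks for.
    static void copyToZipStream(InputStream is, ZipEntry entry, ZipOutputStream zos)
            throws IOException {
        try (InputStream in = is) {          // closed on all paths
            zos.putNextEntry(entry);
            byte[] arr = new byte[4096];
            int read = in.read(arr);
            while (read > -1) {
                zos.write(arr, 0, read);
                read = in.read(arr);
            }
            zos.closeEntry();
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            copyToZipStream(new ByteArrayInputStream("hello".getBytes()),
                    new ZipEntry("greeting.txt"), zos);
        }
        // Read the entry back to confirm the copy round-trips.
        try (ZipInputStream zis =
                new ZipInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            ZipEntry e = zis.getNextEntry();
            byte[] buf = new byte[16];
            int n, total = 0;
            while ((n = zis.read(buf, total, buf.length - total)) > 0) {
                total += n;
            }
            System.out.println(e.getName() + ":" + new String(buf, 0, total));
        }
    }
}
```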



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12877) Hive use index for queries will lose some data if the Query file is compressed.

2016-01-31 Thread yangfang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yangfang updated HIVE-12877:

Attachment: 19-index_compressed_file.gz
index_query_compressed_file_failure.q
HIVE-12877.1.patch

> Hive use index for queries will lose some data if the Query file is 
> compressed.
> ---
>
> Key: HIVE-12877
> URL: https://issues.apache.org/jira/browse/HIVE-12877
> Project: Hive
>  Issue Type: Bug
>  Components: Indexing
>Affects Versions: 1.2.1
> Environment: This problem exists in all Hive versions, no matter what 
> platform
>Reporter: yangfang
> Attachments: 19-index_compressed_file.gz, HIVE-12877.1.patch, 
> HIVE-12877.patch, index_query_compressed_file_failure.q
>
>
> Hive creates the index using the extracted (uncompressed) file length when 
> the file is compressed. But when MapReduce divides the data into splits, 
> Hive compares the on-disk file length with the extracted file length; if the 
> two lengths do not match, it filters out the file, so the query loses some 
> data.
> I modified the source code so that the Hive index can be used when the files 
> are compressed; please test it.
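The mismatch described above can be illustrated with a toy check. The class and method names and the lengths below are hypothetical, invented for illustration only; this is not Hive's actual split-filtering code.

```java
// Hypothetical illustration of the length check described in the issue.
// The index records the *extracted* length, while MapReduce sees the
// on-disk (compressed) length, so a strict equality check drops every
// compressed file.
public class IndexLengthCheckSketch {
    static boolean keepFile(long onDiskLength, long indexedLength) {
        return onDiskLength == indexedLength;
    }

    public static void main(String[] args) {
        // Uncompressed file: both lengths agree, so the file is kept.
        System.out.println(keepFile(1000L, 1000L));
        // Gzip file: 300 bytes on disk, 1000 bytes extracted,
        // so the file is wrongly filtered out of the query.
        System.out.println(keepFile(300L, 1000L));
    }
}
```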



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12934) Refactor llap module structure to allow for a usable client

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125899#comment-15125899
 ] 

Hive QA commented on HIVE-12934:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785390/HIVE-12934.03.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10032 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementAsync
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6822/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6822/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6822/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12785390 - PreCommit-HIVE-TRUNK-Build

> Refactor llap module structure to allow for a usable client
> ---
>
> Key: HIVE-12934
> URL: https://issues.apache.org/jira/browse/HIVE-12934
> Project: Hive
>  Issue Type: Task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-12934.01.patch, HIVE-12934.01.patch, 
> HIVE-12934.02.patch, HIVE-12934.02.review.patch, HIVE-12934.03.patch, 
> HIVE-12934.03.review.patch, HIVE-12934.1.patch, HIVE-12934.1.review.txt, 
> HIVE-12934.1.txt
>
>
> The client isn't really usable at the moment, and all of the code resides in 
> the llap-server module. Restructure this so that the daemon execution code 
> and cache code remains in server, common components move to a different 
> module and relevant client pieces sit in the client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12968) genNotNullFilterForJoinSourcePlan: needs to merge predicates into the multi-AND

2016-01-31 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125846#comment-15125846
 ] 

Gopal V commented on HIVE-12968:


This change seems to be triggering many hidden optimizations.

{code}
ql/src/test/results/clientpositive/spark/spark_dynamic_partition_pruning.q.out  
  | 2313 
+++---
 
ql/src/test/results/clientpositive/spark/spark_vectorized_dynamic_partition_pruning.q.out
 | 2515 
+--
{code}

Join orders are changing due to the constant folding.

> genNotNullFilterForJoinSourcePlan: needs to merge predicates into the 
> multi-AND
> ---
>
> Key: HIVE-12968
> URL: https://issues.apache.org/jira/browse/HIVE-12968
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Minor
> Attachments: HIVE-12968.1.patch
>
>
> {code}
> predicate: ((cbigint is not null and cint is not null) and cint BETWEEN 
> 100 AND 300) (type: boolean)
> {code}
> does not fold the IS_NULL on cint, because of the structure of the AND clause.
> For example, see {{tez_dynpart_hashjoin_1.q}}
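The merge the title asks for, pulling nested AND children into one multi-AND so sibling predicates on the same column become visible for folding, can be sketched roughly as below. The {{Expr}} type is an invented stand-in, not Hive's ExprNodeDesc, and this is not the attached patch.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FlattenAndSketch {
    // Invented minimal expression tree, standing in for Hive's real
    // predicate representation.
    static class Expr {
        final String op;
        final List<Expr> children;
        Expr(String op, Expr... children) {
            this.op = op;
            this.children = Arrays.asList(children);
        }
        @Override
        public String toString() {
            if (children.isEmpty()) {
                return op;
            }
            return op + "(" + String.join(", ",
                    children.stream().map(Expr::toString).toArray(String[]::new)) + ")";
        }
    }

    // Recursively pull the leaves of a nested AND tree into a single
    // child list, producing one multi-AND.
    static void flattenAnd(Expr e, List<Expr> out) {
        if (e.op.equals("AND")) {
            for (Expr c : e.children) {
                flattenAnd(c, out);
            }
        } else {
            out.add(e);
        }
    }

    public static void main(String[] args) {
        // The nested shape from the issue description.
        Expr nested = new Expr("AND",
                new Expr("AND",
                        new Expr("cbigint is not null"),
                        new Expr("cint is not null")),
                new Expr("cint BETWEEN 100 AND 300"));
        List<Expr> leaves = new ArrayList<>();
        flattenAnd(nested, leaves);
        System.out.println(new Expr("AND", leaves.toArray(new Expr[0])));
    }
}
```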



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12877) Hive use index for queries will lose some data if the Query file is compressed.

2016-01-31 Thread yangfang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125875#comment-15125875
 ] 

yangfang commented on HIVE-12877:
-

I have added a .q test in index_query_compressed_file_failure.q; 
19-index_compressed_file.gz is the compressed file used for the test.

> Hive use index for queries will lose some data if the Query file is 
> compressed.
> ---
>
> Key: HIVE-12877
> URL: https://issues.apache.org/jira/browse/HIVE-12877
> Project: Hive
>  Issue Type: Bug
>  Components: Indexing
>Affects Versions: 1.2.1
> Environment: This problem exists in all Hive versions, no matter what 
> platform
>Reporter: yangfang
> Attachments: 19-index_compressed_file.gz, HIVE-12877.1.patch, 
> HIVE-12877.patch, index_query_compressed_file_failure.q
>
>
> Hive creates the index using the extracted (uncompressed) file length when 
> the file is compressed. But when MapReduce divides the data into splits, 
> Hive compares the on-disk file length with the extracted file length; if the 
> two lengths do not match, it filters out the file, so the query loses some 
> data.
> I modified the source code so that the Hive index can be used when the files 
> are compressed; please test it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6147) Support avro data stored in HBase columns

2016-01-31 Thread Swarnim Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125676#comment-15125676
 ] 

Swarnim Kulkarni commented on HIVE-6147:


{quote}
It is pretty common to use schema-less avro objects in HBase.
{quote}

I am not sure if that is true (if possible at all). As far as my understanding 
goes, you will almost always have to provide the exact schema that was used 
when persisting the data in order to deserialize it, and the best way to do 
that is to store the schema alongside the data itself. Plus, schema evolution 
is going to be a mess. Imagine writing a billion rows in HBase with one schema 
which evolves, and then you write another billion rows with the new schema. How 
do you ensure the first billion rows are still correctly readable?

{quote}
(if there are billions of rows with objects of the same type, it is not 
reasonable to store the same schema in all of them) and it is not convenient to 
write a custom schema retriever for each such case.
{quote}

Correct, I agree it is inefficient to store it for every single cell, although 
IMO that isn't a good excuse not to write the schema at all. A better design in 
this case is to use some kind of schema registry with a custom serializer: 
write the schema to the schema registry, generate an id of some kind, and 
persist the id along with the data. Then, when you are reading the data, use 
the id to pull the schema from the registry and read the data. That is also 
where a custom implementation of an AvroSchemaRetriever makes sense: your 
custom implementation would know how to read your schema from the schema 
registry and hand it to Hive, letting Hive handle the deserialization from 
there on.
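The registry pattern described above, store the schema once, persist only an id with each cell, can be sketched as follows. All names here (InMemorySchemaRegistry, encode) are hypothetical; they are not part of Hive's HBase handler or of any real registry client.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class SchemaRegistrySketch {
    // Invented stand-in for a real schema registry service.
    static class InMemorySchemaRegistry {
        private final Map<Integer, String> byId = new HashMap<>();
        private int nextId = 1;

        // Store the schema once and hand back an id to persist with the data.
        int register(String schemaJson) {
            int id = nextId++;
            byId.put(id, schemaJson);
            return id;
        }

        String lookup(int id) {
            return byId.get(id);
        }
    }

    // Persist a 4-byte schema id in front of the record bytes instead of the
    // full schema, so a billion cells can share one registry entry.
    static byte[] encode(int schemaId, byte[] record) {
        return ByteBuffer.allocate(4 + record.length)
                .putInt(schemaId).put(record).array();
    }

    public static void main(String[] args) {
        InMemorySchemaRegistry registry = new InMemorySchemaRegistry();
        int id = registry.register("{\"type\":\"record\",\"name\":\"User\"}");
        byte[] cell = encode(id, "payload".getBytes(StandardCharsets.UTF_8));

        // On read: recover the id, fetch the schema, then deserialize.
        ByteBuffer buf = ByteBuffer.wrap(cell);
        int readId = buf.getInt();
        byte[] rest = new byte[buf.remaining()];
        buf.get(rest);
        System.out.println(registry.lookup(readId) + "|"
                + new String(rest, StandardCharsets.UTF_8));
    }
}
```

Schema evolution then becomes a registry concern: old cells keep their old id and remain readable with the schema they were written under.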

> Support avro data stored in HBase columns
> -
>
> Key: HIVE-6147
> URL: https://issues.apache.org/jira/browse/HIVE-6147
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 0.12.0, 0.13.0
>Reporter: Swarnim Kulkarni
>Assignee: Swarnim Kulkarni
>  Labels: TODOC14
> Fix For: 0.14.0
>
> Attachments: HIVE-6147.1.patch.txt, HIVE-6147.2.patch.txt, 
> HIVE-6147.3.patch.txt, HIVE-6147.3.patch.txt, HIVE-6147.4.patch.txt, 
> HIVE-6147.5.patch.txt, HIVE-6147.6.patch.txt
>
>
> Presently, the HBase Hive integration supports querying only primitive data 
> types in columns. It would be nice to be able to store and query Avro objects 
> in HBase columns by making them visible as structs to Hive. This will allow 
> Hive to perform ad hoc analysis of HBase data which can be deeply structured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12966) Change some ZooKeeperHiveLockManager logs to debug

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125604#comment-15125604
 ] 

Hive QA commented on HIVE-12966:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785305/HIVE-12966.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9960 tests executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file
TestMiniTezCliDriver-schema_evol_orc_acidvec_mapwork_part.q-vector_partitioned_date_time.q-vector_non_string_partition.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testTempTable
org.apache.hive.jdbc.TestSSL.testSSLVersion
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementAsync
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6818/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6818/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6818/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12785305 - PreCommit-HIVE-TRUNK-Build

> Change some ZooKeeperHiveLockManager logs to debug
> --
>
> Key: HIVE-12966
> URL: https://issues.apache.org/jira/browse/HIVE-12966
> Project: Hive
>  Issue Type: Bug
>Reporter: Mohit Sabharwal
>Assignee: Mohit Sabharwal
> Attachments: HIVE-12966.patch
>
>
> {{ZooKeeperHiveLockManager}} prints an info-level log line every time it 
> acquires or releases a lock. For a table with 10K partitions, that's 20K+ 
> log lines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12730) MetadataUpdater: provide a mechanism to edit the basic statistics of a table (or a partition)

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125670#comment-15125670
 ] 

Hive QA commented on HIVE-12730:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785311/HIVE-12730.03.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 31 failed/errored test(s), 10049 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_stats_orc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_create_temp_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnStatsUpdateForStatsOptimizer_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_tmp_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_tmp_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_analyze
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_temp_table_display_colstats_tbllvl
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_temp_table_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_temp_table_options1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_temp_table_precedence
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_temp_table_windowing_expressions
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_truncate_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_alter_merge_stats_orc
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_delete_tmp_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_insert_values_tmp_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_analyze
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_temp_table_rename
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_alter_merge_stats_orc
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_temp_table_join1
org.apache.hadoop.hive.ql.TestTxnCommands.testDeleteIn
org.apache.hadoop.hive.ql.TestTxnCommands2.testInitiatorWithMultipleFailedCompactions
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testTempTable
org.apache.hive.jdbc.TestJdbcWithMiniMr.testTempTable
org.apache.hive.jdbc.TestSSL.testSSLVersion
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementAsync
org.apache.hive.spark.client.TestSparkClient.testRemoteClient
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6819/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6819/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6819/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 31 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12785311 - PreCommit-HIVE-TRUNK-Build

> MetadataUpdater: provide a mechanism to edit the basic statistics of a table 
> (or a partition)
> -
>
> Key: HIVE-12730
> URL: https://issues.apache.org/jira/browse/HIVE-12730
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12730.01.patch, HIVE-12730.02.patch, 
> HIVE-12730.03.patch
>
>
> We would like to provide a way for developers/users to modify the numRows and 
> dataSize for a table/partition. Right now, although they are part of the 
> table properties, they are set to -1 when the update is not coming from a 
> statsTask. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12952) Show query sub-pages on webui

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125746#comment-15125746
 ] 

Hive QA commented on HIVE-12952:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785327/HIVE-12952.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9990 tests executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6820/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6820/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6820/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12785327 - PreCommit-HIVE-TRUNK-Build

> Show query sub-pages on webui
> -
>
> Key: HIVE-12952
> URL: https://issues.apache.org/jira/browse/HIVE-12952
> Project: Hive
>  Issue Type: Sub-task
>  Components: Diagnosability
>Reporter: Szehon Ho
>Assignee: Szehon Ho
> Attachments: Error - Query Info Expired.png, HIVE-12952.patch, Query 
> Drilldown link.png, Tab 3 - Stages.png, Tab 4 - Perf Logging.png, Tab1- Base 
> Profile.png, Tab2 - Query Plan.png
>
>
> Today the queries showing in Running and Completed lists have some basic 
> information like query string, elapsed time, state, user, etc.
> It may be helpful to have even more information like:
> * Job URL's, job status
> * Explain plan (configurable)
> * Error message (if failure)
> * Dynamic metrics, incl:
>  ** Number of Tables/partitions fetched
>  ** Time taken in each method via perf-logger.
> These should go in another page, so as not to clog the summary hiveserver.jsp 
> page.
> This JIRA aims to tackle some of those.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12943) Use default doesnot work in Hive 1.2.1

2016-01-31 Thread Kashish Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashish Jain updated HIVE-12943:

Attachment: HIVE-12943.patch

Attaching the patch for it

> Use default doesnot work in Hive 1.2.1
> --
>
> Key: HIVE-12943
> URL: https://issues.apache.org/jira/browse/HIVE-12943
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 1.2.1
>Reporter: Kashish Jain
> Fix For: 1.3.0, 1.2.1, 1.2.2
>
> Attachments: HIVE-12943.patch
>
>
> "USE Default" does not work with the latest hive 1.2.1
> The message is 
> "
>Cannot recognize input near 'default' '' '' in switch database 
> statement; line 1 pos 4
>NoViableAltException(81@[])
>at 
> org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:11577)
>at 
> org.apache.hadoop.hive.ql.parse.HiveParser.identifier(HiveParser.java:46055)
> "



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12908) Improve dynamic partition loading III

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125255#comment-15125255
 ] 

Hive QA commented on HIVE-12908:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785234/HIVE-12908.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 89 failed/errored test(s), 10046 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_concatenate_indexed_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_2_orc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_orc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_stats_orc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_numbuckets_partitioned_table2_h23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_cascade
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_char_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_merge_compressed
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_skew_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_fs_overwrite
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_overwrite
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nonreserved_keywords_insert_into1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_llap
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge_incompat1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge_incompat2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_union_remove_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_truncate_column_merge
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_remove_19
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_remove_22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_varchar_1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_reducers_power_two
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge5
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge6
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge7
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge8
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_incompat1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_incompat2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_alter_merge_2_orc
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_alter_merge_orc
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_alter_merge_stats_orc
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_create_merge_compressed
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_insert_into1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge10
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge11

[jira] [Commented] (HIVE-11752) Pre-materializing complex CTE queries

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125271#comment-15125271
 ] 

Hive QA commented on HIVE-11752:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785161/HIVE-11752.03.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10051 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_union
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_analyze1
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_dyn_part1
org.apache.hadoop.hive.ql.parse.TestGenTezWork.testCreateMap
org.apache.hadoop.hive.ql.parse.TestGenTezWork.testCreateReduce
org.apache.hive.jdbc.TestSSL.testSSLVersion
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementAsync
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6811/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6811/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6811/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12785161 - PreCommit-HIVE-TRUNK-Build

> Pre-materializing complex CTE queries
> -
>
> Key: HIVE-11752
> URL: https://issues.apache.org/jira/browse/HIVE-11752
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-11752.03.patch, HIVE-11752.1.patch.txt, 
> HIVE-11752.2.patch.txt
>
>
> Currently, Hive regards CTE clauses as simple aliases to the query block, 
> which causes redundant work if a CTE is used multiple times in a query. This 
> introduces a reference threshold for pre-materializing the CTE clause as a 
> volatile table (which does not exist in any form in the metastore and is 
> accessible only from the QB).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12963) LIMIT statement with SORT BY creates additional MR job with hardcoded only one reducer

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125272#comment-15125272
 ] 

Hive QA commented on HIVE-12963:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785214/HIVE-12963.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6812/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6812/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6812/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] 
[INFO] 
[INFO] Building Hive ORC 2.1.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-orc ---
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/orc/target
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/orc 
(includes = [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-orc ---
[INFO] 
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-source) @ hive-orc ---
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/orc/src/gen/protobuf-java 
added.
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-orc ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-orc 
---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/orc/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-orc ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-orc ---
[INFO] Compiling 60 source files to 
/data/hive-ptest/working/apache-github-source-source/orc/target/classes
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-orc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/orc/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-orc ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/orc/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/orc/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/orc/target/tmp/conf
 [copy] Copying 16 files to 
/data/hive-ptest/working/apache-github-source-source/orc/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hive-orc ---
[INFO] Compiling 12 source files to 
/data/hive-ptest/working/apache-github-source-source/orc/target/test-classes
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/orc/src/test/org/apache/orc/impl/TestRunLengthIntegerReader.java:
 Some input files use or override a deprecated API.
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/orc/src/test/org/apache/orc/impl/TestRunLengthIntegerReader.java:
 Recompile with -Xlint:deprecation for details.
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-orc ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-orc ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-github-source-source/orc/target/hive-orc-2.1.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hive-orc ---
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-orc ---
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/orc/target/hive-orc-2.1.0-SNAPSHOT.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/hive-orc/2.1.0-SNAPSHOT/hive-orc-2.1.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/orc/pom.xml to 
/data/hive-ptest/working/maven/org/apache/hive/hive-orc/2.1.0-SNAPSHOT/hive-orc-2.1.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Hive Common 2.1.0-SNAPSHOT
[INFO] 
[INFO] 
{noformat}

[jira] [Commented] (HIVE-12945) Bucket pruning: bucketing for -ve hashcodes have historical issues

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125306#comment-15125306
 ] 

Hive QA commented on HIVE-12945:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785228/HIVE-12945.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10046 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementAsync
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6813/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6813/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6813/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}


ATTACHMENT ID: 12785228 - PreCommit-HIVE-TRUNK-Build

> Bucket pruning: bucketing for -ve hashcodes have historical issues
> --
>
> Key: HIVE-12945
> URL: https://issues.apache.org/jira/browse/HIVE-12945
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Critical
> Attachments: HIVE-12945.02.patch, HIVE-12945.1.patch
>
>
> The different ETL pathways differed slightly in reducer choice for negative 
> hashcodes.
> {code}
> (hashCode & Integer.MAX_VALUE) % numberOfBuckets;
> !=
> Math.abs(hashCode) % numberOfBuckets
> {code}
> Add a backwards-compat option that can be used to protect against old data 
> left over from 0.13.
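To make the mismatch concrete, here is a minimal standalone Java sketch (not Hive code; the class and method names are illustrative) comparing the two formulas:

```java
// Sketch: the two bucket-assignment formulas disagree for some negative
// hash codes, and the Math.abs variant can even yield a negative index.
public class BucketHashDemo {

    // One pathway: mask off the sign bit, so the result is always non-negative.
    static int maskBucket(int hashCode, int numBuckets) {
        return (hashCode & Integer.MAX_VALUE) % numBuckets;
    }

    // The other pathway: Math.abs(Integer.MIN_VALUE) overflows back to
    // Integer.MIN_VALUE, so this can produce a negative "bucket".
    static int absBucket(int hashCode, int numBuckets) {
        return Math.abs(hashCode) % numBuckets;
    }

    public static void main(String[] args) {
        int n = 3;
        // A plain negative hash code: the two formulas pick different buckets.
        System.out.println(maskBucket(-5, n)); // 0
        System.out.println(absBucket(-5, n));  // 2
        // Integer.MIN_VALUE: absBucket returns an invalid negative index.
        System.out.println(maskBucket(Integer.MIN_VALUE, n)); // 0
        System.out.println(absBucket(Integer.MIN_VALUE, n));  // -2
    }
}
```

For hashCode = -5 with 3 buckets the mask formula picks bucket 0 while the abs formula picks bucket 2; for Integer.MIN_VALUE the abs formula even yields -2, which is why rows written by the two pathways can land in different buckets and why a compat option is needed for old data.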





[jira] [Commented] (HIVE-12968) genNotNullFilterForJoinSourcePlan: needs to merge predicates into the multi-AND

2016-01-31 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125304#comment-15125304
 ] 

Jesus Camacho Rodriguez commented on HIVE-12968:


I will add support for folding _is not null_ when there is a BETWEEN clause on 
the same column as part of HIVE-12543; thus, we would fold this on the Calcite 
side.

> genNotNullFilterForJoinSourcePlan: needs to merge predicates into the 
> multi-AND
> ---
>
> Key: HIVE-12968
> URL: https://issues.apache.org/jira/browse/HIVE-12968
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Minor
> Attachments: HIVE-12968.1.patch
>
>
> {code}
> predicate: ((cbigint is not null and cint is not null) and cint BETWEEN 
> 100 AND 300) (type: boolean)
> {code}
> does not fold the IS NOT NULL on cint, because of the structure of the AND 
> clause.
> For example, see {{tez_dynpart_hashjoin_1.q}}
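As a rough illustration of the intended folding (a standalone sketch with made-up names, not Hive's actual optimizer classes), once the nested AND is flattened into a single list of conjuncts, an {{x is not null}} term is redundant whenever an {{x BETWEEN lo AND hi}} term binds the same column, since BETWEEN can never be true for a NULL x:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch only: conjuncts of the flattened multi-AND are
// modeled as plain strings, with the column name as the first token.
public class NotNullFolder {

    static List<String> foldNotNull(List<String> conjuncts) {
        // Collect columns already constrained by a BETWEEN predicate.
        Set<String> boundedCols = new HashSet<>();
        for (String c : conjuncts) {
            if (c.contains(" BETWEEN ")) {
                boundedCols.add(c.split(" ")[0]);
            }
        }
        // Drop "col is not null" terms made redundant by a BETWEEN on col.
        List<String> folded = new ArrayList<>();
        for (String c : conjuncts) {
            boolean redundant = c.endsWith(" is not null")
                    && boundedCols.contains(c.split(" ")[0]);
            if (!redundant) {
                folded.add(c);
            }
        }
        return folded;
    }

    public static void main(String[] args) {
        List<String> conjuncts = List.of(
                "cbigint is not null",
                "cint is not null",
                "cint BETWEEN 100 AND 300");
        // "cint is not null" is dropped; the other two conjuncts remain.
        System.out.println(foldNotNull(conjuncts));
    }
}
```

The key point is the first step: the predicate from the example above only folds once the nested {{(a and b) and c}} structure has been merged into one multi-AND, which is what the patch addresses.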





[jira] [Commented] (HIVE-12892) Add global change versioning to permanent functions in metastore

2016-01-31 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125350#comment-15125350
 ] 

Hive QA commented on HIVE-12892:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785229/HIVE-12892.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6814/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6814/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6814/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-6814/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 208ab35 HIVE-12727 : refactor Hive strict checks to be more 
granular, allow order by no limit and no partition filter by default for now 
(Sergey Shelukhin, reviewed by Xuefu Zhang) ADDENDUM2
+ git clean -f -d
Removing common/src/java/org/apache/hadoop/hive/conf/HiveConf.java.orig
Removing itests/src/test/resources/testconfiguration.properties.orig
Removing ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java.orig
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 208ab35 HIVE-12727 : refactor Hive strict checks to be more 
granular, allow order by no limit and no partition filter by default for now 
(Sergey Shelukhin, reviewed by Xuefu Zhang) ADDENDUM2
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}


ATTACHMENT ID: 12785229 - PreCommit-HIVE-TRUNK-Build

> Add global change versioning to permanent functions in metastore
> 
>
> Key: HIVE-12892
> URL: https://issues.apache.org/jira/browse/HIVE-12892
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12892.01.patch, HIVE-12892.02.patch, 
> HIVE-12892.nogen.patch, HIVE-12892.patch
>
>



