[jira] [Commented] (HIVE-17607) remove ColumnStatsDesc usage from columnstatsupdatetask

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204486#comment-16204486
 ] 

Hive QA commented on HIVE-17607:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12892074/HIVE-17607.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 11233 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_notin] 
(batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_select] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] 
(batchId=108)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query39] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query94] 
(batchId=242)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query16] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query94] 
(batchId=240)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=203)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7288/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7288/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7288/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 13 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12892074 - PreCommit-HIVE-Build

> remove ColumnStatsDesc usage from columnstatsupdatetask
> ---
>
> Key: HIVE-17607
> URL: https://issues.apache.org/jira/browse/HIVE-17607
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Gergely Hajós
> Attachments: HIVE-17607.1.patch, HIVE-17607.2.patch, 
> HIVE-17607.3.patch
>
>
> it's not entirely connected to this task... it should either have its own 
> descriptor, or the work should take on the tablename/coltype/colname payload





[jira] [Commented] (HIVE-17633) Make it possible to override the query results directory in TestBeeLineDriver

2017-10-13 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204473#comment-16204473
 ] 

Lefty Leverenz commented on HIVE-17633:
---

Should this be documented in the wiki?

> Make it possible to override the query results directory in TestBeeLineDriver
> -
>
> Key: HIVE-17633
> URL: https://issues.apache.org/jira/browse/HIVE-17633
> Project: Hive
>  Issue Type: Bug
>  Components: Test
>Affects Versions: 3.0.0
>Reporter: Peter Vary
>Assignee: Peter Vary
> Fix For: 3.0.0
>
> Attachments: HIVE-17633.patch
>
>
> It would be good to have the possibility to override where the 
> TestBeeLineDriver looks for the golden files





[jira] [Commented] (HIVE-17391) Compaction fails if there is an empty value in tblproperties

2017-10-13 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204470#comment-16204470
 ] 

Steve Yeom commented on HIVE-17391:
---

[~eugene.koifman] 
Hey Eugene, 
Please review patch 02. 
Thanks, 
Steve. 

> Compaction fails if there is an empty value in tblproperties
> 
>
> Key: HIVE-17391
> URL: https://issues.apache.org/jira/browse/HIVE-17391
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Transactions
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Ashutosh Chauhan
>Assignee: Steve Yeom
> Attachments: HIVE-17391.01.patch, HIVE-17391.02.patch
>
>
> {code}
> create table t1 (a int) tblproperties ('serialization.null.format'='');
> alter table t1 compact 'major';
> {code}
> fails





[jira] [Commented] (HIVE-17620) Use the default MR scratch directory (HDFS) in the only case when hive.blobstore.optimizations.enabled=true AND isFinalJob=true

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204459#comment-16204459
 ] 

Hive QA commented on HIVE-17620:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12892063/HIVE-17620.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 11233 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[write_final_output_blobstore]
 (batchId=243)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=101)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_notin] 
(batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_select] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] 
(batchId=108)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query94] 
(batchId=242)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query16] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query94] 
(batchId=240)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=203)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[wrong_distinct2]
 (batchId=239)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7287/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7287/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7287/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12892063 - PreCommit-HIVE-Build

> Use the default MR scratch directory (HDFS) in the only case when 
> hive.blobstore.optimizations.enabled=true AND isFinalJob=true
> ---
>
> Key: HIVE-17620
> URL: https://issues.apache.org/jira/browse/HIVE-17620
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.2.0, 2.3.0, 3.0.0
>Reporter: Gergely Hajós
>Assignee: Gergely Hajós
> Attachments: HIVE-17620.1.patch, HIVE-17620.2.patch, 
> HIVE-17620.3.patch, HIVE-17620.4.patch
>
>
> Introduced in HIVE-15121. Context::getTempDirForPath tries to use the temporary 
> MR directory instead of the blobstore directory in three cases:
> {code}
> if (!isFinalJob && BlobStorageUtils.areOptimizationsEnabled(conf)) {
> {code}
> while the only valid case for using a temporary MR dir is when optimization 
> is enabled and the job is not final:
> {code}
> if (BlobStorageUtils.areOptimizationsEnabled(conf) && !isFinalJob) {
> {code}





[jira] [Commented] (HIVE-17607) remove ColumnStatsDesc usage from columnstatsupdatetask

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204438#comment-16204438
 ] 

Hive QA commented on HIVE-17607:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12892074/HIVE-17607.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 11233 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_gby_empty] 
(batchId=79)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] 
(batchId=145)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_notin] 
(batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_select] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] 
(batchId=108)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query94] 
(batchId=242)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query16] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query23] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query94] 
(batchId=240)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=203)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7286/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7286/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7286/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12892074 - PreCommit-HIVE-Build

> remove ColumnStatsDesc usage from columnstatsupdatetask
> ---
>
> Key: HIVE-17607
> URL: https://issues.apache.org/jira/browse/HIVE-17607
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Gergely Hajós
> Attachments: HIVE-17607.1.patch, HIVE-17607.2.patch, 
> HIVE-17607.3.patch
>
>
> it's not entirely connected to this task... it should either have its own 
> descriptor, or the work should take on the tablename/coltype/colname payload





[jira] [Assigned] (HIVE-17811) LLAP: Use NUMA interleaved allocations for memory cache on POWER cpus

2017-10-13 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V reassigned HIVE-17811:
--

Assignee: Gopal V

> LLAP: Use NUMA interleaved allocations for memory cache on POWER cpus
> -
>
> Key: HIVE-17811
> URL: https://issues.apache.org/jira/browse/HIVE-17811
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Gopal V
>






[jira] [Updated] (HIVE-17736) ObjectStore transaction handling can be simplified

2017-10-13 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17736:
--
Attachment: HIVE-17736.01.patch

> ObjectStore transaction handling can be simplified
> --
>
> Key: HIVE-17736
> URL: https://issues.apache.org/jira/browse/HIVE-17736
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17736.01.patch
>
>
> There are many places in ObjectStore that do something like this:
> {code}
> boolean commited = false;
> try {
>   openTransaction();
>   commited = commitTransaction();
> } finally {
>   if (!commited) {
>     rollbackTransaction();
>   }
> }
> {code}
> We can simplify this in two ways:
> 1) Create a wrapper that calls a given piece of code inside the block of code 
> above. This is similar to TransactionManager in Sentry.
> 2) Create a special auto-closeable object that does the check and rollback on 
> close (see the sketch below).
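A minimal sketch of option 2, assuming the guard is an inner class of ObjectStore with access to the existing openTransaction()/commitTransaction()/rollbackTransaction() methods (the name TransactionGuard is illustrative, not from the patch):

{code}
// Hypothetical auto-closeable guard: close() rolls back unless commit() succeeded.
private final class TransactionGuard implements AutoCloseable {
  private boolean commited = false;

  TransactionGuard() {
    openTransaction();
  }

  void commit() {
    commited = commitTransaction();
  }

  @Override
  public void close() {
    if (!commited) {
      rollbackTransaction();
    }
  }
}

// Usage in an ObjectStore method:
// try (TransactionGuard tx = new TransactionGuard()) {
//   // ... JDO work ...
//   tx.commit();
// }
{code}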





[jira] [Updated] (HIVE-17736) ObjectStore transaction handling can be simplified

2017-10-13 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17736:
--
Status: Patch Available  (was: Open)

> ObjectStore transaction handling can be simplified
> --
>
> Key: HIVE-17736
> URL: https://issues.apache.org/jira/browse/HIVE-17736
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17736.01.patch
>
>
> There are many places in ObjectStore that do something like this:
> {code}
> boolean commited = false;
> try {
>   openTransaction();
>   commited = commitTransaction();
> } finally {
>   if (!commited) {
>     rollbackTransaction();
>   }
> }
> {code}
> We can simplify this in two ways:
> 1) Create a wrapper that calls a given piece of code inside the block of code 
> above. This is similar to TransactionManager in Sentry.
> 2) Create a special auto-closeable object that does the check and rollback on 
> close.





[jira] [Commented] (HIVE-17756) Enable subquery related Qtests for Hive on Spark

2017-10-13 Thread Dapeng Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204412#comment-16204412
 ] 

Dapeng Sun commented on HIVE-17756:
---

Thanks Xuefu for your review and comments.

> Enable subquery related Qtests for Hive on Spark
> 
>
> Key: HIVE-17756
> URL: https://issues.apache.org/jira/browse/HIVE-17756
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
> Fix For: 3.0.0
>
> Attachments: HIVE-17756.001.patch
>
>
> HIVE-15456 and HIVE-15192 use Calcite to decorrelate and plan subqueries. 
> This JIRA is to introduce subquery tests and verify the subquery plans for 
> Hive on Spark.





[jira] [Commented] (HIVE-17620) Use the default MR scratch directory (HDFS) in the only case when hive.blobstore.optimizations.enabled=true AND isFinalJob=true

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204411#comment-16204411
 ] 

Hive QA commented on HIVE-17620:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12892063/HIVE-17620.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 11233 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[write_final_output_blobstore]
 (batchId=243)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=101)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_notin] 
(batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_select] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] 
(batchId=108)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query94] 
(batchId=242)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query16] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query94] 
(batchId=240)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=203)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7285/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7285/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7285/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12892063 - PreCommit-HIVE-Build

> Use the default MR scratch directory (HDFS) in the only case when 
> hive.blobstore.optimizations.enabled=true AND isFinalJob=true
> ---
>
> Key: HIVE-17620
> URL: https://issues.apache.org/jira/browse/HIVE-17620
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.2.0, 2.3.0, 3.0.0
>Reporter: Gergely Hajós
>Assignee: Gergely Hajós
> Attachments: HIVE-17620.1.patch, HIVE-17620.2.patch, 
> HIVE-17620.3.patch, HIVE-17620.4.patch
>
>
> Introduced in HIVE-15121. Context::getTempDirForPath tries to use the temporary 
> MR directory instead of the blobstore directory in three cases:
> {code}
> if (!isFinalJob && BlobStorageUtils.areOptimizationsEnabled(conf)) {
> {code}
> while the only valid case for using a temporary MR dir is when optimization 
> is enabled and the job is not final:
> {code}
> if (BlobStorageUtils.areOptimizationsEnabled(conf) && !isFinalJob) {
> {code}





[jira] [Updated] (HIVE-17534) Add a config to turn off parquet vectorization

2017-10-13 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-17534:
---
Attachment: HIVE-17534.04-branch-2.patch

Attaching the patch for branch-2. branch-2 does not have the {{row.serde.excludes}} 
config, so I had to make some changes to the q files.

> Add a config to turn off parquet vectorization
> --
>
> Key: HIVE-17534
> URL: https://issues.apache.org/jira/browse/HIVE-17534
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-17534.01.patch, HIVE-17534.02.patch, 
> HIVE-17534.03.patch, HIVE-17534.04-branch-2.patch
>
>
> It would be a good addition to give users an option to turn off Parquet 
> vectorization without affecting vectorization on other file formats. 





[jira] [Commented] (HIVE-17672) Upgrade Calcite version to 1.14

2017-10-13 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204385#comment-16204385
 ] 

Ashutosh Chauhan commented on HIVE-17672:
-

Patch looks good. Some stylistic comments on RB.

> Upgrade Calcite version to 1.14
> ---
>
> Key: HIVE-17672
> URL: https://issues.apache.org/jira/browse/HIVE-17672
> Project: Hive
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-17672.01.patch, HIVE-17672.02.patch, 
> HIVE-17672.03.patch, HIVE-17672.04.patch
>
>
> Calcite 1.14.0 has been recently released.





[jira] [Commented] (HIVE-17434) Using "add jar " from viewFs always occurred hdfs mismatch error

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204380#comment-16204380
 ] 

Hive QA commented on HIVE-17434:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12892021/HIVE-17434.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7284/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7284/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7284/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-10-14 00:10:42.497
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-7284/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-10-14 00:10:42.499
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at ea89de7 HIVE-17782: Inconsistent cast behavior from string to 
numeric types with regards to leading/trailing spaces (Jason Dere, reviewed by 
Ashutosh Chauhan)
+ git clean -f -d
Removing 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/functions/HiveSqlSumEmptyIsZeroAggFunction.java
Removing 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEpochMilli.java
Removing ql/src/test/queries/clientpositive/timestamptz_3.q
Removing ql/src/test/results/clientpositive/timestamptz_3.q.out
Removing standalone-metastore/src/gen/org/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at ea89de7 HIVE-17782: Inconsistent cast behavior from string to 
numeric types with regards to leading/trailing spaces (Jason Dere, reviewed by 
Ashutosh Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-10-14 00:10:45.995
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
fatal: corrupt patch at line 22
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12892021 - PreCommit-HIVE-Build

> Using "add jar " from viewFs always occurred hdfs mismatch error
> 
>
> Key: HIVE-17434
> URL: https://issues.apache.org/jira/browse/HIVE-17434
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: shenxianqiang
>Assignee: Bang Xiao
>Priority: Minor
> Fix For: 1.2.1
>
> Attachments: HIVE-17434.patch
>
>
> add jar viewfs://nsX//lib/common.jar
> always results in a mismatch error
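A likely cause of this class of error (a hedged illustration, not the actual Hive code) is resolving the jar path against the default FileSystem instead of the filesystem named in the URI:

{code}
// Wrong: uses fs.defaultFS and fails with a "Wrong FS" error for viewfs:// paths.
FileSystem fs = FileSystem.get(conf);
// Right: resolves the filesystem from the path's own scheme and authority.
FileSystem fs2 = new Path("viewfs://nsX//lib/common.jar").getFileSystem(conf);
{code}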





[jira] [Updated] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1

2017-10-13 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15016:

Attachment: HIVE-15016.5.patch

patch-5: fix some unit test failures.

> Run tests with Hadoop 3.0.0-beta1
> -
>
> Key: HIVE-15016
> URL: https://issues.apache.org/jira/browse/HIVE-15016
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Sergio Peña
>Assignee: Aihua Xu
> Attachments: HIVE-15016.2.patch, HIVE-15016.3.patch, 
> HIVE-15016.4.patch, HIVE-15016.5.patch, HIVE-15016.patch, 
> Hadoop3Upstream.patch
>
>
> Hadoop 3.0.0-alpha1 was released back in Sep/16 to allow other components to run 
> tests against this new version before GA.
> We should start running tests with Hive to validate compatibility against 
> Hadoop 3.0.
> NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 
> GA is released.





[jira] [Updated] (HIVE-17404) Orc split generation cache does not handle files without file tail

2017-10-13 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-17404:
-
Priority: Critical  (was: Blocker)

> Orc split generation cache does not handle files without file tail
> --
>
> Key: HIVE-17404
> URL: https://issues.apache.org/jira/browse/HIVE-17404
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
>
> Some old files do not have an ORC FileTail. If the file tail does not exist, split 
> generation should fall back to the old way of storing footers. 
> This can result in exceptions like below
> {code}
> ORC split generation failed with exception: Malformed ORC file. Invalid 
> postscript length 9
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1735)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1822)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:450)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:569)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.orc.FileFormatException: Malformed ORC file. Invalid 
> postscript length 9
>   at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:297)
>   at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:470)
>   at 
> org.apache.hadoop.hive.ql.io.orc.LocalCache.getAndValidate(LocalCache.java:103)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$ETLSplitStrategy.getSplits(OrcInputFormat.java:804)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$ETLSplitStrategy.runGetSplitsSync(OrcInputFormat.java:922)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$ETLSplitStrategy.generateSplitWork(OrcInputFormat.java:891)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.scheduleSplits(OrcInputFormat.java:1763)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1707)
>   ... 15 more
> {code}





[jira] [Updated] (HIVE-17508) Implement global execution triggers based on counters

2017-10-13 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-17508:
-
Attachment: HIVE-17508.12.patch

Addressed review comments

> Implement global execution triggers based on counters
> -
>
> Key: HIVE-17508
> URL: https://issues.apache.org/jira/browse/HIVE-17508
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-17508.1.patch, HIVE-17508.10.patch, 
> HIVE-17508.11.patch, HIVE-17508.12.patch, HIVE-17508.2.patch, 
> HIVE-17508.3.patch, HIVE-17508.3.patch, HIVE-17508.4.patch, 
> HIVE-17508.5.patch, HIVE-17508.6.patch, HIVE-17508.7.patch, 
> HIVE-17508.8.patch, HIVE-17508.9.patch, HIVE-17508.WIP.2.patch, 
> HIVE-17508.WIP.patch
>
>
> Workload management can define Triggers that are bound to a resource plan. 
> Each trigger can have a trigger expression and an action associated with it. 
> Trigger expressions are evaluated at runtime after a configurable check 
> interval, based on which actions like killing a query, moving a query to a 
> different pool, etc. will get invoked. A simple execution trigger could be 
> something like:
> {code}
> CREATE TRIGGER slow_query IN global
> WHEN execution_time_ms > 1
> MOVE TO slow_queue
> {code}





[jira] [Updated] (HIVE-17214) check/fix conversion of unbucketed non-acid to acid

2017-10-13 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17214:
--
Status: Patch Available  (was: Open)

> check/fix conversion of unbucketed non-acid to acid
> ---
>
> Key: HIVE-17214
> URL: https://issues.apache.org/jira/browse/HIVE-17214
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Minor
> Attachments: HIVE-17214.01.patch
>
>
> Bucketed tables have stricter rules for file layout on disk: bucket files 
> are direct children of a partition directory.
> For un-bucketed tables I'm not sure there are any rules; 
> for example, CTAS with Tez + a Union operator creates one directory for each leg 
> of the union.
> Supposedly Hive can read a table by picking up all files recursively.  
> Can it also write (other than the CTAS example above) arbitrarily?
> Does it mean an Acid write can also write anywhere?
> Figure out what can be supported and how the existing layout can be checked.  
> Examining a full "ls -l -R" for a large table could be expensive. 





[jira] [Commented] (HIVE-17672) Upgrade Calcite version to 1.14

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204332#comment-16204332
 ] 

Hive QA commented on HIVE-17672:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12891996/HIVE-17672.04.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 11235 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_functions] 
(batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamptz] (batchId=61)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamptz_2] 
(batchId=77)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_notin] 
(batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_select] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] 
(batchId=108)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query36] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query67] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query70] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query86] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query94] 
(batchId=242)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query16] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query22] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query36] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query67] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query70] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query86] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query94] 
(batchId=240)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=203)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7283/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7283/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7283/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 24 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12891996 - PreCommit-HIVE-Build

> Upgrade Calcite version to 1.14
> ---
>
> Key: HIVE-17672
> URL: https://issues.apache.org/jira/browse/HIVE-17672
> Project: Hive
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-17672.01.patch, HIVE-17672.02.patch, 
> HIVE-17672.03.patch, HIVE-17672.04.patch
>
>
> Calcite 1.14.0 has been recently released.





[jira] [Commented] (HIVE-17508) Implement global execution triggers based on counters

2017-10-13 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204333#comment-16204333
 ] 

Prasanth Jayachandran commented on HIVE-17508:
--

For some reason I cannot post some comments in RB. Posting them here.

bq. Hive object is not threadsafe, passing it to runnable may not be valid. It 
should be obtained on the thread that's going to use it.
The Hive object is not passed to the validator thread here. 
MetastoreGlobalTriggersFetcher is executed in the same thread as 
TezSessionPoolManager. After fetching the triggers, they are passed to the 
validator thread.

> Implement global execution triggers based on counters
> -
>
> Key: HIVE-17508
> URL: https://issues.apache.org/jira/browse/HIVE-17508
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-17508.1.patch, HIVE-17508.10.patch, 
> HIVE-17508.11.patch, HIVE-17508.2.patch, HIVE-17508.3.patch, 
> HIVE-17508.3.patch, HIVE-17508.4.patch, HIVE-17508.5.patch, 
> HIVE-17508.6.patch, HIVE-17508.7.patch, HIVE-17508.8.patch, 
> HIVE-17508.9.patch, HIVE-17508.WIP.2.patch, HIVE-17508.WIP.patch
>
>
> Workload management can define Triggers that are bound to a resource plan. 
> Each trigger can have a trigger expression and an action associated with it. 
> Trigger expressions are evaluated at runtime after a configurable check 
> interval, based on which actions like killing a query, moving a query to a 
> different pool, etc. will get invoked. A simple execution trigger could be 
> something like:
> {code}
> CREATE TRIGGER slow_query IN global
> WHEN execution_time_ms > 1
> MOVE TO slow_queue
> {code}





[jira] [Updated] (HIVE-17214) check/fix conversion of unbucketed non-acid to acid

2017-10-13 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17214:
--
Attachment: HIVE-17214.01.patch

> check/fix conversion of unbucketed non-acid to acid
> ---
>
> Key: HIVE-17214
> URL: https://issues.apache.org/jira/browse/HIVE-17214
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Minor
> Attachments: HIVE-17214.01.patch
>
>
> Bucketed tables have stricter rules for file layout on disk: bucket files 
> are direct children of a partition directory.
> For un-bucketed tables I'm not sure there are any rules; 
> for example, CTAS with Tez + a Union operator creates one directory for each leg 
> of the union.
> Supposedly Hive can read a table by picking up all files recursively.  
> Can it also write (other than the CTAS example above) arbitrarily?
> Does it mean an Acid write can also write anywhere?
> Figure out what can be supported and how the existing layout can be checked.  
> Examining a full "ls -l -R" for a large table could be expensive. 





[jira] [Commented] (HIVE-17730) Queries can be closed automatically

2017-10-13 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204329#comment-16204329
 ] 

Alexander Kolbasov commented on HIVE-17730:
---

The QueryWrapper was introduced with HIVE-10895.

> Queries can be closed automatically
> ---
>
> Key: HIVE-17730
> URL: https://issues.apache.org/jira/browse/HIVE-17730
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17730.05.patch
>
>
> HIVE-16213 made QueryWrapper AutoCloseable, but queries are still closed 
> manually and not by using try-with-resources. And now Query itself is 
> auto-closeable, so we don't need the wrapper at all.
> So we should get rid of QueryWrapper and use try-with-resources to create 
> queries.
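A minimal sketch of the proposed pattern, assuming a JDO PersistenceManager {{pm}} and that Query implements AutoCloseable as the description states (the candidate class and filter are illustrative, not from the patch):

{code}
// Hypothetical example: the query is closed automatically when the block exits,
// with no QueryWrapper needed.
try (Query query = pm.newQuery(MTable.class, "tableName == name")) {
  query.declareParameters("java.lang.String name");
  List<MTable> tables = (List<MTable>) query.execute("some_table");
  // ... use tables before the try block closes the query ...
}
{code}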





[jira] [Updated] (HIVE-14731) Use Tez cartesian product edge in Hive (unpartitioned case only)

2017-10-13 Thread Zhiyuan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiyuan Yang updated HIVE-14731:

Attachment: HIVE-14731.22.patch

> Use Tez cartesian product edge in Hive (unpartitioned case only)
> 
>
> Key: HIVE-14731
> URL: https://issues.apache.org/jira/browse/HIVE-14731
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Attachments: HIVE-14731.1.patch, HIVE-14731.10.patch, 
> HIVE-14731.11.patch, HIVE-14731.12.patch, HIVE-14731.13.patch, 
> HIVE-14731.14.patch, HIVE-14731.15.patch, HIVE-14731.16.patch, 
> HIVE-14731.17.patch, HIVE-14731.18.patch, HIVE-14731.19.patch, 
> HIVE-14731.2.patch, HIVE-14731.20.patch, HIVE-14731.21.patch, 
> HIVE-14731.22.patch, HIVE-14731.3.patch, HIVE-14731.4.patch, 
> HIVE-14731.5.patch, HIVE-14731.6.patch, HIVE-14731.7.patch, 
> HIVE-14731.8.patch, HIVE-14731.9.patch
>
>
> Given that the cartesian product edge is available in Tez now (see TEZ-3230), let's 
> integrate it into Hive on Tez. This allows us to have more than one reducer 
> in cross product queries.





[jira] [Comment Edited] (HIVE-17138) FileSinkOperator doesn't create empty files for acid path

2017-10-13 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168645#comment-16168645
 ] 

Eugene Koifman edited comment on HIVE-17138 at 10/13/17 11:03 PM:
--

in acid1 arguably only the base/ has to have a full complement of bucket files
in acid2 all insert deltas should as well
*In particular, the Compactor should make sure to produce empty buckets, which it 
doesn't currently do*
for example, when delete events have filtered out everything from a given bucket


was (Author: ekoifman):
in acid1 arguably only the base/ has to have a full complement of bucket files
in acid2 all insert deltas should as well
In particular, the Compactor should make sure to produce empty buckets, which it 
doesn't currently do

> FileSinkOperator doesn't create empty files for acid path
> -
>
> Key: HIVE-17138
> URL: https://issues.apache.org/jira/browse/HIVE-17138
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.2.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> For bucketed tables, FileSinkOperator is expected (in some cases) to produce 
> a specific number of files even if they are empty.
> FileSinkOperator.closeOp(boolean abort) has logic to create files even if 
> empty.
> This doesn't properly work for the Acid path.  For Insert, the 
> OrcRecordUpdater(s) is set up in createBucketForFileIdx(), which creates the 
> actual bucketN file (as of HIVE-14007, it does so regardless of whether the 
> RecordUpdater sees any rows).  This causes empty (i.e. ORC metadata only) 
> bucket files to be created for multiFileSpray=true if a particular 
> FileSinkOperator.process() sees at least 1 row.  For example,
> {noformat}
> create table fourbuckets (a int, b int) clustered by (a) into 4 buckets 
> stored as orc TBLPROPERTIES ('transactional'='true');
> insert into fourbuckets values(0,1),(1,1);
> with mapreduce.job.reduces = 1 or 2 
> {noformat}
> For the Update/Delete path, the OrcRecordWriter is created lazily when the 1st row 
> that needs to land there is seen.  Thus it never creates empty buckets, no 
> matter what the value of _skipFiles_ is in closeOp(boolean).
> Once Split Update does the split early (in the operator pipeline), only the Insert 
> path will matter, since base and delta are the only files that split computation, 
> etc. looks at.  delete_delta is only for Acid internals, so there is never any 
> reason to create empty files there.
> Also make sure to close RecordUpdaters in FileSinkOperator.abortWriters()





[jira] [Updated] (HIVE-17138) FileSinkOperator/Compactor doesn't create empty files for acid path

2017-10-13 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17138:
--
Summary: FileSinkOperator/Compactor doesn't create empty files for acid 
path  (was: FileSinkOperator doesn't create empty files for acid path)

> FileSinkOperator/Compactor doesn't create empty files for acid path
> ---
>
> Key: HIVE-17138
> URL: https://issues.apache.org/jira/browse/HIVE-17138
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.2.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> For bucketed tables, FileSinkOperator is expected (in some cases) to produce 
> a specific number of files even if they are empty.
> FileSinkOperator.closeOp(boolean abort) has logic to create files even if 
> empty.
> This doesn't properly work for the Acid path.  For Insert, the 
> OrcRecordUpdater(s) is set up in createBucketForFileIdx(), which creates the 
> actual bucketN file (as of HIVE-14007, it does so regardless of whether the 
> RecordUpdater sees any rows).  This causes empty (i.e. ORC metadata only) 
> bucket files to be created for multiFileSpray=true if a particular 
> FileSinkOperator.process() sees at least 1 row.  For example,
> {noformat}
> create table fourbuckets (a int, b int) clustered by (a) into 4 buckets 
> stored as orc TBLPROPERTIES ('transactional'='true');
> insert into fourbuckets values(0,1),(1,1);
> with mapreduce.job.reduces = 1 or 2 
> {noformat}
> For the Update/Delete path, the OrcRecordWriter is created lazily when the 1st row 
> that needs to land there is seen.  Thus it never creates empty buckets, no 
> matter what the value of _skipFiles_ is in closeOp(boolean).
> Once Split Update does the split early (in the operator pipeline), only the Insert 
> path will matter, since base and delta are the only files that split computation, 
> etc. looks at.  delete_delta is only for Acid internals, so there is never any 
> reason to create empty files there.
> Also make sure to close RecordUpdaters in FileSinkOperator.abortWriters()





[jira] [Work started] (HIVE-17806) Create directory for metrics file if it doesn't exist

2017-10-13 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-17806 started by Alexander Kolbasov.
-
> Create directory for metrics file if it doesn't exist
> -
>
> Key: HIVE-17806
> URL: https://issues.apache.org/jira/browse/HIVE-17806
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17806.01.patch
>
>
> HIVE-17563 changed the metrics code to use local file system operations instead 
> of Hadoop local file system operations. There is an unintended side effect: 
> Hadoop file systems create the directory if it doesn't exist, while the Java NIO 
> interfaces don't. The purpose of this fix is to revert the behavior to the 
> original one to avoid surprises.
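A minimal sketch of the behavioral difference the description refers to (the metrics file path and the jsonReport string are illustrative, not from the patch):

{code}
// With java.nio the parent directory must be created explicitly;
// Hadoop's local FileSystem used to create it implicitly on write.
Path metricsFile = Paths.get("/var/log/hive/metrics.json"); // hypothetical location
Files.createDirectories(metricsFile.getParent());           // the fix: ensure the directory exists
Files.write(metricsFile, jsonReport.getBytes(StandardCharsets.UTF_8));
{code}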





[jira] [Updated] (HIVE-17806) Create directory for metrics file if it doesn't exist

2017-10-13 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17806:
--
Status: Patch Available  (was: In Progress)

> Create directory for metrics file if it doesn't exist
> -
>
> Key: HIVE-17806
> URL: https://issues.apache.org/jira/browse/HIVE-17806
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17806.01.patch
>
>
> HIVE-17563 changed the metrics code to use local file system operations instead 
> of Hadoop local file system operations. There is an unintended side effect: 
> Hadoop file systems create the directory if it doesn't exist, while the Java NIO 
> interfaces don't. The purpose of this fix is to revert the behavior to the 
> original one to avoid surprises.





[jira] [Updated] (HIVE-17806) Create directory for metrics file if it doesn't exist

2017-10-13 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17806:
--
Attachment: HIVE-17806.01.patch

> Create directory for metrics file if it doesn't exist
> -
>
> Key: HIVE-17806
> URL: https://issues.apache.org/jira/browse/HIVE-17806
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17806.01.patch
>
>
> HIVE-17563 changed the metrics code to use local file system operations instead 
> of Hadoop local file system operations. There is an unintended side effect: 
> Hadoop file systems create the directory if it doesn't exist, while the Java NIO 
> interfaces don't. The purpose of this fix is to revert the behavior to the 
> original one to avoid surprises.





[jira] [Commented] (HIVE-17810) Creating a table through HCatClient without specifying columns throws a NullPointerException on the server

2017-10-13 Thread Stephen Patel (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204312#comment-16204312
 ] 

Stephen Patel commented on HIVE-17810:
--

I put this in the TestHCatClient class:
{code}
/**
 * This test tests that a Create Table statement without columns works
 * @throws Exception
 */
@Test
public void testNoColumnTableInstantiation() throws Exception {
  HCatClient client = HCatClient.create(new Configuration(hcatConf));

  String dbName = "default";
  String tblName = "testNoColumnTableInstantiation";
  ArrayList<HCatFieldSchema> cols = new ArrayList<HCatFieldSchema>();
  HCatTable table = new HCatTable(dbName, tblName)
      .cols(cols)
      .serdeLib(AvroSerDe.class.getName())
      .tblProps(ImmutableMap.of("avro.schema.literal", "{\"type\": \"record\"," +
          "\"namespace\": \"com.example\"," +
          "\"name\": \"FullName\"," +
          "\"fields\": [{ \"name\": \"first\", \"type\": \"string\" }] }"))
      .inputFileFormat(AvroContainerInputFormat.class.getName())
      .outputFileFormat(AvroContainerOutputFormat.class.getName());

  client.dropTable(dbName, tblName, true);
  try {
    // Create an avro table with no columns
    client.createTable(HCatCreateTableDesc
        .create(table, false)
        .build());
  } catch (Throwable e) {
    fail("An error occurred creating columnless table: " + e.getMessage());
  }
}
{code}

> Creating a table through HCatClient without specifying columns throws a 
> NullPointerException on the server
> --
>
> Key: HIVE-17810
> URL: https://issues.apache.org/jira/browse/HIVE-17810
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Reporter: Stephen Patel
>Priority: Minor
>
> I've attached a simple test case using the AvroSerde (which generates its 
> own columns) that, when run, will throw this error:
> {noformat}
> 2017-10-13T15:49:17,697 ERROR [pool-6-thread-2] metastore.RetryingHMSHandler: 
> MetaException(message:java.lang.NullPointerException)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:6560)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1635)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
>   at com.sun.proxy.$Proxy30.create_table_with_environment_context(Unknown 
> Source)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:11710)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:11694)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at 
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
>   at 
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
>   at 
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.validateTblColumns(MetaStoreUtils.java:621)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1433)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1420)

[jira] [Updated] (HIVE-17810) Creating a table through HCatClient without specifying columns throws a NullPointerException on the server

2017-10-13 Thread Stephen Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Patel updated HIVE-17810:
-
Description: 
I've attached a simple test case using the AvroSerde (which generates its own 
columns) that, when run, will throw this error:
{noformat}
2017-10-13T15:49:17,697 ERROR [pool-6-thread-2] metastore.RetryingHMSHandler: 
MetaException(message:java.lang.NullPointerException)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:6560)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1635)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy30.create_table_with_environment_context(Unknown 
Source)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:11710)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:11694)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
at 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.validateTblColumns(MetaStoreUtils.java:621)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1433)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1420)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1621)
... 20 more
{noformat}

By default, the StorageDescriptor in the HCatTable class has a null column list. 
When calling hCatTable.cols(emptyList), the HCatTable determines that the list 
is equal to its current column list and does not set the empty column list on 
the StorageDescriptor, thus leading to the NullPointerException.

A workaround is to call HCatTable.cols with a list that contains a fake field, 
and then call HCatTable.cols with an empty list. This sets the column list on 
the StorageDescriptor to the empty list and allows the table to be created.
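
For illustration, a rough sketch of that workaround against the HCatalog client 
API; the database, table, and {{placeholder}} column names are made up, and 
{{client}} is assumed to be an already-connected {{HCatClient}}:
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.hive.hcatalog.api.HCatClient;
import org.apache.hive.hcatalog.api.HCatCreateTableDesc;
import org.apache.hive.hcatalog.api.HCatTable;
import org.apache.hive.hcatalog.data.schema.HCatFieldSchema;

// Sketch only: the real columns come from the Avro schema in the table props.
HCatTable table = new HCatTable("mydb", "avro_backed_table");

// Step 1: set a throwaway column so HCatTable records a non-null column list.
List<HCatFieldSchema> fake = new ArrayList<>();
fake.add(new HCatFieldSchema("placeholder", HCatFieldSchema.Type.STRING, null));
table.cols(fake);

// Step 2: the empty list now differs from the current list, so it is pushed
// through to the StorageDescriptor instead of being silently skipped.
table.cols(Collections.<HCatFieldSchema>emptyList());

client.createTable(HCatCreateTableDesc.create(table).build());
{code}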


[jira] [Assigned] (HIVE-17809) Implement per pool trigger validation

2017-10-13 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-17809:



> Implement per pool trigger validation
> -
>
> Key: HIVE-17809
> URL: https://issues.apache.org/jira/browse/HIVE-17809
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>
> HIVE-17508 trigger validation is applied for all pools at once. This is 
> follow up to implement trigger validation at per pool level. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17808) Change System.currentTimeMillis to System.nanoTime for elapsed time

2017-10-13 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-17808:


Assignee: Prasanth Jayachandran

> Change System.currentTimeMillis to System.nanoTime for elapsed time
> ---
>
> Key: HIVE-17808
> URL: https://issues.apache.org/jira/browse/HIVE-17808
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>
> There are many places in QueryInfo and TezJobMonitor that use 
> System.currentTimeMillis() to measure elapsed time. Since currentTimeMillis() 
> depends on the system clock, it can cause issues in ntpd environments where 
> the clock may be adjusted. Replace System.currentTimeMillis() with 
> System.nanoTime() everywhere elapsed time is computed. 
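
A minimal sketch of the intended pattern; {{doMonitoredWork()}} is a made-up 
placeholder for the monitored stage:
{code}
import java.util.concurrent.TimeUnit;

// System.nanoTime() is monotonic, so the difference stays correct even if
// ntpd steps the wall clock while the work is running.
long startNanos = System.nanoTime();
doMonitoredWork();
long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
System.out.println("Stage finished in " + elapsedMs + " ms");
{code}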



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17804) Vectorization: Bug erroneously causes match for 1st row in batch (SelectStringColLikeStringScalar)

2017-10-13 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204300#comment-16204300
 ] 

Jason Dere commented on HIVE-17804:
---

Nice catch .. +1 pending tests.

> Vectorization: Bug erroneously causes match for 1st row in batch 
> (SelectStringColLikeStringScalar)
> --
>
> Key: HIVE-17804
> URL: https://issues.apache.org/jira/browse/HIVE-17804
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17804.01.patch
>
>
> The code that sets the output value to LongColumnVector.NULL_VALUE for a 
> null candidate sets the 0th entry instead of the i'th.
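
For illustration, a simplified sketch of the bug pattern and fix; it follows 
the usual VectorizedRowBatch conventions but is not the actual Hive source, and 
{{matches()}} is a made-up helper:
{code}
// import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
// Simplified illustration only -- not the real SelectStringColLikeStringScalar.
for (int j = 0; j < n; j++) {
  int i = selectedInUse ? sel[j] : j;
  if (inputIsNull[i]) {
    outputIsNull[i] = true;
    // The bug: writing outputVector[0] = LongColumnVector.NULL_VALUE here
    // stomps on row 0's result and can make the batch's first row look like
    // a match while leaving row i's output uninitialized.
    outputVector[i] = LongColumnVector.NULL_VALUE;  // the fix: index by i
  } else {
    outputVector[i] = matches(vector, start[i], length[i]) ? 1 : 0;
  }
}
{code}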



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17804) Vectorization: Bug erroneously causes match for 1st row in batch (SelectStringColLikeStringScalar)

2017-10-13 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-17804:

Status: Patch Available  (was: Open)

> Vectorization: Bug erroneously causes match for 1st row in batch 
> (SelectStringColLikeStringScalar)
> --
>
> Key: HIVE-17804
> URL: https://issues.apache.org/jira/browse/HIVE-17804
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17804.01.patch
>
>
> The code that sets the output value to LongColumnVector.NULL_VALUE for a 
> null candidate sets the 0th entry instead of the i'th.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17804) Vectorization: Bug erroneously causes match for 1st row in batch (SelectStringColLikeStringScalar)

2017-10-13 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-17804:

Attachment: HIVE-17804.01.patch

> Vectorization: Bug erroneously causes match for 1st row in batch 
> (SelectStringColLikeStringScalar)
> --
>
> Key: HIVE-17804
> URL: https://issues.apache.org/jira/browse/HIVE-17804
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17804.01.patch
>
>
> The code that sets the output value to LongColumnVector.NULL_VALUE for a 
> null candidate sets the 0th entry instead of the i'th.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17730) Queries can be closed automatically

2017-10-13 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17730:
--
Status: Open  (was: Patch Available)

> Queries can be closed automatically
> ---
>
> Key: HIVE-17730
> URL: https://issues.apache.org/jira/browse/HIVE-17730
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17730.05.patch
>
>
> HIVE-16213 made QueryWrapper AutoCloseable, but queries are still closed 
> manually rather than with try-with-resources. Now Query itself is 
> AutoCloseable, so we don't need the wrapper at all.
> We should get rid of QueryWrapper and use try-with-resources to create 
> queries.
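
For illustration, the pattern the description asks for, assuming a JDO 
{{Query}} that implements {{AutoCloseable}}; {{MTableExample}} and the filter 
string are made up:
{code}
import javax.jdo.PersistenceManager;
import javax.jdo.Query;

// pm is an open PersistenceManager; the entity class and filter are made up.
try (Query query = pm.newQuery(MTableExample.class, "tableName == name")) {
  query.declareParameters("java.lang.String name");
  Object result = query.execute("some_table");
  // ... consume result before the block ends ...
}   // query is closed automatically, even if an exception is thrown
{code}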



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17730) Queries can be closed automatically

2017-10-13 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17730:
--
Attachment: HIVE-17730.05.patch

> Queries can be closed automatically
> ---
>
> Key: HIVE-17730
> URL: https://issues.apache.org/jira/browse/HIVE-17730
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17730.05.patch
>
>
> HIVE-16213 made QueryWrapper AutoCloseable, but queries are still closed 
> manually rather than with try-with-resources. Now Query itself is 
> AutoCloseable, so we don't need the wrapper at all.
> We should get rid of QueryWrapper and use try-with-resources to create 
> queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17730) Queries can be closed automatically

2017-10-13 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17730:
--
Attachment: (was: HIVE-17730.04.patch)

> Queries can be closed automatically
> ---
>
> Key: HIVE-17730
> URL: https://issues.apache.org/jira/browse/HIVE-17730
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17730.05.patch
>
>
> HIVE-16213 made QueryWrapper AutoCloseable, but queries are still closed 
> manually rather than with try-with-resources. Now Query itself is 
> AutoCloseable, so we don't need the wrapper at all.
> We should get rid of QueryWrapper and use try-with-resources to create 
> queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17730) Queries can be closed automatically

2017-10-13 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17730:
--
Status: Patch Available  (was: Open)

> Queries can be closed automatically
> ---
>
> Key: HIVE-17730
> URL: https://issues.apache.org/jira/browse/HIVE-17730
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17730.05.patch
>
>
> HIVE-16213 made QueryWrapper AutoCloseable, but queries are still closed 
> manually rather than with try-with-resources. Now Query itself is 
> AutoCloseable, so we don't need the wrapper at all.
> We should get rid of QueryWrapper and use try-with-resources to create 
> queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17807) Execute maven commands in batch mode for ptests

2017-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-17807:

Status: Patch Available  (was: Open)

> Execute maven commands in batch mode for ptests
> ---
>
> Key: HIVE-17807
> URL: https://issues.apache.org/jira/browse/HIVE-17807
> Project: Hive
>  Issue Type: Bug
>Reporter: Vijay Kumar
>Assignee: Vijay Kumar
> Attachments: HIVE-17807.patch
>
>
> There is no need to run Maven in interactive mode in a CI environment.
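
For reference, batch mode is a stock Maven flag; a CI invocation would look 
something like this (the goals shown are illustrative):
{noformat}
# -B / --batch-mode suppresses interactive prompts and download progress noise
mvn -B clean test
{noformat}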



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17807) Execute maven commands in batch mode for ptests

2017-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-17807:

Attachment: HIVE-17807.patch

> Execute maven commands in batch mode for ptests
> ---
>
> Key: HIVE-17807
> URL: https://issues.apache.org/jira/browse/HIVE-17807
> Project: Hive
>  Issue Type: Bug
>Reporter: Vijay Kumar
>Assignee: Vijay Kumar
> Attachments: HIVE-17807.patch
>
>
> There is no need to run Maven in interactive mode in a CI environment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17807) Execute maven commands in batch mode for ptests

2017-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-17807:
---


> Execute maven commands in batch mode for ptests
> ---
>
> Key: HIVE-17807
> URL: https://issues.apache.org/jira/browse/HIVE-17807
> Project: Hive
>  Issue Type: Bug
>Reporter: Vijay Kumar
>Assignee: Vijay Kumar
>
> There is no need to run Maven in interactive mode in a CI environment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17806) Create directory for metrics file if it doesn't exist

2017-10-13 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov reassigned HIVE-17806:
-


> Create directory for metrics file if it doesn't exist
> -
>
> Key: HIVE-17806
> URL: https://issues.apache.org/jira/browse/HIVE-17806
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>
> HIVE-17563 changed the metrics code to use local file system operations 
> instead of Hadoop local file system operations. There is an unintended side 
> effect: Hadoop file systems create the directory if it doesn't exist, while 
> the Java NIO interfaces don't. The purpose of this fix is to revert to the 
> original behavior to avoid surprises.
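
A minimal sketch of the intended fix using {{java.nio}}; the configuration key 
lookup and the {{jsonBytes}} payload are illustrative:
{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

Path metricsFile = Paths.get(conf.get("hive.service.metrics.file.location"));
Path parent = metricsFile.getParent();
if (parent != null) {
  // Unlike Hadoop's local FileSystem, java.nio does not create missing
  // parent directories implicitly, so restore the old behavior explicitly.
  Files.createDirectories(parent);  // no-op if the directory already exists
}
Files.write(metricsFile, jsonBytes); // jsonBytes: serialized metrics payload
{code}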



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17760) Create a unit test which validates HIVE-9423 does not regress

2017-10-13 Thread Andrew Sherman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-17760:
--
Attachment: HIVE-17760.3.patch

> Create a unit test which validates HIVE-9423 does not regress 
> --
>
> Key: HIVE-17760
> URL: https://issues.apache.org/jira/browse/HIVE-17760
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17760.1.patch, HIVE-17760.2.patch, 
> HIVE-17760.3.patch
>
>
> During [HIVE-9423] we verified that when the Thrift server pool is exhausted, 
> the Beeline connection times out and provides a meaningful error message.
> Create a unit test which verifies this and helps keep this feature working.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17760) Create a unit test which validates HIVE-9423 does not regress

2017-10-13 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204276#comment-16204276
 ] 

Andrew Sherman commented on HIVE-17760:
---

The test is failing in the Hive QA run with:

{noformat}
jdbc:hive2://localhost:50536/;user=hive;driver=org.apache.hive.jdbc.HiveDriver: 
java.net.SocketException: Connection reset
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:240)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at 
org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
at 
org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:209)
at org.apache.hive.beeline.Commands.connect(Commands.java:1639)
at org.apache.hive.beeline.Commands.connect(Commands.java:1534)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:56)
at 
org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1284)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1323)
at org.apache.hive.beeline.BeeLine.connectUsingArgs(BeeLine.java:878)
at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:787)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1021)
at 
org.apache.hive.beeline.TestBeelinePasswordOption.getBeelineOutput(TestBeelinePasswordOption.java:320)
at 
org.apache.hive.beeline.TestBeelinePasswordOption.testMultiConnect(TestBeelinePasswordOption.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: org.apache.thrift.transport.TTransportException: 
java.net.SocketException: Connection reset
at 
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at 
org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178)
at 
org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307)
at 
org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at 
org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:327)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:212)
... 28 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at 
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
... 34 more
{noformat}

This is the same error that is discussed in [HIVE-9423].  

> Create a unit test which validates HIVE-9423 does not regress 
> --
>
> Key: HIVE-17760
> URL: https://issues.apache.org/jira/browse/HIVE-17760
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17760.1.patch, HIVE-17760.2.patch
>
>
> During [HIVE-9423] we verified that when the Thrift server pool is exhausted, 
> the Beeline connection times out and provides a meaningful error message.
> Create a unit test which verifies this and helps keep this feature working.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17805) SchemaTool validate locations should not return exit 1

2017-10-13 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-17805:
---
Attachment: HIVE-17805.01.patch

> SchemaTool validate locations should not return exit 1
> --
>
> Key: HIVE-17805
> URL: https://issues.apache.org/jira/browse/HIVE-17805
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-17805.01.patch
>
>
> {{HiveSchemaTool}} can be used by other applications to validate the 
> metastore schema. One of the validation checks looks at the location URLs of 
> tables/DBs and returns {{false}}, which causes HiveSchemaTool to exit 1 to the 
> calling script. Although invalid locations are a problem in some instances, 
> they cannot be termed catastrophic schema errors that should make the Hive 
> service fail or become unusable. Ideally we should introduce warning levels 
> and error levels in schemaTool validations so the caller can take appropriate 
> action.
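
One way to express that idea, as a purely hypothetical sketch; nothing like 
this exists in HiveSchemaTool today, and {{ValidationResult}} is a made-up type:
{code}
// Hypothetical severity model for schema validation results.
enum ValidationSeverity { OK, WARNING, ERROR }

ValidationSeverity worst = ValidationSeverity.OK;
for (ValidationResult r : results) {          // results: outcome of each check
  if (r.severity().compareTo(worst) > 0) {
    worst = r.severity();
  }
}
// Only hard errors should propagate a non-zero exit to the calling script;
// location problems would be reported as WARNING and exit 0.
System.exit(worst == ValidationSeverity.ERROR ? 1 : 0);
{code}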



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17805) SchemaTool validate locations should not return exit 1

2017-10-13 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-17805:
---
Status: Patch Available  (was: Open)

> SchemaTool validate locations should not return exit 1
> --
>
> Key: HIVE-17805
> URL: https://issues.apache.org/jira/browse/HIVE-17805
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-17805.01.patch
>
>
> {{HiveSchemaTool}} can be used by other applications to validate the 
> metastore schema. One of the validation checks looks at the location URLs of 
> tables/DBs and returns {{false}}, which causes HiveSchemaTool to exit 1 to the 
> calling script. Although invalid locations are a problem in some instances, 
> they cannot be termed catastrophic schema errors that should make the Hive 
> service fail or become unusable. Ideally we should introduce warning levels 
> and error levels in schemaTool validations so the caller can take appropriate 
> action.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17805) SchemaTool validate locations should not return exit 1

2017-10-13 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-17805:
--


> SchemaTool validate locations should not return exit 1
> --
>
> Key: HIVE-17805
> URL: https://issues.apache.org/jira/browse/HIVE-17805
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
>
> {{HiveSchemaTool}} can be used by other applications to validate the 
> metastore schema. One of the validation checks looks at the location URLs of 
> tables/DBs and returns {{false}}, which causes HiveSchemaTool to exit 1 to the 
> calling script. Although invalid locations are a problem in some instances, 
> they cannot be termed catastrophic schema errors that should make the Hive 
> service fail or become unusable. Ideally we should introduce warning levels 
> and error levels in schemaTool validations so the caller can take appropriate 
> action.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17804) Vectorization: Bug erroneously causes match for 1st row in batch (SelectStringColLikeStringScalar)

2017-10-13 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-17804:
---


> Vectorization: Bug erroneously causes match for 1st row in batch 
> (SelectStringColLikeStringScalar)
> --
>
> Key: HIVE-17804
> URL: https://issues.apache.org/jira/browse/HIVE-17804
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
>
> The code that sets the output value to LongColumnVector.NULL_VALUE for a 
> null candidate sets the 0th entry instead of the i'th.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17803) With Pig multi-query, 2 HCatStorers writing to the same table will trample each other's outputs

2017-10-13 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17803:

Status: Patch Available  (was: Open)

> With Pig multi-query, 2 HCatStorers writing to the same table will trample 
> each other's outputs
> ---
>
> Key: HIVE-17803
> URL: https://issues.apache.org/jira/browse/HIVE-17803
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0
>Reporter: Mithun Radhakrishnan
>Assignee: Chris Drome
> Attachments: HIVE-17803.1.patch
>
>
> When Pig scripts use multi-query and {{HCatStorer}} with 
> dynamic-partitioning, and use more than one {{HCatStorer}} instance to write 
> to the same table, they might trample on each other's outputs. The failure 
> looks as follows:
> {noformat}
> Caused by: org.apache.hive.hcatalog.common.HCatException : 2006 : Error 
> adding partition to metastore. Cause : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /projects/foo/bar/activity_date=2016022306/_placeholder (inode 
> 2878224200): File does not exist. [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-1281544466_4952, pendingcreates: 1]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3429)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3517)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3484)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:791)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:537)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:608)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server.call(Server.java:2267)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:648)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:615)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2217)
>   at 
> org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.registerPartitions(FileOutputCommitterContainer.java:1022)
>   at 
> org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.commitJob(FileOutputCommitterContainer.java:269)
>   ... 20 more
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /projects/foo/bar/activity_date=2016022306/_placeholder (inode 
> 2878224200): File does not exist. [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-1281544466_4952, pendingcreates: 1]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3429)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3517)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3484)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:791)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:537)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:608)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server.call(Server.java:2267)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:648)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:615)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2217)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1457)
>   at 

[jira] [Updated] (HIVE-17803) With Pig multi-query, 2 HCatStorers writing to the same table will trample each other's outputs

2017-10-13 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17803:

Attachment: HIVE-17803.1.patch

> With Pig multi-query, 2 HCatStorers writing to the same table will trample 
> each other's outputs
> ---
>
> Key: HIVE-17803
> URL: https://issues.apache.org/jira/browse/HIVE-17803
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0
>Reporter: Mithun Radhakrishnan
>Assignee: Chris Drome
> Attachments: HIVE-17803.1.patch
>
>
> When Pig scripts use multi-query and {{HCatStorer}} with 
> dynamic-partitioning, and use more than one {{HCatStorer}} instance to write 
> to the same table, they might trample on each other's outputs. The failure 
> looks as follows:
> {noformat}
> Caused by: org.apache.hive.hcatalog.common.HCatException : 2006 : Error 
> adding partition to metastore. Cause : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /projects/foo/bar/activity_date=2016022306/_placeholder (inode 
> 2878224200): File does not exist. [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-1281544466_4952, pendingcreates: 1]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3429)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3517)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3484)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:791)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:537)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:608)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server.call(Server.java:2267)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:648)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:615)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2217)
>   at 
> org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.registerPartitions(FileOutputCommitterContainer.java:1022)
>   at 
> org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.commitJob(FileOutputCommitterContainer.java:269)
>   ... 20 more
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /projects/foo/bar/activity_date=2016022306/_placeholder (inode 
> 2878224200): File does not exist. [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-1281544466_4952, pendingcreates: 1]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3429)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3517)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3484)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:791)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:537)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:608)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server.call(Server.java:2267)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:648)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:615)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2217)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1457)
>   at 

[jira] [Commented] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1

2017-10-13 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204258#comment-16204258
 ] 

Aihua Xu commented on HIVE-15016:
-

[~ashutoshc] I traced this and didn't find the cause. I don't see that we are 
including any older jar, and DEFAULT_LOG_LEVEL exists in the older Hadoop lib 
as well. Did you find anything?

> Run tests with Hadoop 3.0.0-beta1
> -
>
> Key: HIVE-15016
> URL: https://issues.apache.org/jira/browse/HIVE-15016
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Sergio Peña
>Assignee: Aihua Xu
> Attachments: HIVE-15016.2.patch, HIVE-15016.3.patch, 
> HIVE-15016.4.patch, HIVE-15016.patch, Hadoop3Upstream.patch
>
>
> Hadoop 3.0.0-alpha1 was released back in Sep/16 to allow other components to 
> run tests against this new version before GA.
> We should start running tests with Hive to validate compatibility against 
> Hadoop 3.0.
> NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 
> GA is released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-12631) LLAP: support ORC ACID tables

2017-10-13 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204231#comment-16204231
 ] 

Sergey Shelukhin commented on HIVE-12631:
-

Looks like llap_acid_fast failed due to some sorting change. Some other 
failures may be relevant.

> LLAP: support ORC ACID tables
> -
>
> Key: HIVE-12631
> URL: https://issues.apache.org/jira/browse/HIVE-12631
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Transactions
>Reporter: Sergey Shelukhin
>Assignee: Teddy Choi
> Attachments: HIVE-12631.1.patch, HIVE-12631.10.patch, 
> HIVE-12631.10.patch, HIVE-12631.11.patch, HIVE-12631.11.patch, 
> HIVE-12631.12.patch, HIVE-12631.13.patch, HIVE-12631.15.patch, 
> HIVE-12631.16.patch, HIVE-12631.17.patch, HIVE-12631.18.patch, 
> HIVE-12631.19.patch, HIVE-12631.2.patch, HIVE-12631.20.patch, 
> HIVE-12631.21.patch, HIVE-12631.22.patch, HIVE-12631.23.patch, 
> HIVE-12631.24.patch, HIVE-12631.25.patch, HIVE-12631.26.patch, 
> HIVE-12631.27.patch, HIVE-12631.28.patch, HIVE-12631.29.patch, 
> HIVE-12631.3.patch, HIVE-12631.30.patch, HIVE-12631.4.patch, 
> HIVE-12631.5.patch, HIVE-12631.6.patch, HIVE-12631.7.patch, 
> HIVE-12631.8.patch, HIVE-12631.8.patch, HIVE-12631.9.patch
>
>
> LLAP uses a completely separate read path in ORC to allow for caching and 
> parallelization of reads and processing. This path does not support ACID. As 
> far as I remember, the ACID logic is embedded inside the ORC format; we need 
> to refactor it to sit on top of some interface, if practical, or just port it 
> to the LLAP read path.
> Another consideration is how the logic will work with cache. The cache is 
> currently low-level (CB-level in ORC), so we could just use it to read bases 
> and deltas (deltas should be cached with higher priority) and merge as usual. 
> We could also cache merged representation in future.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17508) Implement global execution triggers based on counters

2017-10-13 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204228#comment-16204228
 ] 

Sergey Shelukhin commented on HIVE-17508:
-

Oh I see. Needs a follow-up jira. 

> Implement global execution triggers based on counters
> -
>
> Key: HIVE-17508
> URL: https://issues.apache.org/jira/browse/HIVE-17508
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-17508.1.patch, HIVE-17508.10.patch, 
> HIVE-17508.11.patch, HIVE-17508.2.patch, HIVE-17508.3.patch, 
> HIVE-17508.3.patch, HIVE-17508.4.patch, HIVE-17508.5.patch, 
> HIVE-17508.6.patch, HIVE-17508.7.patch, HIVE-17508.8.patch, 
> HIVE-17508.9.patch, HIVE-17508.WIP.2.patch, HIVE-17508.WIP.patch
>
>
> Workload management can define Triggers that are bound to a resource plan. 
> Each trigger can have a trigger expression and an action associated with it. 
> Trigger expressions are evaluated at runtime after a configurable check 
> interval, based on which actions like killing a query, moving a query to a 
> different pool, etc. will get invoked. A simple execution trigger could be 
> something like:
> {code}
> CREATE TRIGGER slow_query IN global
> WHEN execution_time_ms > 1
> MOVE TO slow_queue
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17508) Implement global execution triggers based on counters

2017-10-13 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204221#comment-16204221
 ] 

Prasanth Jayachandran commented on HIVE-17508:
--

There is a TODO which says the per-pool validation runnable has to be 
implemented. The validator runnable currently gets all triggers and applies 
them, which only works for TezSessionPoolManager. For WorkloadManager, I will 
add it in a follow-up that changes the runnable to a per-pool validator.
Will address your other concerns in the next patch.

> Implement global execution triggers based on counters
> -
>
> Key: HIVE-17508
> URL: https://issues.apache.org/jira/browse/HIVE-17508
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-17508.1.patch, HIVE-17508.10.patch, 
> HIVE-17508.11.patch, HIVE-17508.2.patch, HIVE-17508.3.patch, 
> HIVE-17508.3.patch, HIVE-17508.4.patch, HIVE-17508.5.patch, 
> HIVE-17508.6.patch, HIVE-17508.7.patch, HIVE-17508.8.patch, 
> HIVE-17508.9.patch, HIVE-17508.WIP.2.patch, HIVE-17508.WIP.patch
>
>
> Workload management can define Triggers that are bound to a resource plan. 
> Each trigger can have a trigger expression and an action associated with it. 
> Trigger expressions are evaluated at runtime after a configurable check 
> interval, based on which actions like killing a query, moving a query to a 
> different pool, etc. will get invoked. A simple execution trigger could be 
> something like:
> {code}
> CREATE TRIGGER slow_query IN global
> WHEN execution_time_ms > 1
> MOVE TO slow_queue
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17508) Implement global execution triggers based on counters

2017-10-13 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204214#comment-16204214
 ] 

Sergey Shelukhin commented on HIVE-17508:
-

Left some smaller comments on RB. My main concern is: how does it work with WM 
when we pass all the queries and all the triggers to trigger enforcement? How 
does it make sure that triggers apply only to the queries in their own pool?

> Implement global execution triggers based on counters
> -
>
> Key: HIVE-17508
> URL: https://issues.apache.org/jira/browse/HIVE-17508
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-17508.1.patch, HIVE-17508.10.patch, 
> HIVE-17508.11.patch, HIVE-17508.2.patch, HIVE-17508.3.patch, 
> HIVE-17508.3.patch, HIVE-17508.4.patch, HIVE-17508.5.patch, 
> HIVE-17508.6.patch, HIVE-17508.7.patch, HIVE-17508.8.patch, 
> HIVE-17508.9.patch, HIVE-17508.WIP.2.patch, HIVE-17508.WIP.patch
>
>
> Workload management can define Triggers that are bound to a resource plan. 
> Each trigger can have a trigger expression and an action associated with it. 
> Trigger expressions are evaluated at runtime after a configurable check 
> interval, based on which actions like killing a query, moving a query to a 
> different pool, etc. will get invoked. A simple execution trigger could be 
> something like:
> {code}
> CREATE TRIGGER slow_query IN global
> WHEN execution_time_ms > 1
> MOVE TO slow_queue
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17803) With Pig multi-query, 2 HCatStorers writing to the same table will trample each other's outputs

2017-10-13 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan reassigned HIVE-17803:
---


> With Pig multi-query, 2 HCatStorers writing to the same table will trample 
> each other's outputs
> ---
>
> Key: HIVE-17803
> URL: https://issues.apache.org/jira/browse/HIVE-17803
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0
>Reporter: Mithun Radhakrishnan
>Assignee: Chris Drome
>
> When Pig scripts use multi-query and {{HCatStorer}} with 
> dynamic-partitioning, and use more than one {{HCatStorer}} instance to write 
> to the same table, they might trample on each other's outputs. The failure 
> looks as follows:
> {noformat}
> Caused by: org.apache.hive.hcatalog.common.HCatException : 2006 : Error 
> adding partition to metastore. Cause : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /projects/foo/bar/activity_date=2016022306/_placeholder (inode 
> 2878224200): File does not exist. [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-1281544466_4952, pendingcreates: 1]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3429)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3517)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3484)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:791)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:537)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:608)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server.call(Server.java:2267)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:648)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:615)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2217)
>   at 
> org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.registerPartitions(FileOutputCommitterContainer.java:1022)
>   at 
> org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.commitJob(FileOutputCommitterContainer.java:269)
>   ... 20 more
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /projects/foo/bar/activity_date=2016022306/_placeholder (inode 
> 2878224200): File does not exist. [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-1281544466_4952, pendingcreates: 1]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3429)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3517)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3484)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:791)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:537)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:608)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server.call(Server.java:2267)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:648)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:615)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2217)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1457)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1394)
>   at 
> 

[jira] [Commented] (HIVE-12631) LLAP: support ORC ACID tables

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204155#comment-16204155
 ] 

Hive QA commented on HIVE-12631:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12891977/HIVE-12631.30.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 11233 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestContribNegativeCliDriver.org.apache.hadoop.hive.cli.TestContribNegativeCliDriver
 (batchId=237)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_no_buckets]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_notin] 
(batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_select] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] 
(batchId=108)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query94] 
(batchId=242)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query16] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query23] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query94] 
(batchId=240)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=203)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpRetryOnServerIdleTimeout 
(batchId=229)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7282/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7282/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7282/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12891977 - PreCommit-HIVE-Build

> LLAP: support ORC ACID tables
> -
>
> Key: HIVE-12631
> URL: https://issues.apache.org/jira/browse/HIVE-12631
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Transactions
>Reporter: Sergey Shelukhin
>Assignee: Teddy Choi
> Attachments: HIVE-12631.1.patch, HIVE-12631.10.patch, 
> HIVE-12631.10.patch, HIVE-12631.11.patch, HIVE-12631.11.patch, 
> HIVE-12631.12.patch, HIVE-12631.13.patch, HIVE-12631.15.patch, 
> HIVE-12631.16.patch, HIVE-12631.17.patch, HIVE-12631.18.patch, 
> HIVE-12631.19.patch, HIVE-12631.2.patch, HIVE-12631.20.patch, 
> HIVE-12631.21.patch, HIVE-12631.22.patch, HIVE-12631.23.patch, 
> HIVE-12631.24.patch, HIVE-12631.25.patch, HIVE-12631.26.patch, 
> HIVE-12631.27.patch, HIVE-12631.28.patch, HIVE-12631.29.patch, 
> HIVE-12631.3.patch, HIVE-12631.30.patch, HIVE-12631.4.patch, 
> HIVE-12631.5.patch, HIVE-12631.6.patch, HIVE-12631.7.patch, 
> HIVE-12631.8.patch, HIVE-12631.8.patch, HIVE-12631.9.patch
>
>
> LLAP uses a completely separate read path in ORC to allow for caching and 
> parallelization of reads and processing. This path does not support ACID. As 
> far as I remember, the ACID logic is embedded inside the ORC format; we need 
> to refactor it to sit on top of some interface, if practical, or just port it 
> to the LLAP read path.
> Another consideration is how the logic will work with cache. The cache is 
> currently low-level (CB-level in ORC), so we could just use it to read bases 
> and deltas (deltas should be cached with higher priority) and merge as usual. 
> We could also cache merged representation in future.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-17771) Implement create and show resource plan.

2017-10-13 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204118#comment-16204118
 ] 

Sergey Shelukhin edited comment on HIVE-17771 at 10/13/17 8:15 PM:
---

The patch so far makes sense to me.
Nit: the {{throw new SemanticException("Invalid syntax for CREATE RESOURCE PLAN 
...")}} message is copy-pasted for show plan.
Also, I see it passes nulls/falses for the permission stuff in 
HiveOperation.java. It seems to be in line with existing code where privileges 
only apply to table/partition/view/etc. operations, but it was surprising to me.
[~thejas] I see that admin commands like grant role, etc. don't have any 
privileges associated with them in HiveOperation.java. How does one control 
access to that stuff?

[~harishjp] do you want to expand this patch with more commands, or commit 
this and open another JIRA?

Also, TestHiveOperationType failed; probably an enum value needs to be added 
somewhere.


was (Author: sershe):
The patch so far makes sense to me.
Nit: the {{throw new SemanticException("Invalid syntax for CREATE RESOURCE PLAN 
...")}} message is copy-pasted for show plan.
Also, I see it passes nulls/falses for the permission stuff in 
HiveOperation.java. It seems to be in line with existing code where privileges 
only apply to table/partition/view/etc. operations, but it was surprising to me.
[~thejas] I see that admin commands like grant role, etc. don't have any 
privileges associated with them in HiveOperation.java. How does one control 
access to that stuff?

[~harishjp] do you want to expand this patch with more commands, or commit 
this and open another JIRA?

Also, TestHiveOperation failed; probably an enum value needs to be added 
somewhere.

> Implement create and show resource plan.
> 
>
> Key: HIVE-17771
> URL: https://issues.apache.org/jira/browse/HIVE-17771
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Attachments: HIVE-17771.01.patch
>
>
> Please see the parent jira about LLAP workload management.
> This jira is to implement create and show resource plan commands in Hive to 
> configure resource plans for the LLAP workload.
> The following are the proposed commands implemented as part of the jira:
> CREATE RESOURCE PLAN plan_name WITH QUERY_PARALLELISM parallelism;
> SHOW RESOURCE PLAN;
> It will be followed up with more jiras to add pools, triggers, and copy 
> resource plans, and also with drop commands for each of them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-17771) Implement create and show resource plan.

2017-10-13 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204118#comment-16204118
 ] 

Sergey Shelukhin edited comment on HIVE-17771 at 10/13/17 8:14 PM:
---

The patch so far makes sense to me.
Nit: the {{throw new SemanticException("Invalid syntax for CREATE RESOURCE PLAN 
...")}} message is copy-pasted for show plan.
Also, I see it passes nulls/falses for the permission stuff in 
HiveOperation.java. It seems to be in line with existing code where privileges 
only apply to table/partition/view/etc. operations, but it was surprising to me.
[~thejas] I see that admin commands like grant role, etc. don't have any 
privileges associated with them in HiveOperation.java. How does one control 
access to that stuff?

[~harishjp] do you want to expand this patch with more commands, or commit 
this and open another JIRA?

Also, TestHiveOperation failed; probably an enum value needs to be added 
somewhere.


was (Author: sershe):
The patch so far makes sense to me.
Nit: the {{throw new SemanticException("Invalid syntax for CREATE RESOURCE PLAN 
...")}} message is copy-pasted for show plan.
Also, I see it passes nulls/falses for the permission stuff in 
HiveOperation.java. It seems to be in line with existing code where privileges 
only apply to table/partition/view/etc. operations, but it was surprising to me.
[~thejas] I see that admin commands like grant role, etc. don't have any 
privileges associated with them in HiveOperation.java. How does one control 
access to that stuff?



> Implement create and show resource plan.
> 
>
> Key: HIVE-17771
> URL: https://issues.apache.org/jira/browse/HIVE-17771
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Attachments: HIVE-17771.01.patch
>
>
> Please see the parent jira about LLAP workload management.
> This jira is to implement create and show resource plan commands in Hive to 
> configure resource plans for the LLAP workload.
> The following are the proposed commands implemented as part of the jira:
> CREATE RESOURCE PLAN plan_name WITH QUERY_PARALLELISM parallelism;
> SHOW RESOURCE PLAN;
> It will be followed up with more jiras to add pools, triggers, and copy 
> resource plans, and also with drop commands for each of them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17771) Implement create and show resource plan.

2017-10-13 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204118#comment-16204118
 ] 

Sergey Shelukhin commented on HIVE-17771:
-

The patch so far makes sense to me.
Nit: the {{throw new SemanticException("Invalid syntax for CREATE RESOURCE PLAN 
...")}} message is copy-pasted for show plan.
Also, I see it passes nulls/falses for the permission stuff in 
HiveOperation.java. It seems to be in line with existing code where privileges 
only apply to table/partition/view/etc. operations, but it was surprising to me.
[~thejas] I see that admin commands like grant role, etc. don't have any 
privileges associated with them in HiveOperation.java. How does one control 
access to that stuff?



> Implement create and show resource plan.
> 
>
> Key: HIVE-17771
> URL: https://issues.apache.org/jira/browse/HIVE-17771
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Attachments: HIVE-17771.01.patch
>
>
> Please see the parent jira about LLAP workload management.
> This jira is to implement create and show resource plan commands in Hive to 
> configure resource plans for the LLAP workload.
> The following are the proposed commands implemented as part of the jira:
> CREATE RESOURCE PLAN plan_name WITH QUERY_PARALLELISM parallelism;
> SHOW RESOURCE PLAN;
> It will be followed up with more jiras to add pools, triggers, and copy 
> resource plans, and also with drop commands for each of them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17743) Add InterfaceAudience and InterfaceStability annotations for Thrift generated APIs

2017-10-13 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204115#comment-16204115
 ] 

Sahil Takiar commented on HIVE-17743:
-

Unfortunately, the patch is a bit too big to fit in an RB, but here is a 
summary of the changes:
* Modified the {{standalone-metastore/pom.xml}} and {{service-rpc/pom.xml}} 
files to use the {{maven-replacer-plugin}} to add {{InterfaceAudience.Public}} 
and {{InterfaceStability.Stable}} annotations in front of the class declaration 
of each public Thrift generated class
* I had to move the annotations themselves into a separate maven module called 
{{classification}} because {{service-rpc}} doesn't have a dependency on 
{{hive-common}}
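
For illustration, a minimal sketch of what a generated class looks like once 
the plugin has run -- the class name is hypothetical, and the annotation 
package is assumed to be the one that moved into the new {{classification}} 
module:

{code:java}
import org.apache.hadoop.hive.common.classification.InterfaceAudience;
import org.apache.hadoop.hive.common.classification.InterfaceStability;

// Hypothetical example of a Thrift-generated class after the replacer plugin
// has prepended the annotations; generated fields and methods are elided.
@InterfaceAudience.Public
@InterfaceStability.Stable
public class TExampleThriftStruct {
}
{code}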

> Add InterfaceAudience and InterfaceStability annotations for Thrift generated 
> APIs
> --
>
> Key: HIVE-17743
> URL: https://issues.apache.org/jira/browse/HIVE-17743
> Project: Hive
>  Issue Type: Sub-task
>  Components: Thrift API
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-17743.1.patch, HIVE-17743.2.patch
>
>
> The Thrift generated files don't have {{InterfaceAudience}} or 
> {{InterfaceStability}} annotations on them, mainly because all the files are 
> auto-generated.
> We should add some code that auto-tags all the Java Thrift generated files 
> with these annotations. This way even when they are re-generated, they still 
> contain the annotations.
> We should be able to do this using the 
> {{com.google.code.maven-replacer-plugin}} similar to what we do in 
> {{standalone-metastore/pom.xml}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17743) Add InterfaceAudience and InterfaceStability annotations for Thrift generated APIs

2017-10-13 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204108#comment-16204108
 ] 

Sahil Takiar commented on HIVE-17743:
-

[~aihuaxu] can you take a look?

> Add InterfaceAudience and InterfaceStability annotations for Thrift generated 
> APIs
> --
>
> Key: HIVE-17743
> URL: https://issues.apache.org/jira/browse/HIVE-17743
> Project: Hive
>  Issue Type: Sub-task
>  Components: Thrift API
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-17743.1.patch, HIVE-17743.2.patch
>
>
> The Thrift generated files don't have {{InterfaceAudience}} or 
> {{InterfaceStability}} annotations on them, mainly because all the files are 
> auto-generated.
> We should add some code that auto-tags all the Java Thrift generated files 
> with these annotations. This way even when they are re-generated, they still 
> contain the annotations.
> We should be able to do this using the 
> {{com.google.code.maven-replacer-plugin}} similar to what we do in 
> {{standalone-metastore/pom.xml}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17774) compaction may start with 0 splits and fail

2017-10-13 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17774:

   Resolution: Fixed
Fix Version/s: 2.4.0
   Status: Resolved  (was: Patch Available)

The failures appear to be existing ones on branch-2. Committed to branch-2, 
thanks for the review.

> compaction may start with 0 splits and fail
> ---
>
> Key: HIVE-17774
> URL: https://issues.apache.org/jira/browse/HIVE-17774
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.4.0
>
> Attachments: HIVE-17774-branch-2.patch
>
>
> {noformat}
> 2017-09-26 10:36:01,979 INFO  [...]: compactor.CompactorMR 
> (CompactorMR.java:launchCompactionJob(295)) - 
> Submitting MINOR compaction job 
>  (current delta dirs count=0, obsolete delta dirs count=0. 
> TxnIdRange[9223372036854775807,-9223372036854775808]
> ...
> 2017-09-26 10:36:02,350 INFO  [...]: mapreduce.JobSubmitter 
> (JobSubmitter.java:submitJobInternal(198)) - number of splits:0
> ...
> 2017-09-26 10:36:08,637 INFO  [...]: mapreduce.Job 
> (Job.java:monitorAndPrintJob(1380)) - 
> Job job_1503950256860_15982 failed with state FAILED due to: No of maps and 
> reduces are 0 job_1503950256860_15982
> Job commit failed: java.io.FileNotFoundException: File 
> .../hello_acid/load_date=2016-03-03/_tmp_a95346ad-bd89-4e66-9b05-e60fdfa11858 
> does not exist.
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:904)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962)
>   at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorOutputCommitter.commitJob(CompactorMR.java:776)
>   at 
> org.apache.hadoop.mapred.OutputCommitter.commitJob(OutputCommitter.java:291)
>   at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:285)
>   at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looks like the MR job should not have been attempted in this case.
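
A guard of roughly this shape would avoid submitting the doomed job -- a 
hypothetical sketch only, not the committed patch; the {{AcidUtils.Directory}} 
accessor names are assumptions:

{code:java}
// Hypothetical sketch (not the actual fix): skip the MR job entirely when
// the compactor would read zero delta directories and zero original files.
if (dir.getCurrentDirectories().isEmpty() && dir.getOriginalFiles().isEmpty()) {
  LOG.info("Nothing to compact for " + sd.getLocation() + "; not submitting a job");
  return;
}
{code}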



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17778) Add support for custom counters in trigger expression

2017-10-13 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-17778:
-
Attachment: HIVE-17778.2.patch

This patch adds custom triggers on top of HIVE-17508. It also adds a new counter 
for dynamically created partitions, so that triggers can be used to kill a query 
if the number of dynamic partitions it creates exceeds a threshold.
cc [~sershe]
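
As a rough illustration of the idea -- the counter name, threshold accessor, 
and kill helper below are assumptions, not the patch's actual API:

{code:java}
// Hypothetical sketch: kill the query once the dynamic-partition counter
// crosses the trigger's configured threshold.
long created = counters.getCounter("CREATED_DYNAMIC_PARTITIONS");
if (created > trigger.getThreshold()) {
  killQuery("Trigger violated: CREATED_DYNAMIC_PARTITIONS > "
      + trigger.getThreshold());
}
{code}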

> Add support for custom counters in trigger expression
> -
>
> Key: HIVE-17778
> URL: https://issues.apache.org/jira/browse/HIVE-17778
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-17778.1.patch, HIVE-17778.2.patch
>
>
> HIVE-17508 only supports limited counters. This ticket is to extend it to 
> support custom counters (counters that are not supported by execution engine 
> will be dropped).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16722) Converting bucketed non-acid table to acid should perform validation

2017-10-13 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204067#comment-16204067
 ] 

Eugene Koifman commented on HIVE-16722:
---

Patch 1 validates the file names when transactional=true is set on an 
existing table.
The work in HIVE-17204 ensures that non-acid to acid conversion can handle 
original files (_OrcRawRecordMerger_)  which can be in subdirectories of 
table/partition.

> Converting bucketed non-acid table to acid should perform validation
> 
>
> Key: HIVE-16722
> URL: https://issues.apache.org/jira/browse/HIVE-16722
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-16722.01.patch, HIVE-16722.WIP.patch
>
>
> Converting a non acid table to acid only performs metadata validation (in 
> _TransactionalValidationListener_).
> The data read code path only understands certain directory layouts and file 
> names and ignores (generally) files that don't match the expected format.
> In Hive, directory layout and bucket file naming (especially older releases) 
> is poorly enforced.
> Need to add a validation step on 
> {noformat}
> alter table T SET TBLPROPERTIES ('transactional'='true')
> {noformat}
> to 
> scan the file system and report any possible data loss scenarios.
> Currently Acid understands bucket file names like "0_0" and (with 
> HIVE-16177) "0_0_copy1" etc. at the root of the partition.
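
As a sketch of what such a per-file check could look like -- the pattern below 
is assumed from the names quoted above, not taken from the patch:

{code:java}
// Hypothetical sketch: accept only bucket-file names Acid understands; any
// other file under the table/partition root is a potential data-loss case.
private static final java.util.regex.Pattern ACID_BUCKET_FILE =
    java.util.regex.Pattern.compile("[0-9]+_[0-9]+(_copy_?[0-9]+)?");

static boolean isRecognizedBucketFile(String fileName) {
  return ACID_BUCKET_FILE.matcher(fileName).matches();
}
{code}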



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-16722) Converting bucketed non-acid table to acid should perform validation

2017-10-13 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204067#comment-16204067
 ] 

Eugene Koifman edited comment on HIVE-16722 at 10/13/17 7:37 PM:
-

Patch 1 validates the file names when transactional=true is set on an 
existing table.
The work in HIVE-17204 ensures that non-acid to acid conversion can handle 
original files (_OrcRawRecordMerger.OriginalReaderPair_)  which can be in 
subdirectories of table/partition.


was (Author: ekoifman):
Patch 1 validates the file names when transactional=true is set on an 
existing table.
The work in HIVE-17204 ensures that non-acid to acid conversion can handle 
original files (_OrcRawRecordMerger_)  which can be in subdirectories of 
table/partition.

> Converting bucketed non-acid table to acid should perform validation
> 
>
> Key: HIVE-16722
> URL: https://issues.apache.org/jira/browse/HIVE-16722
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-16722.01.patch, HIVE-16722.WIP.patch
>
>
> Converting a non acid table to acid only performs metadata validation (in 
> _TransactionalValidationListener_).
> The data read code path only understands certain directory layouts and file 
> names and ignores (generally) files that don't match the expected format.
> In Hive, directory layout and bucket file naming (especially older releases) 
> is poorly enforced.
> Need to add a validation step on 
> {noformat}
> alter table T SET TBLPROPERTIES ('transactional'='true')
> {noformat}
> to 
> scan the file system and report any possible data loss scenarios.
> Currently Acid understands bucket file names like "0_0" and (with 
> HIVE-16177) "0_0_copy1" etc. at the root of the partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-16722) Converting bucketed non-acid table to acid should perform validation

2017-10-13 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-16722:
--
Status: Patch Available  (was: Open)

> Converting bucketed non-acid table to acid should perform validation
> 
>
> Key: HIVE-16722
> URL: https://issues.apache.org/jira/browse/HIVE-16722
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-16722.01.patch, HIVE-16722.WIP.patch
>
>
> Converting a non acid table to acid only performs metadata validation (in 
> _TransactionalValidationListener_).
> The data read code path only understands certain directory layouts and file 
> names and ignores (generally) files that don't match the expected format.
> In Hive, directory layout and bucket file naming (especially older releases) 
> is poorly enforced.
> Need to add a validation step on 
> {noformat}
> alter table T SET TBLPROPERTIES ('transactional'='true')
> {noformat}
> to 
> scan the file system and report any possible data loss scenarios.
> Currently Acid understands bucket file names like "0_0" and (with 
> HIVE-16177) "0_0_copy1" etc. at the root of the partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-16722) Converting bucketed non-acid table to acid should perform validation

2017-10-13 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-16722:
--
Attachment: HIVE-16722.01.patch

> Converting bucketed non-acid table to acid should perform validation
> 
>
> Key: HIVE-16722
> URL: https://issues.apache.org/jira/browse/HIVE-16722
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-16722.01.patch, HIVE-16722.WIP.patch
>
>
> Converting a non acid table to acid only performs metadata validation (in 
> _TransactionalValidationListener_).
> The data read code path only understands certain directory layouts and file 
> names and ignores (generally) files that don't match the expected format.
> In Hive, directory layout and bucket file naming (especially older releases) 
> is poorly enforced.
> Need to add a validation step on 
> {noformat}
> alter table T SET TBLPROPERTIES ('transactional'='true')
> {noformat}
> to 
> scan the file system and report any possible data loss scenarios.
> Currently Acid understands bucket file names like "0_0" and (with 
> HIVE-16177) "0_0_copy1" etc. at the root of the partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17765) expose Hive keywords

2017-10-13 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17765:

Attachment: HIVE-17765.02.patch

Not sure why another JDBC test failed. I can repro it locally, but I see in 
the logs that the query ran fine, and nothing related to this test has changed. I 
wonder if the test is valid for async execution... trying again.

> expose Hive keywords 
> -
>
> Key: HIVE-17765
> URL: https://issues.apache.org/jira/browse/HIVE-17765
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-17765.01.patch, HIVE-17765.02.patch, 
> HIVE-17765.nogen.patch, HIVE-17765.patch
>
>
> This could be useful e.g. for BI tools (via ODBC/JDBC drivers) to decide on 
> SQL capabilities of Hive



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17792) Enable Bucket Map Join when there are extra keys other than bucketed columns

2017-10-13 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-17792:
--
Attachment: HIVE-17792.2.patch

Fixed the failing tests.

> Enable Bucket Map Join when there are extra keys other than bucketed columns
> 
>
> Key: HIVE-17792
> URL: https://issues.apache.org/jira/browse/HIVE-17792
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
> Attachments: HIVE-17792.1.patch, HIVE-17792.2.patch
>
>
> Currently this won't go through Bucket Map Join (BMJ):
> CREATE TABLE tab_part (key int, value string) PARTITIONED BY(ds STRING) 
> CLUSTERED BY (key) INTO 4 BUCKETS STORED AS TEXTFILE;
> CREATE TABLE tab(key int, value string) PARTITIONED BY(ds STRING) STORED AS 
> TEXTFILE;
> select a.key, a.value, b.value
> from tab a join tab_part b on a.key = b.key and a.value = b.value;



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-16722) Converting bucketed non-acid table to acid should perform validation

2017-10-13 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-16722:
--
Attachment: HIVE-16722.WIP.patch

> Converting bucketed non-acid table to acid should perform validation
> 
>
> Key: HIVE-16722
> URL: https://issues.apache.org/jira/browse/HIVE-16722
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-16722.WIP.patch
>
>
> Converting a non acid table to acid only performs metadata validation (in 
> _TransactionalValidationListener_).
> The data read code path only understands certain directory layouts and file 
> names and ignores (generally) files that don't match the expected format.
> In Hive, directory layout and bucket file naming (especially older releases) 
> is poorly enforced.
> Need to add a validation step on 
> {noformat}
> alter table T SET TBLPROPERTIES ('transactional'='true')
> {noformat}
> to 
> scan the file system and report any possible data loss scenarios.
> Currently Acid understands bucket file names like "0_0" and (with 
> HIVE-16177) "0_0_copy1" etc. at the root of the partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17692) Block HCat on Acid tables

2017-10-13 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17692:
--
Attachment: HIVE-17692.02.patch

> Block HCat on Acid tables
> -
>
> Key: HIVE-17692
> URL: https://issues.apache.org/jira/browse/HIVE-17692
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17692.01.patch, HIVE-17692.02.patch
>
>
> See _DDLSemanticAnalyzer.analyzeAlterTablePartMergeFiles(ASTNode ast, String 
> tableName, HashMap<String, String> partSpec)_
> This was fine before due to 
> {noformat}
>   // throw a HiveException if the table/partition is bucketized
>   if (bucketCols != null && bucketCols.size() > 0) {
> throw new 
> SemanticException(ErrorMsg.CONCATENATE_UNSUPPORTED_TABLE_BUCKETED.getMsg());
>   }
> {noformat}
> but now that we support unbucketed acid tables



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17789) Flaky test: TestSessionManagerMetrics.testAbandonedSessionMetrics has timing related problems

2017-10-13 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204031#comment-16204031
 ] 

Andrew Sherman commented on HIVE-17789:
---

Test failures look unrelated to my test-only change, so this is ready to be 
committed IMHO.

> Flaky test: TestSessionManagerMetrics.testAbandonedSessionMetrics has timing 
> related problems
> -
>
> Key: HIVE-17789
> URL: https://issues.apache.org/jira/browse/HIVE-17789
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17789.1.patch
>
>
> The test is waiting for a worker thread to be timed out. The time after which 
> the timeout should happen is 3000 ms. The test waits for 3200 ms, and 
> sometimes this is not enough.
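
The usual remedy is to poll for the expected state with a generous deadline 
instead of a fixed sleep; a minimal sketch, with illustrative method names 
rather than the actual test code:

{code:java}
// Hypothetical sketch: wait up to 30 s for the session to be abandoned rather
// than sleeping a fixed 3200 ms and hoping the 3000 ms timeout has fired.
long deadline = System.currentTimeMillis() + 30_000;
while (getAbandonedSessionCount() == 0
    && System.currentTimeMillis() < deadline) {
  Thread.sleep(100);
}
assertTrue("session was not abandoned before the deadline",
    getAbandonedSessionCount() > 0);
{code}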



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17802) Remove unnecessary calls to FileSystem.setOwner() from FileOutputCommitterContainer

2017-10-13 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan reassigned HIVE-17802:
---


> Remove unnecessary calls to FileSystem.setOwner() from 
> FileOutputCommitterContainer
> ---
>
> Key: HIVE-17802
> URL: https://issues.apache.org/jira/browse/HIVE-17802
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0
>Reporter: Mithun Radhakrishnan
>Assignee: Chris Drome
>
> For large Pig/HCat queries that produce a large number of 
> partitions/directories/files, we have seen cases where the HDFS NameNode 
> groaned under the weight of {{FileSystem.setOwner()}} calls, originating from 
> the commit-step. This was the result of the following code in 
> FileOutputCommitterContainer:
> {code:java}
> private void applyGroupAndPerms(FileSystem fs, Path dir, FsPermission 
> permission,
>   List<AclEntry> acls, String group, boolean recursive)
> throws IOException {
> ...
> if (recursive) {
>   for (FileStatus fileStatus : fs.listStatus(dir)) {
> if (fileStatus.isDir()) {
>   applyGroupAndPerms(fs, fileStatus.getPath(), permission, acls, 
> group, true);
> } else {
>   fs.setPermission(fileStatus.getPath(), permission);
>   chown(fs, fileStatus.getPath(), group);
> }
>   }
> }
>   }
>   private void chown(FileSystem fs, Path file, String group) throws 
> IOException {
> try {
>   fs.setOwner(file, null, group);
> } catch (AccessControlException ignore) {
>   // Some users have wrong table group, ignore it.
>   LOG.warn("Failed to change group of partition directories/files: " + 
> file, ignore);
> }
>   }
> {code}
> One call per file/directory is far too many. We have a patch that reduces the 
> namenode pressure.
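
One plausible way to cut the RPC count -- a sketch under assumptions, not the 
actual patch -- is to reuse the {{FileStatus}} already returned by 
{{listStatus()}} and skip calls that would be no-ops:

{code:java}
// Hypothetical sketch: only issue setPermission()/setOwner() RPCs when the
// listing shows the file does not already carry the desired permission/group.
private void applyIfNeeded(FileSystem fs, FileStatus stat,
    FsPermission permission, String group) throws IOException {
  if (!permission.equals(stat.getPermission())) {
    fs.setPermission(stat.getPath(), permission);
  }
  if (group != null && !group.equals(stat.getGroup())) {
    fs.setOwner(stat.getPath(), null, group);
  }
}
{code}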



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16395) ConcurrentModificationException on config object in HoS

2017-10-13 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204017#comment-16204017
 ] 

Andrew Sherman commented on HIVE-16395:
---

Test failures look unrelated, so this is ready to go IMHO [~stakiar]

> ConcurrentModificationException on config object in HoS
> ---
>
> Key: HIVE-16395
> URL: https://issues.apache.org/jira/browse/HIVE-16395
> Project: Hive
>  Issue Type: Task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
> Attachments: HIVE-16395.1.patch, HIVE-16395.2.patch
>
>
> Looks like this is happening inside spark executors, looks to be some race 
> condition when modifying {{Configuration}} objects.
> Stack-Trace:
> {code}
> java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:267)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:213)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:334)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:682)
>   at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:240)
>   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:211)
>   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   at org.apache.spark.scheduler.Task.run(Task.scala:89)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:242)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:253)
>   ... 21 more
> Caused by: java.util.ConcurrentModificationException
>   at java.util.Hashtable$Enumerator.next(Hashtable.java:1167)
>   at 
> org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2455)
>   at 
> org.apache.hadoop.fs.s3a.S3AUtils.propagateBucketOptions(S3AUtils.java:716)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:181)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2815)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2852)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2834)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
>   at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:68)
>   ... 26 more
> {code}
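
The customary defense against this kind of race is to give each consumer its 
own snapshot, so no thread iterates a {{Configuration}} that another thread is 
still mutating -- a one-line sketch, not necessarily what the patch does:

{code:java}
// Hypothetical sketch: JobConf's copy constructor takes a private snapshot,
// so later mutations of the shared conf cannot race with this reader.
JobConf readerConf = new JobConf(jobConf);
{code}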



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17692) Block HCat on Acid tables

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204012#comment-16204012
 ] 

Hive QA commented on HIVE-17692:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12891863/HIVE-17692.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7281/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7281/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7281/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-10-13 18:42:13.741
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-7281/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-10-13 18:42:13.743
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   8eaf18d..ea89de7  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 8eaf18d HIVE-17756: Enable subquery related Qtests for Hive on 
Spark (Dapeng via Xuefu)
+ git clean -f -d
Removing common/src/java/org/apache/hadoop/hive/conf/HiveConf.java.orig
Removing itests/qtest/x.patch
Removing itests/src/test/resources/testconfiguration.properties.orig
Removing 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CrossProductHandler.java
Removing ql/src/test/queries/clientpositive/cross_prod_1.q
Removing ql/src/test/queries/clientpositive/cross_prod_3.q
Removing ql/src/test/queries/clientpositive/cross_prod_4.q
Removing ql/src/test/results/clientpositive/llap/cross_prod_1.q.out
Removing ql/src/test/results/clientpositive/llap/cross_prod_3.q.out
Removing ql/src/test/results/clientpositive/llap/cross_prod_4.q.out
Removing standalone-metastore/src/gen/org/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at ea89de7 HIVE-17782: Inconsistent cast behavior from string to 
numeric types with regards to leading/trailing spaces (Jason Dere, reviewed by 
Ashutosh Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-10-13 18:42:19.518
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatOutputFormat.java:32
error: 
hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatOutputFormat.java:
 patch does not apply
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12891863 - PreCommit-HIVE-Build

> Block HCat on Acid tables
> -
>
> Key: HIVE-17692
> URL: https://issues.apache.org/jira/browse/HIVE-17692
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17692.01.patch
>
>
> See _DDLSemanticAnalyzer.analyzeAlterTablePartMergeFiles(ASTNode ast, String 
> tableName, HashMap<String, String> partSpec)_
> This was fine before due to 
> {noformat}
>   // throw a HiveException if the table/partition is bucketized
>   if (bucketCols != null && bucketCols.size() > 0) {
> throw new 
> SemanticException(ErrorMsg.CONCATENATE_UNSUPPORTED_TABLE_BUCKETED.getMsg());
>   }
> {noformat}
> but now that we support unbucketed acid tables



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Commented] (HIVE-17801) OpenCSVserde should store schema in metastore

2017-10-13 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204009#comment-16204009
 ] 

Sergey Shelukhin commented on HIVE-17801:
-

This doesn't make sense, since this serde derives the schema itself and ignores 
the declared types. The code that does that should be removed if we convert it 
to a metastore-based serde.

> OpenCSVserde should store schema in metastore
> -
>
> Key: HIVE-17801
> URL: https://issues.apache.org/jira/browse/HIVE-17801
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Serializers/Deserializers
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-17801.patch
>
>
> Just need to add opencsv serde in config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17782) Inconsistent cast behavior from string to numeric types with regards to leading/trailing spaces

2017-10-13 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-17782:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Inconsistent cast behavior from string to numeric types with regards to 
> leading/trailing spaces
> ---
>
> Key: HIVE-17782
> URL: https://issues.apache.org/jira/browse/HIVE-17782
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 3.0.0
>
> Attachments: HIVE-17782.1.patch
>
>
> {noformat}
> select cast(' 1 ' as tinyint), cast(' 1 ' as smallint), cast(' 1 ' as int), 
> cast(' 1 ' as bigint), cast(' 1 ' as float), cast(' 1 ' as double), cast(' 1 
> ' as decimal(10,2))
> NULL    NULL    NULL    NULL    1.0    1.0    1
> {noformat}
> Looks like integer types (short, int, etc) fail the conversion due to the 
> leading/trailing spaces and return NULL, while float/double/decimal do not. 
> In fact, Decimal used to also return NULL in previous versions up until 
> HIVE-10799.
> Let's try to make this behavior consistent across all of these types, should 
> be simple enough to strip spaces before passing to number formatter.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17635) Add unit tests to CompactionTxnHandler and use PreparedStatements for queries

2017-10-13 Thread Andrew Sherman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-17635:
--
Attachment: HIVE-17635.2.patch

> Add unit tests to CompactionTxnHandler and use PreparedStatements for queries
> -
>
> Key: HIVE-17635
> URL: https://issues.apache.org/jira/browse/HIVE-17635
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17635.1.patch, HIVE-17635.2.patch
>
>
> It is better for JDBC code that runs against the HMS database to use 
> PreparedStatements. Convert CompactionTxnHandler queries to use 
> PreparedStatement, add tests to TestCompactionTxnHandler to exercise these 
> queries, and improve code coverage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-17214) check/fix conversion of unbucketed non-acid to acid

2017-10-13 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170954#comment-16170954
 ] 

Eugene Koifman edited comment on HIVE-17214 at 10/13/17 6:24 PM:
-

Non-acid to acid conversion works except for 
TestAcidOnTez.testNonStandardConversion02, which tests data files found at 
different levels (root and subdirs). For some reason it works locally but not 
in ptest.

When converting unbucketed tables to acid, we assign the bucketId based on the 
file name. For example, if the original table has _0 and _0_copy1, both will 
have the bucketId property set such that the id of the bucket/writer in it is 0.
Need to finish TestTxnNobuckets.testToAcidConversionMultiBucket() to have a 
test that covers the case when we start with _0 and 0001_0. This should 
assign ROW__IDs as if there are 2 buckets.
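
For reference, deriving the writer id from an original file name amounts to 
something like the following -- the pattern is an assumption for illustration, 
not the actual conversion code:

{code:java}
// Hypothetical sketch: take the digits before the first '_' as the
// bucket/writer id, so a bucket file and its copy variants map to the same id.
java.util.regex.Matcher m =
    java.util.regex.Pattern.compile("^([0-9]+)_").matcher(fileName);
int bucketId = m.find() ? Integer.parseInt(m.group(1)) : -1;
{code}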



was (Author: ekoifman):
Non-acid to acid conversion works except for 
TestAcidOnTez.testNonStandardConversion02, which tests data files found at 
different levels (root and subdirs). For some reason it works locally but not 
in ptest.

> check/fix conversion of unbucketed non-acid to acid
> ---
>
> Key: HIVE-17214
> URL: https://issues.apache.org/jira/browse/HIVE-17214
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Minor
>
> bucketed tables have stricter rules for file layout on disk - bucket files 
> are direct children of a partition directory.
> for un-bucketed tables I'm not sure there are any rules
> for example, CTAS with Tez + Union operator creates 1 directory for each leg 
> of the union
> Supposedly Hive can read table by picking all files recursively.  
> Can it also write (other than CTAS example above) arbitrarily?
> Does it mean Acid write can also write anywhere?
> Figure out what can be supported and how can existing layout can be checked?  
> Examining a full "ls -l -R" for a large table could be expensive. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-14731) Use Tez cartesian product edge in Hive (unpartitioned case only)

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203976#comment-16203976
 ] 

Hive QA commented on HIVE-14731:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12891853/HIVE-14731.21.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11233 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[char_udf1] (batchId=87)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cross_prod_3]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction_sw]
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mapjoin2] 
(batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mapjoin_hint]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_exists]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_notin]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_include_no_sel]
 (batchId=148)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_notin] 
(batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_select] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] 
(batchId=108)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query94] 
(batchId=242)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query16] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query94] 
(batchId=240)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=203)
org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning 
(batchId=292)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7280/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7280/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7280/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12891853 - PreCommit-HIVE-Build

> Use Tez cartesian product edge in Hive (unpartitioned case only)
> 
>
> Key: HIVE-14731
> URL: https://issues.apache.org/jira/browse/HIVE-14731
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Attachments: HIVE-14731.1.patch, HIVE-14731.10.patch, 
> HIVE-14731.11.patch, HIVE-14731.12.patch, HIVE-14731.13.patch, 
> HIVE-14731.14.patch, HIVE-14731.15.patch, HIVE-14731.16.patch, 
> HIVE-14731.17.patch, HIVE-14731.18.patch, HIVE-14731.19.patch, 
> HIVE-14731.2.patch, HIVE-14731.20.patch, HIVE-14731.21.patch, 
> HIVE-14731.3.patch, HIVE-14731.4.patch, HIVE-14731.5.patch, 
> HIVE-14731.6.patch, HIVE-14731.7.patch, HIVE-14731.8.patch, HIVE-14731.9.patch
>
>
> Given cartesian product edge is available in Tez now (see TEZ-3230), let's 
> integrate it into Hive on Tez. This allows us to have more than one reducer 
> in cross product queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17534) Add a config to turn off parquet vectorization

2017-10-13 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203982#comment-16203982
 ] 

Vihang Karajgaonkar commented on HIVE-17534:


All the test failures are unrelated and have been failing for the last couple of 
builds before this change. [~sershe], would you like to review the minor change 
in {{LlapDecider}}? If not, I would like to merge this by EOD since I already 
have a +1 from [~mmccline] above.

> Add a config to turn off parquet vectorization
> --
>
> Key: HIVE-17534
> URL: https://issues.apache.org/jira/browse/HIVE-17534
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-17534.01.patch, HIVE-17534.02.patch, 
> HIVE-17534.03.patch
>
>
> It should be a good addition to give an option for users to turn off parquet 
> vectorization without affecting vectorization on other file formats. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17797) History of API changes for Hive Common

2017-10-13 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203923#comment-16203923
 ] 

Vihang Karajgaonkar commented on HIVE-17797:


Thanks [~aponomarenko] for the report. This is interesting. Does the tool 
report semantic changes as well? E.g., the implementation of a public API 
changed such that it is still binary compatible but the behavior changed.

> History of API changes for Hive Common
> --
>
> Key: HIVE-17797
> URL: https://issues.apache.org/jira/browse/HIVE-17797
> Project: Hive
>  Issue Type: Improvement
>Reporter: Andrey Ponomarenko
> Attachments: hive-common-1.png, hive-common-2.png
>
>
> Hi,
> I'd like to share the report on API changes and backward binary compatibility 
> for the Hive Common library: 
> https://abi-laboratory.pro/java/tracker/timeline/hive-common/
> The report is generated by the https://github.com/lvc/japi-tracker tool for 
> jars found at http://central.maven.org/maven2/org/apache/hive/hive-common/ 
> according to https://wiki.eclipse.org/Evolving_Java-based_APIs_2.
> Feel free to request other Hive modules to be included to the tracker if you 
> are interested.
> Also please let me know if the tool should not check some parts of the API 
> (it checks all public API methods and classes by default).
> Thank you.
> !hive-common-2.png|API symbols timeline!
> !hive-common-1.png|API changes review!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17791) Temp dirs under the staging directory should honour `inheritPerms`

2017-10-13 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17791:

Status: Patch Available  (was: Open)

> Temp dirs under the staging directory should honour `inheritPerms`
> --
>
> Key: HIVE-17791
> URL: https://issues.apache.org/jira/browse/HIVE-17791
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 2.2.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Chris Drome
> Attachments: HIVE-17791.1-branch-2.patch
>
>
> For [~cdrome]:
> CLI creates two levels of staging directories but calls setPermissions on the 
> top-level directory only if {{hive.warehouse.subdir.inherit.perms=true}}.
> The top-level directory, 
> {{/user/cdrome/hive/words_text_dist/dt=c/.hive-staging_hive_2016-07-15_08-44-22_082_5534649671389063929-1}}
>  is created the first time {{Context.getExternalTmpPath}} is called.
> The child directory, 
> {{/user/cdrome/hive/words_text_dist/dt=c/.hive-staging_hive_2016-07-15_08-44-22_082_5534649671389063929-1/_tmp.-ext-1}}
>  is created when {{TezTask.execute}} is called at line 164:
> {code:java}
> DAG dag = build(jobConf, work, scratchDir, appJarLr, additionalLr, ctx);
> {code}
> This calls {{DagUtils.createVertex}}, which calls {{Utilities.createTmpDirs}}:
> {code:java}
> private static void createTmpDirs(Configuration conf,
>     List<Operator<? extends OperatorDesc>> ops) throws IOException {
> 
>   while (!ops.isEmpty()) {
>     Operator<? extends OperatorDesc> op = ops.remove(0);
> 
>     if (op instanceof FileSinkOperator) {
>       FileSinkDesc fdesc = ((FileSinkOperator) op).getConf();
>       Path tempDir = fdesc.getDirName();
> 
>       if (tempDir != null) {
>         Path tempPath = Utilities.toTempPath(tempDir);
>         FileSystem fs = tempPath.getFileSystem(conf);
>         fs.mkdirs(tempPath); // <-- HERE!
>       }
>     }
> 
>     if (op.getChildOperators() != null) {
>       ops.addAll(op.getChildOperators());
>     }
>   }
> }
> {code}
> It turns out that {{inheritPerms}} is no longer part of {{master}}. I'll 
> rebase this for {{branch-2}}, and {{branch-2.2}}. {{master}} will have to 
> wait till the issues around {{StorageBasedAuthProvider}}, directory 
> permissions, etc. are sorted out.
> (Note to self: YHIVE-857)
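
A minimal sketch of what honouring {{inheritPerms}} for these temp dirs could 
look like -- helper shape assumed, not the actual patch:

{code:java}
// Hypothetical sketch: after creating the temp dir, copy the parent
// directory's permission onto it when inheritPerms is enabled.
fs.mkdirs(tempPath);
if (inheritPerms) {
  FsPermission parentPerm =
      fs.getFileStatus(tempPath.getParent()).getPermission();
  fs.setPermission(tempPath, parentPerm);
}
{code}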



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17791) Temp dirs under the staging directory should honour `inheritPerms`

2017-10-13 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17791:

Attachment: (was: HIVE-17791.1-branch-2.2.patch)

> Temp dirs under the staging directory should honour `inheritPerms`
> --
>
> Key: HIVE-17791
> URL: https://issues.apache.org/jira/browse/HIVE-17791
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 2.2.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Chris Drome
> Attachments: HIVE-17791.1-branch-2.patch
>
>
> For [~cdrome]:
> CLI creates two levels of staging directories but calls setPermissions on the 
> top-level directory only if {{hive.warehouse.subdir.inherit.perms=true}}.
> The top-level directory, 
> {{/user/cdrome/hive/words_text_dist/dt=c/.hive-staging_hive_2016-07-15_08-44-22_082_5534649671389063929-1}}
>  is created the first time {{Context.getExternalTmpPath}} is called.
> The child directory, 
> {{/user/cdrome/hive/words_text_dist/dt=c/.hive-staging_hive_2016-07-15_08-44-22_082_5534649671389063929-1/_tmp.-ext-1}}
>  is created when {{TezTask.execute}} is called at line 164:
> {code:java}
> DAG dag = build(jobConf, work, scratchDir, appJarLr, additionalLr, ctx);
> {code}
> This calls {{DagUtils.createVertex}}, which calls {{Utilities.createTmpDirs}}:
> {code:java}
> private static void createTmpDirs(Configuration conf,
>     List<Operator<? extends OperatorDesc>> ops) throws IOException {
> 
>   while (!ops.isEmpty()) {
>     Operator<? extends OperatorDesc> op = ops.remove(0);
> 
>     if (op instanceof FileSinkOperator) {
>       FileSinkDesc fdesc = ((FileSinkOperator) op).getConf();
>       Path tempDir = fdesc.getDirName();
> 
>       if (tempDir != null) {
>         Path tempPath = Utilities.toTempPath(tempDir);
>         FileSystem fs = tempPath.getFileSystem(conf);
>         fs.mkdirs(tempPath); // <-- HERE!
>       }
>     }
> 
>     if (op.getChildOperators() != null) {
>       ops.addAll(op.getChildOperators());
>     }
>   }
> }
> {code}
> It turns out that {{inheritPerms}} is no longer part of {{master}}. I'll 
> rebase this for {{branch-2}}, and {{branch-2.2}}. {{master}} will have to 
> wait till the issues around {{StorageBasedAuthProvider}}, directory 
> permissions, etc. are sorted out.
> (Note to self: YHIVE-857)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17791) Temp dirs under the staging directory should honour `inheritPerms`

2017-10-13 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17791:

Status: Open  (was: Patch Available)

Temporarily removing the {{branch-2.2}} patch, to get {{branch-2}} tests to run.

> Temp dirs under the staging directory should honour `inheritPerms`
> --
>
> Key: HIVE-17791
> URL: https://issues.apache.org/jira/browse/HIVE-17791
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 2.2.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Chris Drome
> Attachments: HIVE-17791.1-branch-2.patch
>
>
> For [~cdrome]:
> CLI creates two levels of staging directories but calls setPermissions on the 
> top-level directory only if {{hive.warehouse.subdir.inherit.perms=true}}.
> The top-level directory, 
> {{/user/cdrome/hive/words_text_dist/dt=c/.hive-staging_hive_2016-07-15_08-44-22_082_5534649671389063929-1}}
>  is created the first time {{Context.getExternalTmpPath}} is called.
> The child directory, 
> {{/user/cdrome/hive/words_text_dist/dt=c/.hive-staging_hive_2016-07-15_08-44-22_082_5534649671389063929-1/_tmp.-ext-1}}
>  is created when {{TezTask.execute}} is called at line 164:
> {code:java}
> DAG dag = build(jobConf, work, scratchDir, appJarLr, additionalLr, ctx);
> {code}
> This calls {{DagUtils.createVertex}}, which calls {{Utilities.createTmpDirs}}:
> {code:java}
> private static void createTmpDirs(Configuration conf,
>     List<Operator<? extends OperatorDesc>> ops) throws IOException {
> 
>   while (!ops.isEmpty()) {
>     Operator<? extends OperatorDesc> op = ops.remove(0);
> 
>     if (op instanceof FileSinkOperator) {
>       FileSinkDesc fdesc = ((FileSinkOperator) op).getConf();
>       Path tempDir = fdesc.getDirName();
> 
>       if (tempDir != null) {
>         Path tempPath = Utilities.toTempPath(tempDir);
>         FileSystem fs = tempPath.getFileSystem(conf);
>         fs.mkdirs(tempPath); // <-- HERE!
>       }
>     }
> 
>     if (op.getChildOperators() != null) {
>       ops.addAll(op.getChildOperators());
>     }
>   }
> }
> {code}
> It turns out that {{inheritPerms}} is no longer part of {{master}}. I'll 
> rebase this for {{branch-2}}, and {{branch-2.2}}. {{master}} will have to 
> wait till the issues around {{StorageBasedAuthProvider}}, directory 
> permissions, etc. are sorted out.
> (Note to self: YHIVE-857)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17795) Add distribution management tag in pom

2017-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-17795:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, [~r...@cmbasics.com]

> Add distribution management tag in pom
> --
>
> Key: HIVE-17795
> URL: https://issues.apache.org/jira/browse/HIVE-17795
> Project: Hive
>  Issue Type: Bug
>Reporter: Raja Aluri
>Assignee: Raja Aluri
> Fix For: 3.0.0
>
> Attachments: HIVE-17795.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-16722) Converting bucketed non-acid table to acid should perform validation

2017-10-13 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reassigned HIVE-16722:
-

Assignee: Eugene Koifman

> Converting bucketed non-acid table to acid should perform validation
> 
>
> Key: HIVE-16722
> URL: https://issues.apache.org/jira/browse/HIVE-16722
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> Converting a non acid table to acid only performs metadata validation (in 
> _TransactionalValidationListener_).
> The data read code path only understands certain directory layouts and file 
> names and ignores (generally) files that don't match the expected format.
> In Hive, directory layout and bucket file naming (especially older releases) 
> is poorly enforced.
> Need to add a validation step on 
> {noformat}
> alter table T SET TBLPROPERTIES ('transactional'='true')
> {noformat}
> to 
> scan the file system and report any possible data loss scenarios.
> Currently Acid understands bucket file names like "0_0" and (with 
> HIVE-16177) "0_0_copy1" etc. at the root of the partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17701) Added restriction to historic queries on web UI

2017-10-13 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203896#comment-16203896
 ] 

Tao Li commented on HIVE-17701:
---

Thanks [~thejas] for the comments and commit.

> Added restriction to historic queries on web UI
> ---
>
> Key: HIVE-17701
> URL: https://issues.apache.org/jira/browse/HIVE-17701
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Tao Li
> Fix For: 3.0.0
>
> Attachments: HIVE-17701.1.patch, HIVE-17701.2.patch, 
> HIVE-17701.3.patch, HIVE-17701.4.patch, HIVE-17701.5.patch, HIVE-17701.6.patch
>
>
> The HiveServer2 Web UI (HIVE-12550) shows recently completed queries. 
> However, a user can see the queries run by other users as well, and that is a 
> security/privacy concern.
> Only admin users should be allowed to see queries from other users (similar 
> to behavior of display for configs, stack trace etc).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17801) OpenCSVserde should store schema in metastore

2017-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-17801:

Assignee: Ashutosh Chauhan
  Status: Patch Available  (was: Open)

[~sershe] Can you take a look?

> OpenCSVserde should store schema in metastore
> -
>
> Key: HIVE-17801
> URL: https://issues.apache.org/jira/browse/HIVE-17801
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Serializers/Deserializers
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-17801.patch
>
>
> Just need to add opencsv serde in config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17801) OpenCSVserde should store schema in metastore

2017-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-17801:

Attachment: HIVE-17801.patch

> OpenCSVserde should store schema in metastore
> -
>
> Key: HIVE-17801
> URL: https://issues.apache.org/jira/browse/HIVE-17801
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Serializers/Deserializers
>Reporter: Ashutosh Chauhan
> Attachments: HIVE-17801.patch
>
>
> Just need to add opencsv serde in config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-15104) Hive on Spark generate more shuffle data than hive on mr

2017-10-13 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203867#comment-16203867
 ] 

Xuefu Zhang commented on HIVE-15104:


I think it's fairly safe to assume that hive-exec.jar and the new jar are in 
the same location. We can error out if the jar cannot be found in that location.
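
Locating the new jar next to hive-exec could look roughly like this -- the jar 
name and the anchor class are assumptions for illustration:

{code:java}
// Hypothetical sketch: resolve the directory hive-exec.jar was loaded from
// and require the companion jar (name assumed) to sit beside it.
java.io.File hiveExecJar = new java.io.File(Utilities.class
    .getProtectionDomain().getCodeSource().getLocation().getPath());
java.io.File companion =
    new java.io.File(hiveExecJar.getParentFile(), "hive-kryo-registrator.jar");
if (!companion.exists()) {
  throw new java.io.FileNotFoundException(
      "Expected " + companion + " next to " + hiveExecJar);
}
{code}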

> Hive on Spark generate more shuffle data than hive on mr
> 
>
> Key: HIVE-15104
> URL: https://issues.apache.org/jira/browse/HIVE-15104
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 1.2.1
>Reporter: wangwenli
>Assignee: Rui Li
> Attachments: HIVE-15104.1.patch, HIVE-15104.2.patch, 
> HIVE-15104.3.patch, HIVE-15104.4.patch, HIVE-15104.5.patch, 
> HIVE-15104.5.patch, TPC-H 100G.xlsx
>
>
> The same SQL, running on the Spark and MR engines, will generate different 
> sizes of shuffle data.
> I think it is because Hive on MR serializes only part of the HiveKey, while 
> Hive on Spark, which uses Kryo, serializes the full HiveKey object.
> What is your opinion?
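A hedged illustration of that idea (not Hive's actual fix): register a custom Kryo serializer for {{HiveKey}} that ships only the byte payload, skipping any fields that MR's Writable serialization never sends:

{code}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import org.apache.hadoop.hive.ql.io.HiveKey;

public class HiveKeySerializer extends Serializer<HiveKey> {
  @Override
  public void write(Kryo kryo, Output output, HiveKey key) {
    // Write only the valid bytes, length-prefixed; no cached hash code etc.
    output.writeInt(key.getLength(), true);
    output.writeBytes(key.getBytes(), 0, key.getLength());
  }

  @Override
  public HiveKey read(Kryo kryo, Input input, Class<HiveKey> type) {
    int len = input.readInt(true);
    HiveKey key = new HiveKey();
    key.set(input.readBytes(len), 0, len);
    return key;
  }
}
{code}

Registered via {{kryo.register(HiveKey.class, new HiveKeySerializer())}}, this would keep the Spark shuffle payload close to what MR writes; whether the extra fields are really the cause is the question raised above.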



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17756) Enable subquery related Qtests for Hive on Spark

2017-10-13 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-17756:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Patch committed to master. Thanks to Dapeng for the contribution.

> Enable subquery related Qtests for Hive on Spark
> 
>
> Key: HIVE-17756
> URL: https://issues.apache.org/jira/browse/HIVE-17756
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
> Fix For: 3.0.0
>
> Attachments: HIVE-17756.001.patch
>
>
> HIVE-15456 and HIVE-15192 use Calcite to decorrelate and plan subqueries. 
> This JIRA is to introduce subquery tests and verify the subquery plans for 
> Hive on Spark.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17730) Queries can be closed automatically

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203856#comment-16203856
 ] 

Hive QA commented on HIVE-17730:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12891845/HIVE-17730.04.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7279/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7279/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7279/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-10-13 17:01:09.226
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-7279/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-10-13 17:01:09.229
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   0a9fabb..1253450  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 0a9fabb HIVE-17790: Export/Import: Bug while getting auth 
entities due to which we write partition info during compilation phase (Vaibhav 
Gumashta reviewed by Thejas Nair)
+ git clean -f -d
Removing standalone-metastore/src/gen/org/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 1253450 HIVE-15267 Make query length calculation logic more 
accurate in TxnUtils.needNewQuery() (Steve Yeom, reviewed by Eugene Koifman)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-10-13 17:01:14.344
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: 
No such file or directory
error: 
a/metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java: 
No such file or directory
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12891845 - PreCommit-HIVE-Build

> Queries can be closed automatically
> ---
>
> Key: HIVE-17730
> URL: https://issues.apache.org/jira/browse/HIVE-17730
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: HIVE-17730.04.patch
>
>
> HIVE-16213 made QueryWrapper AutoCloseable, but queries are still closed 
> manually rather than with try-with-resources. And now Query itself is 
> AutoCloseable, so we don't need the wrapper at all.
> So we should get rid of QueryWrapper and use try-with-resources to create 
> queries, as in the sketch below.
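A minimal sketch of the proposed pattern, assuming a JDO version in which {{javax.jdo.Query}} implements AutoCloseable, as the description notes (the JDOQL string is illustrative):

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import javax.jdo.PersistenceManager;
import javax.jdo.Query;

public class AutoCloseQuery {
  /** try-with-resources closes the query even if execute() throws. */
  @SuppressWarnings("unchecked")
  static List<Object> run(PersistenceManager pm, String jdoql) {
    try (Query query = pm.newQuery(jdoql)) {
      // Copy results out before close(): closing a JDO query releases them.
      return new ArrayList<>((Collection<Object>) query.execute());
    } // no manual close(), no QueryWrapper needed
  }
}
{code}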



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17795) Add distribution management tag in pom

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203815#comment-16203815
 ] 

Hive QA commented on HIVE-17795:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12891846/HIVE-17795.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 11222 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_orcfile] 
(batchId=242)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=162)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] 
(batchId=241)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query94] 
(batchId=241)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=239)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query16] 
(batchId=239)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query94] 
(batchId=239)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=202)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7278/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7278/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7278/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12891846 - PreCommit-HIVE-Build

> Add distribution management tag in pom
> --
>
> Key: HIVE-17795
> URL: https://issues.apache.org/jira/browse/HIVE-17795
> Project: Hive
>  Issue Type: Bug
>Reporter: Raja Aluri
>Assignee: Raja Aluri
> Attachments: HIVE-17795.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17679) http-generic-click-jacking for WebHcat server

2017-10-13 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203791#comment-16203791
 ] 

Aihua Xu commented on HIVE-17679:
-

[~thejas] This change doesn't prevent XSRF, but I don't think HIVE-13853 would 
fix the clickjacking issue either. Hadoop has the jira HADOOP-12964 to fix the 
clickjacking issue. XSRF and clickjacking seem to be different issues. See 
https://stackoverflow.com/questions/17013150/does-csrf-defense-also-defend-against-clickjacking.

> http-generic-click-jacking for WebHcat server
> -
>
> Key: HIVE-17679
> URL: https://issues.apache.org/jira/browse/HIVE-17679
> Project: Hive
>  Issue Type: Bug
>  Components: Security, WebHCat
>Affects Versions: 2.1.1
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Fix For: 3.0.0
>
> Attachments: HIVE-17679.1.patch, HIVE-17679.2.patch
>
>
> The web UIs do not include the "X-Frame-Options" header to prevent the pages 
> from being framed from another site; a sketch of a fix follows the references.
> Reference:
> https://www.owasp.org/index.php/Clickjacking
> https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet
> https://developer.mozilla.org/en-US/docs/Web/HTTP/X-Frame-Options
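A minimal sketch of the kind of fix the references describe: a servlet filter that stamps the header on every response. The class name and the DENY policy are illustrative choices, not the committed patch:

{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class XFrameOptionsFilter implements Filter {
  @Override
  public void init(FilterConfig config) { }

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    // DENY forbids all framing; SAMEORIGIN would still allow same-site frames.
    ((HttpServletResponse) res).setHeader("X-Frame-Options", "DENY");
    chain.doFilter(req, res);
  }

  @Override
  public void destroy() { }
}
{code}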



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-15267) Make query length calculation logic more accurate in TxnUtils.needNewQuery()

2017-10-13 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-15267:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Patch 6 committed to master.
Thanks Steve for the contribution.

> Make query length calculation logic more accurate in TxnUtils.needNewQuery()
> 
>
> Key: HIVE-15267
> URL: https://issues.apache.org/jira/browse/HIVE-15267
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Wei Zheng
>Assignee: Steve Yeom
> Fix For: 3.0.0
>
> Attachments: HIVE-15267.01.patch, HIVE-15267.02.patch, 
> HIVE-15267.03.patch, HIVE-15267.04.patch, HIVE-15267.05.patch, 
> HIVE-15267.06.patch
>
>
> HIVE-15181 had the following review comment, which this ticket addresses (a 
> sketch of the per-item length check follows the quoted comment):
> {code}
> in TxnUtils.needNewQuery() "sizeInBytes / 1024 > queryMemoryLimit" doesn't do 
> the right thing.
> If the user sets METASTORE_DIRECT_SQL_MAX_QUERY_LENGTH to 1K, they most 
> likely want each SQL string to be at most 1K.
> But if sizeInBytes=2047, this still returns false.
> It should include length of "suffix" in computation of sizeInBytes
> Along the same lines: the check for max query length is done after each batch 
> is already added to the query. Suppose there are 1000 9-digit txn IDs in each 
> IN(...). That's, conservatively, 18KB of text. So the length of each query is 
> increasing in 18KB chunks. 
> I think the check for query length should be done for each item in IN clause.
> If some DB has a limit on query length of X, then any query > X will fail. So 
> I think this must ensure not to produce any queries > X, even by 1 char.
> For example, case 3.1 of the UT generates a query of almost 4000 characters - 
> this is clearly > 1KB.
> {code}
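A minimal sketch of the per-item check described in the comment (illustrative, not the committed TxnUtils change): the length test runs before each id is appended and counts the closing suffix, so no emitted query can exceed the limit, even by one character:

{code}
import java.util.ArrayList;
import java.util.List;

public class InClauseBatcher {
  /** Splits "prefix id,id,... suffix" queries so each stays under the limit. */
  static List<String> buildQueries(String prefix, String suffix,
                                   List<Long> ids, int maxQueryLength) {
    List<String> queries = new ArrayList<>();
    StringBuilder sb = new StringBuilder(prefix);
    boolean empty = true;
    for (long id : ids) {
      String item = (empty ? "" : ",") + id;
      // Close out the current query if this item plus the suffix would overflow
      // (a query holding only the prefix must still accept at least one item).
      if (!empty && sb.length() + item.length() + suffix.length() > maxQueryLength) {
        queries.add(sb.append(suffix).toString());
        sb = new StringBuilder(prefix);
        item = String.valueOf(id);
      }
      sb.append(item);
      empty = false;
    }
    queries.add(sb.append(suffix).toString());
    return queries;
  }
}
{code}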



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203699#comment-16203699
 ] 

Hive QA commented on HIVE-17794:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12891838/HIVE-17794.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 11222 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=162)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] 
(batchId=241)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query94] 
(batchId=241)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=239)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query16] 
(batchId=239)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query94] 
(batchId=239)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=202)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7277/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7277/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7277/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12891838 - PreCommit-HIVE-Build

> HCatLoader breaks when a member is added to a struct-column of a table
> --
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 

[jira] [Updated] (HIVE-17787) Apply more filters on the BeeLine test output files (follow-up on HIVE-17569)

2017-10-13 Thread Marta Kuczora (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-17787:
-
Attachment: HIVE-17787.2.patch

> Apply more filters on the BeeLine test output files (follow-up on HIVE-17569)
> -
>
> Key: HIVE-17787
> URL: https://issues.apache.org/jira/browse/HIVE-17787
> Project: Hive
>  Issue Type: Improvement
>  Components: Testing Infrastructure
>Affects Versions: 3.0.0
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Minor
> Attachments: HIVE-17787.1.patch, HIVE-17787.2.patch
>
>
> When running the q tests with BeeLine, some known differences came up which 
> should be filtered out if the "test.beeline.compare.portable" parameter is 
> set to true.
> The result of the following commands can be different when running them via 
> BeeLine than in the golden out file:
> - DESCRIBE
> - SHOW TABLES
> - SHOW FORMATTED TABLES
> - SHOW DATABASES TABLES
> Also the join warnings and the mapreduce jobtracker address can be different, 
> so it would make sense to filter them out; see the masking sketch after the 
> examples.
> For example:
> {noformat}
> Warning: Map Join MAPJOIN[13][bigTable=?] in task 'Stage-3:MAPRED' is a cross 
> product
> Warning: MASKED is a cross product
> {noformat}
> {noformat}
> mapreduce.jobtracker.address=local
> mapreduce.jobtracker.address=MASKED
> {noformat}
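A minimal sketch of such masking, assuming plain per-line regex replacement (the patterns are illustrative, not the patch's actual filter set):

{code}
import java.util.regex.Pattern;

public class OutputMasker {
  private static final Pattern JOBTRACKER =
      Pattern.compile("mapreduce\\.jobtracker\\.address=\\S+");
  private static final Pattern CROSS_PRODUCT =
      Pattern.compile("Warning: .* is a cross product");

  /** Replaces environment-dependent output with stable MASKED tokens. */
  static String mask(String line) {
    line = JOBTRACKER.matcher(line).replaceAll("mapreduce.jobtracker.address=MASKED");
    line = CROSS_PRODUCT.matcher(line).replaceAll("Warning: MASKED is a cross product");
    return line;
  }
}
{code}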



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17800) input_part6.q wants to test partition pruning, but tests expression evaluation

2017-10-13 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary reassigned HIVE-17800:
-


> input_part6.q wants to test partition pruning, but tests expression evaluation
> --
>
> Key: HIVE-17800
> URL: https://issues.apache.org/jira/browse/HIVE-17800
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
>
> input_part6.q looks like this:
> {code}
> EXPLAIN
> SELECT x.* FROM SRCPART x WHERE x.ds = 2008-04-08 LIMIT 10;
> {code}
> The intended test is most probably this:
> {code}
> EXPLAIN
> SELECT x.* FROM SRCPART x WHERE x.ds = "2008-04-08" LIMIT 10;
> {code}
> Currently we evaluate 2008-04-08 as the arithmetic expression 2008 - 4 - 8, 
> which is 1996:
> {code}
> predicate: (UDFToDouble(ds) = 1996.0) (type: boolean)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


  1   2   >