[jira] [Commented] (HIVE-18048) Support Struct type with vectorization for Parquet file

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343032#comment-16343032
 ] 

Hive QA commented on HIVE-18048:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8905/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql . itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8905/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support Struct type with vectorization for Parquet file
> ---
>
> Key: HIVE-18048
> URL: https://issues.apache.org/jira/browse/HIVE-18048
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, 
> HIVE-18048.003.patch, HIVE-18048.004.patch, HIVE-18048.005.patch
>
>
> Struct type is not supported in MapWork with vectorization, it should be 
> supported to improve the performance.
> New UDF will be added to access the field of Struct.
> Note:
>  * Filter operator won't be supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18562) Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken

2018-01-28 Thread Teddy Choi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343016#comment-16343016
 ] 

Teddy Choi commented on HIVE-18562:
---

+1. LGTM.

> Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken
> 
>
> Key: HIVE-18562
> URL: https://issues.apache.org/jira/browse/HIVE-18562
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18562.01.patch
>
>
> Altering a CHAR/VARCHAR column's maxLength to a shorter value does not 
> truncate values when vectorized. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18495) JUnit rule to enable Driver level testing

2018-01-28 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-18495:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thank you Ashutosh for reviewing the changes!

> JUnit rule to enable Driver level testing
> -
>
> Key: HIVE-18495
> URL: https://issues.apache.org/jira/browse/HIVE-18495
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18495.01.patch, HIVE-18495.02.patch, 
> HIVE-18495.03.patch
>
>
> I've tried to write a test case for a sophisticated check... it worked so well that 
> I started using it and eventually created a JUnit rule to make it easier to reuse.
> Currently it takes ~15-25 sec to run a test case with this framework (most of that 
> time is the launch time of all the components needed to run a driver command).
> * enable writing JUnit tests which have access to the {{IDriver}} level
> * leave out the cli-driver; it sometimes causes problems
> * write tests in the {{ql}} module
> * it should also work from the IDE without changing anything
> Note: JUnit 5 would be great for this task, but unfortunately JUnit 5 needs 
> maven-surefire 2.19.1, which causes all kinds of problems for Hive devs 
> using IDEA... so that's not an option.
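
A hedged sketch of what such a driver-level rule could look like, assuming JUnit 4's {{ExternalResource}} base class; the class name and the {{newDriver()}} placeholder below are illustrative, not the rule added by this patch:

{code:java}
import org.apache.hadoop.hive.ql.IDriver;
import org.junit.rules.ExternalResource;

// Illustrative sketch only; the real rule's name and setup differ.
public class DriverRuleSketch extends ExternalResource {
  private IDriver driver;

  @Override
  protected void before() throws Throwable {
    // newDriver() stands in for whatever setup the actual rule performs
    // (HiveConf creation, session state, etc.).
    driver = newDriver();
  }

  @Override
  protected void after() {
    if (driver != null) {
      driver.close();
    }
  }

  public IDriver getDriver() {
    return driver;
  }

  private IDriver newDriver() {
    throw new UnsupportedOperationException("placeholder for real driver setup");
  }
}
{code}

A test would then declare the rule with {{@Rule}} and call {{getDriver().run("...")}} directly.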



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18557) q.outs: fix issues caused by q.out_spark files

2018-01-28 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-18557:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thank you [~abstractdog] for taking care of this and @pete 
for reviewing the changes!
(I've adjusted the indentation to match the lines around that part; currently 
there is 1 tab + 3 spaces, which is pretty exotic to me. :D)

> q.outs: fix issues caused by q.out_spark files
> --
>
> Key: HIVE-18557
> URL: https://issues.apache.org/jira/browse/HIVE-18557
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18557.01.patch
>
>
> HIVE-18061 caused some issues in yetus check by introducing q.out_spark files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-15353) Metastore throws NPE if StorageDescriptor.cols is null

2018-01-28 Thread Ping Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ping Lu updated HIVE-15353:
---
Description: 
When using the HiveMetaStoreClient API directly to talk to the metastore, you 
get NullPointerExceptions when StorageDescriptor.cols is null in the 
Table/Partition object in the following calls:
 * create_table
 * alter_table
 * alter_partition

Calling add_partition with StorageDescriptor.cols set to null causes null to be 
stored in the metastore database and subsequent calls to alter_partition for 
that partition to fail with an NPE.

Null checks should be added to eliminate the NPEs in the metastore.

  was:
When using the HiveMetaStoreClient API directly to talk to the metastore, you 
get NullPointerExceptions when StorageDescriptor.cols is null in the 
Table/Partition object in the following calls:

* create_table
* alter_table
* alter_partition

Calling add_partition with StorageDescriptor.cols set to null causes null to be 
stored in the metastore database and subsequent calls to alter_partition for 
that partition to fail with an NPE.

Null checks should be added to eliminate the NPEs in the metastore.


> Metastore throws NPE if StorageDescriptor.cols is null
> --
>
> Key: HIVE-15353
> URL: https://issues.apache.org/jira/browse/HIVE-15353
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0, 2.2.0
>Reporter: Anthony Hsu
>Assignee: Anthony Hsu
>Priority: Major
> Attachments: HIVE-15353.1.patch, HIVE-15353.2.patch, 
> HIVE-15353.3.patch, HIVE-15353.4.patch
>
>
> When using the HiveMetaStoreClient API directly to talk to the metastore, you 
> get NullPointerExceptions when StorageDescriptor.cols is null in the 
> Table/Partition object in the following calls:
>  * create_table
>  * alter_table
>  * alter_partition
> Calling add_partition with StorageDescriptor.cols set to null causes null to 
> be stored in the metastore database and subsequent calls to alter_partition 
> for that partition to fail with an NPE.
> Null checks should be added to eliminate the NPEs in the metastore.
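
For illustration, a minimal sketch of the kind of null check described above; the helper class and method below are hypothetical, and the actual patch may place the checks elsewhere in the metastore:

{code:java}
import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.hive.metastore.api.StorageDescriptor;

// Hypothetical defensive check, not an existing metastore method.
final class StorageDescriptorValidator {
  static void validateCols(StorageDescriptor sd) throws MetaException {
    if (sd == null || sd.getCols() == null) {
      throw new MetaException("StorageDescriptor and its cols must not be null");
    }
  }
}
{code}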



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18478) Drop of temp table creating recycle files at CM path

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342988#comment-16342988
 ] 

Hive QA commented on HIVE-18478:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908095/HIVE-18478.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 12610 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
 (batchId=84)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded]
 (batchId=207)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=207)
org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded]
 (batchId=207)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8904/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8904/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8904/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908095 - PreCommit-HIVE-Build

> Drop of temp table creating recycle files at CM path
> 
>
> Key: HIVE-18478
> URL: https://issues.apache.org/jira/browse/HIVE-18478
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive, HiveServer2
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18478.01.patch, HIVE-18478.02.patch
>
>
> Drop TEMP table operation invokes deleteDir which moves the file to $CMROOT 
> which is not needed as temp tables need not be replicated



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18048) Support Struct type with vectorization for Parquet file

2018-01-28 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HIVE-18048:

Attachment: HIVE-18048.005.patch

> Support Struct type with vectorization for Parquet file
> ---
>
> Key: HIVE-18048
> URL: https://issues.apache.org/jira/browse/HIVE-18048
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, 
> HIVE-18048.003.patch, HIVE-18048.004.patch, HIVE-18048.005.patch
>
>
> Struct type is not supported in MapWork with vectorization, it should be 
> supported to improve the performance.
> New UDF will be added to access the field of Struct.
> Note:
>  * Filter operator won't be supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18048) Support Struct type with vectorization for Parquet file

2018-01-28 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HIVE-18048:

Description: 
Struct type is not supported in MapWork with vectorization; it should be 
supported to improve performance.
A new UDF will be added to access the fields of a Struct.

Note:
 * Filter operator won't be supported.

  was:
Struct type is not supported in MapWork with vectorization, it should be 
supported to improve the performance.
 The following improvements will be implemented:
 * Add fields of struct type to VectorizedRowBatchCtx.
 * Improve the VectorizedParquetRecordReader to support the struct type for 
parquet file.

Note:
 * Filter operator won't be supported.


> Support Struct type with vectorization for Parquet file
> ---
>
> Key: HIVE-18048
> URL: https://issues.apache.org/jira/browse/HIVE-18048
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, 
> HIVE-18048.003.patch, HIVE-18048.004.patch
>
>
> Struct type is not supported in MapWork with vectorization, it should be 
> supported to improve the performance.
> New UDF will be added to access the field of Struct.
> Note:
>  * Filter operator won't be supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18048) Support Struct type with vectorization for Parquet file

2018-01-28 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HIVE-18048:

Description: 
Struct type is not supported in MapWork with vectorization; it should be 
supported to improve performance.
 The following improvements will be implemented:
 * Add fields of struct type to VectorizedRowBatchCtx.
 * Improve the VectorizedParquetRecordReader to support the struct type for 
parquet file.

Note:
 * Filter operator won't be supported.

  was:
Struct type is not supported in MapWork with vectorization, it should be 
supported to improve the performance.
The following improvements will be implemented:
* Add fields of struct type to VectorizedRowBatchCtx.
* Improve the VectorizedParquetRecordReader to support the struct type for 
parquet file.

Note:
* Orc file won't be supported.
* Filter operator won't be supported.


> Support Struct type with vectorization for Parquet file
> ---
>
> Key: HIVE-18048
> URL: https://issues.apache.org/jira/browse/HIVE-18048
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, 
> HIVE-18048.003.patch, HIVE-18048.004.patch
>
>
> Struct type is not supported in MapWork with vectorization, it should be 
> supported to improve the performance.
>  The following improvements will be implemented:
>  * Add fields of struct type to VectorizedRowBatchCtx.
>  * Improve the VectorizedParquetRecordReader to support the struct type for 
> parquet file.
> Note:
>  * Filter operator won't be supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18478) Drop of temp table creating recycle files at CM path

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342952#comment-16342952
 ] 

Hive QA commented on HIVE-18478:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
36s{color} | {color:red} ql: The patch generated 2 new + 251 unchanged - 0 
fixed = 253 total (was 251) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8904/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8904/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8904/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Drop of temp table creating recycle files at CM path
> 
>
> Key: HIVE-18478
> URL: https://issues.apache.org/jira/browse/HIVE-18478
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive, HiveServer2
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18478.01.patch, HIVE-18478.02.patch
>
>
> Drop TEMP table operation invokes deleteDir which moves the file to $CMROOT 
> which is not needed as temp tables need not be replicated



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18048) Support Struct type with vectorization for Parquet file

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342929#comment-16342929
 ] 

Hive QA commented on HIVE-18048:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908079/HIVE-18048.004.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 12633 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_conversions] 
(batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[explainuser_2] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_complex_all]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=207)
org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded]
 (batchId=207)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8903/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8903/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8903/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 26 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908079 - PreCommit-HIVE-Build

> Support Struct type with vectorization for Parquet file
> ---
>
> Key: HIVE-18048
> URL: https://issues.apache.org/jira/browse/HIVE-18048
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, 
> HIVE-18048.003.patch, HIVE-18048.004.patch
>
>
> Struct type is not supported in MapWork with vectorization, it should be 
> supported to improve the performance.
> The following improvements will be implemented:
> * Add fields of struct type to VectorizedRowBatchCtx.
> * Improve the VectorizedParquetRecordReader to support the struct type for 
> parquet file.
> Note:
> * Orc file won't be supported.
> * Filter operator won't be supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18237) missing results for insert_only table after DP insert

2018-01-28 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342928#comment-16342928
 ] 

Steve Yeom commented on HIVE-18237:
---

The two tests that failed with age 1 in the p-test run are verified to be 
successful when I ran them in my environment.

> missing results for insert_only table after DP insert
> -
>
> Key: HIVE-18237
> URL: https://issues.apache.org/jira/browse/HIVE-18237
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Zoltan Haindrich
>Assignee: Steve Yeom
>Priority: Major
> Attachments: HIVE-18237.01.patch, HIVE-18237.02.patch, 
> HIVE-18237.03.patch
>
>
> {code}
> set hive.stats.column.autogather=false;
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.max.dynamic.partitions.pernode=200;
> set hive.exec.max.dynamic.partitions=200;
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> create table i0 (p int,v int);
> insert into i0 values
> (0,0),
> (2,2),
> (3,3);
> create table p0 (v int) partitioned by (p int) stored as orc 
>   tblproperties ("transactional"="true", 
> "transactional_properties"="insert_only");
> explain insert overwrite table p0 partition (p) select * from i0 where v < 3;
> insert overwrite table p0 partition (p) select * from i0 where v < 3;
> select count(*) from p0 where v!=1;
> {code}
> The table p0 should contain {{2}} rows at this point; but the result is {{0}}.
> * seems to be specific to insert_only tables
> * the existing data appears if an {{insert into}} is executed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18478) Drop of temp table creating recycle files at CM path

2018-01-28 Thread mahesh kumar behera (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-18478:
---
Status: Patch Available  (was: Open)

Added the checks for the temp table and updated with the proper patch file.
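
For illustration, a hedged sketch of the kind of check described here; the {{deleteDir()}} stand-in and the {{needRecycle}} flag below are illustrative, not the actual code in the patch:

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.metastore.api.Table;

// Hedged sketch of the temp-table check, not the attached patch.
class DropTableSketch {
  void dropTableData(Table table, Path tablePath) {
    // Temp tables are not replicated, so their data does not need to be
    // recycled to $CMROOT when the directory is removed.
    boolean needRecycle = !table.isTemporary();
    deleteDir(tablePath, needRecycle);
  }

  // Stand-in for the real delete path; when needRecycle is true the files
  // would first be moved to $CMROOT by the change manager.
  private void deleteDir(Path path, boolean needRecycle) { /* placeholder */ }
}
{code}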

> Drop of temp table creating recycle files at CM path
> 
>
> Key: HIVE-18478
> URL: https://issues.apache.org/jira/browse/HIVE-18478
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive, HiveServer2
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18478.01.patch, HIVE-18478.02.patch
>
>
> Drop TEMP table operation invokes deleteDir which moves the file to $CMROOT 
> which is not needed as temp tables need not be replicated



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18478) Drop of temp table creating recycle files at CM path

2018-01-28 Thread mahesh kumar behera (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-18478:
---
Attachment: HIVE-18478.02.patch

> Drop of temp table creating recycle files at CM path
> 
>
> Key: HIVE-18478
> URL: https://issues.apache.org/jira/browse/HIVE-18478
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive, HiveServer2
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18478.01.patch, HIVE-18478.02.patch
>
>
> Drop TEMP table operation invokes deleteDir which moves the file to $CMROOT 
> which is not needed as temp tables need not be replicated



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18478) Drop of temp table creating recycle files at CM path

2018-01-28 Thread mahesh kumar behera (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-18478:
---
Status: Open  (was: Patch Available)

> Drop of temp table creating recycle files at CM path
> 
>
> Key: HIVE-18478
> URL: https://issues.apache.org/jira/browse/HIVE-18478
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive, HiveServer2
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18478.01.patch
>
>
> Drop TEMP table operation invokes deleteDir which moves the file to $CMROOT 
> which is not needed as temp tables need not be replicated



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18048) Support Struct type with vectorization for Parquet file

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342908#comment-16342908
 ] 

Hive QA commented on HIVE-18048:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8903/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql . itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8903/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support Struct type with vectorization for Parquet file
> ---
>
> Key: HIVE-18048
> URL: https://issues.apache.org/jira/browse/HIVE-18048
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, 
> HIVE-18048.003.patch, HIVE-18048.004.patch
>
>
> Struct type is not supported in MapWork with vectorization, it should be 
> supported to improve the performance.
> The following improvements will be implemented:
> * Add fields of struct type to VectorizedRowBatchCtx.
> * Improve the VectorizedParquetRecordReader to support the struct type for 
> parquet file.
> Note:
> * Orc file won't be supported.
> * Filter operator won't be supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)

2018-01-28 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342906#comment-16342906
 ] 

Matt McCline commented on HIVE-18524:
-

Committed to master.

> Vectorization: Execution failure related to non-standard embedding of 
> IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
> -
>
> Key: HIVE-18524
> URL: https://issues.apache.org/jira/browse/HIVE-18524
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch
>
>
> {noformat}
> insert overwrite table insert_10_1
> select cast(gpa as float),
>age,
>IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL),
>IF(LENGTH(name)>10,cast(name as binary),NULL)
> from studentnull10k
> vectorizationSchemaColumns: [0:name:string, 1:age:int, 2:gpa:double]
> ExprNodeDescs:
> UDFToFloat(gpa) (type: float),
> age (type: int),
> if((age > 40), 2011-01-01 01:01:01.0, null) (type: timestamp),
> if((length(name) > 10), CAST( name AS BINARY), null) (type: binary)
> selectExpressions:
> VectorUDFAdaptor(if((age > 40), 2011-01-01 01:01:01.0, null))
> (children: LongColGreaterLongScalar(col 1:int, val 40) -> 4:boolean) 
> -> 5:timestamp,
> VectorUDFAdaptor(if((length(name) > 10), CAST( name AS BINARY), null))
> (children: LongColGreaterLongScalar(col 4:int, val 10)(children: 
> StringLength(col 0:string) -> 4:int) -> 6:boolean,
> VectorUDFAdaptor(CAST( name AS BINARY)) -> 7:binary) -> 8:binary
> {noformat}
> *// Notice there is no vector expression shown for the last IF stmt.*  It has 
> been magically embedded inside the VectorUDFAdaptor object...
> Execution results in this call stack.
> {noformat}
> Caused by: java.lang.NullPointerException
>   at java.util.Arrays.copyOfRange(Arrays.java:3521)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$9.writeValue(VectorExpressionWriterFactory.java:1101)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$VectorExpressionWriterBytes.writeValue(VectorExpressionWriterFactory.java:343)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFArgDesc.getDeferredJavaObject(VectorUDFArgDesc.java:123)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:211)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:177)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:145)
>   ... 22 more
> {noformat}
> Change is due to:
> HIVE-17139: Conditional expressions optimization: skip the expression 
> evaluation if the condition is not satisfied for vectorization engine. (Jia 
> Ke, reviewed by Ferdinand Xu)
> Embedding a raw vector expression outside of VectorizationContext is quite 
> non-standard and evidently buggy.
> [~Ferd] [~Ke Jia] I am inclined to revert this change.  Comments?  CC: 
> [~ashutoshc] [~hagleitn]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18524:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Vectorization: Execution failure related to non-standard embedding of 
> IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
> -
>
> Key: HIVE-18524
> URL: https://issues.apache.org/jira/browse/HIVE-18524
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch
>
>
> {noformat}
> insert overwrite table insert_10_1
> select cast(gpa as float),
>age,
>IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL),
>IF(LENGTH(name)>10,cast(name as binary),NULL)
> from studentnull10k
> vectorizationSchemaColumns: [0:name:string, 1:age:int, 2:gpa:double]
> ExprNodeDescs:
> UDFToFloat(gpa) (type: float),
> age (type: int),
> if((age > 40), 2011-01-01 01:01:01.0, null) (type: timestamp),
> if((length(name) > 10), CAST( name AS BINARY), null) (type: binary)
> selectExpressions:
> VectorUDFAdaptor(if((age > 40), 2011-01-01 01:01:01.0, null))
> (children: LongColGreaterLongScalar(col 1:int, val 40) -> 4:boolean) 
> -> 5:timestamp,
> VectorUDFAdaptor(if((length(name) > 10), CAST( name AS BINARY), null))
> (children: LongColGreaterLongScalar(col 4:int, val 10)(children: 
> StringLength(col 0:string) -> 4:int) -> 6:boolean,
> VectorUDFAdaptor(CAST( name AS BINARY)) -> 7:binary) -> 8:binary
> {noformat}
> *// Notice there is no vector expression shown for the last IF stmt.*  It has 
> been magically embedded inside the VectorUDFAdaptor object...
> Execution results in this call stack.
> {noformat}
> Caused by: java.lang.NullPointerException
>   at java.util.Arrays.copyOfRange(Arrays.java:3521)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$9.writeValue(VectorExpressionWriterFactory.java:1101)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$VectorExpressionWriterBytes.writeValue(VectorExpressionWriterFactory.java:343)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFArgDesc.getDeferredJavaObject(VectorUDFArgDesc.java:123)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:211)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:177)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:145)
>   ... 22 more
> {noformat}
> Change is due to:
> HIVE-17139: Conditional expressions optimization: skip the expression 
> evaluation if the condition is not satisfied for vectorization engine. (Jia 
> Ke, reviewed by Ferdinand Xu)
> Embedding a raw vector expression outside of VectorizationContext is quite 
> non-standard and evidently buggy.
> [~Ferd] [~Ke Jia] I am inclined to revert this change.  Comments?  CC: 
> [~ashutoshc] [~hagleitn]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342905#comment-16342905
 ] 

Matt McCline commented on HIVE-18561:
-

1 day test failures are not related.

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch, HIVE-18561.04.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342895#comment-16342895
 ] 

Hive QA commented on HIVE-18561:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908078/HIVE-18561.04.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 12632 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded]
 (batchId=207)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testTupleInBagInTupleInBag[2]
 (batchId=193)
org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteDate2 
(batchId=193)
org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteDate3 
(batchId=193)
org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteSmallint 
(batchId=193)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8902/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8902/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8902/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908078 - PreCommit-HIVE-Build

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch, HIVE-18561.04.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16886) HMS log notifications may have duplicated event IDs if multiple HMS are running concurrently

2018-01-28 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342893#comment-16342893
 ] 

anishek commented on HIVE-16886:


[http://www.datanucleus.org/products/accessplatform_4_1/jdo/transactions.html]
{quote}By default DataNucleus does not currently lock the objects fetched with 
pessimistic locking, but you can configure this behaviour for RDBMS datastores 
by setting the persistence property datanucleus.SerializeRead to true. This 
will result in all "SELECT ... FROM ..." statements being changed to be "SELECT 
... FROM ... FOR UPDATE". *This will be applied only where the underlying RDBMS 
supports the "FOR UPDATE" syntax.*
{quote}
For SQL Server this will not work since the syntax is different. The code shows 
we are using DataNucleus 4.1, by the way.
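
For illustration, a minimal sketch (not from the Hive code base) of setting that persistence property when building a JDO PersistenceManagerFactory; the property name comes from the DataNucleus documentation quoted above, the rest is assumed:

{code:java}
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

public class SerializeReadExample {
  public static PersistenceManagerFactory newPmf() {
    Properties props = new Properties();
    props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
        "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
    // With SerializeRead=true, DataNucleus issues "SELECT ... FOR UPDATE",
    // but only on RDBMS datastores that support that syntax.
    props.setProperty("datanucleus.SerializeRead", "true");
    return JDOHelper.getPersistenceManagerFactory(props);
  }
}
{code}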

> HMS log notifications may have duplicated event IDs if multiple HMS are 
> running concurrently
> 
>
> Key: HIVE-16886
> URL: https://issues.apache.org/jira/browse/HIVE-16886
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0, 2.3.2, 2.3.3
>Reporter: Sergio Peña
>Assignee: anishek
>Priority: Major
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-16886.1.patch, HIVE-16886.2.patch, 
> HIVE-16886.3.patch, HIVE-16886.4.patch, HIVE-16886.5.patch, 
> HIVE-16886.6.patch, HIVE-16886.7.patch, HIVE-16886.8.patch, 
> datastore-identity-holes.diff
>
>
> When running multiple Hive Metastore servers and DB notifications are 
> enabled, I could see that notifications can be persisted with a duplicated 
> event ID. 
> This does not happen when running multiple threads in a single HMS node due 
> to the locking acquired on the DbNotificationsLog class, but multiple HMS 
> could cause conflicts.
> The issue is in the ObjectStore#addNotificationEvent() method. The event ID 
> fetched from the datastore is used for the new notification, incremented in 
> the server itself, then persisted or updated back to the datastore. If 2 
> servers read the same ID, then these 2 servers write a new notification with 
> the same ID.
> The event ID is not unique nor a primary key.
> Here's a test case using the TestObjectStore class that confirms this issue:
> {noformat}
> @Test
>   public void testConcurrentAddNotifications() throws ExecutionException, 
> InterruptedException {
> final int NUM_THREADS = 2;
> CountDownLatch countIn = new CountDownLatch(NUM_THREADS);
> CountDownLatch countOut = new CountDownLatch(1);
> HiveConf conf = new HiveConf();
> conf.setVar(HiveConf.ConfVars.METASTORE_EXPRESSION_PROXY_CLASS, 
> MockPartitionExpressionProxy.class.getName());
> ExecutorService executorService = 
> Executors.newFixedThreadPool(NUM_THREADS);
> FutureTask<Void> tasks[] = new FutureTask[NUM_THREADS];
> for (int i = 0; i < NUM_THREADS; ++i) {
>   final int n = i;
>   tasks[i] = new FutureTask<Void>(new Callable<Void>() {
> @Override
> public Void call() throws Exception {
>   ObjectStore store = new ObjectStore();
>   store.setConf(conf);
>   NotificationEvent dbEvent =
>   new NotificationEvent(0, 0, 
> EventMessage.EventType.CREATE_DATABASE.toString(), "CREATE DATABASE DB" + n);
>   System.out.println("ADDING NOTIFICATION");
>   countIn.countDown();
>   countOut.await();
>   store.addNotificationEvent(dbEvent);
>   System.out.println("FINISH NOTIFICATION");
>   return null;
> }
>   });
>   executorService.execute(tasks[i]);
> }
> countIn.await();
> countOut.countDown();
> for (int i = 0; i < NUM_THREADS; ++i) {
>   tasks[i].get();
> }
> NotificationEventResponse eventResponse = 
> objectStore.getNextNotification(new NotificationEventRequest());
> Assert.assertEquals(2, eventResponse.getEventsSize());
> Assert.assertEquals(1, eventResponse.getEvents().get(0).getEventId());
> // This fails because the next notification has an event ID = 1
> Assert.assertEquals(2, eventResponse.getEvents().get(1).getEventId());
>   }
> {noformat}
> The last assertion fails expecting an event ID 1 instead of 2. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342883#comment-16342883
 ] 

Hive QA commented on HIVE-18561:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
43s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 1 new + 352 unchanged - 0 
fixed = 353 total (was 352) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
39s{color} | {color:red} root: The patch generated 1 new + 352 unchanged - 0 
fixed = 353 total (was 352) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8902/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8902/yetus/diff-checkstyle-root.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8902/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql . itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8902/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch, HIVE-18561.04.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18540) remove logic for wide terminal to display in-place updates

2018-01-28 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342881#comment-16342881
 ] 

anishek commented on HIVE-18540:


[~windpiger] changes look good, +1. I am assuming that having non-formatted/un-aligned 
output is better than nothing?

>  remove logic for wide terminal to display in-place updates
> ---
>
> Key: HIVE-18540
> URL: https://issues.apache.org/jira/browse/HIVE-18540
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.3.2
>Reporter: Song Jun
>Assignee: Song Jun
>Priority: Minor
> Attachments: HIVE-18540.patch
>
>
> When Hive Shell/Beeline runs a Tez job and the terminal width is lower than 95, 
> there will be no output (progress bar) on the console, because of the 
> following limit logic in class InPlaceUpdate:
> {code:java}
> public static boolean canRenderInPlace(HiveConf conf) {
>   boolean inPlaceUpdates = HiveConf.getBoolVar(conf, HiveConf.ConfVars.TEZ_EXEC_INPLACE_PROGRESS);
>   // we need at least 80 chars wide terminal to display in-place updates properly
>   return inPlaceUpdates && isUnixTerminal() && TerminalFactory.get().getWidth() >= MIN_TERMINAL_WIDTH;
> }
> {code}
> The offending condition is:
> {code:java}
> && isUnixTerminal() && TerminalFactory.get().getWidth() >= MIN_TERMINAL_WIDTH;
> {code}
> If someone does not know about this logic and the terminal is not wide enough, they 
> will think something is wrong and not know how to fix it.
>  
> I think it is better to remove this troublesome logic.
>  
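For reference, a minimal sketch of what dropping the width guard could look like (assuming the existing HiveConf and isUnixTerminal helpers; this is not the attached patch):
{code:java}
public static boolean canRenderInPlace(HiveConf conf) {
  boolean inPlaceUpdates = HiveConf.getBoolVar(conf, HiveConf.ConfVars.TEZ_EXEC_INPLACE_PROGRESS);
  // no terminal-width requirement: narrow terminals simply get wrapped/less-aligned output
  return inPlaceUpdates && isUnixTerminal();
}
{code}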



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-15269) Dynamic Min-Max/BloomFilter runtime-filtering for Tez

2018-01-28 Thread Deepak Jaiswal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342874#comment-16342874
 ] 

Deepak Jaiswal edited comment on HIVE-15269 at 1/29/18 4:01 AM:


Yes it is trying to remove the dpp or runtime filter branch. The idea is to 
break the cycle by removing the least impacting branch. AM operator or the TS 
operator is on the target table and not on the side where the branch is. So if 
the target is smaller, the impact is likely smaller.

 

For a simple runtime-filtering case, let's say there are two tables A & B. A 
has 1 GB of data and B has 10 GB of data. There is a branch on A which creates 
a filter for B, and there is a branch on B which creates a filter for A. This 
results in a cycle, which needs to be broken by removing the branch that is 
least effective. In this case, we want to remove the branch on B which creates 
a filter on A, as A is 10 times smaller. That is what this code does: it 
compares the sizes of the target tables and picks the table with the smallest 
size as the candidate.

 

I hope it helps.

 

Please take a look at test dynamic_semijoin_reduction.q and its result file for 
the explain plans.

 

[https://github.com/apache/hive/blob/1dd863ab0bc47115d3c89ed8058967c1496819c6/ql/src/test/queries/clientpositive/dynamic_semijoin_reduction.q]

 

[https://github.com/apache/hive/blob/1dd863ab0bc47115d3c89ed8058967c1496819c6/ql/src/test/results/clientpositive/llap/dynamic_semijoin_reduction.q.out]

 


was (Author: djaiswal):
Yes it is trying to remove the dpp or runtime filter branch. The idea is to 
break the cycle by removing the least impacting branch. AM operator or the TS 
operator is on the target table and not on the side where the branch is. So if 
the target is smaller, the impact is likely smaller.

 

For a simple runtime-filtering case, let's say there are two tables A & B. A 
has 1 GB of data and B has 10 GB of data. There is a branch on A which creates 
a filter for B, and there is a branch on B which creates a filter for A. This 
results in a cycle, which needs to be broken by removing the branch that is 
least effective. In this case, we want to remove the branch on B which creates 
a filter on A, as A is 10 times smaller. That is what this code does: it 
compares the sizes of the target tables and picks the table with the smallest 
size as the candidate.

 

I hope it helps.

> Dynamic Min-Max/BloomFilter runtime-filtering for Tez
> -
>
> Key: HIVE-15269
> URL: https://issues.apache.org/jira/browse/HIVE-15269
> Project: Hive
>  Issue Type: New Feature
>  Components: Tez
>Reporter: Jason Dere
>Assignee: Deepak Jaiswal
>Priority: Major
>  Labels: TODOC2.2.0
> Fix For: 2.2.0
>
> Attachments: HIVE-15269.1.patch, HIVE-15269.10.patch, 
> HIVE-15269.11.patch, HIVE-15269.12.patch, HIVE-15269.13.patch, 
> HIVE-15269.14.patch, HIVE-15269.15.patch, HIVE-15269.16.patch, 
> HIVE-15269.17.patch, HIVE-15269.18.patch, HIVE-15269.19.patch, 
> HIVE-15269.2.patch, HIVE-15269.3.patch, HIVE-15269.4.patch, 
> HIVE-15269.5.patch, HIVE-15269.6.patch, HIVE-15269.7.patch, 
> HIVE-15269.8.patch, HIVE-15269.9.patch
>
>
> If a dimension table and fact table are joined:
> {noformat}
> select *
> from store join store_sales on (store.id = store_sales.store_id)
> where store.s_store_name = 'My Store'
> {noformat}
> One optimization that can be done is to get the min/max store id values that 
> come out of the scan/filter of the store table, and send this min/max value 
> (via Tez edge) to the task which is scanning the store_sales table.
> We can add a BETWEEN(min, max) predicate to the store_sales TableScan, where 
> this predicate can be pushed down to the storage handler (for example for ORC 
> formats). Pushing a min/max predicate to the ORC reader would allow us to 
> avoid having to read entire row groups during the table scan.
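Conceptually (a hedged illustration of the rewrite, not actual EXPLAIN output), the runtime filter amounts to scanning the fact table as:
{noformat}
select *
from store_sales
where store_sales.store_id BETWEEN <min(store.id)> AND <max(store.id)>
  and in_bloom_filter(store_sales.store_id, <bloom filter built from store.id>)
{noformat}
where the min/max values and the bloom filter are produced by the store scan and shipped over a Tez edge.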



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15269) Dynamic Min-Max/BloomFilter runtime-filtering for Tez

2018-01-28 Thread Deepak Jaiswal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342874#comment-16342874
 ] 

Deepak Jaiswal commented on HIVE-15269:
---

Yes it is trying to remove the dpp or runtime filter branch. The idea is to 
break the cycle by removing the least impacting branch. AM operator or the TS 
operator is on the target table and not on the side where the branch is. So if 
the target is smaller, the impact is likely smaller.

 

For a simple runtime-filtering case, let's say there are two tables A & B. A 
has 1 GB of data and B has 10 GB of data. There is a branch on A which creates 
a filter for B, and there is a branch on B which creates a filter for A. This 
results in a cycle, which needs to be broken by removing the branch that is 
least effective. In this case, we want to remove the branch on B which creates 
a filter on A, as A is 10 times smaller. That is what this code does: it 
compares the sizes of the target tables and picks the table with the smallest 
size as the candidate.

 

I hope it helps.
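
A rough, self-contained sketch of the selection rule described above (hypothetical class and method names, not the actual TezCompiler code):
{code:java}
import java.util.Comparator;
import java.util.List;

public class CycleBreakerSketch {
  // A DPP/runtime-filter branch and the size of the table it filters (its "target").
  static class Branch {
    final String name;
    final long targetTableSizeBytes;
    Branch(String name, long targetTableSizeBytes) {
      this.name = name;
      this.targetTableSizeBytes = targetTableSizeBytes;
    }
  }

  // Break the cycle by removing the branch whose target table is smallest,
  // since filtering a small scan saves the least work.
  static Branch pickBranchToRemove(List<Branch> cycleBranches) {
    return cycleBranches.stream()
        .min(Comparator.comparingLong(b -> b.targetTableSizeBytes))
        .orElseThrow(IllegalStateException::new);
  }

  public static void main(String[] args) {
    Branch aFiltersB = new Branch("branch on A filtering B", 10L << 30); // target B: 10 GB
    Branch bFiltersA = new Branch("branch on B filtering A", 1L << 30);  // target A: 1 GB
    // Prints "branch on B filtering A" -- the branch targeting the smaller table is removed.
    System.out.println(pickBranchToRemove(List.of(aFiltersB, bFiltersA)).name);
  }
}
{code}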

> Dynamic Min-Max/BloomFilter runtime-filtering for Tez
> -
>
> Key: HIVE-15269
> URL: https://issues.apache.org/jira/browse/HIVE-15269
> Project: Hive
>  Issue Type: New Feature
>  Components: Tez
>Reporter: Jason Dere
>Assignee: Deepak Jaiswal
>Priority: Major
>  Labels: TODOC2.2.0
> Fix For: 2.2.0
>
> Attachments: HIVE-15269.1.patch, HIVE-15269.10.patch, 
> HIVE-15269.11.patch, HIVE-15269.12.patch, HIVE-15269.13.patch, 
> HIVE-15269.14.patch, HIVE-15269.15.patch, HIVE-15269.16.patch, 
> HIVE-15269.17.patch, HIVE-15269.18.patch, HIVE-15269.19.patch, 
> HIVE-15269.2.patch, HIVE-15269.3.patch, HIVE-15269.4.patch, 
> HIVE-15269.5.patch, HIVE-15269.6.patch, HIVE-15269.7.patch, 
> HIVE-15269.8.patch, HIVE-15269.9.patch
>
>
> If a dimension table and fact table are joined:
> {noformat}
> select *
> from store join store_sales on (store.id = store_sales.store_id)
> where store.s_store_name = 'My Store'
> {noformat}
> One optimization that can be done is to get the min/max store id values that 
> come out of the scan/filter of the store table, and send this min/max value 
> (via Tez edge) to the task which is scanning the store_sales table.
> We can add a BETWEEN(min, max) predicate to the store_sales TableScan, where 
> this predicate can be pushed down to the storage handler (for example for ORC 
> formats). Pushing a min/max predicate to the ORC reader would allow us to 
> avoid having to read entire row groups during the table scan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-8472) Add ALTER DATABASE SET LOCATION

2018-01-28 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342873#comment-16342873
 ] 

Mithun Radhakrishnan commented on HIVE-8472:


bq. By the way, Fix Version/s should include 2.2.1.
Thanks for pointing that out, [~leftylev]. Corrected.

> Add ALTER DATABASE SET LOCATION
> ---
>
> Key: HIVE-8472
> URL: https://issues.apache.org/jira/browse/HIVE-8472
> Project: Hive
>  Issue Type: Improvement
>  Components: Database/Schema
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Jeremy Beard
>Assignee: Mithun Radhakrishnan
>Priority: Major
> Fix For: 3.0.0, 2.4.0, 2.2.1
>
> Attachments: HIVE-8472.1-branch-2.patch, HIVE-8472.1.patch, 
> HIVE-8472.2-branch-2.patch, HIVE-8472.3.patch
>
>
> Similarly to ALTER TABLE tablename SET LOCATION, it would be helpful if there 
> was an equivalent for databases.
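A hedged example of the statement this would add (the database name and path are placeholders, not from the patches):
{noformat}
ALTER DATABASE sales_db SET LOCATION 'hdfs://namenode:8020/warehouse/sales_db.db';
{noformat}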



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-8472) Add ALTER DATABASE SET LOCATION

2018-01-28 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-8472:
---
Fix Version/s: 2.2.1

> Add ALTER DATABASE SET LOCATION
> ---
>
> Key: HIVE-8472
> URL: https://issues.apache.org/jira/browse/HIVE-8472
> Project: Hive
>  Issue Type: Improvement
>  Components: Database/Schema
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Jeremy Beard
>Assignee: Mithun Radhakrishnan
>Priority: Major
> Fix For: 3.0.0, 2.4.0, 2.2.1
>
> Attachments: HIVE-8472.1-branch-2.patch, HIVE-8472.1.patch, 
> HIVE-8472.2-branch-2.patch, HIVE-8472.3.patch
>
>
> Similarly to ALTER TABLE tablename SET LOCATION, it would be helpful if there 
> was an equivalent for databases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342870#comment-16342870
 ] 

Matt McCline commented on HIVE-18561:
-

Test failure vector_ptf_1 is related.

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch, HIVE-18561.04.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).
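A hedged sketch of such a check (hypothetical helper, not the attached patches), walking the PTF operator's parent chain and requiring a ReduceSink with at most one Select in between:
{code:java}
// Assumed classes: org.apache.hadoop.hive.ql.exec.{Operator, SelectOperator, ReduceSinkOperator}
private static boolean ptfFedByReduceShuffle(Operator<?> ptfOp) {
  Operator<?> parent = ptfOp.getParentOperators().isEmpty()
      ? null : ptfOp.getParentOperators().get(0);
  if (parent instanceof SelectOperator) {
    parent = parent.getParentOperators().isEmpty()
        ? null : parent.getParentOperators().get(0);
  }
  return parent instanceof ReduceSinkOperator;
}
{code}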



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342843#comment-16342843
 ] 

Hive QA commented on HIVE-18553:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908077/HIVE-18553.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 12632 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestDropPartitions.testDropPartition[Embedded]
 (batchId=207)
org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded]
 (batchId=207)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8901/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8901/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8901/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 23 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908077 - PreCommit-HIVE-Build

> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18553.2.patch, HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> parquet table on which new columns are added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set 
> hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts 
> timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> Following exception is seen in the logs
> {code}
> Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the 
> 

[jira] [Updated] (HIVE-18048) Support Struct type with vectorization for Parquet file

2018-01-28 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HIVE-18048:

Attachment: HIVE-18048.004.patch

> Support Struct type with vectorization for Parquet file
> ---
>
> Key: HIVE-18048
> URL: https://issues.apache.org/jira/browse/HIVE-18048
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, 
> HIVE-18048.003.patch, HIVE-18048.004.patch
>
>
> Struct type is not supported in MapWork with vectorization; it should be 
> supported to improve performance.
> The following improvements will be implemented:
> * Add fields of struct type to VectorizedRowBatchCtx.
> * Improve the VectorizedParquetRecordReader to support the struct type for 
> parquet file.
> Note:
> * Orc file won't be supported.
> * Filter operator won't be supported.
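For illustration, a hedged sketch (using the standard storage-api vector classes, not anything from the attached patches) of how a struct column such as struct<id:int, name:string> is laid out in a VectorizedRowBatch:
{code:java}
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.StructColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

public class StructBatchSketch {
  public static void main(String[] args) {
    // Child vectors hold the struct's fields; the StructColumnVector wraps them.
    LongColumnVector id = new LongColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
    BytesColumnVector name = new BytesColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
    StructColumnVector structCol =
        new StructColumnVector(VectorizedRowBatch.DEFAULT_SIZE, id, name);

    // One batch column for the whole struct; the Parquet reader fills id/name per row group.
    VectorizedRowBatch batch = new VectorizedRowBatch(1);
    batch.cols[0] = structCol;
  }
}
{code}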



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15269) Dynamic Min-Max/BloomFilter runtime-filtering for Tez

2018-01-28 Thread Ke Jia (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342836#comment-16342836
 ] 

Ke Jia commented on HIVE-15269:
---

[~djaiswal], thanks for your reply. I've got it.

Now I wonder whether

[https://github.com/apache/hive/blob/1dd863ab0bc47115d3c89ed8058967c1496819c6/ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java#L260]
 and 

[https://github.com/apache/hive/blob/1dd863ab0bc47115d3c89ed8058967c1496819c6/ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java#L281]
 intend to remove the dpp and runtime-filter branch with the bigger table size. If 
so, should the "<" be ">"? Thanks for your help!

> Dynamic Min-Max/BloomFilter runtime-filtering for Tez
> -
>
> Key: HIVE-15269
> URL: https://issues.apache.org/jira/browse/HIVE-15269
> Project: Hive
>  Issue Type: New Feature
>  Components: Tez
>Reporter: Jason Dere
>Assignee: Deepak Jaiswal
>Priority: Major
>  Labels: TODOC2.2.0
> Fix For: 2.2.0
>
> Attachments: HIVE-15269.1.patch, HIVE-15269.10.patch, 
> HIVE-15269.11.patch, HIVE-15269.12.patch, HIVE-15269.13.patch, 
> HIVE-15269.14.patch, HIVE-15269.15.patch, HIVE-15269.16.patch, 
> HIVE-15269.17.patch, HIVE-15269.18.patch, HIVE-15269.19.patch, 
> HIVE-15269.2.patch, HIVE-15269.3.patch, HIVE-15269.4.patch, 
> HIVE-15269.5.patch, HIVE-15269.6.patch, HIVE-15269.7.patch, 
> HIVE-15269.8.patch, HIVE-15269.9.patch
>
>
> If a dimension table and fact table are joined:
> {noformat}
> select *
> from store join store_sales on (store.id = store_sales.store_id)
> where store.s_store_name = 'My Store'
> {noformat}
> One optimization that can be done is to get the min/max store id values that 
> come out of the scan/filter of the store table, and send this min/max value 
> (via Tez edge) to the task which is scanning the store_sales table.
> We can add a BETWEEN(min, max) predicate to the store_sales TableScan, where 
> this predicate can be pushed down to the storage handler (for example for ORC 
> formats). Pushing a min/max predicate to the ORC reader would allow us to 
> avoid having to read entire row groups during the table scan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-18550) Keep the hbase table name property as hbase.table.name

2018-01-28 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342833#comment-16342833
 ] 

Aihua Xu edited comment on HIVE-18550 at 1/29/18 2:31 AM:
--

[~ychena] Can you help review the change? The failing tests are not related.


was (Author: aihuaxu):
[~ychena] Can you help review the change?

> Keep the hbase table name property as hbase.table.name
> --
>
> Key: HIVE-18550
> URL: https://issues.apache.org/jira/browse/HIVE-18550
> Project: Hive
>  Issue Type: Sub-task
>  Components: HBase Handler
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-18550.1.patch
>
>
> With HBase 2.0 support, I made some changes to the hbase table name property 
> in HIVE-18366 and HIVE-18202. Checking the logic, it seems the change 
> is not necessary, since hbase.table.name is internal to the Hive HBase handler. We 
> just need to map hbase.table.name to 
> hbase.mapreduce.hfileoutputformat.table.name for HiveHFileOutputFormat. 
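A hedged illustration of the mapping being described (the property keys come from the description above; the surrounding code and variable names are hypothetical, not the attached patch):
{code:java}
// In the storage handler's job-configuration hook (sketch):
String hbaseTableName = tableProperties.getProperty("hbase.table.name");
if (hbaseTableName != null) {
  // HiveHFileOutputFormat reads the target table name from this key
  jobConf.set("hbase.mapreduce.hfileoutputformat.table.name", hbaseTableName);
}
{code}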



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18550) Keep the hbase table name property as hbase.table.name

2018-01-28 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342833#comment-16342833
 ] 

Aihua Xu commented on HIVE-18550:
-

[~ychena] Can you help review the change?

> Keep the hbase table name property as hbase.table.name
> --
>
> Key: HIVE-18550
> URL: https://issues.apache.org/jira/browse/HIVE-18550
> Project: Hive
>  Issue Type: Sub-task
>  Components: HBase Handler
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-18550.1.patch
>
>
> With HBase 2.0 support, I made some changes to the hbase table name property 
> in HIVE-18366 and HIVE-18202. Checking the logic, it seems the change 
> is not necessary, since hbase.table.name is internal to the Hive HBase handler. We 
> just need to map hbase.table.name to 
> hbase.mapreduce.hfileoutputformat.table.name for HiveHFileOutputFormat. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342828#comment-16342828
 ] 

Hive QA commented on HIVE-18553:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 10 new + 40 unchanged - 0 
fixed = 50 total (was 40) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8901/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8901/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8901/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18553.2.patch, HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> parquet table on which new columns are added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set 
> hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts 
> timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> Following exception is seen in the logs
> {code}
> Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the 
> store: [[i1] INT32, [i2] INT32, 

[jira] [Updated] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18561:

Status: Patch Available  (was: In Progress)

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch, HIVE-18561.04.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18561:

Attachment: HIVE-18561.04.patch

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch, HIVE-18561.04.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18561:

Status: In Progress  (was: Patch Available)

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342820#comment-16342820
 ] 

Hive QA commented on HIVE-18561:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908073/HIVE-18561.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 27 failed/errored test(s), 12632 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_ptf_1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestGetListIndexes.testGetIndexEmptyTableName[Embedded]
 (batchId=207)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testQueueing 
(batchId=287)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel 
(batchId=231)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8900/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8900/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8900/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 27 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908073 - PreCommit-HIVE-Build

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-28 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu reassigned HIVE-18553:
---

Assignee: (was: Ferdinand Xu)

> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18553.2.patch, HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> parquet table on which new columns are added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set 
> hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts 
> timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> Following exception is seen in the logs
> {code}
> Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the 
> store: [[i1] INT32, [i2] INT32, [t1] INT32, [t2] INT32] 3
> at 
> org.apache.parquet.hadoop.ColumnChunkPageReadStore.getPageReader(ColumnChunkPageReadStore.java:160)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:479)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:432)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:393)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:345)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:88)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:167)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:52)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:229)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:142)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
>  ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
>  

[jira] [Comment Edited] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-28 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342813#comment-16342813
 ] 

Ferdinand Xu edited comment on HIVE-18553 at 1/29/18 1:35 AM:
--

Thanks [~colinma] for the review. The V2 patch is attached and adds a test case 
covering that scenario. To my understanding, it should work for column-addition 
cases, since the reader builder is invoked each time a row group is checked. But 
the current patch certainly does not support other cases such as type 
conversion. That may require further investigation.


was (Author: ferd):
Thanks [~colinma] for the review. It should work for column-addition cases, 
since the reader builder is invoked each time a row group is checked. But the 
current patch certainly does not support other cases such as type conversion. 
That may require further investigation.

> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Assignee: Ferdinand Xu
>Priority: Major
> Attachments: HIVE-18553.2.patch, HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> parquet table on which new columns are added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set 
> hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts 
> timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> Following exception is seen in the logs
> {code}
> Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the 
> store: [[i1] INT32, [i2] INT32, [t1] INT32, [t2] INT32] 3
> at 
> org.apache.parquet.hadoop.ColumnChunkPageReadStore.getPageReader(ColumnChunkPageReadStore.java:160)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:479)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:432)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:393)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:345)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:88)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:167)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:52)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:229)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:142)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
>  ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 
> 

[jira] [Commented] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-28 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342813#comment-16342813
 ] 

Ferdinand Xu commented on HIVE-18553:
-

Thanks [~colinma] for the review. It should work for column-addition cases, 
since the reader builder is invoked each time a row group is checked. But the 
current patch certainly does not support other cases such as type conversion. 
That may require further investigation.
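
A rough sketch of the per-row-group behaviour being described (hypothetical method and field names, not the actual VectorizedParquetRecordReader code): when a requested column is missing from a row group's file schema, the reader can return a null-filled vector instead of asking Parquet for pages.
{code:java}
// Sketch: decide, per row group, whether a requested column exists in the file.
private void readColumnOrFillNulls(String columnName, ColumnVector out, int rows) {
  if (fileSchemaContains(columnName)) {
    buildColumnReader(columnName).readBatch(rows, out);  // normal read path
  } else {
    // Column added after this file was written: emit a repeating null vector.
    out.noNulls = false;
    out.isRepeating = true;
    out.isNull[0] = true;
  }
}
{code}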

> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Assignee: Ferdinand Xu
>Priority: Major
> Attachments: HIVE-18553.2.patch, HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> parquet table on which new columns are added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set 
> hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts 
> timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> Following exception is seen in the logs
> {code}
> Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the 
> store: [[i1] INT32, [i2] INT32, [t1] INT32, [t2] INT32] 3
> at 
> org.apache.parquet.hadoop.ColumnChunkPageReadStore.getPageReader(ColumnChunkPageReadStore.java:160)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:479)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:432)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:393)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:345)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:88)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:167)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:52)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:229)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:142)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
>  ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 

[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342812#comment-16342812
 ] 

Hive QA commented on HIVE-18561:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 1 new + 352 unchanged - 0 
fixed = 353 total (was 352) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
48s{color} | {color:red} root: The patch generated 1 new + 352 unchanged - 0 
fixed = 353 total (was 352) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8900/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8900/yetus/diff-checkstyle-root.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8900/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql . itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8900/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-28 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-18553:

Attachment: HIVE-18553.2.patch

> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Assignee: Ferdinand Xu
>Priority: Major
> Attachments: HIVE-18553.2.patch, HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> parquet table on which new columns are added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set 
> hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts 
> timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> Following exception is seen in the logs
> {code}
> Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the 
> store: [[i1] INT32, [i2] INT32, [t1] INT32, [t2] INT32] 3
> at 
> org.apache.parquet.hadoop.ColumnChunkPageReadStore.getPageReader(ColumnChunkPageReadStore.java:160)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:479)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:432)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:393)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:345)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:88)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:167)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:52)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:229)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:142)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
>  ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
>  

[jira] [Assigned] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-28 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu reassigned HIVE-18553:
---

Assignee: Ferdinand Xu

> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Assignee: Ferdinand Xu
>Priority: Major
> Attachments: HIVE-18553.2.patch, HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> Parquet table to which new columns have been added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set 
> hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts 
> timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> The following exception is seen in the logs
> {code}
> Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the 
> store: [[i1] INT32, [i2] INT32, [t1] INT32, [t2] INT32] 3
> at 
> org.apache.parquet.hadoop.ColumnChunkPageReadStore.getPageReader(ColumnChunkPageReadStore.java:160)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:479)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:432)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:393)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:345)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:88)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:167)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:52)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:229)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:142)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
>  ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
>  
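
(Editorial note: the quoted repro above assumes the table test_p already exists and holds data. A minimal setup consistent with the schema shown in the quoted desc output might look like the sketch below; the exact DDL is an assumption, not taken from the report.)

{code}
-- hypothetical setup, matching the schema in the quoted `desc test_p` output
create table test_p (t1 tinyint, t2 tinyint, i1 int, i2 int) stored as parquet;
insert into test_p values (1, 2, 3, 4);
-- the quoted session then adds the ts column and selects, triggering the failure
{code}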

[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342811#comment-16342811
 ] 

Matt McCline commented on HIVE-18561:
-

Patch #2 failures are related.

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).
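
For illustration only, a query of roughly the following shape feeds a windowing PTF from a GroupBy rather than from a plain reduce-shuffle; the table and column names are hypothetical and not taken from the patch:

{code}
-- hypothetical table: sales(region string, amount decimal(10,2))
-- the rank() PTF consumes the GROUP BY output here, so under the proposed
-- Vectorizer check this plan would fall back to the non-vectorized PTF path
select region,
       sum(amount) as total,
       rank() over (order by sum(amount) desc) as rnk
from sales
group by region;
{code}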



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-28 Thread Colin Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342800#comment-16342800
 ] 

Colin Ma commented on HIVE-18553:
-

[~Ferd], I think the patch doesn't cover the case where a column is added and new 
values are then inserted into the table.
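
For illustration, that uncovered case might look roughly like the HiveQL below; the table name and values are hypothetical and only sketch the scenario, they are not taken from the attached patch:

{code}
-- assumes an existing Parquet table, e.g. test_p(t1 tinyint, t2 tinyint, i1 int, i2 int)
set hive.vectorized.execution.enabled=true;
alter table test_p add columns (ts timestamp);
-- insert a row that actually populates the new column
insert into test_p values (1, 2, 3, 4, '2018-01-28 00:00:00');
-- the vectorized reader must now handle row groups written both before and after the alter
select * from test_p;
{code}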

> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> Parquet table to which new columns have been added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set 
> hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts 
> timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> The following exception is seen in the logs
> {code}
> Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the 
> store: [[i1] INT32, [i2] INT32, [t1] INT32, [t2] INT32] 3
> at 
> org.apache.parquet.hadoop.ColumnChunkPageReadStore.getPageReader(ColumnChunkPageReadStore.java:160)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:479)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:432)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:393)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:345)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:88)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:167)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:52)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:229)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:142)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
>  ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at 
> 

[jira] [Updated] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18561:

Attachment: HIVE-18561.03.patch

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18561:

Status: Patch Available  (was: In Progress)

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, 
> HIVE-18561.03.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18561:

Status: In Progress  (was: Patch Available)

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342782#comment-16342782
 ] 

Hive QA commented on HIVE-18561:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908069/HIVE-18561.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 12632 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_ptf]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=180)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorized_ptf] 
(batchId=135)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8899/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8899/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8899/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 24 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908069 - PreCommit-HIVE-Build

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342771#comment-16342771
 ] 

Hive QA commented on HIVE-18561:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
36s{color} | {color:red} ql: The patch generated 1 new + 352 unchanged - 0 
fixed = 353 total (was 352) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
36s{color} | {color:red} root: The patch generated 1 new + 352 unchanged - 0 
fixed = 353 total (was 352) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8899/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8899/yetus/diff-checkstyle-root.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8899/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql . itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8899/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18562) Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken

2018-01-28 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342762#comment-16342762
 ] 

Matt McCline commented on HIVE-18562:
-

None of the 1-day-old test failures are related.

> Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken
> 
>
> Key: HIVE-18562
> URL: https://issues.apache.org/jira/browse/HIVE-18562
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18562.01.patch
>
>
> Altering a CHAR/VARCHAR column's maxLength to a shorter value does not 
> truncate values when vectorized. 
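
A minimal sketch of that scenario, assuming a hypothetical table and ORC storage (not taken from the attached patch):

{code}
-- hypothetical repro sketch
create table char_shrink (c varchar(10)) stored as orc;
insert into char_shrink values ('abcdefghij');
-- shrink the declared maximum length
alter table char_shrink change column c c varchar(5);
set hive.vectorized.execution.enabled=true;
-- expected: 'abcde'; per the report, the vectorized read path returns the untruncated value
select c from char_shrink;
{code}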



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18562) Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342754#comment-16342754
 ] 

Hive QA commented on HIVE-18562:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908068/HIVE-18562.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 12632 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] 
(batchId=248)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.TestMarkPartition.testMarkingPartitionSet 
(batchId=213)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8898/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8898/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8898/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 23 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908068 - PreCommit-HIVE-Build

> Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken
> 
>
> Key: HIVE-18562
> URL: https://issues.apache.org/jira/browse/HIVE-18562
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18562.01.patch
>
>
> Altering a CHAR/VARCHAR column's maxLength to a shorter value does not 
> truncate values when vectorized. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18562) Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342738#comment-16342738
 ] 

Hive QA commented on HIVE-18562:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8898/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8898/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8898/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken
> 
>
> Key: HIVE-18562
> URL: https://issues.apache.org/jira/browse/HIVE-18562
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18562.01.patch
>
>
> Altering a CHAR/VARCHAR column's maxLength to a shorter value does not 
> truncate values when vectorized. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18561:

Status: Patch Available  (was: In Progress)

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18561:

Status: In Progress  (was: Patch Available)

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18561:

Attachment: HIVE-18561.02.patch

> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342736#comment-16342736
 ] 

Hive QA commented on HIVE-18553:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908040/HIVE-18553.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 12602 tests 
executed
*Failed tests:*
{noformat}
TestMiniLlapLocalCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=160)

[vector_reduce_groupby_duplicate_cols.q,explainuser_1.q,multi_insert.q,tez_dml.q,cross_prod_4.q,vector_bround.q,orc_ppd_schema_evol_1b.q,vector_join30.q,vector_groupby_grouping_sets2.q,cte_3.q,vector_reduce_groupby_decimal.q,vector_ptf_part_simple.q,vector_decimal_cast.q,groupby_grouping_id2.q,tez_smb_empty.q,schema_evol_text_vecrow_part_all_primitive_llap_io.q,orc_merge6.q,cte_mat_1.q,vector_char_mapjoin1.q,cte_5.q,vector_decimal_2.q,columnStatsUpdateForStatsOptimizer_1.q,vector_outer_join3.q,vector_string_concat.q,vector_windowing_windowspec.q,sharedworkext.q,vectorized_context.q,auto_sortmerge_join_12.q,tez_union_multiinsert.q,mapjoin_hint.q]
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[interval_comparison] 
(batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=207)
org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded]
 (batchId=207)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.hcatalog.common.TestHiveClientCache.testCloseAllClients 
(batchId=200)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8897/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8897/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8897/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908040 - PreCommit-HIVE-Build

> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> Parquet table to which new columns have been added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> 

[jira] [Updated] (HIVE-18562) Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18562:

Status: Patch Available  (was: Open)

> Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken
> 
>
> Key: HIVE-18562
> URL: https://issues.apache.org/jira/browse/HIVE-18562
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18562.01.patch
>
>
> Altering a CHAR/VARCHAR column's maxLength to a shorter value does not 
> truncate values when vectorized. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18562) Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18562:

Attachment: HIVE-18562.01.patch

> Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken
> 
>
> Key: HIVE-18562
> URL: https://issues.apache.org/jira/browse/HIVE-18562
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18562.01.patch
>
>
> Altering a CHAR/VARCHAR column's maxLength to a shorter value does not 
> truncate values when vectorized. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18562) Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken

2018-01-28 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-18562:
---


> Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken
> 
>
> Key: HIVE-18562
> URL: https://issues.apache.org/jira/browse/HIVE-18562
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
>
> Altering a CHAR/VARCHAR column's maxLength to a shorter value does not 
> truncate values when vectorized. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342712#comment-16342712
 ] 

Hive QA commented on HIVE-18553:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
35s{color} | {color:red} ql: The patch generated 10 new + 40 unchanged - 0 
fixed = 50 total (was 40) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8897/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8897/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8897/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> VectorizedParquetReader fails after adding a new column to table
> 
>
> Key: HIVE-18553
> URL: https://issues.apache.org/jira/browse/HIVE-18553
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.4.0, 2.3.2
>Reporter: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18553.patch
>
>
> VectorizedParquetReader throws an exception when trying to read from a 
> Parquet table to which new columns have been added. Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set 
> hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts 
> timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> The following exception is seen in the logs
> {code}
> Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the 
> store: [[i1] INT32, [i2] INT32, [t1] INT32, [t2] 

[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342693#comment-16342693
 ] 

Hive QA commented on HIVE-18561:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908038/HIVE-18561.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 43 failed/errored test(s), 12624 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
 (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_window]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_ptf_part_simple]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_expressions]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_gby2]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_gby]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_multipartitioning]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_navfn]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_order_null]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_range_multiorder]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_rank]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_streaming]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_windowspec4]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_windowing_windowspec]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_ptf]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=180)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorized_ptf] 
(batchId=135)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] 
(batchId=248)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded]
 (batchId=207)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=207)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchEmptyCommitUnpartitioned
 (batchId=202)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8896/testReport
Console output: 

[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342682#comment-16342682
 ] 

Hive QA commented on HIVE-18561:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8896/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8896/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql . itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8896/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: Current vector PTF doesn't work under GroupBy and is designed 
> for reduce-shuffle input
> -
>
> Key: HIVE-18561
> URL: https://issues.apache.org/jira/browse/HIVE-18561
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18561.01.patch
>
>
> Need to add validation check in Vectorizer that doesn't vectorize unless PTF 
> is under reduce-shuffle (with optional SELECT in-between).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18237) missing results for insert_only table after DP insert

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342668#comment-16342668
 ] 

Hive QA commented on HIVE-18237:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908037/HIVE-18237.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 12631 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8895/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8895/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8895/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908037 - PreCommit-HIVE-Build

> missing results for insert_only table after DP insert
> -
>
> Key: HIVE-18237
> URL: https://issues.apache.org/jira/browse/HIVE-18237
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Zoltan Haindrich
>Assignee: Steve Yeom
>Priority: Major
> Attachments: HIVE-18237.01.patch, HIVE-18237.02.patch, 
> HIVE-18237.03.patch
>
>
> {code}
> set hive.stats.column.autogather=false;
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.max.dynamic.partitions.pernode=200;
> set hive.exec.max.dynamic.partitions=200;
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> create table i0 (p int,v int);
> insert into i0 values
> (0,0),
> (2,2),
> (3,3);
> create table p0 (v int) partitioned by (p int) stored as orc 
>   tblproperties ("transactional"="true", 
> "transactional_properties"="insert_only");
> explain insert overwrite table p0 partition (p) select * from i0 where v < 3;
> insert overwrite table p0 partition (p) select * from i0 where v < 3;
> select count(*) from p0 where v!=1;
> {code}
> The table p0 should contain {{2}} rows at this point; but the result is {{0}}.
> * seems to be specific to insert_only tables
> * the existing data appears if an {{insert into}} is executed.
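
To illustrate that last point, a minimal continuation of the repro above (same {{i0}}/{{p0}} tables; the behaviour is as described in this report, not re-verified here) would be:

{code}
-- immediately after the INSERT OVERWRITE above, the count comes back as 0
select count(*) from p0 where v != 1;

-- per the report, running a plain INSERT INTO afterwards makes the rows written
-- by the earlier INSERT OVERWRITE visible again (plus the newly inserted ones)
insert into table p0 partition (p) select * from i0 where v < 3;
select count(*) from p0 where v != 1;
{code}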



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18237) missing results for insert_only table after DP insert

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342654#comment-16342654
 ] 

Hive QA commented on HIVE-18237:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
47s{color} | {color:red} ql: The patch generated 3 new + 1043 unchanged - 0 
fixed = 1046 total (was 1043) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1dd863a |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8895/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8895/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8895/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> missing results for insert_only table after DP insert
> -
>
> Key: HIVE-18237
> URL: https://issues.apache.org/jira/browse/HIVE-18237
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Zoltan Haindrich
>Assignee: Steve Yeom
>Priority: Major
> Attachments: HIVE-18237.01.patch, HIVE-18237.02.patch, 
> HIVE-18237.03.patch
>
>
> {code}
> set hive.stats.column.autogather=false;
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.max.dynamic.partitions.pernode=200;
> set hive.exec.max.dynamic.partitions=200;
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> create table i0 (p int,v int);
> insert into i0 values
> (0,0),
> (2,2),
> (3,3);
> create table p0 (v int) partitioned by (p int) stored as orc 
>   tblproperties ("transactional"="true", 
> "transactional_properties"="insert_only");
> explain insert overwrite table p0 partition (p) select * from i0 where v < 3;
> insert overwrite table p0 partition (p) select * from i0 where v < 3;
> select count(*) from p0 where v!=1;
> {code}
> The table p0 should contain {{2}} rows at this point; but the result is {{0}}.
> * seems to be specific to insert_only tables
> * the existing data appears if an {{insert into}} is executed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18051) qfiles: dataset support

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342651#comment-16342651
 ] 

Hive QA commented on HIVE-18051:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908020/HIVE-18051.07.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 147 failed/errored test(s), 12636 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_role_grant2]
 (batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_5] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_5a] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_schema_evolution_native]
 (batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cli_print_escape_crlf] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets1] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets2] 
(batchId=25)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets3] 
(batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_schema_evol_1a] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_schema_evol_1b] 
(batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_schema_evol_2a] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_schema_evol_2b] 
(batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[print_header] (batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[testdataset] (batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[testdataset_2] 
(batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_between_columns] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] 
(batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_interval_arithmetic]
 (batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_order_null] 
(batchId=29)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_reference_windowed]
 (batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_tablesample_rows] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_date_funcs] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCompareCliDriver.testCliDriver[llap_0] 
(batchId=244)
org.apache.hadoop.hive.cli.TestCompareCliDriver.testCliDriver[vectorized_math_funcs]
 (batchId=244)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[groupby_grouping_id2]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_ppd_schema_evol_1a]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_ppd_schema_evol_1b]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_ppd_schema_evol_2a]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_ppd_schema_evol_2b]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part_llap_io]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part_update]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part_update_llap_io]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_table]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_table_llap_io]
 (batchId=161)

[jira] [Commented] (HIVE-18051) qfiles: dataset support

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342643#comment-16342643
 ] 

Hive QA commented on HIVE-18051:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
39s{color} | {color:red} root: The patch generated 19 new + 173 unchanged - 6 
fixed = 192 total (was 179) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} itests/util: The patch generated 19 new + 168 
unchanged - 6 fixed = 187 total (was 174) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 68e7a34 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8894/yetus/diff-checkstyle-root.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8894/yetus/diff-checkstyle-itests_util.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8894/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8894/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql . itests/util U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8894/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> qfiles: dataset support
> ---
>
> Key: HIVE-18051
> URL: https://issues.apache.org/jira/browse/HIVE-18051
> Project: Hive
>  Issue Type: Improvement
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18051.01.patch, HIVE-18051.02.patch, 
> HIVE-18051.03.patch, HIVE-18051.04.patch, HIVE-18051.05.patch, 
> HIVE-18051.06.patch, HIVE-18051.07.patch
>
>
> it would be great to have some kind of test dataset support; currently there 
> is the {{q_test_init.sql}} which is quite large; and I often override it 
> with an invalid string; because I write independent qtests most of the time - 
> and the load of {{src}} and other tables is just a waste of time for me ; 

[jira] [Commented] (HIVE-18051) qfiles: dataset support

2018-01-28 Thread Laszlo Bodor (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342627#comment-16342627
 ] 

Laszlo Bodor commented on HIVE-18051:
-

Let me check these failures and format warnings.

> qfiles: dataset support
> ---
>
> Key: HIVE-18051
> URL: https://issues.apache.org/jira/browse/HIVE-18051
> Project: Hive
>  Issue Type: Improvement
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18051.01.patch, HIVE-18051.02.patch, 
> HIVE-18051.03.patch, HIVE-18051.04.patch, HIVE-18051.05.patch, 
> HIVE-18051.06.patch, HIVE-18051.07.patch
>
>
> it would be great to have some kind of test dataset support; currently there 
> is the {{q_test_init.sql}} which is quite large; and I often override it 
> with an invalid string; because I write independent qtests most of the time - 
> and the load of {{src}} and other tables is just a waste of time for me ; 
> not to mention that the loading of those tables may also trigger breakpoints 
> - which is a bit annoying.
> Most of the tests are "only" using the {{src}} table and possibly 2 others; 
> however the main init script contains a bunch of tables - meanwhile there are 
> quite a few other tests which could possibly also benefit from a more general 
> feature; for example the creation of {{bucket_small}} is present in 20 q 
> files.
> the proposal would be to enable the qfiles to be annotated with metadata like 
> datasets:
> {code}
> --! qt:dataset:src,bucket_small
> {code}
> proposal for storing a dataset:
> * the loader script would be at: {{data/datasets/__NAME__/load.hive.sql}}
> * the table data could be stored under that location
> a draft about this; and other qfiles related ideas:
> https://docs.google.com/document/d/1KtcIx8ggL9LxDintFuJo8NQuvNWkmtvv_ekbWrTLNGc/edit?usp=sharing
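
To make the proposed layout concrete, a minimal sketch of what a loader script at the suggested location could contain, using {{src}} as the example dataset (the DDL, the path variable and the data file name below are illustrative assumptions, not part of the proposal):

{code}
-- data/datasets/src/load.hive.sql (sketch only; real content is up to the dataset author)
create table src (key string, value string) stored as textfile;
-- the configuration variable and file name are placeholders for wherever the data lives
load data local inpath '${hiveconf:test.data.dir}/kv1.txt' into table src;
analyze table src compute statistics;
{code}

A qfile would then opt in with the annotation shown above, e.g. {{--! qt:dataset:src}}, and only the listed datasets would need to be loaded for that test.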



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)

2018-01-28 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342617#comment-16342617
 ] 

Matt McCline commented on HIVE-18524:
-

1 day test failures are unrelated.

> Vectorization: Execution failure related to non-standard embedding of 
> IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
> -
>
> Key: HIVE-18524
> URL: https://issues.apache.org/jira/browse/HIVE-18524
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch
>
>
> {noformat}
> insert overwrite table insert_10_1
> select cast(gpa as float),
>age,
>IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL),
>IF(LENGTH(name)>10,cast(name as binary),NULL)
> from studentnull10k
> vectorizationSchemaColumns: [0:name:string, 1:age:int, 2:gpa:double]
> ExprNodeDescs:
> UDFToFloat(gpa) (type: float),
> age (type: int),
> if((age > 40), 2011-01-01 01:01:01.0, null) (type: timestamp),
> if((length(name) > 10), CAST( name AS BINARY), null) (type: binary)
> selectExpressions:
> VectorUDFAdaptor(if((age > 40), 2011-01-01 01:01:01.0, null))
> (children: LongColGreaterLongScalar(col 1:int, val 40) -> 4:boolean) 
> -> 5:timestamp,
> VectorUDFAdaptor(if((length(name) > 10), CAST( name AS BINARY), null))
> (children: LongColGreaterLongScalar(col 4:int, val 10)(children: 
> StringLength(col 0:string) -> 4:int) -> 6:boolean,
> VectorUDFAdaptor(CAST( name AS BINARY)) -> 7:binary) -> 8:binary
> {noformat}
> *// Notice there is no vector expression shown for the last IF stmt.*  It has 
> been magically embedded inside the VectorUDFAdaptor object...
> Execution results in this call stack.
> {noformat}
> Caused by: java.lang.NullPointerException
>   at java.util.Arrays.copyOfRange(Arrays.java:3521)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$9.writeValue(VectorExpressionWriterFactory.java:1101)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$VectorExpressionWriterBytes.writeValue(VectorExpressionWriterFactory.java:343)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFArgDesc.getDeferredJavaObject(VectorUDFArgDesc.java:123)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:211)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:177)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:145)
>   ... 22 more
> {noformat}
> Change is due to:
> HIVE-17139: Conditional expressions optimization: skip the expression 
> evaluation if the condition is not satisfied for vectorization engine. (Jia 
> Ke, reviewed by Ferdinand Xu)
> Embedding a raw vector expression outside of VectorizationContext is quite 
> non-standard and evidently buggy.
> [~Ferd] [~Ke Jia] I am inclined to revert this change.  Comments?  CC: 
> [~ashutoshc] [~hagleitn]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18557) q.outs: fix issues caused by q.out_spark files

2018-01-28 Thread Laszlo Bodor (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342618#comment-16342618
 ] 

Laszlo Bodor commented on HIVE-18557:
-

Sure, thanks!

> q.outs: fix issues caused by q.out_spark files
> --
>
> Key: HIVE-18557
> URL: https://issues.apache.org/jira/browse/HIVE-18557
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18557.01.patch
>
>
> HIVE-18061 caused some issues in yetus check by introducing q.out_spark files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342615#comment-16342615
 ] 

Hive QA commented on HIVE-18524:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908019/HIVE-18524.02.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 12631 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionsByValuesNoTblName[Embedded]
 (batchId=207)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=207)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8893/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8893/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8893/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 22 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908019 - PreCommit-HIVE-Build

> Vectorization: Execution failure related to non-standard embedding of 
> IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
> -
>
> Key: HIVE-18524
> URL: https://issues.apache.org/jira/browse/HIVE-18524
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch
>
>
> {noformat}
> insert overwrite table insert_10_1
> select cast(gpa as float),
>age,
>IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL),
>IF(LENGTH(name)>10,cast(name as binary),NULL)
> from studentnull10k
> vectorizationSchemaColumns: [0:name:string, 1:age:int, 2:gpa:double]
> ExprNodeDescs:
> UDFToFloat(gpa) (type: float),
> age (type: int),
> if((age > 40), 2011-01-01 01:01:01.0, null) (type: timestamp),
> if((length(name) > 10), CAST( name AS BINARY), null) (type: binary)
> selectExpressions:
> VectorUDFAdaptor(if((age > 40), 2011-01-01 01:01:01.0, null))
> (children: LongColGreaterLongScalar(col 1:int, val 40) -> 4:boolean) 
> -> 5:timestamp,
> VectorUDFAdaptor(if((length(name) > 10), CAST( name AS BINARY), null))
> (children: LongColGreaterLongScalar(col 4:int, val 10)(children: 
> StringLength(col 0:string) -> 4:int) -> 6:boolean,
> VectorUDFAdaptor(CAST( name AS BINARY)) -> 7:binary) -> 

[jira] [Commented] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342601#comment-16342601
 ] 

Hive QA commented on HIVE-18524:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
35s{color} | {color:red} ql: The patch generated 4 new + 39 unchanged - 10 
fixed = 43 total (was 49) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
33s{color} | {color:red} root: The patch generated 4 new + 39 unchanged - 10 
fixed = 43 total (was 49) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 68e7a34 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8893/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8893/yetus/diff-checkstyle-root.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8893/yetus/whitespace-tabs.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8893/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql . itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8893/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: Execution failure related to non-standard embedding of 
> IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
> -
>
> Key: HIVE-18524
> URL: https://issues.apache.org/jira/browse/HIVE-18524
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch
>
>
> {noformat}
> insert overwrite table insert_10_1
> select cast(gpa as float),
>age,
>IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL),
>IF(LENGTH(name)>10,cast(name as binary),NULL)
> from studentnull10k
> 

[jira] [Commented] (HIVE-18557) q.outs: fix issues caused by q.out_spark files

2018-01-28 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342598#comment-16342598
 ] 

Peter Vary commented on HIVE-18557:
---

+1

(next time please take care of the indentation too :) - it is serious 
nitpicking, but anyway :) ) 

> q.outs: fix issues caused by q.out_spark files
> --
>
> Key: HIVE-18557
> URL: https://issues.apache.org/jira/browse/HIVE-18557
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18557.01.patch
>
>
> HIVE-18061 caused some issues in yetus check by introducing q.out_spark files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18051) qfiles: dataset support

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342588#comment-16342588
 ] 

Hive QA commented on HIVE-18051:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908020/HIVE-18051.07.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 142 failed/errored test(s), 12636 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_role_grant2]
 (batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_5] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_5a] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_schema_evolution_native]
 (batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cli_print_escape_crlf] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets1] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets2] 
(batchId=25)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets3] 
(batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_schema_evol_1a] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_schema_evol_1b] 
(batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_schema_evol_2a] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_schema_evol_2b] 
(batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[print_header] (batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[testdataset] (batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[testdataset_2] 
(batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_between_columns] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] 
(batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_interval_arithmetic]
 (batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_order_null] 
(batchId=29)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_reference_windowed]
 (batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_tablesample_rows] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_date_funcs] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCompareCliDriver.testCliDriver[llap_0] 
(batchId=244)
org.apache.hadoop.hive.cli.TestCompareCliDriver.testCliDriver[vectorized_math_funcs]
 (batchId=244)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[groupby_grouping_id2]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_ppd_schema_evol_1a]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_ppd_schema_evol_1b]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_ppd_schema_evol_2a]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_ppd_schema_evol_2b]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part_llap_io]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part_update]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part_update_llap_io]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_table]
 (batchId=161)

[jira] [Commented] (HIVE-18051) qfiles: dataset support

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342576#comment-16342576
 ] 

Hive QA commented on HIVE-18051:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
49s{color} | {color:red} root: The patch generated 19 new + 173 unchanged - 6 
fixed = 192 total (was 179) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} itests/util: The patch generated 19 new + 168 
unchanged - 6 fixed = 187 total (was 174) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 68e7a34 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8892/yetus/diff-checkstyle-root.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8892/yetus/diff-checkstyle-itests_util.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8892/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8892/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql . itests/util U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8892/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> qfiles: dataset support
> ---
>
> Key: HIVE-18051
> URL: https://issues.apache.org/jira/browse/HIVE-18051
> Project: Hive
>  Issue Type: Improvement
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18051.01.patch, HIVE-18051.02.patch, 
> HIVE-18051.03.patch, HIVE-18051.04.patch, HIVE-18051.05.patch, 
> HIVE-18051.06.patch, HIVE-18051.07.patch
>
>
> it would be great to have some kind of test dataset support; currently there 
> is the {{q_test_init.sql}} which is quite large; and I often override it 
> with an invalid string; because I write independent qtests most of the time - 
> and the load of {{src}} and other tables is just a waste of time for me ; 

[jira] [Commented] (HIVE-18557) q.outs: fix issues caused by q.out_spark files

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342566#comment-16342566
 ] 

Hive QA commented on HIVE-18557:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908014/HIVE-18557.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 12630 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=180)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.hcatalog.mapreduce.TestHCatOutputFormat.testGetTableSchema 
(batchId=200)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8891/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8891/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8891/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 22 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908014 - PreCommit-HIVE-Build

> q.outs: fix issues caused by q.out_spark files
> --
>
> Key: HIVE-18557
> URL: https://issues.apache.org/jira/browse/HIVE-18557
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18557.01.patch
>
>
> HIVE-18061 caused some issues in yetus check by introducing q.out_spark files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18557) q.outs: fix issues caused by q.out_spark files

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342561#comment-16342561
 ] 

Hive QA commented on HIVE-18557:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 68e7a34 |
| Default Java | 1.8.0_111 |
| modules | C: . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8891/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> q.outs: fix issues caused by q.out_spark files
> --
>
> Key: HIVE-18557
> URL: https://issues.apache.org/jira/browse/HIVE-18557
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18557.01.patch
>
>
> HIVE-18061 caused some issues in yetus check by introducing q.out_spark files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18557) q.outs: fix issues caused by q.out_spark files

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342551#comment-16342551
 ] 

Hive QA commented on HIVE-18557:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908014/HIVE-18557.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 19 failed/errored test(s), 12630 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.hcatalog.templeton.TestConcurrentJobRequestsThreadsAndTimeout.ConcurrentListJobsVerifyExceptions
 (batchId=191)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8890/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8890/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8890/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 19 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908014 - PreCommit-HIVE-Build

> q.outs: fix issues caused by q.out_spark files
> --
>
> Key: HIVE-18557
> URL: https://issues.apache.org/jira/browse/HIVE-18557
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18557.01.patch
>
>
> HIVE-18061 caused some issues in yetus check by introducing q.out_spark files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18557) q.outs: fix issues caused by q.out_spark files

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342536#comment-16342536
 ] 

Hive QA commented on HIVE-18557:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 68e7a34 |
| Default Java | 1.8.0_111 |
| modules | C: . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8890/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> q.outs: fix issues caused by q.out_spark files
> --
>
> Key: HIVE-18557
> URL: https://issues.apache.org/jira/browse/HIVE-18557
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18557.01.patch
>
>
> HIVE-18061 caused some issues in the Yetus check by introducing q.out_spark files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18486) Create tests to cover add partition methods

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342517#comment-16342517
 ] 

Hive QA commented on HIVE-18486:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907996/HIVE-18486.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 12774 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc
 (batchId=249)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8889/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8889/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8889/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907996 - PreCommit-HIVE-Build

> Create tests to cover add partition methods
> ---
>
> Key: HIVE-18486
> URL: https://issues.apache.org/jira/browse/HIVE-18486
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Attachments: HIVE-18486.1.patch, HIVE-18486.2.patch, 
> HIVE-18486.3.patch
>
>
> The following methods of IMetaStoreClient are covered in this Jira:
> {code:java}
> - Partition add_partition(Partition)
> - int add_partitions(List<Partition>)
> - List<Partition> add_partitions(List<Partition>, boolean, boolean){code}
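> A minimal usage sketch of these three calls follows (assumptions, not taken from the patch: "client" is an already-open IMetaStoreClient, "p1"/"p2" are fully populated Partition objects for an existing partitioned table, and the two boolean flags are read as ifNotExists and needResults):
> {code:java}
> import java.util.Arrays;
> import java.util.List;
> import org.apache.hadoop.hive.metastore.IMetaStoreClient;
> import org.apache.hadoop.hive.metastore.api.Partition;
>
> // Single partition: returns the Partition as created in the metastore.
> Partition created = client.add_partition(p1);
>
> // Batch add: returns the number of partitions actually added.
> int added = client.add_partitions(Arrays.asList(p1, p2));
>
> // Batch add with flags (assumed ifNotExists=true, needResults=true):
> // returns the created partitions when results are requested.
> List<Partition> results = client.add_partitions(Arrays.asList(p1, p2), true, true);
> {code}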



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18486) Create tests to cover add partition methods

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342511#comment-16342511
 ] 

Hive QA commented on HIVE-18486:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} standalone-metastore: The patch generated 15 new + 0 
unchanged - 0 fixed = 15 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 68e7a34 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8889/yetus/diff-checkstyle-standalone-metastore.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8889/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8889/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create tests to cover add partition methods
> ---
>
> Key: HIVE-18486
> URL: https://issues.apache.org/jira/browse/HIVE-18486
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Attachments: HIVE-18486.1.patch, HIVE-18486.2.patch, 
> HIVE-18486.3.patch
>
>
> The following methods of IMetaStoreClient are covered in this Jira:
> {code:java}
> - Partition add_partition(Partition)
> - int add_partitions(List<Partition>)
> - List<Partition> add_partitions(List<Partition>, boolean, boolean){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18559) Allow hive.metastore.limit.partition.request to be user configurable

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342506#comment-16342506
 ] 

Hive QA commented on HIVE-18559:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907980/HIVE-18559.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 12631 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.metastore.TestMarkPartition.testMarkingPartitionSet 
(batchId=213)
org.apache.hadoop.hive.metastore.client.TestDatabases.testDropDatabaseWithFunction[Remote]
 (batchId=207)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build//testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build//console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 22 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907980 - PreCommit-HIVE-Build

> Allow hive.metastore.limit.partition.request to be user configurable
> 
>
> Key: HIVE-18559
> URL: https://issues.apache.org/jira/browse/HIVE-18559
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Mohit Sabharwal
>Assignee: Mohit Sabharwal
>Priority: Minor
> Attachments: HIVE-18559.patch
>
>
> HIVE-13884 introduced hive.metastore.limit.partition.request to limit the 
> number of partitions that can be requested from the Metastore for a given 
> table.
> This config is set by cluster admins to prevent ad-hoc queries from putting 
> memory pressure on HMS. But the limit can be too restrictive for some (savvy) 
> users who require the ability to set a higher limit for a subset of workloads 
> (during low HMS memory pressure, for example). Such users generally instruct 
> admins not to set this config.
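> For context, a minimal sketch of how the limit is pinned on the admin side today (illustrative only; the value 1000 is an assumption, not a recommendation):
> {code:java}
> import org.apache.hadoop.hive.conf.HiveConf;
>
> // Cluster-wide default an admin would carry in hive-site.xml / HiveConf;
> // this issue proposes letting users raise it per workload instead.
> HiveConf conf = new HiveConf();
> conf.setInt("hive.metastore.limit.partition.request", 1000);
> {code}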



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18559) Allow hive.metastore.limit.partition.request to be user configurable

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342501#comment-16342501
 ] 

Hive QA commented on HIVE-18559:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 68e7a34 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-/yetus/patch-asflicense-problems.txt
 |
| modules | C: common standalone-metastore ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Allow hive.metastore.limit.partition.request to be user configurable
> 
>
> Key: HIVE-18559
> URL: https://issues.apache.org/jira/browse/HIVE-18559
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Mohit Sabharwal
>Assignee: Mohit Sabharwal
>Priority: Minor
> Attachments: HIVE-18559.patch
>
>
> HIVE-13884 introduced hive.metastore.limit.partition.request to limit the 
> number of partitions that can be requested from the Metastore for a given 
> table.
> This config is set by cluster admins to prevent ad-hoc queries from putting 
> memory pressure on HMS. But the limit can be too restrictive for some (savvy) 
> users who require the ability to set a higher limit for a subset of workloads 
> (during low HMS memory pressure, for example). Such users generally instruct 
> admins not to set this config.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18463) Typo in the JdbcConnectionParams constructor

2018-01-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342497#comment-16342497
 ] 

Hive QA commented on HIVE-18463:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907962/HIVE-18463.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 12630 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcr] (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=180)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketmapjoin6]
 (batchId=180)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.metastore.TestHiveMetaTool.testListFSRoot (batchId=225)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.TestBeeLineWithArgs.testEscapeCRLFInTSV2Output 
(batchId=231)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchEmptyCommitUnpartitioned
 (batchId=202)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8887/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8887/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8887/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 26 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907962 - PreCommit-HIVE-Build

> Typo in the JdbcConnectionParams constructor
> 
>
> Key: HIVE-18463
> URL: https://issues.apache.org/jira/browse/HIVE-18463
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Priority: Trivial
> Attachments: HIVE-18463.01.patch, HIVE-18463.patch
>
>
> Seems like the last one should be params.rejectedHostZnodePaths as well:
> *jdbc/src/java/org/apache/hive/jdbc/Utils.java:*
> {code:java}
> public JdbcConnectionParams(JdbcConnectionParams params) {
> this.host = params.host;
> this.port = params.port;
> ...
> this.rejectedHostZnodePaths.addAll(rejectedHostZnodePaths);
> }
> {code}
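> In other words, the presumed intent (a sketch of the described fix, not the attached patch) is for that last line to copy from the params argument, as the other assignments do:
> {code:java}
> // Presumed fix (sketch): qualify the source collection with params, matching the other copies.
> this.rejectedHostZnodePaths.addAll(params.rejectedHostZnodePaths);
> {code}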



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)