[jira] [Commented] (HIVE-20980) Reinstate Parquet timestamp conversion between HS2 time zone and UTC

2018-12-06 Thread Karen Coppage (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711108#comment-16711108
 ] 

Karen Coppage commented on HIVE-20980:
--

Checkstyle errors are due to following the indentation style of the surrounding code.

> Reinstate Parquet timestamp conversion between HS2 time zone and UTC
> 
>
> Key: HIVE-20980
> URL: https://issues.apache.org/jira/browse/HIVE-20980
> Project: Hive
>  Issue Type: Sub-task
>  Components: File Formats
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-20980.1.patch, HIVE-20980.2.patch, 
> HIVE-20980.2.patch
>
>
> With HIVE-20007, Parquet timestamps became timezone-agnostic. This means that 
> timestamps written after the change are read exactly as they were written; 
> but timestamps stored before the change are effectively converted from the 
> writing HS2 server's time zone to GMT. This patch reinstates the 
> original behavior: timestamps are converted to UTC before write and converted 
> back from UTC on read.
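The reinstated conversion can be sketched roughly like this (the class and method names are illustrative, not Hive's actual Parquet reader/writer code):

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class ParquetTimestampSketch {
    // Convert a wall-clock timestamp from the writer's (HS2 server) zone to UTC
    // before writing it out.
    public static LocalDateTime toUtc(LocalDateTime ts, ZoneId writerZone) {
        return ts.atZone(writerZone)
                 .withZoneSameInstant(ZoneOffset.UTC)
                 .toLocalDateTime();
    }

    // Convert a stored UTC timestamp back to the reader's zone on read.
    public static LocalDateTime fromUtc(LocalDateTime utcTs, ZoneId readerZone) {
        return utcTs.atZone(ZoneOffset.UTC)
                    .withZoneSameInstant(readerZone)
                    .toLocalDateTime();
    }
}
```

A timestamp written and read with the same zone round-trips unchanged, which is the behavior being reinstated.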



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20891) Call alter_partition in batch when dynamically loading partitions

2018-12-06 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1674#comment-1674
 ] 

Peter Vary commented on HIVE-20891:
---

+1

> Call alter_partition in batch when dynamically loading partitions
> -
>
> Key: HIVE-20891
> URL: https://issues.apache.org/jira/browse/HIVE-20891
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-20891.01.patch, HIVE-20891.02.patch, 
> HIVE-20891.03.patch, HIVE-20891.04.patch, HIVE-20891.05.patch, 
> HIVE-20891.06.patch, HIVE-20891.07.patch
>
>
> When dynamically loading partitions, setStatsPropAndAlterPartition() is 
> called for each partition one by one, resulting in unnecessary calls to the 
> metastore client. This whole logic can be changed to a single call. 
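The batching idea can be sketched as follows ({{MetastoreClient}} and its methods are illustrative stand-ins, not the real {{IMetaStoreClient}} API):

```java
import java.util.List;

// Instead of one metastore RPC per partition, accumulate all modified
// partitions and issue one batched alter call.
public class BatchAlterSketch {
    interface MetastoreClient {
        void alterPartition(String table, String part);         // one RPC per partition
        void alterPartitions(String table, List<String> parts); // one RPC total
    }

    // Test double that counts how many RPCs were issued.
    static class CountingClient implements MetastoreClient {
        int rpcCount = 0;
        public void alterPartition(String table, String part) { rpcCount++; }
        public void alterPartitions(String table, List<String> parts) { rpcCount++; }
    }

    public static int loadDynamicPartitions(MetastoreClient client, String table,
                                            List<String> parts) {
        // Before: for (String p : parts) client.alterPartition(table, p);  => N RPCs
        client.alterPartitions(table, parts);  // after: a single batched call
        return parts.size();
    }
}
```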





[jira] [Assigned] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2018-12-06 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-20615:
---

Assignee: Vaibhav Gumashta  (was: Zoltan Haindrich)

> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.1.patch, HIVE-20615.1.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.





[jira] [Assigned] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2018-12-06 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-20615:
---

Assignee: Zoltan Haindrich  (was: Vaibhav Gumashta)

> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.1.patch, HIVE-20615.1.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.





[jira] [Assigned] (HIVE-21014) Improve vectorization column reuse

2018-12-06 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-21014:
---


> Improve vectorization column reuse
> --
>
> Key: HIVE-21014
> URL: https://issues.apache.org/jira/browse/HIVE-21014
> Project: Hive
>  Issue Type: Improvement
>  Components: Vectorization
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> HIVE-20985 fixed a correctness issue, which has degraded the column 
> reuse rate.





[jira] [Updated] (HIVE-21001) Upgrade to calcite-1.18

2018-12-06 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21001:

Attachment: HIVE-21001.02.patch

> Upgrade to calcite-1.18
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch
>
>






[jira] [Updated] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2018-12-06 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20615:

Attachment: HIVE-20615.1.patch

> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.1.patch, HIVE-20615.1.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.





[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.18

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711305#comment-16711305
 ] 

Hive QA commented on HIVE-21001:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950808/HIVE-21001.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 279 failed/errored test(s), 15649 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=262)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_coltype] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ambiguitycheck] 
(batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_filter] 
(batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_simple_select] 
(batchId=50)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_simple_select] 
(batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[complex_alias] 
(batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constantPropWhen] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constantPropagateForSubQuery]
 (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constant_prop_3] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constprog_when_case] 
(batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_udf] (batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynamic_partition_skip_default]
 (batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fold_case] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fold_eq_with_case_when] 
(batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fold_to_null] 
(batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fold_when] (batchId=30)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[folder_predicate] 
(batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_1_23] 
(batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_skew_1_23] 
(batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[in_typecheck_char] 
(batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[in_typecheck_mixed] 
(batchId=6)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_const_type] 
(batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_join_preds] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[innerjoin1] (batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input23] (batchId=50)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input42] (batchId=81)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input8] (batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input_part1] (batchId=8)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input_part9] (batchId=27)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_filters_overlap] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_11] 
(batchId=20)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_12] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_13] 
(batchId=27)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_14] 
(batchId=20)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_1] 
(batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_2] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_3] 
(batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_4] 
(batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_5] 
(batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_6] 
(batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_7] 
(batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_8] 
(batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_9] 
(batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_multiskew_1]
 (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_multiskew_2]
 (batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_multiskew_3]
 (batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[macro] (batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_memcheck] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mergejoin] (batchId=64)

[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.18

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711267#comment-16711267
 ] 

Hive QA commented on HIVE-21001:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
34s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15190/dev-support/hive-personality.sh
 |
| git revision | master / fa512bb |
| Default Java | 1.8.0_111 |
| modules | C: . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15190/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Upgrade to calcite-1.18
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch
>
>






[jira] [Updated] (HIVE-20985) If select operator inputs are temporary columns vectorization may reuse some of them as output

2018-12-06 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20985:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thank you Teddy for reviewing the changes!

> If select operator inputs are temporary columns vectorization may reuse some 
> of them as output
> --
>
> Key: HIVE-20985
> URL: https://issues.apache.org/jira/browse/HIVE-20985
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20985.01.patch, HIVE-20985.02.patch, 
> HIVE-20985.03.patch, HIVE-20985.03.patch, HIVE-20985.04.patch, 
> HIVE-20985.05.patch, HIVE-20985.05.patch, HIVE-20985.05.patch
>
>






[jira] [Commented] (HIVE-21014) Improve vectorization column reuse

2018-12-06 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711299#comment-16711299
 ] 

Zoltan Haindrich commented on HIVE-21014:
-

I think that reuse should be considered for every operator, not just for 
projections.

> Improve vectorization column reuse
> --
>
> Key: HIVE-21014
> URL: https://issues.apache.org/jira/browse/HIVE-21014
> Project: Hive
>  Issue Type: Improvement
>  Components: Vectorization
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> HIVE-20985 fixed a correctness issue, which has degraded the column 
> reuse rate.
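The kind of reuse being discussed can be illustrated with a toy scratch-column allocator (hypothetical; not Hive's actual output-column manager): a vectorized plan hands out scratch column slots for intermediate results, and releasing a slot once its value is dead lets a later expression reuse it instead of growing the batch.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ScratchColumnPool {
    private final Deque<Integer> free = new ArrayDeque<>();
    private int nextColumn = 0;

    // Prefer a freed slot; only grow the batch when none is available.
    public int allocate() {
        return free.isEmpty() ? nextColumn++ : free.pop();
    }

    // Mark a slot's value as dead so it can be handed out again.
    public void release(int column) {
        free.push(column);
    }

    public int columnsEverAllocated() {
        return nextColumn;
    }
}
```

Releasing slots too conservatively (to stay correct) means more calls fall through to `nextColumn++`, which is the reuse-rate regression this ticket targets.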





[jira] [Commented] (HIVE-21015) HCatLoader can't provide statistics for tables not in the default DB

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711909#comment-16711909
 ] 

Hive QA commented on HIVE-21015:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950863/HIVE-21015.1.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 45 failed/errored test(s), 15650 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterPartition 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterTableCascade
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterViewParititon
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testColumnStatistics 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testComplexTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testComplexTypeApi 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testConcurrentMetastores
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testCreateAndGetTableWithDriver
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testCreateTableSettingId
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBLocationChange 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBOwner 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBOwnerChange 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabase 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabaseLocation 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabaseLocationWithPermissionProblems
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDropDatabaseCascadeMVMultiDB
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDropTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFilterLastPartition
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFilterSinglePartition
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFunctionWithResources
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetConfigValue 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetMetastoreUuid 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetPartitionsWithSpec
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetSchemaWithNoClassDefFoundError
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetTableObjects 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetUUIDInParallel
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testJDOPersistanceManagerCleanup
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitionNames
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitions 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitionsWihtLimitEnabled
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testNameMethods 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testPartition 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testPartitionFilter 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testRenamePartition 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testRetriableClientWithConnLifetime
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleFunction 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleTypeApi 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testStatsFastTrivial 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSynchronized 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testTableDatabase 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testTableFilter 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testUpdatePartitionStat_doesNotUpdateStats
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testValidateTableCols
 (batchId=227)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15195/testReport

[jira] [Comment Edited] (HIVE-20942) Worker should heartbeat its own txn

2018-12-06 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711961#comment-16711961
 ] 

Eugene Koifman edited comment on HIVE-20942 at 12/6/18 8:00 PM:


[~ikryvenko], I agree that you should keep the original call to {{cancel}} but 
I think the new one you added is in the wrong {{try-catch}} - shouldn't it be 
in the {{finally}} of the {{try-catch}} that contains the {{start}} call?

Also, I think {{LOG.info("Heartbeating comp}}... should be {{debug}} level

 


was (Author: ekoifman):
[~ikryvenko], I agree that you should keep the original call to {{cancel}} but 
I think the new one you added is in the wrong {{try-catch}} - shouldn't it be 
in the {{finally}} of the {{try-catch}} that contains the {{start}} call?

Also, I think {{LOG.info("Heartbeating comp}}... should be debug

 

 

 

> Worker should heartbeat its own txn
> ---
>
> Key: HIVE-20942
> URL: https://issues.apache.org/jira/browse/HIVE-20942
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Major
> Attachments: HIVE-20942.01.patch, HIVE-20942.02.patch
>
>
> Since HIVE-20823, {{Worker.java}} starts a txn - should either add a 
> heartbeat thread to it or use HiveTxnManager to start the txn, which will set 
> up the heartbeat automatically. In the latter case, make sure it's properly 
> cancelled on failures.





[jira] [Commented] (HIVE-20942) Worker should heartbeat its own txn

2018-12-06 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711961#comment-16711961
 ] 

Eugene Koifman commented on HIVE-20942:
---

[~ikryvenko], I agree that you should keep the original call to {{cancel}} but 
I think the new one you added is in the wrong {{try-catch}} - shouldn't it be 
in the {{finally}} of the {{try-catch}} that contains the {{start}} call?

Also, I think {{LOG.info("Heartbeating comp}}... should be debug

 

 

 

> Worker should heartbeat its own txn
> ---
>
> Key: HIVE-20942
> URL: https://issues.apache.org/jira/browse/HIVE-20942
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Major
> Attachments: HIVE-20942.01.patch, HIVE-20942.02.patch
>
>
> Since HIVE-20823, {{Worker.java}} starts a txn - should either add a 
> heartbeat thread to it or use HiveTxnManager to start the txn, which will set 
> up the heartbeat automatically. In the latter case, make sure it's properly 
> cancelled on failures.





[jira] [Updated] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21007:
---
Attachment: HIVE-21007.3.patch

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch
>
>
> The Tez compiler has the ability to push a JOIN inside a UNION (by replicating 
> the join on each branch). If this JOIN had an outgoing (or incoming) SJ branch, 
> it could mess up the plan and end up generating an incorrect plan.
> As a safe measure, any SJ branch after a UNION should be removed (until we 
> improve the logic to better handle SJ branches).





[jira] [Updated] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21007:
---
Status: Patch Available  (was: Open)

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch
>
>
> The Tez compiler has the ability to push a JOIN inside a UNION (by replicating 
> the join on each branch). If this JOIN had an outgoing (or incoming) SJ branch, 
> it could mess up the plan and end up generating an incorrect plan.
> As a safe measure, any SJ branch after a UNION should be removed (until we 
> improve the logic to better handle SJ branches).





[jira] [Updated] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21007:
---
Status: Open  (was: Patch Available)

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch
>
>
> The Tez compiler has the ability to push a JOIN inside a UNION (by replicating 
> the join on each branch). If this JOIN had an outgoing (or incoming) SJ branch, 
> it could mess up the plan and end up generating an incorrect plan.
> As a safe measure, any SJ branch after a UNION should be removed (until we 
> improve the logic to better handle SJ branches).





[jira] [Commented] (HIVE-20942) Worker should heartbeat its own txn

2018-12-06 Thread Igor Kryvenko (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712064#comment-16712064
 ] 

Igor Kryvenko commented on HIVE-20942:
--

Patch#3. Moved {{heartbeater.cancel()}} to the correct finally block. Changed the 
log level from info and trace to debug.
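The try/finally placement being discussed can be sketched like this ({{Heartbeater}} here is a stand-in class, not the actual Worker code): {{cancel()}} belongs in the finally of the same try that follows {{start()}}, so the heartbeat stops on both success and failure.

```java
public class HeartbeatSketch {
    static class Heartbeater {
        boolean running;
        void start()  { running = true; }
        void cancel() { running = false; }
    }

    // Returns whether the heartbeat is still running after the work failed.
    public static boolean runningAfterFailure() {
        Heartbeater heartbeater = new Heartbeater();
        heartbeater.start();
        try {
            throw new RuntimeException("simulated compaction failure");
        } catch (RuntimeException e) {
            // failure observed here; the finally below still stops the heartbeat
        } finally {
            heartbeater.cancel();  // reached on both success and failure
        }
        return heartbeater.running;
    }
}
```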

> Worker should heartbeat its own txn
> ---
>
> Key: HIVE-20942
> URL: https://issues.apache.org/jira/browse/HIVE-20942
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Major
> Attachments: HIVE-20942.01.patch, HIVE-20942.02.patch, 
> HIVE-20942.03.patch
>
>
> Since HIVE-20823 \{{Worker.java}} starts a txn - should either add a 
> heartbeat thread to it or use HiveTxnManager to start txn which will set up 
> heartbeat automatically.  In the later case make sure it's properly cancelled 
> on failures.





[jira] [Commented] (HIVE-20942) Worker should heartbeat its own txn

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712139#comment-16712139
 ] 

Hive QA commented on HIVE-20942:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950898/HIVE-20942.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15197/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15197/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15197/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-12-06 23:12:03.870
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15197/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-12-06 23:12:03.873
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 9f2e8e6 HIVE-20915: Make dynamic sort partition optimization 
available to HoS and MR (Yongzhi Chen, reviewed by Naveen Gangam)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 9f2e8e6 HIVE-20915: Make dynamic sort partition optimization 
available to HoS and MR (Yongzhi Chen, reviewed by Naveen Gangam)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-12-06 23:12:04.669
+ rm -rf ../yetus_PreCommit-HIVE-Build-15197
+ mkdir ../yetus_PreCommit-HIVE-Build-15197
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15197
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15197/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc8482842231737133809.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc8482842231737133809.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) on project hive-shims: Execution 
process-resource-bundles of goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed. 
ConcurrentModificationException -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hive-shims
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-15197
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12950898 - PreCommit-HIVE-Build

> Worker should heartbeat its own txn
> ---
>

[jira] [Updated] (HIVE-16100) Dynamic Sorted Partition optimizer loses sibling operators

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16100:
---
Status: Open  (was: Patch Available)

> Dynamic Sorted Partition optimizer loses sibling operators
> --
>
> Key: HIVE-16100
> URL: https://issues.apache.org/jira/browse/HIVE-16100
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.2.0, 2.1.1, 1.2.1
>Reporter: Gopal V
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-16100.1.patch, HIVE-16100.2.patch, 
> HIVE-16100.2.patch, HIVE-16100.3.patch, HIVE-16100.4.patch, 
> HIVE-16100.5.patch, HIVE-16100.6.patch, HIVE-16100.7.patch
>
>
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedDynPartitionOptimizer.java#L173
> {code}
>   // unlink connection between FS and its parent
>   fsParent = fsOp.getParentOperators().get(0);
>   fsParent.getChildOperators().clear();
> {code}
> The optimizer discards any cases where the fsParent has another SEL child 
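The sibling loss follows directly from the `clear()` call quoted above: it empties the parent's entire child list instead of detaching only the FileSink branch. A minimal sketch of the sibling-safe alternative is below; the `Operator` class here is a simplified stand-in for Hive's operator tree (not the real `org.apache.hadoop.hive.ql.exec.Operator`), used only to contrast `clear()` with removing the single child.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for Hive's operator tree. fsParent.children.clear()
// would drop ALL children, including a sibling SEL; removing only fsOp
// keeps the sibling branch intact.
class Operator {
    final String name;
    final List<Operator> parents = new ArrayList<>();
    final List<Operator> children = new ArrayList<>();

    Operator(String name) { this.name = name; }

    void addChild(Operator child) {
        children.add(child);
        child.parents.add(this);
    }
}

public class UnlinkDemo {
    public static void main(String[] args) {
        Operator fsParent = new Operator("RS");
        Operator fsOp = new Operator("FS");
        Operator siblingSel = new Operator("SEL");
        fsParent.addChild(fsOp);
        fsParent.addChild(siblingSel);

        // Buggy unlink from the snippet above (discards the sibling too):
        // fsParent.children.clear();

        // Sibling-safe unlink: detach only the FS branch, both directions.
        fsParent.children.remove(fsOp);
        fsOp.parents.remove(fsParent);

        // The sibling SEL operator survives the unlink.
        System.out.println(fsParent.children.size());
    }
}
```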



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20955:
---
Attachment: (was: HIVE-20955.2.patch)

> Calcite Rule HiveExpandDistinctAggregatesRule seems throwing 
> IndexOutOfBoundsException
> --
>
> Key: HIVE-20955
> URL: https://issues.apache.org/jira/browse/HIVE-20955
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20955.1.patch
>
>
>  
> Add the following query to the Druid test 
> ql/src/test/queries/clientpositive/druidmini_expressions.q
> {code}
> select count(distinct `__time`, cint) from (select * from 
> druid_table_alltypesorc) as src;
> {code}
> leads to the error {code}2018-11-21T07:36:39,449 ERROR [main] QTestUtil: Client 
> execution failed with error code = 4 running "{code}
> with the following exception stack: 
> {code}
> 2018-11-21T07:36:39,443 ERROR [ecd48683-0286-4cb4-b0ad-e150fab51038 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:310)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:293)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:41)
>  ~[guava-19.0.jar:?]
>  at 
> org.apache.calcite.rel.metadata.RelMdColumnOrigins.getColumnOrigins(RelMdColumnOrigins.java:77)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins_$(Unknown Source) 
> ~[?:?]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins(Unknown Source) 
> ~[?:?]
>  at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getColumnOrigins(RelMetadataQuery.java:345)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveExpandDistinctAggregatesRule.onMatch(HiveExpandDistinctAggregatesRule.java:168)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:315)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:415) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:280)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:211) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:198) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2363)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2314)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:2031)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1780)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1680)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:1043)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:154) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1439)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:478)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12296)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> 

[jira] [Updated] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21007:
---
Status: Patch Available  (was: Open)

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch, HIVE-21007.4.patch
>
>
> The Tez compiler has the ability to push a JOIN within a UNION (by replicating 
> the join on each branch). If this JOIN has an outgoing (or incoming) SJ branch, 
> this could corrupt the plan and end up generating an incorrect plan.
> As a safe measure, any SJ branch after a UNION should be removed (until the 
> logic is improved to better handle SJ branches).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21007:
---
Attachment: HIVE-21007.4.patch

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch, HIVE-21007.4.patch
>
>
> The Tez compiler has the ability to push a JOIN within a UNION (by replicating 
> the join on each branch). If this JOIN has an outgoing (or incoming) SJ branch, 
> this could corrupt the plan and end up generating an incorrect plan.
> As a safe measure, any SJ branch after a UNION should be removed (until the 
> logic is improved to better handle SJ branches).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20942) Worker should heartbeat its own txn

2018-12-06 Thread Igor Kryvenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Kryvenko updated HIVE-20942:
-
Attachment: HIVE-20942.03.patch

> Worker should heartbeat its own txn
> ---
>
> Key: HIVE-20942
> URL: https://issues.apache.org/jira/browse/HIVE-20942
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Major
> Attachments: HIVE-20942.01.patch, HIVE-20942.02.patch, 
> HIVE-20942.03.patch
>
>
> Since HIVE-20823, {{Worker.java}} starts a txn - it should either add a 
> heartbeat thread of its own or use HiveTxnManager to start the txn, which will 
> set up the heartbeat automatically.  In the latter case, make sure it is 
> properly cancelled on failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16100) Dynamic Sorted Partition optimizer loses sibling operators

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16100:
---
Attachment: HIVE-16100.7.patch

> Dynamic Sorted Partition optimizer loses sibling operators
> --
>
> Key: HIVE-16100
> URL: https://issues.apache.org/jira/browse/HIVE-16100
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 1.2.1, 2.1.1, 2.2.0
>Reporter: Gopal V
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-16100.1.patch, HIVE-16100.2.patch, 
> HIVE-16100.2.patch, HIVE-16100.3.patch, HIVE-16100.4.patch, 
> HIVE-16100.5.patch, HIVE-16100.6.patch, HIVE-16100.7.patch
>
>
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedDynPartitionOptimizer.java#L173
> {code}
>   // unlink connection between FS and its parent
>   fsParent = fsOp.getParentOperators().get(0);
>   fsParent.getChildOperators().clear();
> {code}
> The optimizer discards any cases where the fsParent has another SEL child 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16100) Dynamic Sorted Partition optimizer loses sibling operators

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16100:
---
Status: Patch Available  (was: Open)

> Dynamic Sorted Partition optimizer loses sibling operators
> --
>
> Key: HIVE-16100
> URL: https://issues.apache.org/jira/browse/HIVE-16100
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.2.0, 2.1.1, 1.2.1
>Reporter: Gopal V
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-16100.1.patch, HIVE-16100.2.patch, 
> HIVE-16100.2.patch, HIVE-16100.3.patch, HIVE-16100.4.patch, 
> HIVE-16100.5.patch, HIVE-16100.6.patch, HIVE-16100.7.patch
>
>
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedDynPartitionOptimizer.java#L173
> {code}
>   // unlink connection between FS and its parent
>   fsParent = fsOp.getParentOperators().get(0);
>   fsParent.getChildOperators().clear();
> {code}
> The optimizer discards any cases where the fsParent has another SEL child 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20988) Wrong results for group by queries with primary key on multiple columns

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20988:
---
Status: Open  (was: Patch Available)

> Wrong results for group by queries with primary key on multiple columns
> ---
>
> Key: HIVE-20988
> URL: https://issues.apache.org/jira/browse/HIVE-20988
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20988.1.patch, HIVE-20988.2.patch, 
> HIVE-20988.3.patch, HIVE-20988.4.patch, HIVE-20988.5.patch, 
> HIVE-20988.6.patch, HIVE-20988.7.patch
>
>
> If a table has a multi-column primary key, the group-by optimization ends up 
> removing the group by, which is semantically incorrect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20988) Wrong results for group by queries with primary key on multiple columns

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20988:
---
Attachment: HIVE-20988.7.patch

> Wrong results for group by queries with primary key on multiple columns
> ---
>
> Key: HIVE-20988
> URL: https://issues.apache.org/jira/browse/HIVE-20988
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20988.1.patch, HIVE-20988.2.patch, 
> HIVE-20988.3.patch, HIVE-20988.4.patch, HIVE-20988.5.patch, 
> HIVE-20988.6.patch, HIVE-20988.7.patch
>
>
> If a table has a multi-column primary key, the group-by optimization ends up 
> removing the group by, which is semantically incorrect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20988) Wrong results for group by queries with primary key on multiple columns

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20988:
---
Status: Patch Available  (was: Open)

> Wrong results for group by queries with primary key on multiple columns
> ---
>
> Key: HIVE-20988
> URL: https://issues.apache.org/jira/browse/HIVE-20988
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20988.1.patch, HIVE-20988.2.patch, 
> HIVE-20988.3.patch, HIVE-20988.4.patch, HIVE-20988.5.patch, 
> HIVE-20988.6.patch, HIVE-20988.7.patch
>
>
> If a table has a multi-column primary key, the group-by optimization ends up 
> removing the group by, which is semantically incorrect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20955:
---
Attachment: HIVE-20955.2.patch

> Calcite Rule HiveExpandDistinctAggregatesRule seems throwing 
> IndexOutOfBoundsException
> --
>
> Key: HIVE-20955
> URL: https://issues.apache.org/jira/browse/HIVE-20955
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20955.1.patch
>
>
>  
> Add the following query to the Druid test 
> ql/src/test/queries/clientpositive/druidmini_expressions.q
> {code}
> select count(distinct `__time`, cint) from (select * from 
> druid_table_alltypesorc) as src;
> {code}
> leads to the error {code}2018-11-21T07:36:39,449 ERROR [main] QTestUtil: Client 
> execution failed with error code = 4 running "{code}
> with the following exception stack: 
> {code}
> 2018-11-21T07:36:39,443 ERROR [ecd48683-0286-4cb4-b0ad-e150fab51038 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:310)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:293)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:41)
>  ~[guava-19.0.jar:?]
>  at 
> org.apache.calcite.rel.metadata.RelMdColumnOrigins.getColumnOrigins(RelMdColumnOrigins.java:77)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins_$(Unknown Source) 
> ~[?:?]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins(Unknown Source) 
> ~[?:?]
>  at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getColumnOrigins(RelMetadataQuery.java:345)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveExpandDistinctAggregatesRule.onMatch(HiveExpandDistinctAggregatesRule.java:168)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:315)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:415) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:280)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:211) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:198) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2363)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2314)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:2031)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1780)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1680)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:1043)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:154) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1439)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:478)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12296)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> 

[jira] [Updated] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20955:
---
Status: Open  (was: Patch Available)

> Calcite Rule HiveExpandDistinctAggregatesRule seems throwing 
> IndexOutOfBoundsException
> --
>
> Key: HIVE-20955
> URL: https://issues.apache.org/jira/browse/HIVE-20955
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20955.1.patch, HIVE-20955.2.patch
>
>
>  
> Add the following query to the Druid test 
> ql/src/test/queries/clientpositive/druidmini_expressions.q
> {code}
> select count(distinct `__time`, cint) from (select * from 
> druid_table_alltypesorc) as src;
> {code}
> leads to the error {code}2018-11-21T07:36:39,449 ERROR [main] QTestUtil: Client 
> execution failed with error code = 4 running "{code}
> with the following exception stack: 
> {code}
> 2018-11-21T07:36:39,443 ERROR [ecd48683-0286-4cb4-b0ad-e150fab51038 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:310)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:293)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:41)
>  ~[guava-19.0.jar:?]
>  at 
> org.apache.calcite.rel.metadata.RelMdColumnOrigins.getColumnOrigins(RelMdColumnOrigins.java:77)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins_$(Unknown Source) 
> ~[?:?]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins(Unknown Source) 
> ~[?:?]
>  at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getColumnOrigins(RelMetadataQuery.java:345)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveExpandDistinctAggregatesRule.onMatch(HiveExpandDistinctAggregatesRule.java:168)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:315)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:415) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:280)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:211) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:198) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2363)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2314)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:2031)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1780)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1680)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:1043)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:154) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1439)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:478)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12296)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> 

[jira] [Commented] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712099#comment-16712099
 ] 

Hive QA commented on HIVE-21007:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
47s{color} | {color:blue} ql in master has 2312 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} ql: The patch generated 1 new + 39 unchanged - 0 fixed 
= 40 total (was 39) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
55s{color} | {color:red} ql generated 1 new + 2312 unchanged - 0 fixed = 2313 
total (was 2312) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  
org.apache.hadoop.hive.ql.parse.TezCompiler.removeSemiJoinEdgesForUnion(OptimizeTezProcContext)
 makes inefficient use of keySet iterator instead of entrySet iterator  At 
TezCompiler.java:keySet iterator instead of entrySet iterator  At 
TezCompiler.java:[line 1363] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15196/dev-support/hive-personality.sh
 |
| git revision | master / 9f2e8e6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15196/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15196/yetus/new-findbugs-ql.html
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15196/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.
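The FindBugs finding reported above (keySet iterator instead of entrySet iterator in `removeSemiJoinEdgesForUnion`) refers to a standard inefficiency pattern: iterating `keySet()` and calling `get(key)` performs a second lookup per entry. A small self-contained illustration of the pattern and its idiomatic fix follows; the class and method names are invented for the demo, not taken from `TezCompiler.java`.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrates the pattern FindBugs flags: keySet() + get(key) does an
// extra map lookup per entry, while entrySet() yields both key and
// value in a single pass.
public class EntrySetDemo {
    public static int sumInefficient(Map<String, Integer> m) {
        int sum = 0;
        for (String k : m.keySet()) {   // flagged: redundant get() per key
            sum += m.get(k);
        }
        return sum;
    }

    public static int sumIdiomatic(Map<String, Integer> m) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) { // single pass
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        // Both variants compute the same result; only the lookup count differs.
        System.out.println(sumInefficient(m) == sumIdiomatic(m));
    }
}
```

Both loops are O(n), but the `entrySet()` form avoids one hash lookup per iteration, which is why the warning counts it as inefficient rather than incorrect.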



> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch
>
>
> The Tez compiler has the ability to push a JOIN within a UNION (by replicating 
> the join on each branch). If this JOIN has an outgoing (or incoming) SJ branch, 
> this could corrupt the plan and end up generating an incorrect plan.
> As a safe measure, any SJ branch after a UNION should be removed (until the 
> logic is improved to better handle SJ branches).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712137#comment-16712137
 ] 

Hive QA commented on HIVE-21007:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950895/HIVE-21007.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15650 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15196/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15196/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15196/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12950895 - PreCommit-HIVE-Build

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch
>
>
> Tez compiler has the ability to push a JOIN within a UNION (by replicating the join 
> on each branch). If this JOIN has an outgoing (or incoming) SJ branch, the rewrite 
> can corrupt the plan and generate an incorrect one.
> As a safety measure, any SJ branch after a UNION should be removed (until the 
> logic is improved to better handle SJ branches)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20988) Wrong results for group by queries with primary key on multiple columns

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712150#comment-16712150
 ] 

Hive QA commented on HIVE-20988:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
46s{color} | {color:blue} ql in master has 2312 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15198/dev-support/hive-personality.sh
 |
| git revision | master / 9f2e8e6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15198/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Wrong results for group by queries with primary key on multiple columns
> ---
>
> Key: HIVE-20988
> URL: https://issues.apache.org/jira/browse/HIVE-20988
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20988.1.patch, HIVE-20988.2.patch, 
> HIVE-20988.3.patch, HIVE-20988.4.patch, HIVE-20988.5.patch, 
> HIVE-20988.6.patch, HIVE-20988.7.patch
>
>
> If a table has a multi-column primary key, the group-by optimization ends up 
> removing the GROUP BY, which is semantically incorrect.
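The semantics at stake can be sketched in plain Python (illustrative only; the rows and columns are invented, and this is not Hive's optimizer code). Dropping a GROUP BY is only sound when the grouped columns form a unique key; grouping by a proper subset of a composite key must still aggregate.

```python
# (a, b) is a composite primary key; column a alone is not unique.
rows = [
    (1, 10, 5.0),
    (1, 20, 7.0),
    (2, 10, 3.0),
]

def group_sum(rows, key):
    """Mimics SELECT key, SUM(v) FROM rows GROUP BY key."""
    acc = {}
    for r in rows:
        acc[key(r)] = acc.get(key(r), 0.0) + r[2]
    return acc

# Grouping by the full key (a, b): every group holds exactly one row,
# so removing the GROUP BY would not change the result.
by_pk = group_sum(rows, lambda r: (r[0], r[1]))

# Grouping by a alone: 3 rows collapse into 2 groups, so the GROUP BY
# cannot be removed without changing the result (the HIVE-20988 bug).
by_a = group_sum(rows, lambda r: r[0])
```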



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20955:
---
Status: Patch Available  (was: Open)

> Calcite Rule HiveExpandDistinctAggregatesRule seems throwing 
> IndexOutOfBoundsException
> --
>
> Key: HIVE-20955
> URL: https://issues.apache.org/jira/browse/HIVE-20955
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20955.1.patch, HIVE-20955.2.patch
>
>
>  
> Added the following query to the Druid test 
> ql/src/test/queries/clientpositive/druidmini_expressions.q
> {code}
> select count(distinct `__time`, cint) from (select * from 
> druid_table_alltypesorc) as src;
> {code}
> leads to error \{code} 2018-11-21T07:36:39,449 ERROR [main] QTestUtil: Client 
> execution failed with error code = 4 running "\{code}
> with exception stack 
> {code}
> 2018-11-21T07:36:39,443 ERROR [ecd48683-0286-4cb4-b0ad-e150fab51038 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:310)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:293)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:41)
>  ~[guava-19.0.jar:?]
>  at 
> org.apache.calcite.rel.metadata.RelMdColumnOrigins.getColumnOrigins(RelMdColumnOrigins.java:77)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins_$(Unknown Source) 
> ~[?:?]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins(Unknown Source) 
> ~[?:?]
>  at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getColumnOrigins(RelMetadataQuery.java:345)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveExpandDistinctAggregatesRule.onMatch(HiveExpandDistinctAggregatesRule.java:168)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:315)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:415) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:280)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:211) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:198) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2363)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2314)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:2031)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1780)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1680)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:1043)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:154) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1439)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:478)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12296)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> 

[jira] [Updated] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20955:
---
Attachment: HIVE-20955.2.patch

> Calcite Rule HiveExpandDistinctAggregatesRule seems throwing 
> IndexOutOfBoundsException
> --
>
> Key: HIVE-20955
> URL: https://issues.apache.org/jira/browse/HIVE-20955
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20955.1.patch, HIVE-20955.2.patch
>
>
>  
> Added the following query to the Druid test 
> ql/src/test/queries/clientpositive/druidmini_expressions.q
> {code}
> select count(distinct `__time`, cint) from (select * from 
> druid_table_alltypesorc) as src;
> {code}
> leads to error \{code} 2018-11-21T07:36:39,449 ERROR [main] QTestUtil: Client 
> execution failed with error code = 4 running "\{code}
> with exception stack 
> {code}
> 2018-11-21T07:36:39,443 ERROR [ecd48683-0286-4cb4-b0ad-e150fab51038 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:310)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:293)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:41)
>  ~[guava-19.0.jar:?]
>  at 
> org.apache.calcite.rel.metadata.RelMdColumnOrigins.getColumnOrigins(RelMdColumnOrigins.java:77)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins_$(Unknown Source) 
> ~[?:?]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins(Unknown Source) 
> ~[?:?]
>  at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getColumnOrigins(RelMetadataQuery.java:345)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveExpandDistinctAggregatesRule.onMatch(HiveExpandDistinctAggregatesRule.java:168)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:315)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:415) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:280)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:211) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:198) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2363)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2314)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:2031)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1780)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1680)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:1043)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:154) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1439)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:478)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12296)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> 

[jira] [Updated] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21007:
---
Status: Open  (was: Patch Available)

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch, HIVE-21007.4.patch
>
>
> Tez compiler has the ability to push a JOIN within a UNION (by replicating the join 
> on each branch). If this JOIN has an outgoing (or incoming) SJ branch, the rewrite 
> can corrupt the plan and generate an incorrect one.
> As a safety measure, any SJ branch after a UNION should be removed (until the 
> logic is improved to better handle SJ branches)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21015) HCatLoader can't provide statistics for tables not in default DB

2018-12-06 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-21015:
--
Status: Patch Available  (was: In Progress)

> HCatLoader can't provide statistics for tables not in default DB
> ---
>
> Key: HIVE-21015
> URL: https://issues.apache.org/jira/browse/HIVE-21015
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21015.0.patch
>
>
> This is due to an earlier change (HIVE-20330) that does not take the database into 
> consideration when retrieving the proper InputJobInfo for the loader.
>  Found during testing:
> {code:java}
> 07:52:56 2018-12-05 07:52:16,599 [main] WARN  
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
>  - Couldn't get statistics from LoadFunc: 
> org.apache.hive.hcatalog.pig.HCatLoader@492fa72a
> 07:52:56 java.io.IOException: java.io.IOException: Could not calculate input 
> size for location (table) tpcds_3000_decimal_parquet.date_dim
> 07:52:56  at 
> org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:281)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getInputSizeFromLoader(InputSizeReducerEstimator.java:171)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:118)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:97)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.estimateNumberOfReducers(InputSizeReducerEstimator.java:80)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.estimateNumberOfReducers(JobControlCompiler.java:1148)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.calculateRuntimeReducers(JobControlCompiler.java:1115)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.adjustNumReducers(JobControlCompiler.java:1063)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:564)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:333)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:221)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:293)
> 07:52:56  at org.apache.pig.PigServer.launchPlan(PigServer.java:1475)
> 07:52:56  at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1460)
> 07:52:56  at org.apache.pig.PigServer.storeEx(PigServer.java:1119)
> 07:52:56  at org.apache.pig.PigServer.store(PigServer.java:1082)
> 07:52:56  at org.apache.pig.PigServer.openIterator(PigServer.java:995)
> 07:52:56  at 
> org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:782)
> 07:52:56  at 
> org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:383)
> 07:52:56  at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
> 07:52:56  at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
> 07:52:56  at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
> 07:52:56  at org.apache.pig.Main.run(Main.java:630)
> 07:52:56  at org.apache.pig.Main.main(Main.java:175)
> 07:52:56  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 07:52:56  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 07:52:56  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 07:52:56  at java.lang.reflect.Method.invoke(Method.java:498)
> 07:52:56  at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
> 07:52:56  at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
> 07:52:56 Caused by: java.io.IOException: Could not calculate input size for 
> location (table) tpcds_3000_decimal_parquet.date_dim
> 07:52:56  at 
> org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:276)
> 07:52:56  ... 29 more{code}
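A minimal sketch of the kind of fix implied by the description above (hypothetical Python, not the actual HCatLoader/Java code; the `put_info` helper and cache names are invented for illustration): keying cached job info by table name alone collides across databases, while keying by (database, table) keeps entries distinct.

```python
# Hypothetical caches for InputJobInfo-like entries.
by_table = {}
by_db_and_table = {}

def put_info(db, table, info):
    by_table[table] = info                  # buggy: ignores the database
    by_db_and_table[(db, table)] = info     # fixed: database is part of the key

put_info("default", "date_dim", "info-default")
put_info("tpcds_3000_decimal_parquet", "date_dim", "info-tpcds")

# The table-only key silently lost the default-DB entry; the
# (db, table) key retained both.
```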



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21015) HCatLoader can't provide statistics for tables not in default DB

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711554#comment-16711554
 ] 

Hive QA commented on HIVE-21015:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
24s{color} | {color:blue} hcatalog/hcatalog-pig-adapter in master has 2 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hcatalog/hcatalog-pig-adapter: The patch generated 0 
new + 255 unchanged - 13 fixed = 255 total (was 268) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15193/dev-support/hive-personality.sh
 |
| git revision | master / 9f2e8e6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: hcatalog/hcatalog-pig-adapter U: hcatalog/hcatalog-pig-adapter |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15193/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HCatLoader can't provide statistics for tables not in default DB
> ---
>
> Key: HIVE-21015
> URL: https://issues.apache.org/jira/browse/HIVE-21015
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21015.0.patch
>
>
> This is due to an earlier change (HIVE-20330) that does not take the database into 
> consideration when retrieving the proper InputJobInfo for the loader.
>  Found during testing:
> {code:java}
> 07:52:56 2018-12-05 07:52:16,599 [main] WARN  
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
>  - Couldn't get statistics from LoadFunc: 
> org.apache.hive.hcatalog.pig.HCatLoader@492fa72a
> 07:52:56 java.io.IOException: java.io.IOException: Could not calculate input 
> size for location (table) tpcds_3000_decimal_parquet.date_dim
> 07:52:56  at 
> org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:281)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getInputSizeFromLoader(InputSizeReducerEstimator.java:171)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:118)
> 07:52:56  at 
> 

[jira] [Commented] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711335#comment-16711335
 ] 

Hive QA commented on HIVE-20615:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
59s{color} | {color:blue} standalone-metastore/metastore-server in master has 
184 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15191/dev-support/hive-personality.sh
 |
| git revision | master / 8b968c7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15191/yetus/whitespace-eol.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15191/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.1.patch, HIVE-20615.1.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711386#comment-16711386
 ] 

Hive QA commented on HIVE-20615:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950815/HIVE-20615.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 15631 tests 
executed
*Failed tests:*
{noformat}
TestCachedStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=227)
TestCatalogCaching - did not produce a TEST-*.xml file (likely timed out) 
(batchId=227)
TestDeadline - did not produce a TEST-*.xml file (likely timed out) 
(batchId=227)
TestMetaStoreEventListenerOnlyOnCommit - did not produce a TEST-*.xml file 
(likely timed out) (batchId=227)
TestMetaStoreListenersError - did not produce a TEST-*.xml file (likely timed 
out) (batchId=227)
TestMetaStoreSchemaInfo - did not produce a TEST-*.xml file (likely timed out) 
(batchId=227)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15191/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15191/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15191/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12950815 - PreCommit-HIVE-Build

> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.1.patch, HIVE-20615.1.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21011) Upgrade MurmurHash 2.0 to 3.0 in vectorized map and reduce operators

2018-12-06 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712216#comment-16712216
 ] 

Gopal V commented on HIVE-21011:


LGTM - +1

The Uniform RS hash function isn't required to be backwards compatible, and the 
new FIXED bucketing_version=2 is also Murmur3.

> Upgrade MurmurHash 2.0 to 3.0 in vectorized map and reduce operators
> 
>
> Key: HIVE-21011
> URL: https://issues.apache.org/jira/browse/HIVE-21011
> Project: Hive
>  Issue Type: Improvement
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
>  Labels: pull-request-available
>
> HIVE-20873 improved map join performance by using MurmurHash 3.0. However, 
> more operators can use it: VectorMapJoinCommonOperator and 
> VectorReduceSinkUniformHashOperator still use MurmurHash 2.0, so they can be 
> upgraded to MurmurHash 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712264#comment-16712264
 ] 

Hive QA commented on HIVE-21007:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
50s{color} | {color:blue} ql in master has 2312 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15201/dev-support/hive-personality.sh
 |
| git revision | master / 83d1fd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15201/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch, HIVE-21007.4.patch
>
>
> The Tez compiler can push a JOIN below a UNION (by replicating the join on 
> each branch). If that JOIN has an outgoing (or incoming) semi-join (SJ) 
> branch, the rewrite can corrupt the plan and produce incorrect results.
> As a safe measure, any SJ branch after a UNION should be removed (until the 
> logic is improved to handle SJ branches properly).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20988) Wrong results for group by queries with primary key on multiple columns

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20988:
---
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

> Wrong results for group by queries with primary key on multiple columns
> ---
>
> Key: HIVE-20988
> URL: https://issues.apache.org/jira/browse/HIVE-20988
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20988.1.patch, HIVE-20988.2.patch, 
> HIVE-20988.3.patch, HIVE-20988.4.patch, HIVE-20988.5.patch, 
> HIVE-20988.6.patch, HIVE-20988.7.patch
>
>
> If a table has a multi-column primary key, the group-by optimization ends up 
> removing the GROUP BY, which is semantically incorrect.
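A hedged sketch of the safety condition involved: with a composite key, dropping a GROUP BY is only valid when the grouping columns cover every key column, since a single column of a composite key can repeat on its own. The helper and names below are illustrative only, not the actual optimizer code.

```java
import java.util.List;
import java.util.Set;

public class GroupByPruneCheck {
    // Illustrative helper (not Hive's real optimizer code): removing a GROUP BY
    // based on a primary key is only safe when the grouping columns contain
    // *every* column of the key.
    static boolean canRemoveGroupBy(Set<String> groupByCols, List<String> primaryKeyCols) {
        return groupByCols.containsAll(primaryKeyCols);
    }

    public static void main(String[] args) {
        List<String> pk = List.of("dept_id", "emp_id"); // hypothetical composite primary key
        // Grouping on only one key column must NOT drop the aggregation:
        System.out.println(canRemoveGroupBy(Set.of("dept_id"), pk));           // false
        // Grouping on the full key determines each row uniquely:
        System.out.println(canRemoveGroupBy(Set.of("dept_id", "emp_id"), pk)); // true
    }
}
```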



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16100) Dynamic Sorted Partition optimizer loses sibling operators

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712236#comment-16712236
 ] 

Hive QA commented on HIVE-16100:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950912/HIVE-16100.7.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 65 failed/errored test(s), 15650 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_1] 
(batchId=24)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_2] 
(batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_6] 
(batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_8] 
(batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_gby_empty] 
(batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[extrapolate_part_stats_partial]
 (batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[implicit_cast_during_insert]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_dyn_part]
 (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_into6] 
(batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part10] 
(batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part1] 
(batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part3] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part4] 
(batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part8] 
(batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part9] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge3] (batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge4] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition2]
 (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition3]
 (batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition4]
 (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition5]
 (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge10] (batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge2] (batchId=97)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_diff_fs] 
(batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_incompat2] 
(batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats4] (batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_empty_dyn_part] 
(batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_groupby_reduce] 
(batchId=61)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_stats] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge10] 
(batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge2] 
(batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge_diff_fs]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_rp_windowing_2]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_opt_vectorization]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization2]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[limit_pushdown3]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage2] 
(batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_partitioned]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[load_dyn_part5]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mm_exim] 
(batchId=183)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[offset_limit]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge7] 
(batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge_incompat2]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[ptf] 
(batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dml] 
(batchId=166)

[jira] [Commented] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712239#comment-16712239
 ] 

Hive QA commented on HIVE-20955:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950917/HIVE-20955.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15200/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15200/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15200/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-12-07 01:42:38.674
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15200/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-12-07 01:42:38.677
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   9f2e8e6..83d1fd2  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 9f2e8e6 HIVE-20915: Make dynamic sort partition optimization 
available to HoS and MR (Yongzhi Chen, reviewed by Naveen Gangam)
+ git clean -f -d
Removing ${project.basedir}/
Removing itests/${project.basedir}/
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 83d1fd2 HIVE-20988: Wrong results for group by queries with 
primary key on multiple columns (Vineet Garg, reviewed by Jesus Camacho 
Rodriguez)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-12-07 01:42:45.399
+ rm -rf ../yetus_PreCommit-HIVE-Build-15200
+ mkdir ../yetus_PreCommit-HIVE-Build-15200
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15200
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15200/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveExpandDistinctAggregatesRule.java:
 does not exist in index
error: a/ql/src/test/queries/clientpositive/druidmini_expressions.q: does not 
exist in index
error: a/ql/src/test/results/clientpositive/druid/druidmini_expressions.q.out: 
does not exist in index
error: patch failed: 
ql/src/test/queries/clientpositive/druidmini_expressions.q:200
Falling back to three-way merge...
Applied patch to 'ql/src/test/queries/clientpositive/druidmini_expressions.q' 
with conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/druid/druidmini_expressions.q.out:2272
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/druid/druidmini_expressions.q.out' with 
conflicts.
Going to apply patch with: git apply -p1
/data/hiveptest/working/scratch/build.patch:54: trailing whitespace.
Map 1 
/data/hiveptest/working/scratch/build.patch:74: trailing whitespace.
sort order: 
/data/hiveptest/working/scratch/build.patch:79: trailing whitespace.
Reducer 2 
error: patch failed: 
ql/src/test/queries/clientpositive/druidmini_expressions.q:200
Falling back to three-way merge...
Applied patch to 'ql/src/test/queries/clientpositive/druidmini_expressions.q' 
with conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/druid/druidmini_expressions.q.out:2272
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/druid/druidmini_expressions.q.out' with 
conflicts.
U ql/src/test/queries/clientpositive/druidmini_expressions.q
U 

[jira] [Updated] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20955:
---
Attachment: HIVE-20955.3.patch

> Calcite Rule HiveExpandDistinctAggregatesRule seems throwing 
> IndexOutOfBoundsException
> --
>
> Key: HIVE-20955
> URL: https://issues.apache.org/jira/browse/HIVE-20955
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20955.1.patch, HIVE-20955.2.patch, 
> HIVE-20955.3.patch
>
>
>  
> Adding the following query to the Druid test 
> ql/src/test/queries/clientpositive/druidmini_expressions.q
> {code}
> select count(distinct `__time`, cint) from (select * from 
> druid_table_alltypesorc) as src;
> {code}
> leads to the error
> {code}
> 2018-11-21T07:36:39,449 ERROR [main] QTestUtil: Client execution failed with 
> error code = 4 running "
> {code}
> with the exception stack 
> {code}
> 2018-11-21T07:36:39,443 ERROR [ecd48683-0286-4cb4-b0ad-e150fab51038 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:310)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:293)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:41)
>  ~[guava-19.0.jar:?]
>  at 
> org.apache.calcite.rel.metadata.RelMdColumnOrigins.getColumnOrigins(RelMdColumnOrigins.java:77)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins_$(Unknown Source) 
> ~[?:?]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins(Unknown Source) 
> ~[?:?]
>  at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getColumnOrigins(RelMetadataQuery.java:345)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveExpandDistinctAggregatesRule.onMatch(HiveExpandDistinctAggregatesRule.java:168)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:315)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:415) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:280)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:211) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:198) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2363)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2314)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:2031)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1780)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1680)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:1043)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:154) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1439)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:478)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12296)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> 

[jira] [Commented] (HIVE-20988) Wrong results for group by queries with primary key on multiple columns

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712171#comment-16712171
 ] 

Hive QA commented on HIVE-20988:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950900/HIVE-20988.7.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15650 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15198/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15198/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15198/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12950900 - PreCommit-HIVE-Build

> Wrong results for group by queries with primary key on multiple columns
> ---
>
> Key: HIVE-20988
> URL: https://issues.apache.org/jira/browse/HIVE-20988
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20988.1.patch, HIVE-20988.2.patch, 
> HIVE-20988.3.patch, HIVE-20988.4.patch, HIVE-20988.5.patch, 
> HIVE-20988.6.patch, HIVE-20988.7.patch
>
>
> If a table has multi column primary key group by optimization ends up 
> removing group by which is semantically incorrect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16100) Dynamic Sorted Partition optimizer loses sibling operators

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712194#comment-16712194
 ] 

Hive QA commented on HIVE-16100:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
51s{color} | {color:blue} ql in master has 2312 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} ql: The patch generated 0 new + 10 unchanged - 1 
fixed = 10 total (was 11) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15199/dev-support/hive-personality.sh
 |
| git revision | master / 83d1fd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15199/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Dynamic Sorted Partition optimizer loses sibling operators
> --
>
> Key: HIVE-16100
> URL: https://issues.apache.org/jira/browse/HIVE-16100
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 1.2.1, 2.1.1, 2.2.0
>Reporter: Gopal V
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-16100.1.patch, HIVE-16100.2.patch, 
> HIVE-16100.2.patch, HIVE-16100.3.patch, HIVE-16100.4.patch, 
> HIVE-16100.5.patch, HIVE-16100.6.patch, HIVE-16100.7.patch
>
>
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedDynPartitionOptimizer.java#L173
> {code}
>   // unlink connection between FS and its parent
>   fsParent = fsOp.getParentOperators().get(0);
>   fsParent.getChildOperators().clear();
> {code}
> The optimizer discards any cases where the fsParent has another SEL child 
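The hazard can be sketched with a toy operator node (the names below are illustrative, not Hive's real Operator API): clearing the parent's child list drops every consumer, while removing only the one edge keeps sibling operators wired.

```java
import java.util.ArrayList;
import java.util.List;

public class SiblingUnlink {
    // Minimal stand-in for an operator-tree node.
    static class Op {
        final String name;
        final List<Op> children = new ArrayList<>();
        Op(String name) { this.name = name; }
    }

    // Safer unlink: detach only the one parent->child edge instead of
    // clearing the whole child list, so sibling operators survive.
    static void unlink(Op parent, Op child) {
        parent.children.remove(child);
    }

    public static void main(String[] args) {
        Op fsParent = new Op("SEL");
        Op fsOp = new Op("FS");       // the FileSink being re-parented
        Op sibling = new Op("SEL2");  // another consumer of the same parent
        fsParent.children.add(fsOp);
        fsParent.children.add(sibling);

        // The buggy form in the snippet above, fsParent.children.clear(),
        // would drop the sibling too. Targeted removal keeps it:
        unlink(fsParent, fsOp);
        System.out.println(fsParent.children.contains(sibling)); // true: sibling kept
    }
}
```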



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20966:
---
Status: Open  (was: Patch Available)

> Support bootstrap and incremental replication to a target with 
> hive.strict.managed.tables enabled.
> --
>
> Key: HIVE-20966
> URL: https://issues.apache.org/jira/browse/HIVE-20966
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR
> Attachments: HIVE-20966.01.patch, HIVE-20966.02.patch, 
> HIVE-20966.03.patch
>
>
> *Requirements:*
> Hive2 supports replication of managed tables. But in Hive3 with 
> hive.strict.managed.tables=true, some of these managed tables are converted 
> to ACID or MM tables, and some are converted to external tables based on the 
> rules below. 
> - Tables in Avro format with an external schema, tables using storage 
> handlers, and list-bucketed tables are converted to external tables.
> - Tables whose location is not owned by the "hive" user are converted to 
> external tables.
> - Hive-owned ORC-format tables are converted to full ACID transactional tables.
> - Hive-owned non-ORC-format tables are converted to MM transactional tables.
> REPL LOAD should apply these rules during bootstrap and incremental phases 
> and convert the tables accordingly.
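The four conversion rules above can be sketched as a simple classifier. The flags and method here are illustrative only, not Hive's real table-metadata API.

```java
public class TableConversionRules {
    enum Target { EXTERNAL, FULL_ACID, MM }

    // Illustrative-only encoding of the rules in the description above.
    static Target convert(boolean avroExternalSchema, boolean storageHandler,
                          boolean listBucketed, boolean hiveOwnsLocation,
                          boolean orcFormat) {
        // Rules 1 and 2: these tables become external.
        if (avroExternalSchema || storageHandler || listBucketed || !hiveOwnsLocation) {
            return Target.EXTERNAL;
        }
        // Rule 3: Hive-owned ORC -> full ACID; rule 4: non-ORC -> MM.
        return orcFormat ? Target.FULL_ACID : Target.MM;
    }

    public static void main(String[] args) {
        System.out.println(convert(false, false, false, true, true));   // FULL_ACID
        System.out.println(convert(false, false, false, false, true));  // EXTERNAL
        System.out.println(convert(false, false, false, true, false));  // MM
    }
}
```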



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20966:
---
Attachment: HIVE-20966.03.patch

> Support bootstrap and incremental replication to a target with 
> hive.strict.managed.tables enabled.
> --
>
> Key: HIVE-20966
> URL: https://issues.apache.org/jira/browse/HIVE-20966
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR
> Attachments: HIVE-20966.01.patch, HIVE-20966.02.patch, 
> HIVE-20966.03.patch
>
>
> *Requirements:*
> Hive2 supports replication of managed tables. But in Hive3 with 
> hive.strict.managed.tables=true, some of these managed tables are converted 
> to ACID or MM tables, and some are converted to external tables based on the 
> rules below. 
> - Tables in Avro format with an external schema, tables using storage 
> handlers, and list-bucketed tables are converted to external tables.
> - Tables whose location is not owned by the "hive" user are converted to 
> external tables.
> - Hive-owned ORC-format tables are converted to full ACID transactional tables.
> - Hive-owned non-ORC-format tables are converted to MM transactional tables.
> REPL LOAD should apply these rules during bootstrap and incremental phases 
> and convert the tables accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21005) LLAP: Reading more stripes per-split leaks ZlibCodecs

2018-12-06 Thread Nita Dembla (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nita Dembla reassigned HIVE-21005:
--

Assignee: Nita Dembla

> LLAP: Reading more stripes per-split leaks ZlibCodecs
> -
>
> Key: HIVE-21005
> URL: https://issues.apache.org/jira/browse/HIVE-21005
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Gopal V
>Assignee: Nita Dembla
>Priority: Major
>
> OrcEncodedDataReader calls ensureDataReader in a loop, overwriting the 
> previous stripe's reader:
> {code}
> for (int stripeIxMod = 0; stripeIxMod < stripeRgs.length; ++stripeIxMod) {
> 
> // 6.2. Ensure we have stripe metadata. We might have read it before 
> for RG filtering.
> if (stripeMetadatas != null) {
>   stripeMetadata = stripeMetadatas.get(stripeIxMod);
> } else {
> ...
>   ensureDataReader();
> ...
> }
> {code}
> {code}
>   private void ensureDataReader() throws IOException {
> ...
> stripeReader = orcReader.encodedReader(
> fileKey, dw, dw, useObjectPools ? POOL_FACTORY : null, trace, 
> useCodecPool, cacheTag);
> {code}
> creates new encodedReader without closing previous stripe's encoded reader.
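A minimal sketch of the fix the description implies: close the previous stripe's reader before the field is reassigned. A toy AutoCloseable with a leak counter stands in for the real encoded reader; the class and names are illustrative, not Hive's API.

```java
public class StripeReaderLoop {
    // Counter of open readers; a leak leaves this above zero.
    static int open = 0;

    // Toy stand-in for the per-stripe encoded reader.
    static class Reader implements AutoCloseable {
        Reader() { open++; }
        @Override public void close() { open--; }
    }

    static Reader stripeReader;

    // The fix pattern: close the previous reader before overwriting the field,
    // instead of silently dropping it as in the buggy ensureDataReader().
    static void ensureDataReader() {
        if (stripeReader != null) {
            stripeReader.close();
        }
        stripeReader = new Reader();
    }

    public static void main(String[] args) {
        for (int stripeIxMod = 0; stripeIxMod < 3; stripeIxMod++) {
            ensureDataReader();   // re-created per stripe, as in the loop above
        }
        stripeReader.close();     // final cleanup
        System.out.println(open); // 0 -- nothing leaked
    }
}
```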



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20992) Split the config "hive.metastore.dbaccess.ssl.properties" into more meaningful configs

2018-12-06 Thread Morio Ramdenbourg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Morio Ramdenbourg updated HIVE-20992:
-
Description: 
HIVE-13044 brought in the ability to enable TLS encryption from the HMS Service 
to the HMSDB by configuring the following two properties:
 # _javax.jdo.option.ConnectionURL_: JDBC connect string for a JDBC metastore. 
To use SSL to encrypt/authenticate the connection, provide database-specific 
SSL flag in the connection URL. (E.g. "jdbc:postgresql://myhost/db?ssl=true")
 # _hive.metastore.dbaccess.ssl.properties_: Comma-separated SSL properties for 
the metastore to use when accessing the database via the JDO connection URL. (E.g. 
javax.net.ssl.trustStore=/tmp/truststore,javax.net.ssl.trustStorePassword=pwd)

However, the latter configuration option is opaque and poses some problems, the 
most glaring of which is that it takes in _any_ 
[java.lang.System|https://docs.oracle.com/javase/7/docs/api/java/lang/System.html]
 system property, whether it is 
[TLS-related|https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#InstallationAndCustomization]
 or not. This can cause some unintended side-effects for other components of 
the HMS, especially if it overrides an already-set system property. If the user 
truly wishes to add an unrelated Java property, setting it statically using the 
"-D" option of the _java_ command is more appropriate. Secondly, the truststore 
password is stored in plain text. We should add Hadoop Shims back to the HMS to 
prevent exposing these passwords, but this effort can be done after this ticket.

I propose we deprecate _hive.metastore.dbaccess.ssl.properties_, and add the 
following properties:
 * *_hive.metastore.dbaccess.use.SSL_*
 ** Set this to true to use SSL/TLS encryption from the HMS Service to 
the HMS backend store
 ** Default: false
 * *_javax.net.ssl.trustStore_*
 ** Truststore location
 ** Default: None
 ** E.g. _/tmp/truststore_

 *  *_javax.net.ssl.trustStorePassword_*
 ** Truststore password
 ** Default: None
 ** E.g. _password_

 * *_javax.net.ssl.trustStoreType_*
 ** Truststore type
 ** Default: JKS
 ** E.g. _pkcs12_

We should guide the user towards an easier TLS configuration experience. This 
is the minimum configuration necessary to configure TLS to the HMSDB. If we 
need other options, such as the keystore location/password for 
dual-authentication, then we can add those on afterwards.

Also, document these changes - 
[javax.jdo.option.ConnectionURL|https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-javax.jdo.option.ConnectionURL]
 does not have up-to-date documentation, and these new parameters will need 
documentation as well.

Note "TLS" refers to both SSL and TLS. TLS is simply the successor of SSL.

  was:
HIVE-13044 brought in the ability to enable TLS encryption from the HMS Service 
to the HMSDB by configuring the following two properties:
 # _javax.jdo.option.ConnectionURL_: JDBC connect string for a JDBC metastore. 
To use SSL to encrypt/authenticate the connection, provide database-specific 
SSL flag in the connection URL. (E.g. "jdbc:postgresql://myhost/db?ssl=true")
 # _hive.metastore.dbaccess.ssl.properties_: Comma-separated SSL properties for 
metastore to access database when JDO connection URL. (E.g. 
javax.net.ssl.trustStore=/tmp/truststore,javax.net.ssl.trustStorePassword=pwd)

However, the latter configuration option is opaque and poses some problems. The 
most glaring of which is it takes in _any_ 
[java.lang.System|https://docs.oracle.com/javase/7/docs/api/java/lang/System.html]
 system property, whether it is 
[TLS-related|https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#InstallationAndCustomization]
 or not. This can cause some unintended side-effects for other components of 
the HMS, especially if it overrides an already-set system property. If the user 
truly wishes to add an unrelated Java property, setting it statically using the 
"-D" option of the _java_ command is more appropriate. Secondly, the truststore 
password is stored in plain text. We should add Hadoop Shims back to the HMS to 
prevent exposing these passwords, but this effort can be done after this ticket.

I propose we deprecate _hive.metastore.dbaccess.ssl.properties_, and add the 
following properties:
 * *_hive.metastore.dbaccess.use.SSL_*
 ** Set this to true to for using SSL/TLS encryption from the HMS Service to 
the HMS backend store
 ** Default: false
 * *_javax.net.ssl.trustStore_*

 ** Truststore location
 ** Default: None
 ** E.g. _/tmp/truststore_
 *  *_javax.net.ssl.trustStorePassword_*

 ** Truststore password
 ** Default: None
 ** E.g. _password_
 * *_javax.net.ssl.trustStoreType_*

 ** Truststore type
 ** Default: JKS
 ** E.g. _pkcs12_

We should guide the user towards an easier TLS configuration experience. This 
is the minimum 

[jira] [Updated] (HIVE-20992) Split the config "hive.metastore.dbaccess.ssl.properties" into more meaningful configs

2018-12-06 Thread Morio Ramdenbourg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Morio Ramdenbourg updated HIVE-20992:
-
Description: 
HIVE-13044 brought in the ability to enable TLS encryption from the HMS Service 
to the HMSDB by configuring the following two properties:
 # _javax.jdo.option.ConnectionURL_: JDBC connect string for a JDBC metastore. 
To use SSL to encrypt/authenticate the connection, provide database-specific 
SSL flag in the connection URL. (E.g. "jdbc:postgresql://myhost/db?ssl=true")
 # _hive.metastore.dbaccess.ssl.properties_: Comma-separated SSL properties for 
the metastore to use when accessing the database via the JDO connection URL. (E.g. 
javax.net.ssl.trustStore=/tmp/truststore,javax.net.ssl.trustStorePassword=pwd)

However, the latter configuration option is opaque and poses some problems, the 
most glaring of which is that it takes in _any_ 
[java.lang.System|https://docs.oracle.com/javase/7/docs/api/java/lang/System.html]
 system property, whether it is 
[TLS-related|https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#InstallationAndCustomization]
 or not. This can cause some unintended side-effects for other components of 
the HMS, especially if it overrides an already-set system property. If the user 
truly wishes to add an unrelated Java property, setting it statically using the 
"-D" option of the _java_ command is more appropriate. Secondly, the truststore 
password is stored in plain text. We should add Hadoop Shims back to the HMS to 
prevent exposing these passwords, but this effort can be done after this ticket.

I propose we deprecate _hive.metastore.dbaccess.ssl.properties_, and add the 
following properties:
 * *_hive.metastore.dbaccess.use.SSL_*
 ** Set this to true to use SSL/TLS encryption from the HMS Service to 
the HMS backend store
 ** Default: false
 * *_javax.net.ssl.trustStore_*

 ** Truststore location
 ** Default: None
 ** E.g. _/tmp/truststore_
 *  *_javax.net.ssl.trustStorePassword_*

 ** Truststore password
 ** Default: None
 ** E.g. _password_
 * *_javax.net.ssl.trustStoreType_*

 ** Truststore type
 ** Default: JKS
 ** E.g. _pkcs12_

We should guide the user towards an easier TLS configuration experience. This 
is the minimum configuration necessary to configure TLS to the HMSDB. If we 
need other options, such as the keystore location/password for 
dual-authentication, then we can add those on afterwards.

Also, document these changes - 
[javax.jdo.option.ConnectionURL|https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-javax.jdo.option.ConnectionURL]
 does not have up-to-date documentation, and these new parameters will need 
documentation as well.

Note "TLS" refers to both SSL and TLS. TLS is simply the successor of SSL.

  was:
HIVE-13044 brought in the ability to enable TLS encryption from the HMS Service 
to the HMSDB by configuring the following two properties:
 # _javax.jdo.option.ConnectionURL_: JDBC connect string for a JDBC metastore. 
To use SSL to encrypt/authenticate the connection, provide database-specific 
SSL flag in the connection URL. (E.g. "jdbc:postgresql://myhost/db?ssl=true")
 # _hive.metastore.dbaccess.ssl.properties_: Comma-separated SSL properties for 
metastore to access database when JDO connection URL. (E.g. 
javax.net.ssl.trustStore=/tmp/truststore,javax.net.ssl.trustStorePassword=pwd)

However, the latter configuration option is opaque and poses some problems. The 
most glaring of which is it takes in _any_ 
[java.lang.System|https://docs.oracle.com/javase/7/docs/api/java/lang/System.html]
 system property, whether it is 
[TLS-related|https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#InstallationAndCustomization]
 or not. This can cause some unintended side-effects for other components of 
the HMS, especially if it overrides an already-set system property. If the user 
truly wishes to add an unrelated Java property, setting it statically using the 
"-D" option of the _java_ command is more appropriate. Secondly, the truststore 
password is stored in plain text. We should add Hadoop Shims back to the HMS to 
prevent exposing these passwords, but this effort can be done after this ticket.

I propose we deprecate _hive.metastore.dbaccess.ssl.properties_, and add the 
following properties:
 * *_hive.metastore.dbaccess.ssl.use.SSL_*
 ** Set this to true to use TLS encryption from the HMS Service to the HMSDB
 * *_hive.metastore.dbaccess.ssl.truststore.path_*
 ** TLS truststore file location
 ** Java property: _javax.net.ssl.trustStore_
 ** E.g. _/tmp/truststore_
 * *_hive.metastore.dbaccess.ssl.truststore.password_*
 ** Password of the truststore file
 ** Java property: _javax.net.ssl.trustStorePassword_
 ** E.g. _pwd_
 * _*hive.metastore.dbaccess.ssl.truststore.type*_
 ** Type of the truststore file
 ** Java property: 

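The whitelisting behavior proposed in HIVE-20992 can be sketched in a few lines of Java. This is illustrative only — buildSslProperties and the conf map are hypothetical, not actual HMS code: when hive.metastore.dbaccess.use.SSL is true, only the three TLS-related javax.net.ssl properties are forwarded, instead of passing arbitrary system properties through as hive.metastore.dbaccess.ssl.properties does today.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the proposed config split: forward only the three
// TLS-related javax.net.ssl properties, and only when SSL is enabled.
public class MetastoreTlsConfig {
    static final String USE_SSL = "hive.metastore.dbaccess.use.SSL";

    static Map<String, String> buildSslProperties(Map<String, String> conf) {
        Map<String, String> out = new LinkedHashMap<>();
        if (!Boolean.parseBoolean(conf.getOrDefault(USE_SSL, "false"))) {
            return out; // TLS disabled (the proposed default): forward nothing
        }
        for (String key : new String[] {
                "javax.net.ssl.trustStore",
                "javax.net.ssl.trustStorePassword",
                "javax.net.ssl.trustStoreType"}) {
            String v = conf.get(key);
            if (v != null) {
                out.put(key, v);
            }
        }
        // the ticket proposes JKS as the default truststore type
        out.putIfAbsent("javax.net.ssl.trustStoreType", "JKS");
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put(USE_SSL, "true");
        conf.put("javax.net.ssl.trustStore", "/tmp/truststore");
        conf.put("javax.net.ssl.trustStorePassword", "pwd");
        conf.put("some.unrelated.property", "ignored"); // no longer forwarded
        System.out.println(buildSslProperties(conf));
    }
}
```

The point of the filter is exactly the complaint in the description: an unrelated property in the old comma-separated list would silently become a JVM-wide System property; with an explicit whitelist it simply never reaches the JVM.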
[jira] [Commented] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712282#comment-16712282
 ] 

Hive QA commented on HIVE-21007:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950919/HIVE-21007.4.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 15650 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_rp_limit]
 (batchId=171)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=256)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=256)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=256)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15201/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15201/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15201/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12950919 - PreCommit-HIVE-Build

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch, HIVE-21007.4.patch
>
>
> Tez compiler has the ability to push JOIN within UNION (by replicating join 
> on each branch). If this JOIN had an outgoing (or incoming) SJ branch, it could 
> mess up the plan and end up generating an incorrect plan.
> As a safe measure, any SJ branch after UNION should be removed (until we 
> improve the logic to better handle SJ branches)





[jira] [Updated] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20966:
---
Status: Patch Available  (was: Open)

> Support bootstrap and incremental replication to a target with 
> hive.strict.managed.tables enabled.
> --
>
> Key: HIVE-20966
> URL: https://issues.apache.org/jira/browse/HIVE-20966
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR
> Attachments: HIVE-20966.01.patch, HIVE-20966.02.patch, 
> HIVE-20966.03.patch
>
>
> *Requirements:*
> Hive2 supports replication of managed tables. But in Hive3 with 
> hive.strict.managed.tables=true, some of these managed tables are converted 
> to ACID or MM tables. Also, some of them are converted to external tables 
> based on the rules below. 
> - Avro format with external schema, storage handlers, and list-bucketed tables 
> are converted to external tables.
> - Locations not owned by the "hive" user are converted to external tables.
> - Hive-owned ORC format tables are converted to full ACID transactional tables.
> - Hive-owned non-ORC format tables are converted to MM transactional tables.
> REPL LOAD should apply these rules during bootstrap and incremental phases 
> and convert the tables accordingly.





[jira] [Updated] (HIVE-21005) LLAP: Reading more stripes per-split leaks ZlibCodecs

2018-12-06 Thread Nita Dembla (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nita Dembla updated HIVE-21005:
---
Attachment: HIVE-21005.patch

> LLAP: Reading more stripes per-split leaks ZlibCodecs
> -
>
> Key: HIVE-21005
> URL: https://issues.apache.org/jira/browse/HIVE-21005
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Gopal V
>Assignee: Nita Dembla
>Priority: Major
> Attachments: HIVE-21005.patch
>
>
> OrcEncodedDataReader calls ensureDataReader in a loop, overwriting stripeReader on each iteration
> {code}
> for (int stripeIxMod = 0; stripeIxMod < stripeRgs.length; ++stripeIxMod) {
> 
> // 6.2. Ensure we have stripe metadata. We might have read it before 
> for RG filtering.
> if (stripeMetadatas != null) {
>   stripeMetadata = stripeMetadatas.get(stripeIxMod);
> } else {
> ...
>   ensureDataReader();
> ...
> }
> {code}
> {code}
>   private void ensureDataReader() throws IOException {
> ...
> stripeReader = orcReader.encodedReader(
> fileKey, dw, dw, useObjectPools ? POOL_FACTORY : null, trace, 
> useCodecPool, cacheTag);
> {code}
> creates a new encodedReader without closing the previous stripe's encoded reader.





[jira] [Updated] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20955:
---
Status: Patch Available  (was: Open)

> Calcite Rule HiveExpandDistinctAggregatesRule seems throwing 
> IndexOutOfBoundsException
> --
>
> Key: HIVE-20955
> URL: https://issues.apache.org/jira/browse/HIVE-20955
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20955.1.patch, HIVE-20955.2.patch, 
> HIVE-20955.3.patch
>
>
>  
> Added the following query to the Druid test 
> ql/src/test/queries/clientpositive/druidmini_expressions.q
> {code}
> select count(distinct `__time`, cint) from (select * from 
> druid_table_alltypesorc) as src;
> {code}
> leads to error {code}2018-11-21T07:36:39,449 ERROR [main] QTestUtil: Client 
> execution failed with error code = 4 running "{code}
> with exception stack 
> {code}
> 2018-11-21T07:36:39,443 ERROR [ecd48683-0286-4cb4-b0ad-e150fab51038 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:310)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:293)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:41)
>  ~[guava-19.0.jar:?]
>  at 
> org.apache.calcite.rel.metadata.RelMdColumnOrigins.getColumnOrigins(RelMdColumnOrigins.java:77)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins_$(Unknown Source) 
> ~[?:?]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins(Unknown Source) 
> ~[?:?]
>  at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getColumnOrigins(RelMetadataQuery.java:345)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveExpandDistinctAggregatesRule.onMatch(HiveExpandDistinctAggregatesRule.java:168)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:315)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:415) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:280)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:211) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:198) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2363)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2314)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:2031)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1780)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1680)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:1043)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:154) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1439)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:478)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12296)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> 

[jira] [Updated] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20955:
---
Status: Open  (was: Patch Available)

> Calcite Rule HiveExpandDistinctAggregatesRule seems throwing 
> IndexOutOfBoundsException
> --
>
> Key: HIVE-20955
> URL: https://issues.apache.org/jira/browse/HIVE-20955
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20955.1.patch, HIVE-20955.2.patch, 
> HIVE-20955.3.patch
>
>
>  
> Added the following query to the Druid test 
> ql/src/test/queries/clientpositive/druidmini_expressions.q
> {code}
> select count(distinct `__time`, cint) from (select * from 
> druid_table_alltypesorc) as src;
> {code}
> leads to error {code}2018-11-21T07:36:39,449 ERROR [main] QTestUtil: Client 
> execution failed with error code = 4 running "{code}
> with exception stack 
> {code}
> 2018-11-21T07:36:39,443 ERROR [ecd48683-0286-4cb4-b0ad-e150fab51038 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:310)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:293)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:41)
>  ~[guava-19.0.jar:?]
>  at 
> org.apache.calcite.rel.metadata.RelMdColumnOrigins.getColumnOrigins(RelMdColumnOrigins.java:77)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins_$(Unknown Source) 
> ~[?:?]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins(Unknown Source) 
> ~[?:?]
>  at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getColumnOrigins(RelMetadataQuery.java:345)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveExpandDistinctAggregatesRule.onMatch(HiveExpandDistinctAggregatesRule.java:168)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:315)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:415) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:280)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:211) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:198) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2363)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2314)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:2031)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1780)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1680)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:1043)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:154) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1439)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:478)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12296)
>  [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> 

[jira] [Updated] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21007:
---
Status: Open  (was: Patch Available)

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch, HIVE-21007.4.patch, HIVE-21007.5.patch
>
>
> Tez compiler has the ability to push JOIN within UNION (by replicating join 
> on each branch). If this JOIN had an outgoing (or incoming) SJ branch, it could 
> mess up the plan and end up generating an incorrect plan.
> As a safe measure, any SJ branch after UNION should be removed (until we 
> improve the logic to better handle SJ branches)





[jira] [Updated] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21007:
---
Status: Patch Available  (was: Open)

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch, HIVE-21007.4.patch, HIVE-21007.5.patch
>
>
> Tez compiler has the ability to push JOIN within UNION (by replicating join 
> on each branch). If this JOIN had an outgoing (or incoming) SJ branch, it could 
> mess up the plan and end up generating an incorrect plan.
> As a safe measure, any SJ branch after UNION should be removed (until we 
> improve the logic to better handle SJ branches)





[jira] [Updated] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21007:
---
Attachment: HIVE-21007.5.patch

> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch, HIVE-21007.4.patch, HIVE-21007.5.patch
>
>
> Tez compiler has the ability to push JOIN within UNION (by replicating join 
> on each branch). If this JOIN had an outgoing (or incoming) SJ branch, it could 
> mess up the plan and end up generating an incorrect plan.
> As a safe measure, any SJ branch after UNION should be removed (until we 
> improve the logic to better handle SJ branches)





[jira] [Updated] (HIVE-20733) GenericUDFOPEqualNS may not use = in plan descriptions

2018-12-06 Thread David Lavati (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-20733:

Attachment: HIVE-20733.2.patch
Status: Patch Available  (was: In Progress)

> GenericUDFOPEqualNS may not use = in plan descriptions
> --
>
> Key: HIVE-20733
> URL: https://issues.apache.org/jira/browse/HIVE-20733
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: David Lavati
>Priority: Major
> Attachments: HIVE-20733.2.patch, HIVE-20733.patch
>
>
> right now GenericUDFOPEqualNS is displayed as "=" in explain plans; however, it 
> should be "<=>".
> This may cause some confusion...
> related qtest: is_distinct_from.q
> same: GenericUDFOPNotEqualNS



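The confusion the ticket describes matters because "=" and "<=>" differ on NULLs: NULL = NULL evaluates to NULL, while the null-safe NULL <=> NULL evaluates to true, so an explain plan rendering both as "=" is ambiguous. A toy Java sketch of the display fix — the display helper below is hypothetical and only mimics the shape of a UDF's display string, not the actual GenericUDF classes:

```java
// Illustrative model of how a UDF's explain-plan text is built from its
// operator symbol and child expressions. Not the actual Hive classes.
public class NullSafeDisplay {
    // Shape of a getDisplayString-style method: join children with the operator.
    static String display(String op, String... children) {
        return "(" + children[0] + " " + op + " " + children[1] + ")";
    }

    public static void main(String[] args) {
        // Plain equality: NULL = NULL evaluates to NULL.
        System.out.println(display("=", "a", "b"));
        // Null-safe equality: NULL <=> NULL evaluates to true, so the plan
        // text must distinguish it from plain "=".
        System.out.println(display("<=>", "a", "b"));
    }
}
```

With the fix, a null-safe comparison in is_distinct_from.q would render as (a <=> b) instead of the ambiguous (a = b).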


[jira] [Commented] (HIVE-20733) GenericUDFOPEqualNS may not use = in plan descriptions

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711475#comment-16711475
 ] 

Hive QA commented on HIVE-20733:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 2312 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
35s{color} | {color:red} ql: The patch generated 1 new + 1 unchanged - 3 fixed 
= 2 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15192/dev-support/hive-personality.sh
 |
| git revision | master / 8b968c7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15192/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15192/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> GenericUDFOPEqualNS may not use = in plan descriptions
> --
>
> Key: HIVE-20733
> URL: https://issues.apache.org/jira/browse/HIVE-20733
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: David Lavati
>Priority: Major
> Attachments: HIVE-20733.2.patch, HIVE-20733.patch
>
>
> Right now GenericUDFOPEqualNS is displayed as "=" in explain plans; however, it 
> should be "<=>".
> This may cause some confusion.
> related qtest: is_distinct_from.q
> same: GenericUDFOPNotEqualNS
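
For context, the two operators differ exactly when a NULL operand is involved, which is why rendering both as "=" in explain plans is confusing. A minimal sketch of the two semantics (illustrative stand-ins; this is not Hive's actual GenericUDF code):

```java
// NullSafeEquals: contrasts SQL "=" with the null-safe "<=>" semantics.
public class NullSafeEquals {

    // Plain SQL "=": any NULL operand yields NULL (modeled as a null Boolean).
    static Boolean eq(Object a, Object b) {
        if (a == null || b == null) {
            return null;
        }
        return a.equals(b);
    }

    // Null-safe "<=>": NULL <=> NULL is true; the result is never NULL.
    static boolean eqNullSafe(Object a, Object b) {
        if (a == null && b == null) {
            return true;
        }
        if (a == null || b == null) {
            return false;
        }
        return a.equals(b);
    }

    public static void main(String[] args) {
        System.out.println(eq(null, null));          // prints "null"
        System.out.println(eqNullSafe(null, null));  // prints "true"
        System.out.println(eqNullSafe(1, 2));        // prints "false"
    }
}
```

A plan that prints "=" for both operators hides exactly the NULL-handling difference shown above.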



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20733) GenericUDFOPEqualNS may not use = in plan descriptions

2018-12-06 Thread David Lavati (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-20733:

Status: In Progress  (was: Patch Available)

> GenericUDFOPEqualNS may not use = in plan descriptions
> --
>
> Key: HIVE-20733
> URL: https://issues.apache.org/jira/browse/HIVE-20733
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: David Lavati
>Priority: Major
> Attachments: HIVE-20733.2.patch, HIVE-20733.patch
>
>
> Right now GenericUDFOPEqualNS is displayed as "=" in explain plans; however, it 
> should be "<=>".
> This may cause some confusion.
> related qtest: is_distinct_from.q
> same: GenericUDFOPNotEqualNS



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21015) HCatLoader can't provide statistics for tables not in default DB

2018-12-06 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-21015:
--
Description: 
This is due to a former change (HIVE-20330) that does not take database into 
consideration when retrieving the proper InputJobInfo for the loader.
 Found during testing:
{code:java}
07:52:56 2018-12-05 07:52:16,599 [main] WARN  
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
 - Couldn't get statistics from LoadFunc: 
org.apache.hive.hcatalog.pig.HCatLoader@492fa72a
07:52:56 java.io.IOException: java.io.IOException: Could not calculate input 
size for location (table) tpcds_3000_decimal_parquet.date_dim
07:52:56at 
org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:281)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getInputSizeFromLoader(InputSizeReducerEstimator.java:171)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:118)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:97)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.estimateNumberOfReducers(InputSizeReducerEstimator.java:80)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.estimateNumberOfReducers(JobControlCompiler.java:1148)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.calculateRuntimeReducers(JobControlCompiler.java:1115)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.adjustNumReducers(JobControlCompiler.java:1063)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:564)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:333)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:221)
07:52:56at 
org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:293)
07:52:56at org.apache.pig.PigServer.launchPlan(PigServer.java:1475)
07:52:56at 
org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1460)
07:52:56at org.apache.pig.PigServer.storeEx(PigServer.java:1119)
07:52:56at org.apache.pig.PigServer.store(PigServer.java:1082)
07:52:56at org.apache.pig.PigServer.openIterator(PigServer.java:995)
07:52:56at 
org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:782)
07:52:56at 
org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:383)
07:52:56at 
org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
07:52:56at 
org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
07:52:56at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
07:52:56at org.apache.pig.Main.run(Main.java:630)
07:52:56at org.apache.pig.Main.main(Main.java:175)
07:52:56at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
07:52:56at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
07:52:56at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
07:52:56at java.lang.reflect.Method.invoke(Method.java:498)
07:52:56at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
07:52:56at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
07:52:56 Caused by: java.io.IOException: Could not calculate input size for 
location (table) tpcds_3000_decimal_parquet.date_dim
07:52:56at 
org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:276)
07:52:56... 29 more{code}

  was:
This is due to a former change (HIVE-20330) that does not take database into 
consideration when retrieving the proper InputJobInfo for the loader.
Found during testing:
07:52:56 2018-12-05 07:52:16,599 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - Couldn't get statistics from LoadFunc: org.apache.hive.hcatalog.pig.HCatLoader@492fa72a
07:52:56 java.io.IOException: java.io.IOException: Could not calculate input size for location (table) tpcds_3000_decimal_parquet.date_dim
07:52:56   at org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:281)
07:52:56   at 
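The database-unaware lookup described above (HIVE-20330 not taking the database into consideration when retrieving the InputJobInfo) can be illustrated with a small sketch. `JobInfo`, its fields, and both lookup helpers are simplified hypothetical stand-ins, not the actual HCatalog `InputJobInfo` API:

```java
import java.util.ArrayList;
import java.util.List;

public class InputJobInfoLookup {

    // Simplified stand-in for HCatalog's InputJobInfo (hypothetical fields).
    static final class JobInfo {
        final String dbName;
        final String tableName;
        JobInfo(String dbName, String tableName) {
            this.dbName = dbName;
            this.tableName = tableName;
        }
    }

    // Table-only match: with two registered loads for "date_dim", the entry
    // for the default DB can shadow the one the loader actually needs.
    static JobInfo findByTable(List<JobInfo> infos, String table) {
        for (JobInfo info : infos) {
            if (info.tableName.equals(table)) {
                return info;
            }
        }
        return null;
    }

    // Database-aware match: both parts of the qualified name participate.
    static JobInfo find(List<JobInfo> infos, String db, String table) {
        for (JobInfo info : infos) {
            if (info.dbName.equals(db) && info.tableName.equals(table)) {
                return info;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<JobInfo> infos = new ArrayList<>();
        infos.add(new JobInfo("default", "date_dim"));
        infos.add(new JobInfo("tpcds_3000_decimal_parquet", "date_dim"));
        // Table-only lookup picks the first (wrong) entry:
        System.out.println(findByTable(infos, "date_dim").dbName);  // prints "default"
        // Database-aware lookup resolves the intended table:
        System.out.println(find(infos, "tpcds_3000_decimal_parquet", "date_dim").dbName);
    }
}
```

Keying the lookup on the qualified name is what lets statistics resolve for tables such as `tpcds_3000_decimal_parquet.date_dim` that live outside the default database.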

[jira] [Assigned] (HIVE-21015) HCatLoader can't provide statistics for tables not in default DB

2018-12-06 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita reassigned HIVE-21015:
-


> HCatLoader can't provide statistics for tables not in default DB
> ---
>
> Key: HIVE-21015
> URL: https://issues.apache.org/jira/browse/HIVE-21015
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>
> This is due to a former change (HIVE-20330) that does not take database into 
> consideration when retrieving the proper InputJobInfo for the loader.
> Found during testing:
> {code:java}
> 07:52:56 2018-12-05 07:52:16,599 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - Couldn't get statistics from LoadFunc: org.apache.hive.hcatalog.pig.HCatLoader@492fa72a
> 07:52:56 java.io.IOException: java.io.IOException: Could not calculate input size for location (table) tpcds_3000_decimal_parquet.date_dim
> 07:52:56   at org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:281)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getInputSizeFromLoader(InputSizeReducerEstimator.java:171)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:118)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:97)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.estimateNumberOfReducers(InputSizeReducerEstimator.java:80)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.estimateNumberOfReducers(JobControlCompiler.java:1148)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.calculateRuntimeReducers(JobControlCompiler.java:1115)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.adjustNumReducers(JobControlCompiler.java:1063)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:564)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:333)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:221)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:293)
> 07:52:56   at org.apache.pig.PigServer.launchPlan(PigServer.java:1475)
> 07:52:56   at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1460)
> 07:52:56   at org.apache.pig.PigServer.storeEx(PigServer.java:1119)
> 07:52:56   at org.apache.pig.PigServer.store(PigServer.java:1082)
> 07:52:56   at org.apache.pig.PigServer.openIterator(PigServer.java:995)
> 07:52:56   at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:782)
> 07:52:56   at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:383)
> 07:52:56   at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
> 07:52:56   at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
> 07:52:56   at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
> 07:52:56   at org.apache.pig.Main.run(Main.java:630)
> 07:52:56   at org.apache.pig.Main.main(Main.java:175)
> 07:52:56   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 07:52:56   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 07:52:56   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 07:52:56   at java.lang.reflect.Method.invoke(Method.java:498)
> 07:52:56   at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
> 07:52:56   at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
> 07:52:56 Caused by: java.io.IOException: Could not calculate input size for location (table) tpcds_3000_decimal_parquet.date_dim
> 07:52:56   at org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:276)
> 07:52:56   ... 29 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-21015) HCatLoader can't provide statistics for tables not in default DB

2018-12-06 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-21015 started by Adam Szita.
-
> HCatLoader can't provide statistics for tables not in default DB
> ---
>
> Key: HIVE-21015
> URL: https://issues.apache.org/jira/browse/HIVE-21015
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>
> This is due to a former change (HIVE-20330) that does not take database into 
> consideration when retrieving the proper InputJobInfo for the loader.
> Found during testing:
> {code:java}
> 07:52:56 2018-12-05 07:52:16,599 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - Couldn't get statistics from LoadFunc: org.apache.hive.hcatalog.pig.HCatLoader@492fa72a
> 07:52:56 java.io.IOException: java.io.IOException: Could not calculate input size for location (table) tpcds_3000_decimal_parquet.date_dim
> 07:52:56   at org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:281)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getInputSizeFromLoader(InputSizeReducerEstimator.java:171)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:118)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:97)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.estimateNumberOfReducers(InputSizeReducerEstimator.java:80)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.estimateNumberOfReducers(JobControlCompiler.java:1148)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.calculateRuntimeReducers(JobControlCompiler.java:1115)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.adjustNumReducers(JobControlCompiler.java:1063)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:564)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:333)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:221)
> 07:52:56   at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:293)
> 07:52:56   at org.apache.pig.PigServer.launchPlan(PigServer.java:1475)
> 07:52:56   at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1460)
> 07:52:56   at org.apache.pig.PigServer.storeEx(PigServer.java:1119)
> 07:52:56   at org.apache.pig.PigServer.store(PigServer.java:1082)
> 07:52:56   at org.apache.pig.PigServer.openIterator(PigServer.java:995)
> 07:52:56   at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:782)
> 07:52:56   at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:383)
> 07:52:56   at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
> 07:52:56   at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
> 07:52:56   at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
> 07:52:56   at org.apache.pig.Main.run(Main.java:630)
> 07:52:56   at org.apache.pig.Main.main(Main.java:175)
> 07:52:56   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 07:52:56   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 07:52:56   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 07:52:56   at java.lang.reflect.Method.invoke(Method.java:498)
> 07:52:56   at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
> 07:52:56   at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
> 07:52:56 Caused by: java.io.IOException: Could not calculate input size for location (table) tpcds_3000_decimal_parquet.date_dim
> 07:52:56   at org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:276)
> 07:52:56   ... 29 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21013) JdbcStorageHandler fail to find partition column in Oracle

2018-12-06 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712314#comment-16712314
 ] 

Jesus Camacho Rodriguez commented on HIVE-21013:


+1

> JdbcStorageHandler fail to find partition column in Oracle
> --
>
> Key: HIVE-21013
> URL: https://issues.apache.org/jira/browse/HIVE-21013
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21013.1.patch
>
>
> Stack:
> {code}
> ERROR : Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1543830849610_0048_1_00, diagnostics=[Task failed, 
> taskId=task_1543830849610_0048_1_00_05, diagnostics=[TaskAttempt 0 
> failed, info=[Error: Error while running task ( failure ) : 
> attempt_1543830849610_0048_1_00_05_0:java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: 
> java.io.IOException: 
> org.apache.hive.storage.jdbc.exception.HiveJdbcDatabaseAccessException: 
> Caught exception while trying to execute query:Cannot find salaries in sql 
> query salaries 
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
>   at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.IOException: java.io.IOException: 
> org.apache.hive.storage.jdbc.exception.HiveJdbcDatabaseAccessException: 
> Caught exception while trying to execute query:Cannot find salaries in sql 
> query salaries 
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:80)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
>   ... 16 more
> Caused by: java.io.IOException: java.io.IOException: 
> org.apache.hive.storage.jdbc.exception.HiveJdbcDatabaseAccessException: 
> Caught exception while trying to execute query:Cannot find salaries in sql 
> query salaries 
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
>   at 
> org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79)
>   at 
> org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:33)
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
>   at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:151)
>   at 
> org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
>   ... 18 more
> Caused by: java.io.IOException: 
> org.apache.hive.storage.jdbc.exception.HiveJdbcDatabaseAccessException: 
> Caught exception while trying to execute query:Cannot find 

[jira] [Commented] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712320#comment-16712320
 ] 

Hive QA commented on HIVE-20966:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
18s{color} | {color:blue} standalone-metastore/metastore-common in master has 
29 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
4s{color} | {color:blue} standalone-metastore/metastore-server in master has 
184 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 2312 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} hcatalog/server-extensions in master has 1 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} common: The patch generated 1 new + 427 unchanged - 0 
fixed = 428 total (was 427) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 55 new + 854 unchanged - 11 
fixed = 909 total (was 865) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
21s{color} | {color:red} itests/hive-unit: The patch generated 145 new + 658 
unchanged - 58 fixed = 803 total (was 716) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 16 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} standalone-metastore/metastore-server generated 4 new 
+ 184 unchanged - 0 fixed = 188 total (was 184) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
53s{color} | {color:red} ql generated 1 new + 2311 unchanged - 1 fixed = 2312 
total (was 2312) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} hcatalog/server-extensions generated 2 new + 1 
unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  Boxed value is unboxed and then immediately reboxed in 

[jira] [Commented] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712369#comment-16712369
 ] 

Hive QA commented on HIVE-20955:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950936/HIVE-20955.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15650 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_expressions]
 (batchId=194)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15203/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15203/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15203/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12950936 - PreCommit-HIVE-Build

> Calcite Rule HiveExpandDistinctAggregatesRule seems throwing 
> IndexOutOfBoundsException
> --
>
> Key: HIVE-20955
> URL: https://issues.apache.org/jira/browse/HIVE-20955
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20955.1.patch, HIVE-20955.2.patch, 
> HIVE-20955.3.patch
>
>
>  
> Added the following query to the Druid test 
> ql/src/test/queries/clientpositive/druidmini_expressions.q
> {code}
> select count(distinct `__time`, cint) from (select * from 
> druid_table_alltypesorc) as src;
> {code}
> leads to the error
> {code}
> 2018-11-21T07:36:39,449 ERROR [main] QTestUtil: Client execution failed with error code = 4 running
> {code}
> with exception stack 
> {code}
> 2018-11-21T07:36:39,443 ERROR [ecd48683-0286-4cb4-b0ad-e150fab51038 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:310)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:293)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:41)
>  ~[guava-19.0.jar:?]
>  at 
> org.apache.calcite.rel.metadata.RelMdColumnOrigins.getColumnOrigins(RelMdColumnOrigins.java:77)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins_$(Unknown Source) 
> ~[?:?]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins(Unknown Source) 
> ~[?:?]
>  at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getColumnOrigins(RelMetadataQuery.java:345)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveExpandDistinctAggregatesRule.onMatch(HiveExpandDistinctAggregatesRule.java:168)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:315)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:415) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:280)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:211) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:198) 
> ~[calcite-core-1.17.0.jar:1.17.0]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2363)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:2314)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:2031)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1780)
>  

[jira] [Updated] (HIVE-21016) Duplicate column name in GROUP BY statement causing Vertex failures

2018-12-06 Thread Bjorn Olsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bjorn Olsen updated HIVE-21016:
---
Description: 
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message. For complex queries this can 
result in a lot of debugging effort, whereas a simple error message could have 
saved some time.

To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name.

For example:

{{select count(*), party_id from party group by party_id, party_id;}}

Note the duplicate column in the GROUP BY.

This will fail with messages similar to below:

Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)
 at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
 ... 14 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
Error while processing vector batch (tag=0) 
ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)
 ... 17 more
 Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector

  was:
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message.

To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name.

{{For example:}}

{{select count(*), party_id from party group by party_id, party_id;}}

This will fail with messages similar to below:

Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)
 at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
 ... 14 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
Error while processing vector batch (tag=0) 
ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)
 ... 17 more
 Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector


> Duplicate column name in GROUP BY statement causing Vertex failures
> ---
>
> Key: HIVE-21016
> URL: https://issues.apache.org/jira/browse/HIVE-21016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Bjorn Olsen
>Priority: Major
>
> Hive queries fail with "Vertex failure" messages when the user submits a 
> query containing duplicate GROUP BY columns. The Hive query parser should 
> detect and reject this scenario with a meaningful error message, rather than 
> executing the query and failing with an obfuscated message. For complex 
> queries this can result in a lot of debugging effort, whereas a simple error 
> message could have saved some time.
> To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
> column name.
> {{For example:}}
> {{select count(*), party_id from party group by party_id, party_id;}}
> Note the duplicate column in the GROUP BY.
> This will fail with messages similar to below:
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
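The parser-side check the reporter asks for amounts to flagging repeated column names in the GROUP BY list before the plan is executed. A minimal sketch of that check (illustrative names only, not Hive's actual SemanticAnalyzer code), treating identifiers as case-insensitive the way HiveQL does:

```python
def duplicate_group_by_columns(columns):
    """Return GROUP BY columns that appear more than once.

    HiveQL identifiers are case-insensitive, so comparison is done on the
    lower-cased name. Illustrative sketch only -- not Hive's parser code.
    """
    seen = set()
    dupes = []
    for col in columns:
        key = col.lower()
        if key in seen:
            dupes.append(col)
        seen.add(key)
    return dupes

# A query like "group by party_id, party_id" would be rejected up front:
assert duplicate_group_by_columns(["party_id", "party_id"]) == ["party_id"]
assert duplicate_group_by_columns(["party_id", "region"]) == []
```

Failing fast with a message naming the repeated column would replace the ClassCastException that currently surfaces deep inside the vectorized reducer.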

[jira] [Updated] (HIVE-21016) Duplicate column name in GROUP BY statement causing Vertex failures

2018-12-06 Thread Bjorn Olsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bjorn Olsen updated HIVE-21016:
---
Description: 
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message. For complex queries this can 
result in a lot of debugging effort, whereas a simple error message could have 
saved some time.

To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name.

{{For example:}}

{{select count(*), party_id from party group by party_id, party_id;}}

Note the duplicate column in the GROUP BY.

This will fail with messages similar to below:

Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)
 at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
 ... 14 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
Error while processing vector batch (tag=0) 
ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)
 ... 17 more
 *Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector*

  was:
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message. For complex queries this can 
result in a lot of debugging effort, whereas a simple error message could have 
saved some time.

To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name.

{{For example:}}

{{select count(*), party_id from party group by party_id, party_id;}}

Note the duplicate column in the GROUP BY.

This will fail with messages similar to below:

Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)
 at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
 ... 14 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
Error while processing vector batch (tag=0) 
ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)
 ... 17 more
 Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector


> Duplicate column name in GROUP BY statement causing Vertex failures
> ---
>
> Key: HIVE-21016
> URL: https://issues.apache.org/jira/browse/HIVE-21016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Bjorn Olsen
>Priority: Major
>
> Hive queries fail with "Vertex failure" messages when the user submits a 
> query containing duplicate GROUP BY columns. The Hive query parser should 
> detect and reject this scenario with a meaningful error message, rather than 
> executing the query and failing with an obfuscated message. For complex 
> queries this can result in a lot of debugging effort, whereas a simple error 
> message could have saved some time.
> To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
> column name.
> {{For example:}}
> {{select count(*), party_id from party group by party_id, party_id;}}
> Note the duplicate column in 

[jira] [Updated] (HIVE-21011) Upgrade MurmurHash 2.0 to 3.0 in vectorized map and reduce operators

2018-12-06 Thread Teddy Choi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teddy Choi updated HIVE-21011:
--
Attachment: HIVE-21011.1.patch

> Upgrade MurmurHash 2.0 to 3.0 in vectorized map and reduce operators
> 
>
> Key: HIVE-21011
> URL: https://issues.apache.org/jira/browse/HIVE-21011
> Project: Hive
>  Issue Type: Improvement
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21011.1.patch
>
>
> HIVE-20873 improved map join performance by using MurmurHash 3.0. However, 
> there are more operators that can use it. VectorMapJoinCommonOperator and 
> VectorReduceSinkUniformHashOperator still use MurmurHash 2.0, so they can be 
> upgraded to MurmurHash 3.0.
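For context, the 32-bit x86 variant of MurmurHash3 (the algorithm family HIVE-20873 moved the map join to) fits in a short routine. This is a sketch of the public reference algorithm, not a copy of Hive's own Murmur3 utility class or its exact seeds and signatures:

```python
def murmur3_32(data: bytes, seed: int = 0) -> int:
    """MurmurHash3 x86 32-bit (public reference algorithm, unsigned result)."""
    c1, c2 = 0xcc9e2d51, 0x1b873593
    h = seed & 0xffffffff
    n = len(data)
    # Body: mix each little-endian 4-byte block into the running hash.
    for i in range(0, n - n % 4, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xffffffff
        k = ((k << 15) | (k >> 17)) & 0xffffffff  # rotl32(k, 15)
        k = (k * c2) & 0xffffffff
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xffffffff  # rotl32(h, 13)
        h = (h * 5 + 0xe6546b64) & 0xffffffff
    # Tail: fold in the 1-3 leftover bytes, if any.
    tail = data[n - n % 4:]
    k = 0
    if len(tail) == 3:
        k ^= tail[2] << 16
    if len(tail) >= 2:
        k ^= tail[1] << 8
    if len(tail) >= 1:
        k ^= tail[0]
        k = (k * c1) & 0xffffffff
        k = ((k << 15) | (k >> 17)) & 0xffffffff
        k = (k * c2) & 0xffffffff
        h ^= k
    # Finalization: xor in the length, then the fmix32 avalanche.
    h ^= n
    h ^= h >> 16
    h = (h * 0x85ebca6b) & 0xffffffff
    h ^= h >> 13
    h = (h * 0xc2b2ae35) & 0xffffffff
    h ^= h >> 16
    return h

# The empty input with seed 0 hashes to 0 in this variant.
assert murmur3_32(b"") == 0
```

Murmur3 adds the length xor and a stronger fmix32 avalanche step compared to Murmur2, which is why standardizing the remaining vectorized operators on it is worthwhile.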



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712329#comment-16712329
 ] 

Hive QA commented on HIVE-20966:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950933/HIVE-20966.03.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 15637 tests 
executed
*Failed tests:*
{noformat}
TestAlterTableMetadata - did not produce a TEST-*.xml file (likely timed out) 
(batchId=248)
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=248)
TestReplAcidTablesWithJsonMessage - did not produce a TEST-*.xml file (likely 
timed out) (batchId=248)
TestReplIncrementalLoadAcidTablesWithJsonMessage - did not produce a TEST-*.xml 
file (likely timed out) (batchId=248)
TestReplicationScenariosMigration - did not produce a TEST-*.xml file (likely 
timed out) (batchId=248)
TestSemanticAnalyzerHookLoading - did not produce a TEST-*.xml file (likely 
timed out) (batchId=248)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=259)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15202/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15202/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15202/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12950933 - PreCommit-HIVE-Build

> Support bootstrap and incremental replication to a target with 
> hive.strict.managed.tables enabled.
> --
>
> Key: HIVE-20966
> URL: https://issues.apache.org/jira/browse/HIVE-20966
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR
> Attachments: HIVE-20966.01.patch, HIVE-20966.02.patch, 
> HIVE-20966.03.patch
>
>
> *Requirements:*
> Hive2 supports replication of managed tables. But in Hive3 with 
> hive.strict.managed.tables=true, some of these managed tables are converted 
> to ACID or MM tables. Also, some of them are converted to external tables 
> based on below rules. 
> - Avro format with external schema, Storage handlers, List bucketed tabled 
> are converted to external tables.
> - Location not owned by "hive" user are converted to external table.
> - Hive owned ORC format are converted to full ACID transactional table.
> - Hive owned Non-ORC format are converted to MM transactional table.
> REPL LOAD should apply these rules during bootstrap and incremental phases 
> and convert the tables accordingly.
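The four conversion rules above can be read as a simple decision procedure that REPL LOAD applies per table. A sketch under the stated rules (function and flag names are illustrative, not the actual repl code):

```python
def target_table_type(storage_format, location_owned_by_hive,
                      avro_external_schema=False, storage_handler=False,
                      list_bucketed=False):
    """Classify a replicated managed table per the rules above (illustrative).

    Returns one of "EXTERNAL", "FULL_ACID" (ORC transactional), or
    "MM" (insert-only transactional).
    """
    # Avro with external schema, storage handlers, list-bucketed -> external.
    if avro_external_schema or storage_handler or list_bucketed:
        return "EXTERNAL"
    # Location not owned by the "hive" user -> external.
    if not location_owned_by_hive:
        return "EXTERNAL"
    # Hive-owned ORC -> full ACID; other hive-owned formats -> MM.
    return "FULL_ACID" if storage_format.upper() == "ORC" else "MM"

assert target_table_type("ORC", True) == "FULL_ACID"
assert target_table_type("TEXTFILE", True) == "MM"
assert target_table_type("ORC", False) == "EXTERNAL"
```

The same classification has to hold whether a table arrives during bootstrap or later via incremental events, so both phases funnel through one rule set.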





[jira] [Commented] (HIVE-20955) Calcite Rule HiveExpandDistinctAggregatesRule seems throwing IndexOutOfBoundsException

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712345#comment-16712345
 ] 

Hive QA commented on HIVE-20955:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
37s{color} | {color:blue} ql in master has 2312 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15203/dev-support/hive-personality.sh
 |
| git revision | master / 83d1fd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15203/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Calcite Rule HiveExpandDistinctAggregatesRule seems throwing 
> IndexOutOfBoundsException
> --
>
> Key: HIVE-20955
> URL: https://issues.apache.org/jira/browse/HIVE-20955
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20955.1.patch, HIVE-20955.2.patch, 
> HIVE-20955.3.patch
>
>
>  
> Added the following query to the Druid test 
> ql/src/test/queries/clientpositive/druidmini_expressions.q
> {code}
> select count(distinct `__time`, cint) from (select * from 
> druid_table_alltypesorc) as src;
> {code}
> leads to error
> {code}
> 2018-11-21T07:36:39,449 ERROR [main] QTestUtil: Client execution failed with error code = 4 running "
> {code}
> with exception stack 
> {code}
> 2018-11-21T07:36:39,443 ERROR [ecd48683-0286-4cb4-b0ad-e150fab51038 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:310)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:293)
>  ~[guava-19.0.jar:?]
>  at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:41)
>  ~[guava-19.0.jar:?]
>  at 
> org.apache.calcite.rel.metadata.RelMdColumnOrigins.getColumnOrigins(RelMdColumnOrigins.java:77)
>  ~[calcite-core-1.17.0.jar:1.17.0]
>  at GeneratedMetadataHandler_ColumnOrigin.getColumnOrigins_$(Unknown Source) 

[jira] [Updated] (HIVE-21016) Duplicate column name in GROUP BY statement causing Vertex failures

2018-12-06 Thread Bjorn Olsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bjorn Olsen updated HIVE-21016:
---
Description: 
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message.

To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name.

{{For example:}}

{{select count(*), party_id from party group by party_id, party_id;}}

This will fail with messages similar to below:

Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)
 at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
 ... 14 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
Error while processing vector batch (tag=0) 
ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)
 ... 17 more
 Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector

  was:
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message.

To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name.

{{For example:}}

{{select count(*), party_id from party group by party_id, party_id;}}

This will fail with messages similar to below:

Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)
 at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
 ... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)
 ... 17 more
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector


> Duplicate column name in GROUP BY statement causing Vertex failures
> ---
>
> Key: HIVE-21016
> URL: https://issues.apache.org/jira/browse/HIVE-21016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Bjorn Olsen
>Priority: Major
>
> Hive queries fail with "Vertex failure" messages when the user submits a 
> query containing duplicate GROUP BY columns. The Hive query parser should 
> detect and reject this scenario with a meaningful error message, rather than 
> executing the query and failing with an obfuscated message.
> To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
> column name.
> {{For example:}}
> {{select count(*), party_id from party group by party_id, party_id;}}
> This will fail with messages similar to below:
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
>  at 
> 

[jira] [Commented] (HIVE-21011) Upgrade MurmurHash 2.0 to 3.0 in vectorized map and reduce operators

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712437#comment-16712437
 ] 

Hive QA commented on HIVE-21011:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} common: The patch generated 0 new + 3 unchanged - 3 
fixed = 3 total (was 6) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15205/dev-support/hive-personality.sh
 |
| git revision | master / 83d1fd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15205/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Upgrade MurmurHash 2.0 to 3.0 in vectorized map and reduce operators
> 
>
> Key: HIVE-21011
> URL: https://issues.apache.org/jira/browse/HIVE-21011
> Project: Hive
>  Issue Type: Improvement
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21011.1.patch
>
>
> HIVE-20873 improved map join performance by using MurmurHash 3.0. However, 
> there are more operators that can use it. VectorMapJoinCommonOperator and 
> VectorReduceSinkUniformHashOperator still use MurmurHash 2.0, so they can be 
> upgraded to MurmurHash 3.0.





[jira] [Updated] (HIVE-21016) Duplicate column name in GROUP BY statement causing Vertex failures

2018-12-06 Thread Bjorn Olsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bjorn Olsen updated HIVE-21016:
---
Description: 
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message.

To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name.

{{For example:}}

{{select count(*), party_id from party group by party_id, party_id;}}

This will fail with messages similar to below:

Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)
 at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
 ... 14 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
Error while processing vector batch (tag=0) 
ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)
 ... 17 more
 Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector

  was:
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message.

To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name.

{{For example:}}

{{select count(*), party_id from party group by party_id, party_id;}}

This will fail with messages similar to below:

Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)
 at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
 ... 14 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
Error while processing vector batch (tag=0) 
ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)
 ... 17 more
 Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector


> Duplicate column name in GROUP BY statement causing Vertex failures
> ---
>
> Key: HIVE-21016
> URL: https://issues.apache.org/jira/browse/HIVE-21016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Bjorn Olsen
>Priority: Major
>
> Hive queries fail with "Vertex failure" messages when the user submits a 
> query containing duplicate GROUP BY columns. The Hive query parser should 
> detect and reject this scenario with a meaningful error message, rather than 
> executing the query and failing with an obfuscated message.
> To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
> column name.
> {{For example:}}
> {{select count(*), party_id from party group by party_id, party_id;}}
> This will fail with messages similar to below:
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
>  at 
> 

[jira] [Updated] (HIVE-21016) Duplicate column name in GROUP BY statement causing Vertex failures

2018-12-06 Thread Bjorn Olsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bjorn Olsen updated HIVE-21016:
---
Description: 
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message.
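A guard of the kind this description asks for could be sketched as follows (a hypothetical illustration only, not Hive's actual parser code; the class and method names are ours):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Set;

public class DuplicateGroupByCheck {
    // Return the GROUP BY columns that appear more than once, so a parser
    // could reject the query with a clear message up front instead of letting
    // it fail later with a ClassCastException in the vectorized reducer.
    static List<String> findDuplicates(List<String> groupByCols) {
        Set<String> seen = new HashSet<>();
        List<String> duplicates = new ArrayList<>();
        for (String col : groupByCols) {
            // Hive column names are case-insensitive, so compare lowercased.
            if (!seen.add(col.toLowerCase(Locale.ROOT))) {
                duplicates.add(col);
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        List<String> dups = findDuplicates(Arrays.asList("party_id", "party_id"));
        if (!dups.isEmpty()) {
            System.out.println("Duplicate GROUP BY column(s): " + dups);
        }
    }
}
```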

To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name.

{{For example:}}

select count(*), party_id from party group by party_id, party_id;

This will fail with messages similar to below:

Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)
 at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
 ... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)
 ... 17 more
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector

  was:
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message.

{{To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name. }}

{{For example:}}


select count(*), party_id from party group by party_id, party_id;

This will fail with messages similar to below:

{{Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)}}
{{ ... 14 more}}
{{ Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
Error while processing vector batch (tag=0) 
ffb9-5fb1-3024-922a-10cc313a7c171}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)}}
{{ ... 17 more}}
{{ Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector}}


> Duplicate column name in GROUP BY statement causing Vertex failures
> ---
>
> Key: HIVE-21016
> URL: https://issues.apache.org/jira/browse/HIVE-21016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Bjorn Olsen
>Priority: Major
>
> Hive queries fail with "Vertex failure" messages when the user submits a 
> query containing duplicate GROUP BY columns. The Hive query parser should 
> detect and reject this scenario with a meaningful error message, rather than 
> executing the query and failing with an obfuscated message.
> To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
> column name.
> {{For example:}}
> select count(*), party_id from party group by party_id, party_id;
> This will fail with messages similar to below:
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
>  at 

[jira] [Updated] (HIVE-21016) Duplicate column name in GROUP BY statement causing Vertex failures

2018-12-06 Thread Bjorn Olsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bjorn Olsen updated HIVE-21016:
---
Description: 
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message.

{{To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name. }}

{{For example:}}


select count(*), party_id from party group by party_id, party_id;

This will fail with messages similar to below:

{{Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)}}
{{ ... 14 more}}
{{ Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
Error while processing vector batch (tag=0) 
ffb9-5fb1-3024-922a-10cc313a7c171}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)}}
{{ at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)}}
{{ ... 17 more}}
{{ Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector}}

  was:
Hive queries fail with "Vertex failure" messages when the user submits a query 
containing duplicate GROUP BY columns. The Hive query parser should detect and 
reject this scenario with a meaningful error message, rather than executing the 
query and failing with an obfuscated message.

To repeat the issue, choose any table and perform a GROUP BY with a duplicate 
column name. For example:
select count(*), party_id from party

group by party_id, party_id;

This will fail with messages similar to below:

Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:232)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:266)
 at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
 ... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:454)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:381)
 ... 17 more
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector


> Duplicate column name in GROUP BY statement causing Vertex failures
> ---
>
> Key: HIVE-21016
> URL: https://issues.apache.org/jira/browse/HIVE-21016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Bjorn Olsen
>Priority: Major
>
> Hive queries fail with "Vertex failure" messages when the user submits a 
> query containing duplicate GROUP BY columns. The Hive query parser should 
> detect and reject this scenario with a meaningful error message, rather than 
> executing the query and failing with an obfuscated message.
> {{To repeat the issue, choose any table and perform a GROUP BY with a 
> duplicate column name. }}
> {{For example:}}
> select count(*), party_id from party group by party_id, party_id;
> This will fail with messages similar to below:
> {{Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing vector batch (tag=0) ffb9-5fb1-3024-922a-10cc313a7c171}}
> {{ at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:390)}}
> {{ at 
> 

[jira] [Commented] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712387#comment-16712387
 ] 

Hive QA commented on HIVE-21007:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
47s{color} | {color:blue} ql in master has 2312 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15204/dev-support/hive-personality.sh
 |
| git revision | master / 83d1fd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15204/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Semi join + Union can lead to wrong plans
> -
>
> Key: HIVE-21007
> URL: https://issues.apache.org/jira/browse/HIVE-21007
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21007.1.patch, HIVE-21007.2.patch, 
> HIVE-21007.3.patch, HIVE-21007.4.patch, HIVE-21007.5.patch
>
>
> Tez compiler has the ability to push JOIN within UNION (by replicating join 
> on each branch). If this JOIN had a SJ branch outgoing (or incoming) it could 
> mess up the plan and end up generating incorrect or wrong plan.
> As a safe measure any SJ branch after UNION should be removed (until we 
> improve the logic to better handle SJ branches)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-6050) Newer versions of JDBC driver does not work with older HiveServer2

2018-12-06 Thread dongping (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongping updated HIVE-6050:
---
Description: 
The HiveServer2 instance has to be upgraded before the JDBC drivers used by 
applications are upgraded. If the JDBC drivers are updated before HiveServer2 
is upgraded, they will not be functional.

Connecting from the JDBC driver of Hive 0.13 (TProtocolVersion=v4) to a 
HiveServer2 of Hive 0.10 (TProtocolVersion=v1) returns the following exception:
{noformat}
java.sql.SQLException: Could not establish connection to 
jdbc:hive2://localhost:1/default: Required field 'client_protocol' is 
unset! Struct:TOpenSessionReq(client_protocol:null)
at 
org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:336)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:158)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at 
org.apache.hive.jdbc.MyTestJdbcDriver2.getConnection(MyTestJdbcDriver2.java:73)
at 
org.apache.hive.jdbc.MyTestJdbcDriver2.init(MyTestJdbcDriver2.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at 
org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:187)
at 
org.junit.runners.$1.runReflectiveCall(BlockJUnit4ClassRunner.java:236)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:233)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914)
Caused by: org.apache.thrift.TApplicationException: Required field 
'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null)
at 
org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at 
org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:160)
at 
org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:147)
at 
org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:327)
... 37 more
{noformat}
On code analysis, it looks like the 'client_protocol' scheme is a ThriftEnum, 
which doesn't seem to be backward-compatible. Look at the code path in the 
generated file 'TOpenSessionReq.java', method 
TOpenSessionReqStandardScheme.read():

1. The method will call 'TProtocolVersion.findValue()' on the thrift protocol's 
byte stream, which returns null if the client is sending an enum value unknown 
to the server. (v4 is unknown to server)
 2. The method will then call struct.validate(), which will throw the above 
exception because of null version.

So it doesn't look like the current backward-compatibility scheme will work.
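The two steps above can be sketched with a minimal stand-in for the generated code (the enum and method shapes are assumed from the description, not copied from Thrift's actual output):

```java
public class ProtocolCompatSketch {
    // Hypothetical mirror of the Thrift-generated enum on an OLD server that
    // only knows protocol v1. A Hive 0.13 client sends the wire code for v4.
    enum TProtocolVersion {
        HIVE_CLI_SERVICE_PROTOCOL_V1(0);

        private final int value;
        TProtocolVersion(int value) { this.value = value; }

        // Step 1: findValue() returns null for a code the server doesn't know,
        // rather than mapping it to some fallback version.
        static TProtocolVersion findValue(int value) {
            for (TProtocolVersion v : values()) {
                if (v.value == value) return v;
            }
            return null;
        }
    }

    // Step 2: validate() then rejects the null field, producing the
    // "Required field 'client_protocol' is unset!" failure seen above.
    static void validate(TProtocolVersion clientProtocol) {
        if (clientProtocol == null) {
            throw new IllegalStateException(
                "Required field 'client_protocol' is unset!");
        }
    }

    public static void main(String[] args) {
        TProtocolVersion v = TProtocolVersion.findValue(3); // v4 wire code
        System.out.println("findValue(3) = " + v);
        try {
            validate(v);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```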

  was:
The HiveServer2 instance has to be upgraded before the JDBC drivers used by 
applications are upgraded. If the JDBC drivers are updated before HiveServer2 
is upgraded, they will not be functional.

Connecting from the JDBC driver of Hive 0.13 (TProtocolVersion=v4) to a 
HiveServer2 of Hive 0.10 (TProtocolVersion=v1) returns the following exception:

{noformat}
java.sql.SQLException: Could not establish connection to 
jdbc:hive2://localhost:1/default: Required field 'client_protocol' is 
unset! Struct:TOpenSessionReq(client_protocol:null)
at 
org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:336)
at 

[jira] [Assigned] (HIVE-20784) Migrate hbase.util.Base64 to java.util.Base64

2018-12-06 Thread dongping (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongping reassigned HIVE-20784:
---

Assignee: dongping  (was: Dagang Wei)

> Migrate hbase.util.Base64 to java.util.Base64
> -
>
> Key: HIVE-20784
> URL: https://issues.apache.org/jira/browse/HIVE-20784
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 3.1.0
> Environment: HBase 2.0.2
> Hive 3.1.0
>Reporter: Dagang Wei
>Assignee: dongping
>Priority: Critical
>  Labels: pull-request-available
>
> By default Hive 3.1.0 depends on HBase 2.0.0-alpha4. HBase 2.0.2 migrated 
> from hbase.util.Base64 to java.util.Base64 (HBASE-20884), which causes Hive 
> 3.1.0 to fail to build with HBase 2.0.2.
>  
> $ cd hbase-handler
> $ mvn package -DskipTests -Dhbase.version=2.0.2
> [ERROR] 
> .../hive/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableSnapshotInputFormat.java:[29,36]
>  cannot find symbol
> [ERROR] symbol: class Base64
> [ERROR] location: package org.apache.hadoop.hbase.util 
>  
> To make Hive work with 2.0.2+ (and also older versions), we should consider 
> migrating Hive to java.util.Base64.
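A minimal sketch of the migration, assuming the call sites only need string encode/decode of byte arrays (as the removed hbase.util.Base64 helpers provided), using only the JDK:

```java
import java.util.Base64;

public class Base64Migration {
    // JDK replacement (available since Java 8) for the removed
    // org.apache.hadoop.hbase.util.Base64 encode/decode helpers.
    static String encode(byte[] data) {
        return Base64.getEncoder().encodeToString(data);
    }

    static byte[] decode(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }

    public static void main(String[] args) {
        String encoded = encode("rowkey".getBytes());
        // Round-trips without any HBase dependency.
        System.out.println(encoded + " -> " + new String(decode(encoded)));
    }
}
```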



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20784) Migrate hbase.util.Base64 to java.util.Base64

2018-12-06 Thread dongping (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongping reassigned HIVE-20784:
---

Assignee: (was: dongping)

> Migrate hbase.util.Base64 to java.util.Base64
> -
>
> Key: HIVE-20784
> URL: https://issues.apache.org/jira/browse/HIVE-20784
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 3.1.0
> Environment: HBase 2.0.2
> Hive 3.1.0
>Reporter: Dagang Wei
>Priority: Critical
>  Labels: pull-request-available
>
> By default Hive 3.1.0 depends on HBase 2.0.0-alpha4. HBase 2.0.2 migrated 
> from hbase.util.Base64 to java.util.Base64 (HBASE-20884), which causes Hive 
> 3.1.0 to fail to build with HBase 2.0.2.
>  
> $ cd hbase-handler
> $ mvn package -DskipTests -Dhbase.version=2.0.2
> [ERROR] 
> .../hive/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableSnapshotInputFormat.java:[29,36]
>  cannot find symbol
> [ERROR] symbol: class Base64
> [ERROR] location: package org.apache.hadoop.hbase.util 
>  
> To make Hive work with 2.0.2+ (and also older versions), we should consider 
> migrating Hive to java.util.Base64.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21007) Semi join + Union can lead to wrong plans

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712426#comment-16712426
 ] 

Hive QA commented on HIVE-21007:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950937/HIVE-21007.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 45 failed/errored test(s), 15650 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterPartition 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterTableCascade
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterViewParititon
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testColumnStatistics 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testComplexTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testComplexTypeApi 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testConcurrentMetastores
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testCreateAndGetTableWithDriver
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testCreateTableSettingId
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBLocationChange 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBOwner 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBOwnerChange 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabase 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabaseLocation 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabaseLocationWithPermissionProblems
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDropDatabaseCascadeMVMultiDB
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDropTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFilterLastPartition
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFilterSinglePartition
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFunctionWithResources
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetConfigValue 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetMetastoreUuid 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetPartitionsWithSpec
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetSchemaWithNoClassDefFoundError
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetTableObjects 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetUUIDInParallel
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testJDOPersistanceManagerCleanup
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitionNames
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitions 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitionsWihtLimitEnabled
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testNameMethods 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testPartition 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testPartitionFilter 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testRenamePartition 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testRetriableClientWithConnLifetime
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleFunction 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleTypeApi 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testStatsFastTrivial 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSynchronized 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testTableDatabase 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testTableFilter 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testUpdatePartitionStat_doesNotUpdateStats
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testValidateTableCols
 (batchId=227)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15204/testReport
Console 

[jira] [Updated] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20966:
---
Status: Open  (was: Patch Available)

> Support bootstrap and incremental replication to a target with 
> hive.strict.managed.tables enabled.
> --
>
> Key: HIVE-20966
> URL: https://issues.apache.org/jira/browse/HIVE-20966
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR
> Attachments: HIVE-20966.01.patch
>
>
> *Requirements:*
> Hive2 supports replication of managed tables. But in Hive3 with 
> hive.strict.managed.tables=true, some of these managed tables are converted 
> to ACID or MM tables. Also, some of them are converted to external tables 
> based on the rules below.
> - Tables in Avro format with an external schema, tables using storage 
> handlers, and list-bucketed tables are converted to external tables.
> - Tables whose location is not owned by the "hive" user are converted to 
> external tables.
> - Hive-owned ORC format tables are converted to full ACID transactional 
> tables.
> - Hive-owned non-ORC format tables are converted to MM transactional tables.
> REPL LOAD should apply these rules during bootstrap and incremental phases 
> and convert the tables accordingly.
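The conversion rules listed above can be encoded as a small decision function (an illustrative sketch with names of our choosing, not code from the patch):

```java
public class TableConversionRules {
    // Decide what a legacy managed table becomes on a target with
    // hive.strict.managed.tables=true, per the rules in the description.
    static String convert(boolean avroExternalSchema, boolean storageHandler,
                          boolean listBucketed, boolean hiveOwnedLocation,
                          boolean orcFormat) {
        // Avro with external schema, storage handlers, list-bucketed tables,
        // and locations not owned by "hive" all become external tables.
        if (avroExternalSchema || storageHandler || listBucketed
                || !hiveOwnedLocation) {
            return "EXTERNAL";
        }
        // Hive-owned ORC becomes full ACID; other hive-owned formats become
        // insert-only (MM) transactional tables.
        return orcFormat ? "FULL_ACID" : "MM";
    }

    public static void main(String[] args) {
        System.out.println(convert(false, false, false, true, true));   // hive-owned ORC
        System.out.println(convert(false, false, false, true, false));  // hive-owned text
        System.out.println(convert(false, false, false, false, true));  // not hive-owned
    }
}
```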



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20966:
---
Attachment: HIVE-20966.02.patch

> Support bootstrap and incremental replication to a target with 
> hive.strict.managed.tables enabled.
> --
>
> Key: HIVE-20966
> URL: https://issues.apache.org/jira/browse/HIVE-20966
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR
> Attachments: HIVE-20966.01.patch, HIVE-20966.02.patch
>
>
> *Requirements:*
> Hive2 supports replication of managed tables. But in Hive3 with 
> hive.strict.managed.tables=true, some of these managed tables are converted 
> to ACID or MM tables. Also, some of them are converted to external tables 
> based on the rules below.
> - Tables in Avro format with an external schema, tables using storage 
> handlers, and list-bucketed tables are converted to external tables.
> - Tables whose location is not owned by the "hive" user are converted to 
> external tables.
> - Hive-owned ORC format tables are converted to full ACID transactional 
> tables.
> - Hive-owned non-ORC format tables are converted to MM transactional tables.
> REPL LOAD should apply these rules during bootstrap and incremental phases 
> and convert the tables accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20966:
---
Status: Patch Available  (was: Open)

> Support bootstrap and incremental replication to a target with 
> hive.strict.managed.tables enabled.
> --
>
> Key: HIVE-20966
> URL: https://issues.apache.org/jira/browse/HIVE-20966
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR
> Attachments: HIVE-20966.01.patch, HIVE-20966.02.patch
>
>
> *Requirements:*
> Hive2 supports replication of managed tables. But in Hive3 with 
> hive.strict.managed.tables=true, some of these managed tables are converted 
> to ACID or MM tables. Also, some of them are converted to external tables 
> based on the rules below.
> - Tables in Avro format with an external schema, tables using storage 
> handlers, and list-bucketed tables are converted to external tables.
> - Tables whose location is not owned by the "hive" user are converted to 
> external tables.
> - Hive-owned ORC format tables are converted to full ACID transactional 
> tables.
> - Hive-owned non-ORC format tables are converted to MM transactional tables.
> REPL LOAD should apply these rules during bootstrap and incremental phases 
> and convert the tables accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21015) HCatLoader can't provide statistics for tables not in default DB

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711624#comment-16711624
 ] 

Hive QA commented on HIVE-21015:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950845/HIVE-21015.0.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 45 failed/errored test(s), 15650 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testAlterPartition
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testAlterTable
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testAlterTableCascade
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testAlterViewParititon
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testColumnStatistics
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testComplexTable
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testComplexTypeApi
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testConcurrentMetastores
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testCreateAndGetTableWithDriver
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testCreateTableSettingId
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testDBLocationChange
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testDBOwner 
(batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testDBOwnerChange
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testDatabase 
(batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testDatabaseLocation
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testDatabaseLocationWithPermissionProblems
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testDropDatabaseCascadeMVMultiDB
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testDropTable
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testFilterLastPartition
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testFilterSinglePartition
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testFunctionWithResources
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testGetConfigValue
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testGetMetastoreUuid
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testGetPartitionsWithSpec
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testGetSchemaWithNoClassDefFoundError
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testGetTableObjects
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testGetUUIDInParallel
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testJDOPersistanceManagerCleanup
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testListPartitionNames
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testListPartitions
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testListPartitionsWihtLimitEnabled
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testNameMethods
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testPartition
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testPartitionFilter
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testRenamePartition
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testRetriableClientWithConnLifetime
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testSimpleFunction
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testSimpleTable
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testSimpleTypeApi
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testStatsFastTrivial
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testSynchronized
 (batchId=229)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZKBindHost.testTableDatabase
 (batchId=229)

[jira] [Updated] (HIVE-21015) HCatLoader can't provide statistics for tables not in the default DB

2018-12-06 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-21015:
--
Status: In Progress  (was: Patch Available)

> HCatLoader can't provide statistics for tables not in the default DB
> ---
>
> Key: HIVE-21015
> URL: https://issues.apache.org/jira/browse/HIVE-21015
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21015.0.patch, HIVE-21015.1.patch
>
>
> This is due to a former change (HIVE-20330) that does not take the database 
> into consideration when retrieving the proper InputJobInfo for the loader.
>  Found during testing:
> {code:java}
> 07:52:56 2018-12-05 07:52:16,599 [main] WARN  
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
>  - Couldn't get statistics from LoadFunc: 
> org.apache.hive.hcatalog.pig.HCatLoader@492fa72a
> 07:52:56 java.io.IOException: java.io.IOException: Could not calculate input 
> size for location (table) tpcds_3000_decimal_parquet.date_dim
> 07:52:56  at 
> org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:281)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getInputSizeFromLoader(InputSizeReducerEstimator.java:171)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:118)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.getTotalInputFileSize(InputSizeReducerEstimator.java:97)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator.estimateNumberOfReducers(InputSizeReducerEstimator.java:80)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.estimateNumberOfReducers(JobControlCompiler.java:1148)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.calculateRuntimeReducers(JobControlCompiler.java:1115)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.adjustNumReducers(JobControlCompiler.java:1063)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:564)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:333)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:221)
> 07:52:56  at 
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:293)
> 07:52:56  at org.apache.pig.PigServer.launchPlan(PigServer.java:1475)
> 07:52:56  at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1460)
> 07:52:56  at org.apache.pig.PigServer.storeEx(PigServer.java:1119)
> 07:52:56  at org.apache.pig.PigServer.store(PigServer.java:1082)
> 07:52:56  at org.apache.pig.PigServer.openIterator(PigServer.java:995)
> 07:52:56  at 
> org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:782)
> 07:52:56  at 
> org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:383)
> 07:52:56  at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
> 07:52:56  at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
> 07:52:56  at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
> 07:52:56  at org.apache.pig.Main.run(Main.java:630)
> 07:52:56  at org.apache.pig.Main.main(Main.java:175)
> 07:52:56  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 07:52:56  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 07:52:56  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 07:52:56  at java.lang.reflect.Method.invoke(Method.java:498)
> 07:52:56  at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
> 07:52:56  at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
> 07:52:56 Caused by: java.io.IOException: Could not calculate input size for 
> location (table) tpcds_3000_decimal_parquet.date_dim
> 07:52:56  at 
> org.apache.hive.hcatalog.pig.HCatLoader.getStatistics(HCatLoader.java:276)
> 07:52:56  ... 29 more{code}





[jira] [Updated] (HIVE-21015) HCatLoader can't provide statistics for tables not in the default DB

2018-12-06 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-21015:
--
Status: Patch Available  (was: In Progress)

> HCatLoader can't provide statistics for tables not in the default DB
> ---
>
> Key: HIVE-21015
> URL: https://issues.apache.org/jira/browse/HIVE-21015
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21015.0.patch, HIVE-21015.1.patch
>





[jira] [Updated] (HIVE-21015) HCatLoader can't provide statistics for tables not in the default DB

2018-12-06 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-21015:
--
Attachment: HIVE-21015.1.patch

> HCatLoader can't provide statistics for tables not in the default DB
> ---
>
> Key: HIVE-21015
> URL: https://issues.apache.org/jira/browse/HIVE-21015
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21015.0.patch, HIVE-21015.1.patch
>





[jira] [Assigned] (HIVE-20784) Migrate hbase.util.Base64 to java.util.Base64

2018-12-06 Thread Dagang Wei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dagang Wei reassigned HIVE-20784:
-

Assignee: Dagang Wei

> Migrate hbase.util.Base64 to java.util.Base64
> -
>
> Key: HIVE-20784
> URL: https://issues.apache.org/jira/browse/HIVE-20784
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 3.1.0
> Environment: HBase 2.0.2
> Hive 3.1.0
>Reporter: Dagang Wei
>Assignee: Dagang Wei
>Priority: Critical
>  Labels: pull-request-available
>
> By default Hive 3.1.0 depends on HBase 2.0.0-alpha4. HBase 2.0.2 migrated 
> from hbase.util.Base64 to java.util.Base64 (HBASE-20884), which causes Hive 
> 3.1.0 to fail to build with HBase 2.0.2.
>  
> $ cd hbase-handler
> $ mvn package -DskipTests -Dhbase.version=2.0.2
> [ERROR] 
> .../hive/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableSnapshotInputFormat.java:[29,36]
>  cannot find symbol
> [ERROR] symbol: class Base64
> [ERROR] location: package org.apache.hadoop.hbase.util 
>  
> To make Hive work with 2.0.2+ (and also older versions), we should consider 
> migrating Hive to java.util.Base64.
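The replacement is mechanical; a minimal sketch of the java.util.Base64 equivalents (the hbase.util method names in the comments are the ones the HBASE-20884 change removed):

```java
import java.util.Base64;

public class Base64MigrationSketch {
    public static void main(String[] args) {
        byte[] raw = "hello".getBytes();

        // org.apache.hadoop.hbase.util.Base64.encodeBytes(raw) becomes:
        String encoded = Base64.getEncoder().encodeToString(raw);

        // org.apache.hadoop.hbase.util.Base64.decode(encoded) becomes:
        byte[] decoded = Base64.getDecoder().decode(encoded);

        System.out.println(encoded);              // aGVsbG8=
        System.out.println(new String(decoded));  // hello
    }
}
```

Since java.util.Base64 ships with the JDK, this also removes a dependency on HBase internals, so the handler builds against both old and new HBase versions.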





[jira] [Updated] (HIVE-21009) LDAP - Specify binddn for ldap-search

2018-12-06 Thread Thomas Uhren (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Uhren updated HIVE-21009:

Issue Type: Bug  (was: Improvement)

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.1
>Reporter: Thomas Uhren
>Priority: Major
>
> When user accounts cannot perform an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the LDAP search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.binddn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindpw=password
> {code}
> {code}
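To illustrate what such settings would drive, here is a minimal JNDI sketch of the service-bind environment used for the user search; the Hive property names above are the reporter's proposal, and the server URL here is an assumption, not existing Hive configuration:

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapBindDnSketch {
    // Builds the JNDI environment for the service bind that performs the user
    // search. The JNDI keys are standard; the values are illustrative.
    static Hashtable<String, String> buildEnv(String bindDn, String bindPw) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // assumed host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, bindDn);   // service account, not the end user
        env.put(Context.SECURITY_CREDENTIALS, bindPw);
        return env;
    }

    public static void main(String[] args) {
        // Values a binddn/bindpw setting would supply (hypothetical):
        Hashtable<String, String> env =
            buildEnv("cn=ldapuser,ou=user,dc=example", "password");
        // new javax.naming.directory.InitialDirContext(env) would open the search
        // connection; the end user's own password is verified in a second bind.
        System.out.println(env.get(Context.SECURITY_PRINCIPAL));
    }
}
```

The point of the feature request is exactly this split: the search connection binds as a dedicated service account, while end-user credentials are only used for the verification bind.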





[jira] [Commented] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread Sankar Hariappan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711778#comment-16711778
 ] 

Sankar Hariappan commented on HIVE-20966:
-

[~maheshk114]
+1 for 02.patch, pending tests

> Support bootstrap and incremental replication to a target with 
> hive.strict.managed.tables enabled.
> --
>
> Key: HIVE-20966
> URL: https://issues.apache.org/jira/browse/HIVE-20966
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR
> Attachments: HIVE-20966.01.patch, HIVE-20966.02.patch
>
>
> *Requirements:*
> Hive2 supports replication of managed tables. But in Hive3 with 
> hive.strict.managed.tables=true, some of these managed tables are converted 
> to ACID or MM tables. Also, some of them are converted to external tables 
> based on the rules below. 
> - Avro format with external schema, storage handlers, and list-bucketed tables 
> are converted to external tables.
> - Locations not owned by the "hive" user are converted to external tables.
> - Hive-owned ORC format tables are converted to full ACID transactional tables.
> - Hive-owned non-ORC format tables are converted to MM transactional tables.
> REPL LOAD should apply these rules during bootstrap and incremental phases 
> and convert the tables accordingly.
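The rule set above can be sketched as a decision function; the real logic lives in Hive's migration/REPL LOAD code, and the boolean inputs here are assumed stand-ins for the table properties it inspects:

```java
public class StrictManagedConversionSketch {
    enum TargetType { EXTERNAL, FULL_ACID, MM }

    // Illustrative only: external-table conditions win first, then the
    // ORC vs. non-ORC split decides full ACID vs. micromanaged (MM).
    static TargetType convert(boolean avroWithExternalSchema, boolean storageHandler,
                              boolean listBucketed, boolean locationOwnedByHive,
                              boolean orcFormat) {
        if (avroWithExternalSchema || storageHandler || listBucketed || !locationOwnedByHive) {
            return TargetType.EXTERNAL;
        }
        return orcFormat ? TargetType.FULL_ACID : TargetType.MM;
    }

    public static void main(String[] args) {
        System.out.println(convert(false, false, false, true, true));   // FULL_ACID
        System.out.println(convert(false, false, false, true, false));  // MM
        System.out.println(convert(false, false, false, false, true));  // EXTERNAL
    }
}
```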





[jira] [Commented] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711782#comment-16711782
 ] 

Hive QA commented on HIVE-20966:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
12s{color} | {color:blue} standalone-metastore/metastore-common in master has 
29 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
31s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
3s{color} | {color:blue} standalone-metastore/metastore-server in master has 
184 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
45s{color} | {color:blue} ql in master has 2312 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} hcatalog/server-extensions in master has 1 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} common: The patch generated 1 new + 427 unchanged - 0 
fixed = 428 total (was 427) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 55 new + 854 unchanged - 11 
fixed = 909 total (was 865) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
22s{color} | {color:red} itests/hive-unit: The patch generated 145 new + 658 
unchanged - 58 fixed = 803 total (was 716) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 16 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} standalone-metastore/metastore-server generated 5 new 
+ 184 unchanged - 0 fixed = 189 total (was 184) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
2s{color} | {color:red} ql generated 1 new + 2311 unchanged - 1 fixed = 2312 
total (was 2312) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hcatalog/server-extensions generated 2 new + 1 
unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  Boxed value is unboxed and then immediately reboxed in 

[jira] [Commented] (HIVE-20125) Typo in MetricsCollection for OutputMetrics

2018-12-06 Thread Adesh Kumar Rao (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711786#comment-16711786
 ] 

Adesh Kumar Rao commented on HIVE-20125:


[~stakiar] uploaded the patch with the relevant fix for the typo.

> Typo in MetricsCollection for OutputMetrics
> ---
>
> Key: HIVE-20125
> URL: https://issues.apache.org/jira/browse/HIVE-20125
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Adesh Kumar Rao
>Priority: Major
> Attachments: HIVE-20125.1.patch
>
>
> When creating {{OutputMetrics}} in the {{aggregate}} method, we check for 
> {{hasInputMetrics}} instead of {{hasOutputMetrics}}.
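The effect of the typo can be sketched with stand-in types (the real classes live in Hive's Spark client metrics code; the names and fields here are illustrative, not the actual API):

```java
public class MetricsTypoSketch {
    // Stand-in for the Spark output-metrics holder (illustrative only).
    static final class OutputMetrics {
        final long bytesWritten;
        OutputMetrics(long bytesWritten) { this.bytesWritten = bytesWritten; }
    }

    // Fixed aggregate logic: OutputMetrics is built only when output-side
    // metrics actually exist. The bug checked hasInputMetrics here, so tasks
    // reporting only input metrics still produced an OutputMetrics object.
    static OutputMetrics aggregateOutput(boolean hasInputMetrics,
                                         boolean hasOutputMetrics,
                                         long bytesWritten) {
        return hasOutputMetrics ? new OutputMetrics(bytesWritten) : null;
    }

    public static void main(String[] args) {
        System.out.println(aggregateOutput(true, false, 10) == null);       // true
        System.out.println(aggregateOutput(false, true, 10).bytesWritten);  // 10
    }
}
```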





[jira] [Commented] (HIVE-20125) Typo in MetricsCollection for OutputMetrics

2018-12-06 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711806#comment-16711806
 ] 

Sahil Takiar commented on HIVE-20125:
-

+1 LGTM

> Typo in MetricsCollection for OutputMetrics
> ---
>
> Key: HIVE-20125
> URL: https://issues.apache.org/jira/browse/HIVE-20125
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Adesh Kumar Rao
>Priority: Major
> Attachments: HIVE-20125.1.patch
>
>
> When creating {{OutputMetrics}} in the {{aggregate}} method, we check for 
> {{hasInputMetrics}} instead of {{hasOutputMetrics}}.





[jira] [Commented] (HIVE-20966) Support bootstrap and incremental replication to a target with hive.strict.managed.tables enabled.

2018-12-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711814#comment-16711814
 ] 

Hive QA commented on HIVE-20966:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12950864/HIVE-20966.02.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 15637 tests 
executed
*Failed tests:*
{noformat}
TestAlterTableMetadata - did not produce a TEST-*.xml file (likely timed out) 
(batchId=248)
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=248)
TestReplAcidTablesWithJsonMessage - did not produce a TEST-*.xml file (likely 
timed out) (batchId=248)
TestReplIncrementalLoadAcidTablesWithJsonMessage - did not produce a TEST-*.xml 
file (likely timed out) (batchId=248)
TestReplicationScenariosMigration - did not produce a TEST-*.xml file (likely 
timed out) (batchId=248)
TestSemanticAnalyzerHookLoading - did not produce a TEST-*.xml file (likely 
timed out) (batchId=248)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15194/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15194/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15194/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12950864 - PreCommit-HIVE-Build

> Support bootstrap and incremental replication to a target with 
> hive.strict.managed.tables enabled.
> --
>
> Key: HIVE-20966
> URL: https://issues.apache.org/jira/browse/HIVE-20966
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR
> Attachments: HIVE-20966.01.patch, HIVE-20966.02.patch
>




