[jira] [Updated] (HIVE-22907) Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers

2020-02-20 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22907:
--
Attachment: HIVE-22907.03.patch

> Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers
> 
>
> Key: HIVE-22907
> URL: https://issues.apache.org/jira/browse/HIVE-22907
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22907.01.patch, HIVE-22907.02.patch, 
> HIVE-22907.03.patch
>
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into more manageable classes under 
> the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package stays manageable
> Step #15: extract the rest of the Alter Table analyzers from 
> DDLSemanticAnalyzer, move them under the new package, and remove 
> DDLSemanticAnalyzer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041606#comment-17041606
 ] 

Hive QA commented on HIVE-22888:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
23s{color} | {color:blue} standalone-metastore/metastore-server in master has 
185 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
23s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 22 new + 547 unchanged - 38 fixed = 569 total (was 585) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} standalone-metastore/metastore-server generated 2 new 
+ 184 unchanged - 1 fixed = 186 total (was 185) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  org.apache.hadoop.hive.metastore.txn.TxnHandler.checkLock(Connection, 
long, long) may fail to clean up java.sql.Statement; obligation to clean up 
resource created at TxnHandler.java:[line 4408] is not discharged |
|  |  org.apache.hadoop.hive.metastore.txn.TxnHandler.checkLock(Connection, 
long, long) passes a nonconstant String to an execute or addBatch method on an 
SQL statement at TxnHandler.java:[line 4411] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20761/dev-support/hive-personality.sh
 |
| git revision | master / f826283 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20761/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20761/yetus/new-findbugs-standalone-metastore_metastore-server.html
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20761/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20761/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.
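Both FindBugs complaints above point at textbook JDBC remedies: try-with-resources discharges the Statement cleanup obligation on every code path, and a PreparedStatement with bound parameters keeps the SQL string constant. The sketch below illustrates the pattern only; the table name and query are simplified stand-ins, not TxnHandler's actual SQL.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LockQuery {
  // try-with-resources guarantees the statement and result set are closed on
  // every path, which discharges the FindBugs cleanup obligation; binding the
  // transaction id as a parameter instead of concatenating it into the query
  // keeps the SQL string constant (and avoids injection).
  static long countLocks(Connection conn, long txnId) throws SQLException {
    String sql = "SELECT COUNT(*) FROM HIVE_LOCKS WHERE HL_TXNID = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setLong(1, txnId);
      try (ResultSet rs = ps.executeQuery()) {
        rs.next();
        return rs.getLong(1);
      }
    }
  }

  // Demonstrates the guarantee try-with-resources provides: close() runs even
  // when the body throws, which is exactly what the leaked Statement lacked.
  static boolean closesOnFailure() {
    final boolean[] closed = {false};
    try (AutoCloseable r = () -> closed[0] = true) {
      throw new IllegalStateException("body failed");
    } catch (Exception expected) {
      // close() has already run by the time the exception reaches us
    }
    return closed[0];
  }

  public static void main(String[] args) {
    System.out.println(closesOnFailure() ? "closed" : "leaked");
  }
}
```

Running `main` prints `closed`, confirming the resource is released even on the exceptional path.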



> Rewrite checkLock inner select with JOIN operator
> -
>
> Key: HIVE-22888
> URL: https://issues.apache.org/jira/browse/HIVE-22888
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko

[jira] [Commented] (HIVE-22840) Race condition in formatters of TimestampColumnVector and DateColumnVector

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041585#comment-17041585
 ] 

Hive QA commented on HIVE-22840:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993999/HIVE-22840.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18049 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver[url_hook] 
(batchId=300)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20760/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20760/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20760/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993999 - PreCommit-HIVE-Build

> Race condition in formatters of TimestampColumnVector and DateColumnVector 
> ---
>
> Key: HIVE-22840
> URL: https://issues.apache.org/jira/browse/HIVE-22840
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: László Bodor
>Assignee: Shubham Chaurasia
>Priority: Major
> Attachments: HIVE-22840.1.patch, HIVE-22840.2.patch, HIVE-22840.patch
>
>
> HIVE-22405 added support for the proleptic calendar. It uses Java's 
> SimpleDateFormat/Calendar APIs, which are not thread-safe and cause races in 
> some scenarios.
> As a result of those race conditions, we see exceptions like
> {code:java}
> 1) java.lang.NumberFormatException: For input string: "" 
> OR 
> java.lang.NumberFormatException: For input string: ".821582E.821582E44"
> OR
> 2) Caused by: java.lang.ArrayIndexOutOfBoundsException: -5325980
>   at 
> sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:453)
>   at 
> java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2397)
> {code}
> This issue is to address those thread-safety issues/race conditions.
> cc [~jcamachorodriguez] [~abstractdog] [~omalley]
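The thread-safety pattern at issue is worth sketching. The snippet below is not the HIVE-22840 patch (which works inside TimestampColumnVector/DateColumnVector); it is a minimal, self-contained illustration of why a shared SimpleDateFormat races and how confining one instance per thread via ThreadLocal avoids it:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
import java.util.concurrent.atomic.AtomicBoolean;

public class ThreadSafeFormatting {
  // SimpleDateFormat mutates an internal Calendar on every format()/parse()
  // call, so a single shared instance corrupts itself under concurrency; that
  // is the source of the NumberFormatException garbage in the bug report.
  // Giving each thread its own instance removes the race without locking.
  private static final ThreadLocal<SimpleDateFormat> FORMAT =
      ThreadLocal.withInitial(() -> {
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f;
      });

  public static String format(long epochMillis) {
    return FORMAT.get().format(new Date(epochMillis));
  }

  public static long parse(String s) throws ParseException {
    return FORMAT.get().parse(s).getTime();
  }

  public static void main(String[] args) throws InterruptedException {
    AtomicBoolean failed = new AtomicBoolean(false);
    Thread[] threads = new Thread[8];
    for (int i = 0; i < threads.length; i++) {
      threads[i] = new Thread(() -> {
        // Second-precision round trips; a shared formatter fails these
        // intermittently, a thread-confined one never does.
        for (long t = 0; t < 10_000_000_000L && !failed.get(); t += 1_234_567L) {
          try {
            if (parse(format(t)) != (t / 1000) * 1000) {
              failed.set(true);
            }
          } catch (ParseException e) {
            failed.set(true);
          }
        }
      });
      threads[i].start();
    }
    for (Thread t : threads) {
      t.join();
    }
    System.out.println(failed.get() ? "race detected" : "all round trips OK");
  }
}
```

A heavier but allocation-free alternative on Java 8+ is java.time's DateTimeFormatter, which is immutable and thread-safe by design.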



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22840) Race condition in formatters of TimestampColumnVector and DateColumnVector

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041560#comment-17041560
 ] 

Hive QA commented on HIVE-22840:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} storage-api in master has 58 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} storage-api: The patch generated 4 new + 17 unchanged 
- 3 fixed = 21 total (was 20) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} common: The patch generated 0 new + 0 unchanged - 2 
fixed = 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} The patch serde passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} storage-api generated 0 new + 48 unchanged - 10 
fixed = 48 total (was 58) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} serde in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20760/dev-support/hive-personality.sh
 |
| git revision | master / f826283 |
| Default Java | 1.8.0_111 |
| findbugs | 

[jira] [Commented] (HIVE-22899) Make sure qtests clean up copied files from test directories

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041550#comment-17041550
 ] 

Hive QA commented on HIVE-22899:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993998/HIVE-22899.5.patch

{color:green}SUCCESS:{color} +1 due to 10 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18047 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[stats_noscan_2] 
(batchId=133)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20759/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20759/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20759/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993998 - PreCommit-HIVE-Build

> Make sure qtests clean up copied files from test directories
> 
>
> Key: HIVE-22899
> URL: https://issues.apache.org/jira/browse/HIVE-22899
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Minor
> Attachments: HIVE-22899.2.patch, HIVE-22899.3.patch, 
> HIVE-22899.4.patch, HIVE-22899.5.patch, HIVE-22899.patch
>
>
> Several qtest files copy schema or test files to the test directories 
> (such as ${system:test.tmp.dir} and 
> ${hiveconf:hive.metastore.warehouse.dir}), often without changing the 
> name of the copied file. When the same file is copied by another qtest to 
> the same directory, the copy and hence the test fails. This can lead to 
> flaky tests when any two of these qtests get scheduled to the same batch.
>  
> To avoid these failures, we should make sure the files copied to the test 
> dirs have unique names, and that they are cleaned up by the same qtest file 
> that copied them.
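The discipline the issue asks for (unique names plus guaranteed cleanup) can be sketched outside the q-file framework. The helper and names below are illustrative, not part of the qtest harness:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

public class QTestStyleCopy {

  // Copy `source` into the shared directory under a name unique to this test
  // and run `body`; the finally block mirrors the cleanup each qtest should
  // perform so a later test copying the same file cannot collide with a
  // leftover duplicate.
  static void withUniqueCopy(Path source, Path sharedDir, String testName,
                             Runnable body) throws IOException {
    Path copy = sharedDir.resolve(testName + "_" + source.getFileName());
    Files.copy(source, copy, StandardCopyOption.REPLACE_EXISTING);
    try {
      body.run();
    } finally {
      Files.deleteIfExists(copy);  // leave the shared dir as we found it
    }
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("shared-test-dir");
    Path src = Files.createTempFile("schema", ".sql");
    withUniqueCopy(src, dir, "qtest_one", () -> {});
    try (Stream<Path> entries = Files.list(dir)) {
      System.out.println("files left behind: " + entries.count());
    }
  }
}
```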



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22891) Skip PartitonDesc Extraction In CombineHiveRecord For Non-LLAP Execution Mode

2020-02-20 Thread Syed Shameerur Rahman (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041539#comment-17041539
 ] 

Syed Shameerur Rahman commented on HIVE-22891:
--

Failures are unrelated; verified locally. The asflicense warning is due to 
HIVE-16355.

cc [~szita]

> Skip PartitonDesc Extraction In CombineHiveRecord For Non-LLAP Execution Mode
> -
>
> Key: HIVE-22891
> URL: https://issues.apache.org/jira/browse/HIVE-22891
> Project: Hive
>  Issue Type: Task
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22891.01.patch, HIVE-22891.02.patch, 
> HIVE-22891.03.patch
>
>
> {code:java}
> try {
>   // TODO: refactor this out
>   if (pathToPartInfo == null) {
>     MapWork mrwork;
>     if (HiveConf.getVar(conf,
>         HiveConf.ConfVars.HIVE_EXECUTION_ENGINE).equals("tez")) {
>       mrwork = (MapWork) Utilities.getMergeWork(jobConf);
>       if (mrwork == null) {
>         mrwork = Utilities.getMapWork(jobConf);
>       }
>     } else {
>       mrwork = Utilities.getMapWork(jobConf);
>     }
>     pathToPartInfo = mrwork.getPathToPartitionInfo();
>   }
>   PartitionDesc part = extractSinglePartSpec(hsplit);
>   inputFormat = HiveInputFormat.wrapForLlap(inputFormat, jobConf, part);
> } catch (HiveException e) {
>   throw new IOException(e);
> }
> {code}
> The above piece of code in CombineHiveRecordReader.java was introduced in 
> HIVE-15147. It overwrites inputFormat based on the PartitionDesc, which is 
> unnecessary in non-LLAP execution mode, as HiveInputFormat.wrapForLlap() 
> simply returns the previously defined inputFormat in that case. The call to 
> extractSinglePartSpec() has serious performance implications: with a large 
> number of small files, each call takes approximately 2-3 seconds. Hence the 
> same query that runs in Hive 1.x / Hive 2 is much faster than on the latest 
> Hive.
> {code:java}
> 2020-02-11 07:15:04,701 INFO [main] 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl: Reading ORC rows from 
> 2020-02-11 07:15:06,468 WARN [main] 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader: Multiple partitions 
> found; not going to pass a part spec to LLAP IO: {{logdate=2020-02-03, 
> hour=01, event=win}} and {{logdate=2020-02-03, hour=02, event=act}}
> 2020-02-11 07:15:06,468 INFO [main] 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader: succeeded in getting 
> org.apache.hadoop.mapred.FileSplit{code}
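The proposed optimization reduces to a simple guard: since wrapForLlap() is a no-op without LLAP I/O, consult that flag before paying for the extraction. A dependency-free sketch of the control flow; the method names and the cost stand-in are illustrative, not Hive's actual API:

```java
public class LazyPartSpec {
  static int extractorCalls = 0;

  // Stand-in for extractSinglePartSpec(): the issue reports each real call
  // costing roughly 2-3 seconds when there are many small files.
  static String expensiveExtract() {
    extractorCalls++;
    return "logdate=2020-02-03/hour=01";
  }

  // wrapForLlap() returns the reader unchanged when LLAP I/O is off, so the
  // partition-spec extraction is pure waste in that mode; checking the flag
  // first is the fix the issue proposes.
  static String openReader(boolean llapIoEnabled) {
    if (!llapIoEnabled) {
      return "plain-reader";                // non-LLAP: skip extraction entirely
    }
    String partSpec = expensiveExtract();   // LLAP: extraction is actually used
    return "llap-reader[" + partSpec + "]";
  }

  public static void main(String[] args) {
    for (int i = 0; i < 1000; i++) {
      openReader(false);                    // simulates many small-file splits
    }
    System.out.println("extractions in non-LLAP mode: " + extractorCalls);
  }
}
```

With the guard in place the loop performs zero extractions; without it, every split would pay the multi-second cost.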



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22816) QueryCache: Queries using views can have them cached after CTE expansion

2020-02-20 Thread Ashutosh Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-22816:

Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, [~gopalv]

> QueryCache: Queries using views can have them cached after CTE expansion
> 
>
> Key: HIVE-22816
> URL: https://issues.apache.org/jira/browse/HIVE-22816
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Gopal Vijayaraghavan
>Assignee: Gopal Vijayaraghavan
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22816.1.patch
>
>
> {code}
> create view ss_null as select * from store_Sales where ss_Sold_date_sk is 
> null;
> select count(ss_ticket_number) from ss_null;
> with ss_null_cte as 
> (select * from store_Sales where ss_Sold_date_sk is null)
> select count(ss_ticket_number) from ss_null_cte;
> {code}
> are treated differently by the query cache, even though their execution is 
> identical.
> CBO rewrites the view query into AST form as follows
> {code}
> SELECT COUNT(`ss_ticket_number`) AS `$f0`
> FROM `tpcds_bin_partitioned_acid_orc_1`.`store_sales`
> WHERE `ss_sold_date_sk` IS NULL
> {code}
> But retains the write-entity for the VIRTUAL_VIEW for Ranger authorization 
> {code}
> 0: jdbc:hive2://localhost:10013> explain dependency select count(distinct 
> ss_ticket_number) from ss_null;
> ++
> |  Explain   |
> ++
> | 
> {"input_tables":[{"tablename":"tpcds_bin_partitioned_acid_orc_1@ss_null","tabletype":"VIRTUAL_VIEW"},{"tablename":"tpcds_bin_partitioned_acid_orc_1@store_sales","tabletype":"MANAGED_TABLE","tableParents":"[tpcds_bin_partitioned_acid_orc_1@ss_null]"}],"input_partitions":[{"partitionName":"tpcds_bin_partitioned_acid_orc_1@store_sales@ss_sold_date_sk=__HIVE_DEFAULT_PARTITION__"}]}
>  |
> ++
> {code}
> Causing Query cache to print out
> {code}
> parse.CalcitePlanner: Not eligible for results caching - query contains 
> non-transactional tables [ss_null]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22786) Vectorization: Agg with distinct can be optimised in HASH mode

2020-02-20 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041523#comment-17041523
 ] 

Ashutosh Chauhan commented on HIVE-22786:
-

Actually the new result in 
/ql/src/test/results/clientpositive/llap/vector_groupby_rollup1.q.out looks 
incorrect. [~rajesh.balamohan] can you please check?

> Vectorization: Agg with distinct can be optimised in HASH mode
> --
>
> Key: HIVE-22786
> URL: https://issues.apache.org/jira/browse/HIVE-22786
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-22786.1.patch, HIVE-22786.2.patch, 
> HIVE-22786.3.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22786) Vectorization: Agg with distinct can be optimised in HASH mode

2020-02-20 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041507#comment-17041507
 ] 

Ashutosh Chauhan commented on HIVE-22786:
-

+1

> Vectorization: Agg with distinct can be optimised in HASH mode
> --
>
> Key: HIVE-22786
> URL: https://issues.apache.org/jira/browse/HIVE-22786
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-22786.1.patch, HIVE-22786.2.patch, 
> HIVE-22786.3.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22899) Make sure qtests clean up copied files from test directories

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041505#comment-17041505
 ] 

Hive QA commented on HIVE-22899:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  3m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20759/dev-support/hive-personality.sh
 |
| git revision | master / 657b510 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20759/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20759/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Make sure qtests clean up copied files from test directories
> 
>
> Key: HIVE-22899
> URL: https://issues.apache.org/jira/browse/HIVE-22899
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Minor
> Attachments: HIVE-22899.2.patch, HIVE-22899.3.patch, 
> HIVE-22899.4.patch, HIVE-22899.5.patch, HIVE-22899.patch
>
>
> Several qtest files copy schema or test files to the test directories 
> (such as ${system:test.tmp.dir} and 
> ${hiveconf:hive.metastore.warehouse.dir}), often without changing the 
> name of the copied file. When the same file is copied by another qtest to 
> the same directory, the copy and hence the test fails. This can lead to 
> flaky tests when any two of these qtests get scheduled to the same batch.
>  
> To avoid these failures, we should make sure the files copied to the test 
> dirs have unique names, and that they are cleaned up by the same qtest file 
> that copied them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22856) Hive LLAP LlapArrowBatchRecordReader skipping remaining batches when ArrowStreamReader returns a 0 length batch.

2020-02-20 Thread Ashutosh Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-22856:

Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Hive LLAP LlapArrowBatchRecordReader skipping remaining batches when 
> ArrowStreamReader returns a 0 length batch.
> 
>
> Key: HIVE-22856
> URL: https://issues.apache.org/jira/browse/HIVE-22856
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22856.01.patch, HIVE-22856.02.patch
>
>
> LlapArrowBatchRecordReader returns false when ArrowStreamReader's 
> loadNextBatch returns a column vector of length 0. But we should keep 
> reading until loadNextBatch itself returns false: a batch may contain a 
> column vector of length 0, and we should ignore it and wait for the next 
> batch.
> A batch size of 0 is possible when a split read by the ORC reader contains 
> only deleted or aborted data. VectorizedOrcAcidRowBatchReader reads the 
> data from the split info and then filters out the rows that are not visible 
> to the reading transaction, so it may happen that none of the records 
> satisfy the filter. In that case VectorizedOrcAcidRowBatchReader sends a 
> batch of size 0. With a 0 batch size, VectorFileSinkArrowOperator creates a 
> batch of just metadata and sets the value count to 0. Such batches should 
> be ignored by the client, which should wait for the next batch.
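The intended loop shape is to end only when loadNextBatch() reports no more batches, and to skip, rather than stop on, empty ones. A dependency-free sketch (an Iterator of int arrays stands in for ArrowStreamReader and its batches; this is not the actual patch):

```java
import java.util.Iterator;
import java.util.List;

public class BatchReaderLoop {
  // The bug treated a 0-row batch as end-of-stream. The fix: only a false
  // return from loadNextBatch() (modelled here by the iterator running out)
  // ends the loop; empty batches are skipped and reading continues.
  static int countRows(Iterator<int[]> batches) {
    int rows = 0;
    while (batches.hasNext()) {        // ~ reader.loadNextBatch()
      int[] batch = batches.next();
      if (batch.length == 0) {
        continue;                      // metadata-only batch from an all-deleted split
      }
      rows += batch.length;
    }
    return rows;
  }

  public static void main(String[] args) {
    // Three rows, an empty batch, then two more rows: all five must be read.
    List<int[]> batches =
        List.of(new int[]{1, 2, 3}, new int[]{}, new int[]{4, 5});
    System.out.println(countRows(batches.iterator()));
  }
}
```

The early-return version loses the two trailing rows; this version counts all five.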



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22880) ACID: All delete event readers should ignore ORC SARGs

2020-02-20 Thread Ashutosh Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-22880:

Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, [~gopalv]

> ACID: All delete event readers should ignore ORC SARGs
> --
>
> Key: HIVE-22880
> URL: https://issues.apache.org/jira/browse/HIVE-22880
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions, Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Gopal Vijayaraghavan
>Priority: Blocker
> Fix For: 4.0.0
>
> Attachments: HIVE-22880.1.patch
>
>
> Delete delta readers should not apply any SARGs other than the ones related 
> to the transaction id ranges within the inserts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22891) Skip PartitonDesc Extraction In CombineHiveRecord For Non-LLAP Execution Mode

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041493#comment-17041493
 ] 

Hive QA commented on HIVE-22891:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993965/HIVE-22891.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 18047 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[metadata_only_queries_with_filters]
 (batchId=186)
org.apache.hive.jdbc.TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1
 (batchId=290)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20758/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20758/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20758/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993965 - PreCommit-HIVE-Build

> Skip PartitonDesc Extraction In CombineHiveRecord For Non-LLAP Execution Mode
> -
>
> Key: HIVE-22891
> URL: https://issues.apache.org/jira/browse/HIVE-22891
> Project: Hive
>  Issue Type: Task
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22891.01.patch, HIVE-22891.02.patch, 
> HIVE-22891.03.patch
>
>
> {code:java}
> try {
>   // TODO: refactor this out
>   if (pathToPartInfo == null) {
>     MapWork mrwork;
>     if (HiveConf.getVar(conf,
>         HiveConf.ConfVars.HIVE_EXECUTION_ENGINE).equals("tez")) {
>       mrwork = (MapWork) Utilities.getMergeWork(jobConf);
>       if (mrwork == null) {
>         mrwork = Utilities.getMapWork(jobConf);
>       }
>     } else {
>       mrwork = Utilities.getMapWork(jobConf);
>     }
>     pathToPartInfo = mrwork.getPathToPartitionInfo();
>   }
>   PartitionDesc part = extractSinglePartSpec(hsplit);
>   inputFormat = HiveInputFormat.wrapForLlap(inputFormat, jobConf, part);
> } catch (HiveException e) {
>   throw new IOException(e);
> }
> {code}
> The above piece of code in CombineHiveRecordReader.java was introduced in 
> HIVE-15147. This overwrites inputFormat based on the PartitionDesc which is 
> not required in non-LLAP mode of execution as the method 
> HiveInputFormat.wrapForLlap() simply returns the previously defined 
> inputFormat in case of non-LLAP mode. The call to extractSinglePartSpec() 
> has serious performance implications: with a large number of small files, 
> each call takes approximately 2 to 3 seconds. Hence the same query that 
> runs on Hive 1.x / Hive 2 is much faster than on the latest Hive.
> {code:java}
> 2020-02-11 07:15:04,701 INFO [main] 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl: Reading ORC rows from 
> 2020-02-11 07:15:06,468 WARN [main] 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader: Multiple partitions 
> found; not going to pass a part spec to LLAP IO: {{logdate=2020-02-03, 
> hour=01, event=win}} and {{logdate=2020-02-03, hour=02, event=act}}
> 2020-02-11 07:15:06,468 INFO [main] 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader: succeeded in getting 
> org.apache.hadoop.mapred.FileSplit{code}
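Since HiveInputFormat.wrapForLlap() is effectively a no-op when LLAP IO is off, a cheap mode check before the expensive extraction avoids the per-split cost entirely. A minimal sketch of that guard follows; the names llapIoEnabled, extractSinglePartSpec, and wrapForLlap are simplified stand-ins for illustration, not the actual Hive signatures:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: stand-in names, not the real Hive API.
public class SkipPartSpecSketch {

    // Stand-in for the LLAP IO mode check (real Hive reads HiveConf).
    static boolean llapIoEnabled(Map<String, String> conf) {
        return "true".equalsIgnoreCase(conf.getOrDefault("hive.llap.io.enabled", "false"));
    }

    // Stand-in for the expensive per-split partition-spec extraction.
    static String extractSinglePartSpec() {
        return "partSpec"; // in the reported case this can take seconds per small file
    }

    // Guarded wrap: only pay for the extraction when LLAP IO can actually use it.
    static String wrapForLlap(String inputFormat, Map<String, String> conf) {
        if (!llapIoEnabled(conf)) {
            return inputFormat; // non-LLAP: the wrap is a no-op, skip extraction entirely
        }
        String part = extractSinglePartSpec();
        return "llap(" + inputFormat + "," + part + ")";
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(wrapForLlap("OrcInputFormat", conf));
        conf.put("hive.llap.io.enabled", "true");
        System.out.println(wrapForLlap("OrcInputFormat", conf));
    }
}
```

The point of the guard is ordering: the cheap configuration check runs first, so the per-split extraction cost is never paid in the non-LLAP path.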



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22891) Skip PartitonDesc Extraction In CombineHiveRecord For Non-LLAP Execution Mode

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041468#comment-17041468
 ] 

Hive QA commented on HIVE-22891:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} ql: The patch generated 0 new + 22 unchanged - 1 
fixed = 22 total (was 23) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20758/dev-support/hive-personality.sh
 |
| git revision | master / 703cf29 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20758/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20758/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Skip PartitonDesc Extraction In CombineHiveRecord For Non-LLAP Execution Mode
> -
>
> Key: HIVE-22891
> URL: https://issues.apache.org/jira/browse/HIVE-22891
> Project: Hive
>  Issue Type: Task
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22891.01.patch, HIVE-22891.02.patch, 
> HIVE-22891.03.patch
>
>
> {code:java}
> try {
>   // TODO: refactor this out
>   if (pathToPartInfo == null) {
> MapWork mrwork;
> if (HiveConf.getVar(conf, 
> HiveConf.ConfVars.HIVE_EXECUTION_ENGINE).equals("tez")) {
>   mrwork = (MapWork) Utilities.getMergeWork(jobConf);
>   if (mrwork == null) {
> mrwork = Utilities.getMapWork(jobConf);
>   }
> } else {
>   mrwork = Utilities.getMapWork(jobConf);
> }
> pathToPartInfo = mrwork.getPathToPartitionInfo();
>   }
>   PartitionDesc part = extractSinglePartSpec(hsplit);
>   inputFormat = HiveInputFormat.wrapForLlap(inputFormat, jobConf, part);
> } catch (HiveException e) {
>   throw new IOException(e);
> }
> {code}
> The above piece of code in CombineHiveRecordReader.java was introduced in 
> HIVE-15147. This overwrites inputFormat based on the PartitionDesc which is 
> not required in non-LLAP mode of execution as the method 
> HiveInputFormat.wrapForLlap() 

[jira] [Commented] (HIVE-22907) Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041439#comment-17041439
 ] 

Hive QA commented on HIVE-22907:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993956/HIVE-22907.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 18004 tests 
executed
*Failed tests:*
{noformat}
TestHCatAuthUtil - did not produce a TEST-*.xml file (likely timed out) 
(batchId=218)
TestHCatNonPartitioned - did not produce a TEST-*.xml file (likely timed out) 
(batchId=218)
TestHCatPartitionPublish - did not produce a TEST-*.xml file (likely timed out) 
(batchId=218)
TestHCatPartitioned - did not produce a TEST-*.xml file (likely timed out) 
(batchId=218)
TestPassProperties - did not produce a TEST-*.xml file (likely timed out) 
(batchId=218)
TestPermsGrp - did not produce a TEST-*.xml file (likely timed out) 
(batchId=218)
TestRCFileMapReduceInputFormat - did not produce a TEST-*.xml file (likely 
timed out) (batchId=218)
TestSemanticAnalysis - did not produce a TEST-*.xml file (likely timed out) 
(batchId=218)
TestUseDatabase - did not produce a TEST-*.xml file (likely timed out) 
(batchId=218)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_tableprops_external_with_default_constraint]
 (batchId=106)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_tableprops_external_with_notnull_constraint]
 (batchId=105)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20757/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20757/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20757/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993956 - PreCommit-HIVE-Build

> Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers
> 
>
> Key: HIVE-22907
> URL: https://issues.apache.org/jira/browse/HIVE-22907
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22907.01.patch, HIVE-22907.02.patch
>
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into more manageable classes 
> under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package stays manageable
> Step #15: extract the rest of the alter table analyzers from 
> DDLSemanticAnalyzer, and move them under the new package. Remove 
> DDLSemanticAnalyzer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22907) Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041406#comment-17041406
 ] 

Hive QA commented on HIVE-22907:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
51s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 2 new + 428 unchanged - 14 
fixed = 430 total (was 442) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20757/dev-support/hive-personality.sh
 |
| git revision | master / 703cf29 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20757/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20757/yetus/whitespace-eol.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20757/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20757/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers
> 
>
> Key: HIVE-22907
> URL: https://issues.apache.org/jira/browse/HIVE-22907
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22907.01.patch, HIVE-22907.02.patch
>
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into more manageable classes 
> under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package stays manageable
> Step #15: extract the rest of the alter table analyzers from 
> DDLSemanticAnalyzer, and move them under the new package. Remove 
> DDLSemanticAnalyzer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21164) ACID: explore how we can avoid a move step during inserts/compaction

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041388#comment-17041388
 ] 

Hive QA commented on HIVE-21164:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993955/HIVE-21164.22.patch

{color:green}SUCCESS:{color} +1 due to 21 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18050 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20756/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20756/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20756/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993955 - PreCommit-HIVE-Build

> ACID: explore how we can avoid a move step during inserts/compaction
> 
>
> Key: HIVE-21164
> URL: https://issues.apache.org/jira/browse/HIVE-21164
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Marta Kuczora
>Priority: Major
> Attachments: HIVE-21164.1.patch, HIVE-21164.10.patch, 
> HIVE-21164.11.patch, HIVE-21164.11.patch, HIVE-21164.12.patch, 
> HIVE-21164.13.patch, HIVE-21164.14.patch, HIVE-21164.14.patch, 
> HIVE-21164.15.patch, HIVE-21164.16.patch, HIVE-21164.17.patch, 
> HIVE-21164.18.patch, HIVE-21164.19.patch, HIVE-21164.2.patch, 
> HIVE-21164.20.patch, HIVE-21164.21.patch, HIVE-21164.22.patch, 
> HIVE-21164.3.patch, HIVE-21164.4.patch, HIVE-21164.5.patch, 
> HIVE-21164.6.patch, HIVE-21164.7.patch, HIVE-21164.8.patch, HIVE-21164.9.patch
>
>
> Currently, we write compacted data to a temporary location and then move the 
> files to a final location, which is an expensive operation on some cloud file 
> systems. Since HIVE-20823 is already in, it can control the visibility of 
> compacted data for the readers. Therefore, we can perhaps avoid writing data 
> to a temporary location and directly write compacted data to the intended 
> final path.
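The direct-write idea above can be sketched with a simplified visibility protocol. This is an illustration only: the marker-file mechanism below is a hypothetical stand-in, whereas real Hive relies on the write-id based visibility introduced by HIVE-20823, not marker files.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Sketch: write a delta directory directly at its final path and make it visible
// with a commit marker, instead of writing to a tmp dir and renaming (a "move"
// that is expensive on object stores). Marker files here are illustrative only.
public class DirectWriteSketch {
    static final String COMMIT_MARKER = "_committed";

    static Path writeDelta(Path table, String deltaName, List<String> rows) throws IOException {
        Path delta = Files.createDirectories(table.resolve(deltaName));
        Files.write(delta.resolve("bucket_00000"), rows); // data lands at the final path
        Files.createFile(delta.resolve(COMMIT_MARKER));   // last step flips visibility
        return delta;
    }

    // Readers ignore any delta dir without the marker, i.e. uncommitted writers.
    static List<String> visibleDeltas(Path table) throws IOException {
        List<String> out = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(table)) {
            for (Path p : ds) {
                if (Files.exists(p.resolve(COMMIT_MARKER))) {
                    out.add(p.getFileName().toString());
                }
            }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path table = Files.createTempDirectory("acid_table");
        Files.createDirectories(table.resolve("delta_2_2")); // in-flight writer: no marker yet
        writeDelta(table, "delta_1_1", List.of("row1", "row2"));
        System.out.println(visibleDeltas(table)); // only the committed delta is listed
    }
}
```

The trade-off this illustrates: a rename is atomic on HDFS but a copy on many object stores, while a small visibility flip (marker or write-id state) is cheap everywhere at the cost of readers doing an extra visibility check.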



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21164) ACID: explore how we can avoid a move step during inserts/compaction

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041363#comment-17041363
 ] 

Hive QA commented on HIVE-21164:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
51s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
31s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
9s{color} | {color:red} ql: The patch generated 35 new + 2704 unchanged - 26 
fixed = 2739 total (was 2730) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
21s{color} | {color:red} itests/hive-unit: The patch generated 21 new + 165 
unchanged - 21 fixed = 186 total (was 186) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 15 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} ql generated 0 new + 1529 unchanged - 1 fixed = 1529 
total (was 1530) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} streaming in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} streaming in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20756/dev-support/hive-personality.sh
 |
| git revision | master / 703cf29 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| 

[jira] [Updated] (HIVE-22359) LLAP: when a node restarts with the exact same host/port in kubernetes it is not detected as a task failure

2020-02-20 Thread Prasanth Jayachandran (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-22359:
-
Attachment: HIVE-22359.3.patch

> LLAP: when a node restarts with the exact same host/port in kubernetes it is 
> not detected as a task failure
> ---
>
> Key: HIVE-22359
> URL: https://issues.apache.org/jira/browse/HIVE-22359
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal Vijayaraghavan
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22359.1.patch, HIVE-22359.2.patch, 
> HIVE-22359.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> │ <14>1 2019-10-16T22:16:39.233Z 
> query-coordinator-0-5.query-coordinator-0-service.compute-1569601454-l2x9.svc.cluster.local
>  query-coordinator 1 461e5ad9-f05f-11e9-85f7-06e84765763e [mdc@18060 
> class="te │
> │ zplugins.LlapTaskCommunicator" level="INFO" thread="IPC Server handler 4 on 
> 3"] The tasks we expected to be on the node are not there: 
> attempt_1569601631911__1_04_34_0, attempt_15696016319 │
> │ 11__1_04_71_0, attempt_1569601631911__1_04_000191_0, 
> attempt_1569601631911__1_04_000211_0, 
> attempt_1569601631911__1_04_000229_0, 
> attempt_1569601631911__1_04_000231_0, attempt_1 │
> │ 569601631911__1_04_000235_0, attempt_1569601631911__1_04_000242_0, 
> attempt_1569601631911__1_04_000160_1, 
> attempt_1569601631911__1_04_12_2, 
> attempt_1569601631911__1_04_03_2, │
> │  attempt_1569601631911__1_04_56_2, 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22359) LLAP: when a node restarts with the exact same host/port in kubernetes it is not detected as a task failure

2020-02-20 Thread Prasanth Jayachandran (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041345#comment-17041345
 ] 

Prasanth Jayachandran commented on HIVE-22359:
--

Another try because of an unrelated failure.

> LLAP: when a node restarts with the exact same host/port in kubernetes it is 
> not detected as a task failure
> ---
>
> Key: HIVE-22359
> URL: https://issues.apache.org/jira/browse/HIVE-22359
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal Vijayaraghavan
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22359.1.patch, HIVE-22359.2.patch, 
> HIVE-22359.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> │ <14>1 2019-10-16T22:16:39.233Z 
> query-coordinator-0-5.query-coordinator-0-service.compute-1569601454-l2x9.svc.cluster.local
>  query-coordinator 1 461e5ad9-f05f-11e9-85f7-06e84765763e [mdc@18060 
> class="te │
> │ zplugins.LlapTaskCommunicator" level="INFO" thread="IPC Server handler 4 on 
> 3"] The tasks we expected to be on the node are not there: 
> attempt_1569601631911__1_04_34_0, attempt_15696016319 │
> │ 11__1_04_71_0, attempt_1569601631911__1_04_000191_0, 
> attempt_1569601631911__1_04_000211_0, 
> attempt_1569601631911__1_04_000229_0, 
> attempt_1569601631911__1_04_000231_0, attempt_1 │
> │ 569601631911__1_04_000235_0, attempt_1569601631911__1_04_000242_0, 
> attempt_1569601631911__1_04_000160_1, 
> attempt_1569601631911__1_04_12_2, 
> attempt_1569601631911__1_04_03_2, │
> │  attempt_1569601631911__1_04_56_2, 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22825) Reduce directory lookup cost for acid tables

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041336#comment-17041336
 ] 

Hive QA commented on HIVE-22825:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993614/HIVE-22825.8.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20755/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20755/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20755/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12993614/HIVE-22825.8.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993614 - PreCommit-HIVE-Build

> Reduce directory lookup cost for acid tables
> 
>
> Key: HIVE-22825
> URL: https://issues.apache.org/jira/browse/HIVE-22825
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-22825.1.patch, HIVE-22825.2.patch, 
> HIVE-22825.3.patch, HIVE-22825.4.patch, HIVE-22825.5.patch, 
> HIVE-22825.6.patch, HIVE-22825.7.patch, HIVE-22825.8.patch
>
>
> With object stores, directory lookups are expensive. For acid tables, it 
> would be good to have a directory cache to reduce the number of lookup calls.
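The caching idea can be sketched as a small TTL-based listing cache: repeated lookups of the same delta directories hit memory instead of the (slow, on object stores) file system. The TTL policy and the class/field names below are illustrative assumptions, not the actual patch:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a directory-listing cache; invalidation by TTL is one possible policy.
public class DirListingCache {
    private static final class Entry {
        final List<String> files;
        final long loadedAtMs;
        Entry(List<String> files, long loadedAtMs) {
            this.files = files;
            this.loadedAtMs = loadedAtMs;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMs;
    long loaderCalls = 0; // exposed only so the demo can show the saved lookups

    public DirListingCache(long ttlMs) { this.ttlMs = ttlMs; }

    // Consults the cache first; the loader (the real FS call) only runs on miss/expiry.
    public List<String> list(String dir, Function<String, List<String>> loader) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(dir);
        if (e == null || now - e.loadedAtMs > ttlMs) { // miss or expired
            loaderCalls++;
            e = new Entry(loader.apply(dir), now);
            cache.put(dir, e);
        }
        return e.files;
    }

    public static void main(String[] args) {
        DirListingCache c = new DirListingCache(60_000);
        Function<String, List<String>> fs = d -> List.of("delta_1_1", "delta_2_2");
        c.list("/warehouse/t", fs);
        c.list("/warehouse/t", fs); // served from cache; the loader is not called again
        System.out.println(c.loaderCalls); // 1
    }
}
```

For acid tables the hard part is invalidation rather than the cache itself, since compaction and new deltas change the listing; a TTL is only one of several plausible choices.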



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22359) LLAP: when a node restarts with the exact same host/port in kubernetes it is not detected as a task failure

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041335#comment-17041335
 ] 

Hive QA commented on HIVE-22359:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993943/HIVE-22359.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 18048 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.org.apache.hadoop.hive.ql.TestWarehouseExternalDir
 (batchId=270)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testExternalDefaultPaths 
(batchId=270)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20754/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20754/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20754/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993943 - PreCommit-HIVE-Build

> LLAP: when a node restarts with the exact same host/port in kubernetes it is 
> not detected as a task failure
> ---
>
> Key: HIVE-22359
> URL: https://issues.apache.org/jira/browse/HIVE-22359
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal Vijayaraghavan
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22359.1.patch, HIVE-22359.2.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> │ <14>1 2019-10-16T22:16:39.233Z 
> query-coordinator-0-5.query-coordinator-0-service.compute-1569601454-l2x9.svc.cluster.local
>  query-coordinator 1 461e5ad9-f05f-11e9-85f7-06e84765763e [mdc@18060 
> class="te │
> │ zplugins.LlapTaskCommunicator" level="INFO" thread="IPC Server handler 4 on 
> 3"] The tasks we expected to be on the node are not there: 
> attempt_1569601631911__1_04_34_0, attempt_15696016319 │
> │ 11__1_04_71_0, attempt_1569601631911__1_04_000191_0, 
> attempt_1569601631911__1_04_000211_0, 
> attempt_1569601631911__1_04_000229_0, 
> attempt_1569601631911__1_04_000231_0, attempt_1 │
> │ 569601631911__1_04_000235_0, attempt_1569601631911__1_04_000242_0, 
> attempt_1569601631911__1_04_000160_1, 
> attempt_1569601631911__1_04_12_2, 
> attempt_1569601631911__1_04_03_2, │
> │  attempt_1569601631911__1_04_56_2, 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22908) AM caching connections to LLAP based on hostname and port does not work in kubernetes

2020-02-20 Thread Prasanth Jayachandran (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-22908:
-
Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks for the review [~gopalv]! Merged patch to master.

> AM caching connections to LLAP based on hostname and port does not work in 
> kubernetes
> -
>
> Key: HIVE-22908
> URL: https://issues.apache.org/jira/browse/HIVE-22908
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22908.1.patch
>
>
> The AM caches all connections to LLAP services using a combination of hostname 
> and port, which does not work in a Kubernetes environment where a pod's hostname 
> and port can stay the same across restarts with a statefulset. This causes the 
> AM to talk to an old LLAP instance that may have died (OOM, pod kill, etc.). 
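One way to see the fix: keying the cache by host:port alone conflates a restarted pod with its predecessor, so the key needs a per-instance token (for example the daemon start time or pod UID). The classes and names below are illustrative stand-ins, not the actual Hive task-communicator code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: include a unique per-instance token in the connection-cache key so a
// pod restarted on the same host/port does not get served a stale connection.
public class LlapConnCache {
    static final class Conn {
        final String target;
        Conn(String target) { this.target = target; }
    }

    private final Map<String, Conn> cache = new ConcurrentHashMap<>();

    // instanceToken distinguishes restarts of the same host:port (e.g. start time).
    Conn getOrCreate(String host, int port, String instanceToken) {
        String key = host + ":" + port + ":" + instanceToken;
        return cache.computeIfAbsent(key, k -> new Conn(host + ":" + port));
    }

    public static void main(String[] args) {
        LlapConnCache cache = new LlapConnCache();
        Conn before = cache.getOrCreate("llap-0", 15001, "started@100");
        Conn after  = cache.getOrCreate("llap-0", 15001, "started@200"); // pod restarted
        System.out.println(before != after); // true: the stale connection is not reused
    }
}
```

A production version would also evict the old entry so the dead connection can be closed, rather than letting it linger in the map.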



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22912) Support native submission of Hive queries to a Kubernetes Cluster

2020-02-20 Thread Surbhi Aggarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041326#comment-17041326
 ] 

Surbhi Aggarwal commented on HIVE-22912:


No, I meant the entire Hive query execution process on Kubernetes, that is, 
starting a Hive session on a HiveServer Kubernetes pod, which ultimately 
starts a DAGAppMaster pod, which in turn executes the query either by launching 
Kubernetes worker pods or by submitting work to LLAP daemons, depending on the 
configuration.

 

 

> Support native submission of Hive queries to a Kubernetes Cluster
> -
>
> Key: HIVE-22912
> URL: https://issues.apache.org/jira/browse/HIVE-22912
> Project: Hive
>  Issue Type: New Feature
>Reporter: Surbhi Aggarwal
>Priority: Major
>
> So many big data applications are already integrated or trying to natively 
> integrate with Kubernetes engine. Should we not work together to support hive 
> with this engine?
> If efforts are already being spent on this, please point me to it. Thanks !





[jira] [Updated] (HIVE-22585) Clean up catalog/db/table name usage

2020-02-20 Thread David Lavati (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-22585:

Attachment: HIVE-22585.03.patch

> Clean up catalog/db/table name usage
> 
>
> Key: HIVE-22585
> URL: https://issues.apache.org/jira/browse/HIVE-22585
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available, refactor
> Attachments: HIVE-22585.01.patch, HIVE-22585.02.patch, 
> HIVE-22585.03.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This is a followup to HIVE-21198 to address some additional improvement ideas 
> for the TableName object mentioned in 
> [https://github.com/apache/hive/pull/550] and attempt to remove all the fishy 
> usages of db/tablenames, as a number of places still rely on certain state 
> changes/black magic.





[jira] [Comment Edited] (HIVE-22903) Vectorized row_number() resets the row number after one batch in case of constant expression in partition clause

2020-02-20 Thread Ramesh Kumar Thangarajan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041312#comment-17041312
 ] 

Ramesh Kumar Thangarajan edited comment on HIVE-22903 at 2/20/20 9:39 PM:
--

[~ShubhamChaurasia] I was thinking something like
{code:java}
for (VectorPTFEvaluatorBase evaluator : evaluators) {
  if(!(evaluator instanceof VectorPTFEvaluatorRowNumber && 
verifyEvaluatorArgumentsAreConstant)) {
evaluator.resetEvaluator();
  }
}
{code}
Need to pass the arguments of each of the evaluators to compute this –  
verifyEvaluatorArgumentsAreConstant

Looking more into this, the problem doesn't seem specific to constants either. 
We reset the evaluators for every batch, so the problem should also exist when 
partitioning by a column: we would notice it if the column contained a value 
repeated more than 1024 times (spanning the VRB size). It looks like we are 
not calling resetEvaluators() at the right place in the code. We are not 
differentiating between the partition groups and the row batch groups; we 
should only reset for the partition groups and not for the row batch groups.

 


was (Author: rameshkumar):
I was thinking something like

 
{code:java}
for (VectorPTFEvaluatorBase evaluator : evaluators) {
  if(!(evaluator instanceof VectorPTFEvaluatorRowNumber && 
verifyEvaluatorArgumentsAreConstant)) {
evaluator.resetEvaluator();
  }
}
{code}
Need to pass the arguments of each of the evaluators to compute this –  
verifyEvaluatorArgumentsAreConstant

Looking more into this, the problem doesn't look specific to constants too. For 
example, we reset the evaluators for every batch. So the problem should exists 
for grouping by columns too. We might notice the issue if we actually group by 
a column, where the column contains a repeated value for more than 1024 
times(spanning the VRB size). Thinking more about this, it looks like we are 
not calling the resetEvaluators() at the right place in the code. I think we 
are not differentiating between the partition groups and the row batch groups. 
We should only reset for the partition groups and not for the row batch groups.

 

> Vectorized row_number() resets the row number after one batch in case of 
> constant expression in partition clause
> 
>
> Key: HIVE-22903
> URL: https://issues.apache.org/jira/browse/HIVE-22903
> Project: Hive
>  Issue Type: Bug
>  Components: UDF, Vectorization
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22903.01.patch, HIVE-22903.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The vectorized row_number() implementation resets the row number when a 
> constant expression is passed in the partition clause.
> Repro Query
> {code}
> select row_number() over(partition by 1) r1, t from over10k_n8;
> Or
> select row_number() over() r1, t from over10k_n8;
> {code}
> where table over10k_n8 contains more than 1024 records.
> This happens because currently, in VectorPTFOperator, we reset the 
> evaluators if only a partition clause is present.
> {code:java}
> // If we are only processing a PARTITION BY, reset our evaluators.
> if (!isPartitionOrderBy) {
>   groupBatches.resetEvaluators();
> }
> {code}
> To resolve this, we should also check whether the entire partition clause is 
> a constant expression; if so, we should not call 
> {{groupBatches.resetEvaluators()}}
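The fix discussed in the comments above can be illustrated with a sketch (hypothetical names, not the actual VectorPTFGroupBatches code): reset the row_number state only at a partition boundary, never merely because a 1024-row batch ended. A constant PARTITION BY key never changes, so the counter must survive batch boundaries.

```java
import java.util.Objects;

public class PartitionResetSketch {
    // Stand-in for the state a row_number evaluator keeps per partition.
    static int rowNumber = 0;
    static Object currentKey = null;

    // Reset only when the partition key changes, not once per batch.
    static int next(Object partitionKey) {
        if (!Objects.equals(partitionKey, currentKey)) {
            currentKey = partitionKey;
            rowNumber = 0; // partition boundary: reset the evaluator
        }
        return ++rowNumber;
    }

    public static void main(String[] args) {
        int last = 0;
        // Simulate two 1024-row batches with a constant partition key
        // (the "partition by 1" repro from the description).
        for (int i = 0; i < 2048; i++) {
            last = next(1);
        }
        System.out.println(last); // 2048: no spurious reset at the batch boundary
    }
}
```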





[jira] [Commented] (HIVE-22903) Vectorized row_number() resets the row number after one batch in case of constant expression in partition clause

2020-02-20 Thread Ramesh Kumar Thangarajan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041312#comment-17041312
 ] 

Ramesh Kumar Thangarajan commented on HIVE-22903:
-

I was thinking something like

 
{code:java}
for (VectorPTFEvaluatorBase evaluator : evaluators) {
  if(!(evaluator instanceof VectorPTFEvaluatorRowNumber && 
verifyEvaluatorArgumentsAreConstant)) {
evaluator.resetEvaluator();
  }
}
{code}
Need to pass the arguments of each of the evaluators to compute this –  
verifyEvaluatorArgumentsAreConstant

Looking more into this, the problem doesn't seem specific to constants either. 
We reset the evaluators for every batch, so the problem should also exist when 
partitioning by a column: we would notice it if the column contained a value 
repeated more than 1024 times (spanning the VRB size). It looks like we are 
not calling resetEvaluators() at the right place in the code. We are not 
differentiating between the partition groups and the row batch groups; we 
should only reset for the partition groups and not for the row batch groups.

 

> Vectorized row_number() resets the row number after one batch in case of 
> constant expression in partition clause
> 
>
> Key: HIVE-22903
> URL: https://issues.apache.org/jira/browse/HIVE-22903
> Project: Hive
>  Issue Type: Bug
>  Components: UDF, Vectorization
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22903.01.patch, HIVE-22903.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The vectorized row_number() implementation resets the row number when a 
> constant expression is passed in the partition clause.
> Repro Query
> {code}
> select row_number() over(partition by 1) r1, t from over10k_n8;
> Or
> select row_number() over() r1, t from over10k_n8;
> {code}
> where table over10k_n8 contains more than 1024 records.
> This happens because currently, in VectorPTFOperator, we reset the 
> evaluators if only a partition clause is present.
> {code:java}
> // If we are only processing a PARTITION BY, reset our evaluators.
> if (!isPartitionOrderBy) {
>   groupBatches.resetEvaluators();
> }
> {code}
> To resolve this, we should also check whether the entire partition clause is 
> a constant expression; if so, we should not call 
> {{groupBatches.resetEvaluators()}}





[jira] [Commented] (HIVE-22359) LLAP: when a node restarts with the exact same host/port in kubernetes it is not detected as a task failure

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041305#comment-17041305
 ] 

Hive QA commented on HIVE-22359:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} llap-tez in master has 18 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} llap-tez: The patch generated 1 new + 37 unchanged - 0 
fixed = 38 total (was 37) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20754/dev-support/hive-personality.sh
 |
| git revision | master / faaf2c3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20754/yetus/diff-checkstyle-llap-tez.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20754/yetus/patch-asflicense-problems.txt
 |
| modules | C: llap-tez U: llap-tez |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20754/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> LLAP: when a node restarts with the exact same host/port in kubernetes it is 
> not detected as a task failure
> ---
>
> Key: HIVE-22359
> URL: https://issues.apache.org/jira/browse/HIVE-22359
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal Vijayaraghavan
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22359.1.patch, HIVE-22359.2.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> │ <14>1 2019-10-16T22:16:39.233Z 
> query-coordinator-0-5.query-coordinator-0-service.compute-1569601454-l2x9.svc.cluster.local
>  query-coordinator 1 461e5ad9-f05f-11e9-85f7-06e84765763e [mdc@18060 
> class="te │
> │ zplugins.LlapTaskCommunicator" level="INFO" thread="IPC Server handler 4 on 
> 3"] The tasks we expected to be on the node are not there: 
> attempt_1569601631911__1_04_34_0, attempt_15696016319 │
> │ 11__1_04_71_0, attempt_1569601631911__1_04_000191_0, 
> attempt_1569601631911__1_04_000211_0, 
> attempt_1569601631911__1_04_000229_0, 
> attempt_1569601631911__1_04_000231_0, attempt_1 │
> │ 569601631911__1_04_000235_0, attempt_1569601631911__1_04_000242_0, 
> attempt_1569601631911__1_04_000160_1, 

[jira] [Commented] (HIVE-22840) Race condition in formatters of TimestampColumnVector and DateColumnVector

2020-02-20 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041301#comment-17041301
 ] 

Jesus Camacho Rodriguez commented on HIVE-22840:


[~ShubhamChaurasia], can you create a PR? Btw, it seems the patch is missing 
the removal of the original {{CalendarUtils}}.

> Race condition in formatters of TimestampColumnVector and DateColumnVector 
> ---
>
> Key: HIVE-22840
> URL: https://issues.apache.org/jira/browse/HIVE-22840
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: László Bodor
>Assignee: Shubham Chaurasia
>Priority: Major
> Attachments: HIVE-22840.1.patch, HIVE-22840.2.patch, HIVE-22840.patch
>
>
> HIVE-22405 added support for proleptic calendar. It uses java's 
> SimpleDateFormat/Calendar APIs which are not thread-safe and cause race in 
> some scenarios. 
> As a result of those race conditions, we see some exceptions like
> {code:java}
> 1) java.lang.NumberFormatException: For input string: "" 
> OR 
> java.lang.NumberFormatException: For input string: ".821582E.821582E44"
> OR
> 2) Caused by: java.lang.ArrayIndexOutOfBoundsException: -5325980
>   at 
> sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:453)
>   at 
> java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2397)
> {code}
> This issue is to address those thread-safety issues/race conditions.
> cc [~jcamachorodriguez] [~abstractdog] [~omalley]
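The standard fixes for the unsafe SimpleDateFormat/Calendar sharing described above are thread confinement or the immutable java.time API. A minimal sketch (illustrative only, not the storage-api patch itself):

```java
import java.text.SimpleDateFormat;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Date;
import java.util.TimeZone;

public class SafeFormatterSketch {
    // Option 1: SimpleDateFormat is not thread-safe, so give each
    // thread its own instance instead of sharing one.
    static final ThreadLocal<SimpleDateFormat> PER_THREAD =
        ThreadLocal.withInitial(() -> {
            SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd");
            f.setTimeZone(TimeZone.getTimeZone("UTC"));
            return f;
        });

    // Option 2: DateTimeFormatter is immutable and safe to share
    // across threads without synchronization.
    static final DateTimeFormatter SHARED =
        DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

    public static void main(String[] args) {
        System.out.println(PER_THREAD.get().format(new Date(0L))); // 1970-01-01
        System.out.println(SHARED.format(Instant.ofEpochMilli(0L))); // 1970-01-01
    }
}
```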





[jira] [Updated] (HIVE-22763) 0 is accepted in 12-hour format during timestamp cast

2020-02-20 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-22763:
-
Attachment: HIVE-22763.02.patch

> 0 is accepted in 12-hour format during timestamp cast
> -
>
> Key: HIVE-22763
> URL: https://issues.apache.org/jira/browse/HIVE-22763
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22763.01.patch, HIVE-22763.01.patch, 
> HIVE-22763.01.patch, HIVE-22763.01.patch, HIVE-22763.01.patch, 
> HIVE-22763.01.patch, HIVE-22763.02.patch
>
>
> A timestamp string in 12-hour format can currently be parsed even if the 
> hour is 0; however, based on the [design 
> document|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit],
>  it should be rejected.
> h3. How to reproduce
> Run {code}select cast("2020-01-01 0 am 00" as timestamp format "-mm-dd 
> hh12 p.m. ss"){code}
> It shouldn't be parsed, as the hour component is 0.
> h3. Spec
> ||Pattern||Meaning||Additional details||
> |HH12|Hour of day (1-12)|Same as HH|
> |HH|Hour of day (1-12)|{panel:borderStyle=none}
> - One digit inputs are possible in a string to datetime conversion but need 
> to be surrounded by separators.
> - In a datetime to string conversion one digit hours are prefixed with a zero.
> - Error if provided hour is not between 1 and 12.
> - Displaying an unformatted timestamp in Impala uses the HH24 format 
> regardless if it was created using HH12.
> - If no AM/PM provided then defaults to AM.
> - In string to datetime conversion, conflicts with S and 
> HH24.{panel:borderStyle=none}|
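A sketch of the intended HH12 check (a hypothetical helper, not the actual parser code): reject hours outside the SQL range [1, 12], and map 12 to 0 for ChronoField.HOUR_OF_AMPM as discussed in the comments.

```java
public class Hh12Sketch {
    // Convert a parsed HH12 field to the 0-11 range of
    // ChronoField.HOUR_OF_AMPM, rejecting the SQL-invalid hour 0.
    static int toHourOfAmPm(int hh12) {
        if (hh12 < 1 || hh12 > 12) {
            throw new IllegalArgumentException(
                "Hour must be between 1 and 12: " + hh12);
        }
        return hh12 == 12 ? 0 : hh12; // SQL 12 (am/pm) maps to 0
    }

    public static void main(String[] args) {
        System.out.println(toHourOfAmPm(12)); // 0
        System.out.println(toHourOfAmPm(1));  // 1
        try {
            toHourOfAmPm(0); // e.g. the "2020-01-01 0 am 00" repro input
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```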





[jira] [Commented] (HIVE-22908) AM caching connections to LLAP based on hostname and port does not work in kubernetes

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041297#comment-17041297
 ] 

Hive QA commented on HIVE-22908:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993945/HIVE-22908.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18047 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20753/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20753/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20753/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993945 - PreCommit-HIVE-Build

> AM caching connections to LLAP based on hostname and port does not work in 
> kubernetes
> -
>
> Key: HIVE-22908
> URL: https://issues.apache.org/jira/browse/HIVE-22908
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-22908.1.patch
>
>
> The AM caches all connections to LLAP services using a combination of 
> hostname and port, which does not work in a Kubernetes environment, where a 
> pod's hostname and port can stay the same across restarts with a StatefulSet. 
> This causes the AM to talk to an old LLAP instance that may have died (OOM, 
> pod kill, etc.).





[jira] [Commented] (HIVE-22098) Data loss occurs when multiple tables are join with different bucket_version

2020-02-20 Thread Ramesh Kumar Thangarajan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041296#comment-17041296
 ] 

Ramesh Kumar Thangarajan commented on HIVE-22098:
-

[~jithendhir92] Do you know if there were any inserts done to the migrated 
table after migration? If yes, this might be related to 
https://issues.apache.org/jira/browse/HIVE-22429. We can try backporting the 
change to 3.1.2 and verify whether it fixes the issue.

> Data loss occurs when multiple tables are join with different bucket_version
> 
>
> Key: HIVE-22098
> URL: https://issues.apache.org/jira/browse/HIVE-22098
> Project: Hive
>  Issue Type: Bug
>  Components: Operators
>Affects Versions: 3.1.0
>Reporter: LuGuangMing
>Assignee: LuGuangMing
>Priority: Major
> Attachments: HIVE-22098.1.patch, image-2019-08-12-18-45-15-771.png, 
> join_test.sql, table_a_data.orc, table_b_data.orc, table_c_data.orc
>
>
> When tables with different bucket versions are joined and the number of 
> reducers is greater than 2, data is easily lost.
> *Scenario 1*: a three-table join. The intermediate result of joining table_a 
> and table_b is recorded as tmp_a_b. It is then joined with the third table, 
> which has bucket_version=2 (the default for tables created since Hive 
> 3.0.0), while the temporary data tmp_a_b is initialized with 
> bucketVersion=-1, and the ReduceSinkOperator joins with bucketVersion=-1. In 
> the init method, the hash algorithm for the join columns is selected 
> according to bucketVersion: if bucketVersion = 2 and it is not an ACID 
> operation, the new hash algorithm is used; otherwise the old one is used. 
> Because the hash algorithms are inconsistent, rows are allocated to 
> different partitions, so at the Reducer stage rows with the same key cannot 
> be paired, resulting in data loss.
> *Scenario 2*: create two test tables: create table 
> table_bucketversion_1(col_1 string, col_2 string) TBLPROPERTIES 
> ('bucketing_version'='1'); table_bucketversion_2(col_1 string, col_2 string) 
> TBLPROPERTIES ('bucketing_version'='2');
> When table_bucketversion_1 is joined with table_bucketversion_2, partial 
> result data is lost because the bucket versions differ.
>  
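The mismatch described above can be demonstrated with two deliberately different hash functions (illustrative only; Hive's actual old and new bucketing hashes differ in detail): when each side of the join partitions the same key with a different algorithm, matching rows can land on different reducers and never meet.

```java
public class BucketHashSketch {
    // Two different hash functions standing in for the old and new
    // bucketing algorithms (illustrative, not Hive's real ones).
    static int oldHash(String key) { return key.hashCode(); }
    static int newHash(String key) { return key.hashCode() * 31 + 7; }

    // Reducer assignment: non-negative hash modulo reducer count.
    static int reducer(int hash, int numReducers) {
        return (hash & Integer.MAX_VALUE) % numReducers;
    }

    public static void main(String[] args) {
        int numReducers = 3;
        String key = "k1";
        int left = reducer(oldHash(key), numReducers);  // one join branch
        int right = reducer(newHash(key), numReducers); // the other branch
        // If the assignments differ, rows with the same key are sent to
        // different reducers and the join silently drops them.
        System.out.println(left == right ? "rows meet" : "rows diverge: data loss");
    }
}
```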





[jira] [Commented] (HIVE-22763) 0 is accepted in 12-hour format during timestamp cast

2020-02-20 Thread Karen Coppage (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041292#comment-17041292
 ] 

Karen Coppage commented on HIVE-22763:
--

{quote}the SQL range is 1-12 (or rather 12, 1..11){quote}

The SQL range is [1,12]. I meant that ChronoField.HOUR_OF_AMPM's range [0, 1, 
2, 3, ..., 11] corresponds to SQL [12, 1, 2, 3, ..., 11].

I'm ok with returning {{int 0}} immediately in the case of input=12.

> 0 is accepted in 12-hour format during timestamp cast
> -
>
> Key: HIVE-22763
> URL: https://issues.apache.org/jira/browse/HIVE-22763
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22763.01.patch, HIVE-22763.01.patch, 
> HIVE-22763.01.patch, HIVE-22763.01.patch, HIVE-22763.01.patch, 
> HIVE-22763.01.patch
>
>
> A timestamp string in 12-hour format can currently be parsed even if the 
> hour is 0; however, based on the [design 
> document|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit],
>  it should be rejected.
> h3. How to reproduce
> Run {code}select cast("2020-01-01 0 am 00" as timestamp format "-mm-dd 
> hh12 p.m. ss"){code}
> It shouldn't be parsed, as the hour component is 0.
> h3. Spec
> ||Pattern||Meaning||Additional details||
> |HH12|Hour of day (1-12)|Same as HH|
> |HH|Hour of day (1-12)|{panel:borderStyle=none}
> - One digit inputs are possible in a string to datetime conversion but need 
> to be surrounded by separators.
> - In a datetime to string conversion one digit hours are prefixed with a zero.
> - Error if provided hour is not between 1 and 12.
> - Displaying an unformatted timestamp in Impala uses the HH24 format 
> regardless if it was created using HH12.
> - If no AM/PM provided then defaults to AM.
> - In string to datetime conversion, conflicts with S and 
> HH24.{panel:borderStyle=none}|





[jira] [Commented] (HIVE-22908) AM caching connections to LLAP based on hostname and port does not work in kubernetes

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041270#comment-17041270
 ] 

Hive QA commented on HIVE-22908:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} llap-common in master has 90 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20753/dev-support/hive-personality.sh
 |
| git revision | master / faaf2c3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20753/yetus/patch-asflicense-problems.txt
 |
| modules | C: llap-common U: llap-common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20753/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> AM caching connections to LLAP based on hostname and port does not work in 
> kubernetes
> -
>
> Key: HIVE-22908
> URL: https://issues.apache.org/jira/browse/HIVE-22908
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-22908.1.patch
>
>
> The AM caches all connections to LLAP services using a combination of 
> hostname and port, which does not work in a Kubernetes environment, where a 
> pod's hostname and port can stay the same across restarts with a StatefulSet. 
> This causes the AM to talk to an old LLAP instance that may have died (OOM, 
> pod kill, etc.).





[jira] [Commented] (HIVE-22562) Harmonize SessionState.getUserName

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041260#comment-17041260
 ] 

Hive QA commented on HIVE-22562:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993942/HIVE-22562.05.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 28 failed/errored test(s), 18008 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_logged_in_user] 
(batchId=6)
org.apache.hadoop.hive.cli.TestCliDriverMethods.testProcessSelectDatabase 
(batchId=209)
org.apache.hadoop.hive.cli.TestCliDriverMethods.testprocessInitFiles 
(batchId=209)
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testMetastoreVersion 
(batchId=251)
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testVersionMatching 
(batchId=251)
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testVersionMisMatch 
(batchId=251)
org.apache.hadoop.hive.ql.hooks.TestHooks.org.apache.hadoop.hive.ql.hooks.TestHooks
 (batchId=357)
org.apache.hadoop.hive.ql.parse.authorization.TestSessionUserName.testSessionDefaultUser
 (batchId=336)
org.apache.hadoop.hive.ql.parse.authorization.TestSessionUserName.testSessionGetGroupNames
 (batchId=336)
org.apache.hadoop.hive.ql.parse.authorization.TestSessionUserName.testSessionNullUser
 (batchId=336)
org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.testScheduledQueryExecution
 (batchId=357)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation
 (batchId=284)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters
 (batchId=284)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow
 (batchId=290)
org.apache.hive.jdbc.authorization.TestCLIAuthzSessionContext.testAuthzSessionContextContents
 (batchId=294)
org.apache.hive.jdbc.authorization.TestHS2AuthzContext.org.apache.hive.jdbc.authorization.TestHS2AuthzContext
 (batchId=294)
org.apache.hive.jdbc.authorization.TestHS2AuthzSessionContext.org.apache.hive.jdbc.authorization.TestHS2AuthzSessionContext
 (batchId=294)
org.apache.hive.jdbc.authorization.TestJdbcMetadataApiAuth.org.apache.hive.jdbc.authorization.TestJdbcMetadataApiAuth
 (batchId=294)
org.apache.hive.jdbc.authorization.TestJdbcWithSQLAuthUDFBlacklist.testBlackListedUdfUsage
 (batchId=294)
org.apache.hive.jdbc.authorization.TestJdbcWithSQLAuthorization.org.apache.hive.jdbc.authorization.TestJdbcWithSQLAuthorization
 (batchId=294)
org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary
 (batchId=306)
org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp
 (batchId=306)
org.apache.hive.service.cli.session.TestQueryDisplay.testQueryDisplay 
(batchId=288)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testAbandonedSessionMetrics
 (batchId=244)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testActiveSessionMetrics
 (batchId=244)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testActiveSessionTimeMetrics
 (batchId=244)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testOpenSessionTimeMetrics
 (batchId=244)
org.apache.hive.service.cli.thrift.TestThriftHttpCLIServiceFeatures.org.apache.hive.service.cli.thrift.TestThriftHttpCLIServiceFeatures
 (batchId=286)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20752/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20752/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20752/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 28 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993942 - PreCommit-HIVE-Build

> Harmonize SessionState.getUserName
> --
>
> Key: HIVE-22562
> URL: https://issues.apache.org/jira/browse/HIVE-22562
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22562.01.patch, HIVE-22562.02.patch, 
> HIVE-22562.03.patch, HIVE-22562.04.patch, HIVE-22562.05.patch
>
>
> we might have 2 different user names at the same time:
> * 
> 

[jira] [Comment Edited] (HIVE-21348) Execute the TIMESTAMP types roadmap

2020-02-20 Thread H. Vetinari (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041258#comment-17041258
 ] 

H. Vetinari edited comment on HIVE-21348 at 2/20/20 7:31 PM:
-

After finding out about this via HIVE-22006 (where [~klcopp] also 
[posted|https://issues.apache.org/jira/browse/HIVE-22006?focusedCommentId=17040113&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17040113]
 a more indepth version of the [design 
doc|https://docs.google.com/document/d/1gNRww9mZJcHvUDCXklzjFEQGpefsuR_akCDfWsdE35Q/edit]),
 I opened IMPALA-9408 and SPARK-30905 for tracking the progress along this 
roadmap for those two projects as well.


was (Author: h-vetinari):
After finding out about this via HIVE-22006 (where [~klcopp] also 
[posted|https://issues.apache.org/jira/browse/HIVE-22006?focusedCommentId=17040113&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17040113]
 a more indepth version of the ]design 
doc|https://docs.google.com/document/d/1gNRww9mZJcHvUDCXklzjFEQGpefsuR_akCDfWsdE35Q/edit]),
 I opened IMPALA-9408 and SPARK-30905 for tracking the progress along this 
roadmap for those two projects as well.

> Execute the TIMESTAMP types roadmap
> ---
>
> Key: HIVE-21348
> URL: https://issues.apache.org/jira/browse/HIVE-21348
> Project: Hive
>  Issue Type: Task
>Reporter: Zoltan Ivanfi
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is the top-level JIRA for tracking the addition and/or alteration of 
> different TIMESTAMP types in order to eventually reach the desired state as 
> specified in the [design doc for TIMESTAMP 
> types|https://cwiki.apache.org/confluence/display/Hive/Different+TIMESTAMP+types].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21348) Execute the TIMESTAMP types roadmap

2020-02-20 Thread H. Vetinari (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041258#comment-17041258
 ] 

H. Vetinari commented on HIVE-21348:


After finding out about this via HIVE-22006 (where [~klcopp] also 
[posted|https://issues.apache.org/jira/browse/HIVE-22006?focusedCommentId=17040113&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17040113]
 a more indepth version of the ]design 
doc|https://docs.google.com/document/d/1gNRww9mZJcHvUDCXklzjFEQGpefsuR_akCDfWsdE35Q/edit]),
 I opened IMPALA-9408 and SPARK-30905 for tracking the progress along this 
roadmap for those two projects as well.

> Execute the TIMESTAMP types roadmap
> ---
>
> Key: HIVE-21348
> URL: https://issues.apache.org/jira/browse/HIVE-21348
> Project: Hive
>  Issue Type: Task
>Reporter: Zoltan Ivanfi
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is the top-level JIRA for tracking the addition and/or alteration of 
> different TIMESTAMP types in order to eventually reach the desired state as 
> specified in the [design doc for TIMESTAMP 
> types|https://cwiki.apache.org/confluence/display/Hive/Different+TIMESTAMP+types].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=390174&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-390174
 ]

ASF GitHub Bot logged work on HIVE-21218:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 19:00
Start Date: 20/Feb/20 19:00
Worklog Time Spent: 10m 
  Work Description: davidov541 commented on pull request #526: HIVE-21218: 
KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r382194556
 
 

 ##
 File path: 
kafka-handler/src/test/org/apache/hadoop/hive/kafka/AvroBytesConverterTest.java
 ##
 @@ -0,0 +1,85 @@
+package org.apache.hadoop.hive.kafka;
+
+import com.google.common.collect.Maps;
+import io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient;
+import io.confluent.kafka.serializers.KafkaAvroSerializer;
+import org.apache.avro.Schema;
+import org.apache.hadoop.hive.serde2.avro.AvroGenericRecordWritable;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.util.Map;
+
+/**
+ * Test class for Hive Kafka Avro bytes converter.
+ */
+public class AvroBytesConverterTest {
+
+  private static SimpleRecord simpleRecord1 = SimpleRecord.newBuilder().setId("123").setName("test").build();
+  private static byte[] simpleRecord1AsBytes;
+
+  /**
+   * Emulate the Confluent Avro producer, which adds a 5-byte prefix (a magic byte plus a 4-byte int)
+   * before the value bytes. The int represents the schema ID from the schema registry.
+   */
+  @BeforeClass
+  public static void setUp() {
+    Map<String, String> config = Maps.newHashMap();
+    config.put("schema.registry.url", "http://localhost");
+    KafkaAvroSerializer avroSerializer = new KafkaAvroSerializer(new MockSchemaRegistryClient());
+    avroSerializer.configure(config, false);
+    simpleRecord1AsBytes = avroSerializer.serialize("temp", simpleRecord1);
+  }
+
+  /**
+   * Emulate - avro.serde.type = none (Default)
+   */
+  @Test
+  public void convertWithAvroBytesConverter() {
+    Schema schema = SimpleRecord.getClassSchema();
+    KafkaSerDe.AvroBytesConverter conv = new KafkaSerDe.AvroBytesConverter(schema);
+    AvroGenericRecordWritable simpleRecord1Writable = conv.getWritable(simpleRecord1AsBytes);
+
+    Assert.assertNotNull(simpleRecord1Writable);
+    Assert.assertEquals(SimpleRecord.class, simpleRecord1Writable.getRecord().getClass());
+
+    SimpleRecord simpleRecord1Deserialized = (SimpleRecord) simpleRecord1Writable.getRecord();
+
+    Assert.assertNotNull(simpleRecord1Deserialized);
+    Assert.assertNotEquals(simpleRecord1, simpleRecord1Deserialized);
+  }
+
+  /**
+   * Emulate - avro.serde.type = confluent
+   */
+  @Test
+  public void convertWithConfluentAvroBytesConverter() {
 
 Review comment:
   No test for avro.serde.type = skip?
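
For reference, the proposed "skip" mode boils down to dropping a fixed-length prefix from the Kafka value before handing the remainder to the Avro reader. The sketch below is a standalone illustration of that behavior only; the class and method names are invented here and are not part of the Hive converter:

```java
import java.util.Arrays;

/**
 * Illustrative sketch (not Hive code): "skip" mode drops the first N bytes
 * of the Kafka value and returns the remaining Avro payload bytes.
 */
public class SkipBytesSketch {

  /** Returns a copy of the value with the first skipBytes bytes removed. */
  static byte[] skip(byte[] value, int skipBytes) {
    if (value.length < skipBytes) {
      throw new IllegalArgumentException("value shorter than skip prefix");
    }
    return Arrays.copyOfRange(value, skipBytes, value.length);
  }

  public static void main(String[] args) {
    // 5-byte Confluent-style header (magic byte + 4-byte schema ID) + payload
    byte[] framed = {0, 0, 0, 0, 42, 7, 8, 9};
    byte[] payload = skip(framed, 5);
    System.out.println(Arrays.toString(payload)); // [7, 8, 9]
  }
}
```

With a configurable skip count, the same helper covers both the fixed 5-byte Confluent prefix and arbitrary producer-specific prefixes.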
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 390174)
Time Spent: 4h 40m  (was: 4.5h)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> ---
>
> Key: HIVE-21218
> URL: https://issues.apache.org/jira/browse/HIVE-21218
> Project: Hive
>  Issue Type: Bug
>  Components: kafka integration, Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Milan Baran
>Assignee: Milan Baran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21218.2.patch, HIVE-21218.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> According to [Google 
> groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A]
>  the Confluent Avro serializer uses a proprietary format for the Kafka value - 
> <magic byte><4 bytes of schema ID><Avro bytes conforming to the schema>. 
> This format does not cause any problem for the Confluent Kafka deserializer, 
> which respects the format; however, the Hive Kafka handler cannot correctly 
> deserialize the Kafka value, because Hive uses a custom deserializer from 
> bytes to objects and ignores the Kafka consumer ser/deser classes provided via 
> table properties.
> It would be nice to support the Confluent format with the magic byte.
> It would also be great to support the Schema Registry.
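
For illustration, the framing described in the issue (a magic byte, then a 4-byte big-endian schema ID, then the Avro payload) can be sketched in plain Java. The `ConfluentFraming` class and its method names are made up for this sketch; they are not part of Hive or of Confluent's libraries:

```java
import java.nio.ByteBuffer;

/**
 * Minimal sketch of the Confluent wire format described above:
 * magic byte 0x0, 4-byte big-endian schema ID, then the Avro payload.
 */
public class ConfluentFraming {
  static final byte MAGIC_BYTE = 0x0;

  /** Prepends the 5-byte Confluent header to an Avro payload. */
  static byte[] frame(int schemaId, byte[] avroPayload) {
    ByteBuffer buf = ByteBuffer.allocate(5 + avroPayload.length);
    buf.put(MAGIC_BYTE).putInt(schemaId).put(avroPayload);
    return buf.array();
  }

  /** Reads the schema ID back out; the Avro bytes start at offset 5. */
  static int schemaId(byte[] framed) {
    ByteBuffer buf = ByteBuffer.wrap(framed);
    if (buf.get() != MAGIC_BYTE) {
      throw new IllegalArgumentException("Not Confluent-framed");
    }
    return buf.getInt();
  }

  public static void main(String[] args) {
    byte[] framed = frame(42, new byte[]{1, 2, 3});
    System.out.println(framed.length);    // 8
    System.out.println(schemaId(framed)); // 42
  }
}
```

A plain Avro deserializer that is unaware of this 5-byte header will misread the payload, which is exactly the failure the issue describes.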



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=390173&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-390173
 ]

ASF GitHub Bot logged work on HIVE-21218:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 19:00
Start Date: 20/Feb/20 19:00
Worklog Time Spent: 10m 
  Work Description: davidov541 commented on pull request #526: HIVE-21218: 
KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r382190009
 
 

 ##
 File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java
 ##
 @@ -133,12 +134,40 @@
   Preconditions.checkArgument(!schemaFromProperty.isEmpty(), "Avro Schema is empty Can not go further");
   Schema schema = AvroSerdeUtils.getSchemaFor(schemaFromProperty);
   LOG.debug("Building Avro Reader with schema {}", schemaFromProperty);
-  bytesConverter = new AvroBytesConverter(schema);
+  bytesConverter = getByteConverterForAvroDelegate(schema, tbl);
 } else {
   bytesConverter = new BytesWritableConverter();
 }
   }
 
+  enum BytesConverterType {
+    CONFLUENT,
+    SKIP,
+    NONE;
+
+    static BytesConverterType fromString(String value) {
+      try {
+        return BytesConverterType.valueOf(value.trim().toUpperCase());
+      } catch (Exception e) {
+        return NONE;
+      }
+    }
+  }
+
+  BytesConverter getByteConverterForAvroDelegate(Schema schema, Properties tbl) {
+    String avroBytesConverterProperty = tbl.getProperty(AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_TYPE.getPropName(), "none");
 
 Review comment:
   Seems this should be the enum version converted to a string. That'll make 
maintenance easier if the name ever needed to be changed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 390173)
Time Spent: 4.5h  (was: 4h 20m)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> ---
>
> Key: HIVE-21218
> URL: https://issues.apache.org/jira/browse/HIVE-21218
> Project: Hive
>  Issue Type: Bug
>  Components: kafka integration, Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Milan Baran
>Assignee: Milan Baran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21218.2.patch, HIVE-21218.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> According to [Google 
> groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A]
>  the Confluent Avro serializer uses a proprietary format for the Kafka value - 
> <magic byte><4 bytes of schema ID><Avro bytes conforming to the schema>. 
> This format does not cause any problem for the Confluent Kafka deserializer, 
> which respects the format; however, the Hive Kafka handler cannot correctly 
> deserialize the Kafka value, because Hive uses a custom deserializer from 
> bytes to objects and ignores the Kafka consumer ser/deser classes provided via 
> table properties.
> It would be nice to support the Confluent format with the magic byte.
> It would also be great to support the Schema Registry.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=390172&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-390172
 ]

ASF GitHub Bot logged work on HIVE-21218:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 19:00
Start Date: 20/Feb/20 19:00
Worklog Time Spent: 10m 
  Work Description: davidov541 commented on pull request #526: HIVE-21218: 
KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r382191749
 
 

 ##
 File path: kafka-handler/src/resources/SimpleRecord.avsc
 ##
 @@ -0,0 +1,13 @@
+{
+  "type" : "record",
+  "name" : "SimpleRecord",
+  "namespace" : "org.apache.hadoop.hive.kafka",
+  "fields" : [ {
 
 Review comment:
   Agreed, but this seems tangential to the issue at hand. It could be 
committed without additional tests on the schema.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 390172)
Time Spent: 4h 20m  (was: 4h 10m)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> ---
>
> Key: HIVE-21218
> URL: https://issues.apache.org/jira/browse/HIVE-21218
> Project: Hive
>  Issue Type: Bug
>  Components: kafka integration, Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Milan Baran
>Assignee: Milan Baran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21218.2.patch, HIVE-21218.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> According to [Google 
> groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A]
>  the Confluent Avro serializer uses a proprietary format for the Kafka value - 
> <magic byte><4 bytes of schema ID><Avro bytes conforming to the schema>. 
> This format does not cause any problem for the Confluent Kafka deserializer, 
> which respects the format; however, the Hive Kafka handler cannot correctly 
> deserialize the Kafka value, because Hive uses a custom deserializer from 
> bytes to objects and ignores the Kafka consumer ser/deser classes provided via 
> table properties.
> It would be nice to support the Confluent format with the magic byte.
> It would also be great to support the Schema Registry.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22562) Harmonize SessionState.getUserName

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041232#comment-17041232
 ] 

Hive QA commented on HIVE-22562:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
53s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} service in master has 51 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 13 new + 203 unchanged - 24 
fixed = 216 total (was 227) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
3s{color} | {color:red} ql generated 1 new + 1529 unchanged - 1 fixed = 1530 
total (was 1530) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Unused public or protected 
field:org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator.conf  In 
SessionStateUserAuthenticator.java |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20752/dev-support/hive-personality.sh
 |
| git revision | master / faaf2c3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20752/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20752/yetus/whitespace-tabs.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20752/yetus/new-findbugs-ql.html
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20752/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql service itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20752/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Harmonize SessionState.getUserName
> --
>
> Key: HIVE-22562
> URL: https://issues.apache.org/jira/browse/HIVE-22562
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: 

[jira] [Work logged] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=390164&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-390164
 ]

ASF GitHub Bot logged work on HIVE-21218:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 18:43
Start Date: 20/Feb/20 18:43
Worklog Time Spent: 10m 
  Work Description: cricket007 commented on pull request #526: HIVE-21218: 
KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r29248
 
 

 ##
 File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java
 ##
 @@ -133,12 +134,24 @@
   Preconditions.checkArgument(!schemaFromProperty.isEmpty(), "Avro Schema is empty Can not go further");
   Schema schema = AvroSerdeUtils.getSchemaFor(schemaFromProperty);
   LOG.debug("Building Avro Reader with schema {}", schemaFromProperty);
-  bytesConverter = new AvroBytesConverter(schema);
+  bytesConverter = getByteConverterForAvroDelegate(schema, tbl);
 } else {
   bytesConverter = new BytesWritableConverter();
 }
   }
 
+  BytesConverter getByteConverterForAvroDelegate(Schema schema, Properties tbl) {
+    String avroByteConverterType = tbl.getProperty(AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_TYPE.getPropName(), "none");
+    int avroSkipBytes = Integer.getInteger(tbl.getProperty(AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_SKIP_BYTES.getPropName(), "5"));
+    switch (avroByteConverterType) {
+      case "confluent": return new AvroSkipBytesConverter(schema, 5);
+      case "skip": return new AvroSkipBytesConverter(schema, avroSkipBytes);
+      default: return new AvroBytesConverter(schema);
 
 Review comment:
   Would it be better if this were an enum rather than a string comparison? 
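
As a hypothetical sketch of that suggestion, the dispatch could go through an enum with a safe fallback instead of raw string comparison. The `ConverterTypeSketch` wrapper class below is invented purely to make the example self-contained; only the enum shape mirrors the patch under review:

```java
/**
 * Hypothetical sketch of the reviewer's suggestion: parse the table property
 * into an enum once, with NONE as the fallback for unknown or missing values,
 * then switch on the enum rather than on raw strings.
 */
public class ConverterTypeSketch {
  enum BytesConverterType {
    CONFLUENT, SKIP, NONE;

    static BytesConverterType fromString(String value) {
      try {
        // trim/upper-case so " skip " and "SKIP" both resolve to SKIP
        return valueOf(value.trim().toUpperCase());
      } catch (IllegalArgumentException | NullPointerException e) {
        return NONE; // unknown or missing property falls back to the plain converter
      }
    }
  }

  public static void main(String[] args) {
    System.out.println(BytesConverterType.fromString("confluent")); // CONFLUENT
    System.out.println(BytesConverterType.fromString("bogus"));     // NONE
  }
}
```

Centralizing the parsing this way means a renamed property value only has to change in one place, which is the maintenance benefit the review comment points at.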
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 390164)
Time Spent: 4h  (was: 3h 50m)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> ---
>
> Key: HIVE-21218
> URL: https://issues.apache.org/jira/browse/HIVE-21218
> Project: Hive
>  Issue Type: Bug
>  Components: kafka integration, Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Milan Baran
>Assignee: Milan Baran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21218.2.patch, HIVE-21218.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> According to [Google 
> groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A]
>  the Confluent Avro serializer uses a proprietary format for the Kafka value - 
> <magic byte><4 bytes of schema ID><Avro bytes conforming to the schema>. 
> This format does not cause any problem for the Confluent Kafka deserializer, 
> which respects the format; however, the Hive Kafka handler cannot correctly 
> deserialize the Kafka value, because Hive uses a custom deserializer from 
> bytes to objects and ignores the Kafka consumer ser/deser classes provided via 
> table properties.
> It would be nice to support the Confluent format with the magic byte.
> It would also be great to support the Schema Registry.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=390165&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-390165
 ]

ASF GitHub Bot logged work on HIVE-21218:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 18:43
Start Date: 20/Feb/20 18:43
Worklog Time Spent: 10m 
  Work Description: cricket007 commented on pull request #526: HIVE-21218: 
KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r382186607
 
 

 ##
 File path: kafka-handler/pom.xml
 ##
 @@ -114,8 +114,21 @@
   1.7.25
   test
 
+
+  io.confluent
+  kafka-streams-avro-serde
 
 Review comment:
   Bump?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 390165)
Time Spent: 4h 10m  (was: 4h)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> ---
>
> Key: HIVE-21218
> URL: https://issues.apache.org/jira/browse/HIVE-21218
> Project: Hive
>  Issue Type: Bug
>  Components: kafka integration, Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Milan Baran
>Assignee: Milan Baran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21218.2.patch, HIVE-21218.patch
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> According to [Google 
> groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A]
>  the Confluent Avro serializer uses a proprietary format for the Kafka value - 
> <magic byte><4 bytes of schema ID><Avro bytes conforming to the schema>. 
> This format does not cause any problem for the Confluent Kafka deserializer, 
> which respects the format; however, the Hive Kafka handler cannot correctly 
> deserialize the Kafka value, because Hive uses a custom deserializer from 
> bytes to objects and ignores the Kafka consumer ser/deser classes provided via 
> table properties.
> It would be nice to support the Confluent format with the magic byte.
> It would also be great to support the Schema Registry.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22744) TezTask for the vertex with more than one outedge should have proportional sort memory

2020-02-20 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22744:

Attachment: HIVE-22744.3.patch
Status: Patch Available  (was: Open)

> TezTask for the vertex with more than one outedge should have proportional 
> sort memory
> --
>
> Key: HIVE-22744
> URL: https://issues.apache.org/jira/browse/HIVE-22744
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22744.1.patch, HIVE-22744.2.patch, 
> HIVE-22744.3.patch
>
>
> TezTask for the vertex with more than one outedge should have proportional 
> sort memory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22744) TezTask for the vertex with more than one outedge should have proportional sort memory

2020-02-20 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22744:

Attachment: (was: HIVE-22744.3.patch)

> TezTask for the vertex with more than one outedge should have proportional 
> sort memory
> --
>
> Key: HIVE-22744
> URL: https://issues.apache.org/jira/browse/HIVE-22744
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22744.1.patch, HIVE-22744.2.patch, 
> HIVE-22744.3.patch
>
>
> TezTask for the vertex with more than one outedge should have proportional 
> sort memory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22744) TezTask for the vertex with more than one outedge should have proportional sort memory

2020-02-20 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22744:

Status: Open  (was: Patch Available)

> TezTask for the vertex with more than one outedge should have proportional 
> sort memory
> --
>
> Key: HIVE-22744
> URL: https://issues.apache.org/jira/browse/HIVE-22744
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22744.1.patch, HIVE-22744.2.patch, 
> HIVE-22744.3.patch
>
>
> TezTask for the vertex with more than one outedge should have proportional 
> sort memory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041197#comment-17041197
 ] 

Hive QA commented on HIVE-21304:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993939/HIVE-21304.13.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20751/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20751/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20751/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2020-02-20 17:59:28.989
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-20751/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2020-02-20 17:59:28.991
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at faaf2c3 HIVE-22831: Add option in HiveStrictManagedMigration to 
also move tables converted to external living in old WH (Adam Szita, reviewed 
by Peter Vary)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at faaf2c3 HIVE-22831: Add option in HiveStrictManagedMigration to 
also move tables converted to external living in old WH (Adam Szita, reviewed 
by Peter Vary)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2020-02-20 17:59:30.094
+ rm -rf ../yetus_PreCommit-HIVE-Build-20751
+ mkdir ../yetus_PreCommit-HIVE-Build-20751
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-20751
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-20751/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Trying to apply the patch with -p0
error: patch failed: ql/src/test/results/clientpositive/acid_nullscan.q.out:96
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/acid_nullscan.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/acid_table_stats.q.out:97
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/acid_table_stats.q.out' 
with conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/autoColumnStats_4.q.out:214
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/autoColumnStats_4.q.out' 
with conflicts.
Going to apply patch with: git apply -p0
/data/hiveptest/working/scratch/build.patch:520: trailing whitespace.
totalSize   4357
/data/hiveptest/working/scratch/build.patch:529: trailing whitespace.
totalSize   4357
/data/hiveptest/working/scratch/build.patch:538: trailing whitespace.
totalSize   4357
/data/hiveptest/working/scratch/build.patch:547: trailing whitespace.
totalSize   8714
/data/hiveptest/working/scratch/build.patch:556: trailing whitespace.
totalSize   8714
error: patch failed: ql/src/test/results/clientpositive/acid_nullscan.q.out:96
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/acid_nullscan.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/acid_table_stats.q.out:97
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/acid_table_stats.q.out' 
with conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/autoColumnStats_4.q.out:214
Falling back to three-way merge...
Applied patch to 

[jira] [Commented] (HIVE-22893) Enhance data size estimation for fields computed by UDFs

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041195#comment-17041195
 ] 

Hive QA commented on HIVE-22893:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993940/HIVE-22893.08.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20750/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20750/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20750/

Messages:
{noformat}
 This message was trimmed, see log for full details 
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at faaf2c3 HIVE-22831: Add option in HiveStrictManagedMigration to 
also move tables converted to external living in old WH (Adam Szita, reviewed 
by Peter Vary)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2020-02-20 17:56:37.810
+ rm -rf ../yetus_PreCommit-HIVE-Build-20750
+ mkdir ../yetus_PreCommit-HIVE-Build-20750
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-20750
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-20750/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Trying to apply the patch with -p0
error: cannot apply binary patch to 
'ql/src/test/results/clientpositive/llap/vector_udf1.q.out' without full index 
line
Falling back to three-way merge...
error: cannot apply binary patch to 
'ql/src/test/results/clientpositive/llap/vector_udf1.q.out' without full index 
line
error: ql/src/test/results/clientpositive/llap/vector_udf1.q.out: patch does 
not apply
Trying to apply the patch with -p1
error: src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in 
index
error: src/test/results/clientpositive/udaf_example_group_concat.q.out: does 
not exist in index
error: src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java: does not exist 
in index
error: src/java/org/apache/hadoop/hive/ql/udf/UDFSubstr.java: does not exist in 
index
error: src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDF.java: does not 
exist in index
error: src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java: 
does not exist in index
error: src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFCase.java: does 
not exist in index
error: src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFCoalesce.java: 
does not exist in index
error: src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIf.java: does 
not exist in index
error: src/test/queries/clientpositive/udf_coalesce.q: does not exist in index
error: src/test/results/clientpositive/cbo_rp_gby2_map_multi_distinct.q.out: 
does not exist in index
error: 
src/test/results/clientpositive/cbo_rp_groupby3_noskew_multi_distinct.q.out: 
does not exist in index
error: src/test/results/clientpositive/constprog_when_case.q.out: does not 
exist in index
error: src/test/results/clientpositive/count_dist_rewrite.q.out: does not exist 
in index
error: src/test/results/clientpositive/groupby11.q.out: does not exist in index
error: src/test/results/clientpositive/groupby2_map.q.out: does not exist in 
index
error: src/test/results/clientpositive/groupby2_map_multi_distinct.q.out: does 
not exist in index
error: src/test/results/clientpositive/groupby2_map_skew.q.out: does not exist 
in index
error: src/test/results/clientpositive/groupby2_noskew.q.out: does not exist in 
index
error: src/test/results/clientpositive/groupby2_noskew_multi_distinct.q.out: 
does not exist in index
error: src/test/results/clientpositive/groupby3_map.q.out: does not exist in 
index
error: src/test/results/clientpositive/groupby3_map_multi_distinct.q.out: does 
not exist in index
error: src/test/results/clientpositive/groupby3_map_skew.q.out: does not exist 
in index
error: src/test/results/clientpositive/groupby4.q.out: does not exist in index
error: src/test/results/clientpositive/groupby4_noskew.q.out: does not exist in 
index
error: src/test/results/clientpositive/groupby6.q.out: does not exist in index
error: src/test/results/clientpositive/groupby6_map.q.out: does not exist in 
index
error: src/test/results/clientpositive/groupby6_map_skew.q.out: does not exist 
in index
error: src/test/results/clientpositive/groupby6_noskew.q.out: does not exist in 
index
error: src/test/results/clientpositive/groupby8_map_skew.q.out: does not exist 

[jira] [Commented] (HIVE-22744) TezTask for the vertex with more than one outedge should have proportional sort memory

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041192#comment-17041192
 ] 

Hive QA commented on HIVE-22744:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993924/HIVE-22744.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18045 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.metadata.TestHiveRemote.testMetaStoreApiTiming 
(batchId=340)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20749/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20749/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20749/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993924 - PreCommit-HIVE-Build

> TezTask for the vertex with more than one outedge should have proportional 
> sort memory
> --
>
> Key: HIVE-22744
> URL: https://issues.apache.org/jira/browse/HIVE-22744
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22744.1.patch, HIVE-22744.2.patch, 
> HIVE-22744.3.patch
>
>
> TezTask for the vertex with more than one outedge should have proportional 
> sort memory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21961) Update jetty version to 9.4.x

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041189#comment-17041189
 ] 

Hive QA commented on HIVE-21961:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  6m 
20s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} storage-api in master has 58 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} shims/0.23 in master has 7 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
35s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
14s{color} | {color:blue} standalone-metastore/metastore-server in master has 
185 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} service in master has 51 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} contrib in master has 11 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
31s{color} | {color:blue} druid-handler in master has 3 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} branch/hcatalog no findbugs output file 
(hcatalog/target/findbugsXml.xml) {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} hcatalog/hcatalog-pig-adapter in master has 2 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} hcatalog/webhcat/java-client in master has 3 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} hplsql in master has 161 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} llap-ext-client in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} kudu-handler in master has 1 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 18m 
49s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | 

[jira] [Updated] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-02-20 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22888:
--
Attachment: HIVE-22888.3.patch

> Rewrite checkLock inner select with JOIN operator
> -
>
> Key: HIVE-22888
> URL: https://issues.apache.org/jira/browse/HIVE-22888
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22888.1.patch, HIVE-22888.2.patch, 
> HIVE-22888.3.patch
>
>
> - Created extra (db, tbl, part) index on HIVE_LOCKS table;
> - Replaced inner select under checkLocks using multiple IN statements with 
> JOIN operator; 
> generated query looks like :
> {code}
> SELECT LS.* FROM (
> SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
> HL_LOCK_TYPE FROM HIVE_LOCKS
> WHERE HL_LOCK_EXT_ID < 333) LS
> INNER JOIN (
> SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE 
> HL_LOCK_EXT_ID = 333) LBC
> ON LS.HL_DB = LBC.HL_DB
> AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = 
> LBC.HL_TABLE
> AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR 
> LS.HL_PARTITION = LBC.HL_PARTITION))
> WHERE (LBC.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND 
> LBC.HL_TABLE IS NOT NULL )
> OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
> OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
> LIMIT 1;
> {code}
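The JOIN-based conflict check quoted above can be exercised end-to-end against a toy copy of the lock table. The sketch below uses Python's sqlite3 as a stand-in for the real metastore RDBMS; the lock ids, database, and table names are made up, and the literal 333 plays the role of the lock being checked:

```python
import sqlite3

# Toy HIVE_LOCKS table; sqlite stands in for the real metastore RDBMS.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE HIVE_LOCKS (
    HL_LOCK_EXT_ID INTEGER, HL_DB TEXT, HL_TABLE TEXT,
    HL_PARTITION TEXT, HL_LOCK_STATE TEXT, HL_LOCK_TYPE TEXT)""")
con.executemany("INSERT INTO HIVE_LOCKS VALUES (?, ?, ?, ?, ?, ?)", [
    (111, "db1", "t1", None, "a", "r"),  # earlier shared-read lock
    (333, "db1", "t1", None, "w", "e"),  # lock being checked: exclusive
])
# The rewritten query: any returned row is a conflicting earlier lock.
rows = con.execute("""
    SELECT LS.* FROM (
      SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE,
             HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 333) LS
    INNER JOIN (
      SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS
      WHERE HL_LOCK_EXT_ID = 333) LBC
    ON LS.HL_DB = LBC.HL_DB
    AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL
      OR LS.HL_TABLE = LBC.HL_TABLE
      AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL
           OR LS.HL_PARTITION = LBC.HL_PARTITION))
    WHERE (LBC.HL_LOCK_TYPE = 'e'
        AND NOT (LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE = 'r'
                 AND LBC.HL_TABLE IS NOT NULL)
      OR LBC.HL_LOCK_TYPE = 'w' AND LS.HL_LOCK_TYPE IN ('w', 'e')
      OR LBC.HL_LOCK_TYPE = 'r' AND LS.HL_LOCK_TYPE = 'e'
        AND NOT (LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
    LIMIT 1""").fetchall()
print(rows)  # the earlier shared-read lock 111 blocks the exclusive request
```

In the real metastore the lock id literal and the row-limiting clause are generated per backing database; only the query shape is shown here.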



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-02-20 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22888:
--
Attachment: (was: HIVE-22888.3.patch)

> Rewrite checkLock inner select with JOIN operator
> -
>
> Key: HIVE-22888
> URL: https://issues.apache.org/jira/browse/HIVE-22888
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22888.1.patch, HIVE-22888.2.patch, 
> HIVE-22888.3.patch
>
>
> - Created extra (db, tbl, part) index on HIVE_LOCKS table;
> - Replaced inner select under checkLocks using multiple IN statements with 
> JOIN operator; 
> generated query looks like :
> {code}
> SELECT LS.* FROM (
> SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
> HL_LOCK_TYPE FROM HIVE_LOCKS
> WHERE HL_LOCK_EXT_ID < 333) LS
> INNER JOIN (
> SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE 
> HL_LOCK_EXT_ID = 333) LBC
> ON LS.HL_DB = LBC.HL_DB
> AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = 
> LBC.HL_TABLE
> AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR 
> LS.HL_PARTITION = LBC.HL_PARTITION))
> WHERE (LBC.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND 
> LBC.HL_TABLE IS NOT NULL )
> OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
> OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
> LIMIT 1;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=390077=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-390077
 ]

ASF GitHub Bot logged work on HIVE-21218:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 17:22
Start Date: 20/Feb/20 17:22
Worklog Time Spent: 10m 
  Work Description: cricket007 commented on issue #526: HIVE-21218: 
KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#issuecomment-589210115
 
 
   @davidov541 Could you review this?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 390077)
Time Spent: 3h 50m  (was: 3h 40m)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> ---
>
> Key: HIVE-21218
> URL: https://issues.apache.org/jira/browse/HIVE-21218
> Project: Hive
>  Issue Type: Bug
>  Components: kafka integration, Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Milan Baran
>Assignee: Milan Baran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21218.2.patch, HIVE-21218.patch
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> According to [Google 
> groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A]
>  the Confluent Avro serializer uses a proprietary format for the Kafka value: 
> <magic byte 0x00><4 bytes of schema ID><Avro bytes conforming to the schema>. 
> This format causes no problem for the Confluent Kafka deserializer, which 
> understands it; however, the Hive Kafka handler cannot correctly deserialize 
> such values, because Hive uses its own bytes-to-objects deserializer and 
> ignores the Kafka consumer ser/deser classes provided via table properties.
> It would be nice to support the Confluent format with the magic byte.
> Also it would be great to support the Schema Registry as well.
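The framing can be sketched in a few lines. The helper name below is hypothetical and only illustrates splitting the Confluent wire format (magic byte 0x00, 4-byte big-endian schema ID, then the Avro payload) before handing the remainder to an Avro decoder:

```python
import struct

def split_confluent_frame(value: bytes):
    """Hypothetical helper: split a Confluent-framed Kafka value into
    (schema_id, avro_payload). Raises if the magic byte is missing."""
    if len(value) < 5 or value[0] != 0x00:
        raise ValueError("not in Confluent wire format")
    (schema_id,) = struct.unpack(">I", value[1:5])  # 4-byte big-endian id
    return schema_id, value[5:]

# Example frame: magic byte, schema id 42, then stand-in Avro bytes.
frame = b"\x00" + struct.pack(">I", 42) + b"avro-payload"
print(split_confluent_frame(frame))  # → (42, b'avro-payload')
```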



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22915) java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument

2020-02-20 Thread David Lavati (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-22915:

Summary: java.lang.NoSuchMethodError: 
com.google.common.base.Preconditions.checkArgument  (was: hive is not running.)

> java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument
> ---
>
> Key: HIVE-22915
> URL: https://issues.apache.org/jira/browse/HIVE-22915
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.4
> Environment: Ubuntu 16.04
>Reporter: pradeepkumar
>Priority: Critical
>
> Hi Team,
> I am Not able to run hive. Getting following error on hive version above 3.X, 
> i tried all the versions. It is very critical issue.SLF4J: Class path 
> contains multiple SLF4J bindings.
>  SLF4J: Found binding in 
> [jar:file:/home/sreeramadasu/Downloads/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/home/sreeramadasu/Downloads/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an 
> explanation.
>  SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory]
>  Exception in thread "main" java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
>  at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
>  at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
>  at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:536)
>  at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:554)
>  at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:448)
>  at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:4045)
>  at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:4003)
>  at 
> org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:81)
>  at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:65)
>  at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:702)
>  at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22915) hive is not running.

2020-02-20 Thread David Lavati (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041154#comment-17041154
 ] 

David Lavati commented on HIVE-22915:
-

This is like HIVE-22718 and many other similar issues: You have 2 incompatible 
versions of guava on your classpath. Maybe the Hadoop/Spark version or 
something else you're using is not compatible with this Hive version.
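A quick way to confirm the clash is to list every Guava jar the process can see. The layout below is a sandbox stand-in for real installs (in practice you would scan your actual HIVE_HOME/lib and HADOOP_HOME/share directories), so paths and versions are illustrative only:

```python
import pathlib
import tempfile

# Sandbox reproducing the symptom's precondition: two Guava versions
# visible on the classpath at once.
with tempfile.TemporaryDirectory() as root:
    base = pathlib.Path(root)
    (base / "hive" / "lib").mkdir(parents=True)
    (base / "hadoop" / "lib").mkdir(parents=True)
    (base / "hive" / "lib" / "guava-19.0.jar").touch()
    (base / "hadoop" / "lib" / "guava-27.0-jre.jar").touch()
    guava_jars = sorted(p.name for p in base.rglob("guava-*.jar"))
print(guava_jars)  # more than one distinct version => NoSuchMethodError risk
```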

> hive is not running.
> 
>
> Key: HIVE-22915
> URL: https://issues.apache.org/jira/browse/HIVE-22915
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.4
> Environment: Ubuntu 16.04
>Reporter: pradeepkumar
>Priority: Critical
>
> Hi Team,
> I am Not able to run hive. Getting following error on hive version above 3.X, 
> i tried all the versions. It is very critical issue.SLF4J: Class path 
> contains multiple SLF4J bindings.
>  SLF4J: Found binding in 
> [jar:file:/home/sreeramadasu/Downloads/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/home/sreeramadasu/Downloads/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an 
> explanation.
>  SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory]
>  Exception in thread "main" java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
>  at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
>  at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
>  at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:536)
>  at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:554)
>  at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:448)
>  at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:4045)
>  at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:4003)
>  at 
> org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:81)
>  at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:65)
>  at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:702)
>  at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22914) Make Hive Connection ZK Interactions Easier to Troubleshoot

2020-02-20 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-22914:
--
Status: Patch Available  (was: Open)

> Make Hive Connection ZK Interactions Easier to Troubleshoot
> ---
>
> Key: HIVE-22914
> URL: https://issues.apache.org/jira/browse/HIVE-22914
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.2, 4.0.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-22914.1.patch
>
>
> Add better logging and make errors more consistent and meaningful.
> Recently was trying to troubleshoot an issue where the ZK namespace of the 
> client and the HS2 were different and it was way too difficult to diagnose.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22914) Make Hive Connection ZK Interactions Easier to Troubleshoot

2020-02-20 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-22914:
--
Attachment: HIVE-22914.1.patch

> Make Hive Connection ZK Interactions Easier to Troubleshoot
> ---
>
> Key: HIVE-22914
> URL: https://issues.apache.org/jira/browse/HIVE-22914
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.1.2
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-22914.1.patch
>
>
> Add better logging and make errors more consistent and meaningful.
> Recently was trying to troubleshoot an issue where the ZK namespace of the 
> client and the HS2 were different and it was way too difficult to diagnose.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22915) hive is not running.

2020-02-20 Thread pradeepkumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pradeepkumar updated HIVE-22915:

Description: 
Hi Team,

I am Not able to run hive. Getting following error on hive version above 3.X, i 
tried all the versions. It is very critical issue.SLF4J: Class path contains 
multiple SLF4J bindings.
 SLF4J: Found binding in 
[jar:file:/home/sreeramadasu/Downloads/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/home/sreeramadasu/Downloads/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an 
explanation.
 SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
 Exception in thread "main" java.lang.NoSuchMethodError: 
com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
 at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:536)
 at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:554)
 at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:448)
 at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:4045)
 at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:4003)
 at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:81)
 at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:65)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:702)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:236)

 

 

  was:
Hi Team,

I am Not able to able hive. Getting following error on hive version above 3.X, 
i tried all the versions. It is very critical issue.SLF4J: Class path contains 
multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/home/sreeramadasu/Downloads/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/sreeramadasu/Downloads/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Exception in thread "main" java.lang.NoSuchMethodError: 
com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
 at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:536)
 at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:554)
 at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:448)
 at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:4045)
 at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:4003)
 at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:81)
 at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:65)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:702)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:236)

 

 


> hive is not running.
> 
>
> Key: HIVE-22915
> URL: https://issues.apache.org/jira/browse/HIVE-22915
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.4
> Environment: Ubuntu 16.04
>Reporter: pradeepkumar
>Priority: Critical
>
> Hi Team,
> I am not able to run Hive. I get the following error on Hive versions above 
> 3.x; I tried all the versions. It is a very critical issue.
> SLF4J: Class path contains multiple SLF4J bindings.
>  SLF4J: Found binding in 
> [jar:file:/home/sreeramadasu/Downloads/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> 
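A {{NoSuchMethodError}} on {{Preconditions.checkArgument}} typically means two incompatible Guava versions are on the classpath (Hive's bundled jar vs. the one Hadoop ships). A useful first diagnostic is to find which jar actually supplied the class. The sketch below is not from the original report (the class name {{WhichJar}} is hypothetical); it uses only JDK classes so it stays runnable, and on a real Hive classpath you would pass {{com.google.common.base.Preconditions}}:

```java
import java.security.CodeSource;

// Hedged sketch: report where a class was loaded from. This is the usual
// first step when diagnosing a NoSuchMethodError caused by duplicate jars.
public class WhichJar {
    public static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            // JDK classes report a null code source (bootstrap classloader).
            return src == null ? "bootstrap classloader (JDK)" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        System.out.println(locate("java.lang.String"));
        // On a Hive installation, substitute the Guava class here:
        System.out.println(locate("com.google.common.base.Preconditions"));
    }
}
```

If the reported jar is an old Guava (e.g. a 14.x jar under Hive's lib directory), a commonly reported workaround is replacing it with the newer Guava jar that the matching Hadoop release ships; the exact jar versions and paths depend on the installation.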

[jira] [Commented] (HIVE-22744) TezTask for the vertex with more than one outedge should have proportional sort memory

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041144#comment-17041144
 ] 

Hive QA commented on HIVE-22744:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} ql in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer to https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20749/dev-support/hive-personality.sh
 |
| git revision | master / faaf2c3 |
| Default Java | 1.8.0_111 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20749/yetus/branch-findbugs-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20749/yetus/whitespace-eol.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20749/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20749/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> TezTask for the vertex with more than one outedge should have proportional 
> sort memory
> --
>
> Key: HIVE-22744
> URL: https://issues.apache.org/jira/browse/HIVE-22744
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22744.1.patch, HIVE-22744.2.patch, 
> HIVE-22744.3.patch
>
>
> TezTask for the vertex with more than one outedge should have proportional 
> sort memory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22914) Make Hive Connection ZK Interactions Easier to Troubleshoot

2020-02-20 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor reassigned HIVE-22914:
-


> Make Hive Connection ZK Interactions Easier to Troubleshoot
> ---
>
> Key: HIVE-22914
> URL: https://issues.apache.org/jira/browse/HIVE-22914
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.2, 4.0.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>
> Add better logging and make errors more consistent and meaningful.
> Recently was trying to troubleshoot an issue where the ZK namespace of the 
> client and the HS2 were different and it was way too difficult to diagnose.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-02-20 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22888:
--
Status: Patch Available  (was: In Progress)

> Rewrite checkLock inner select with JOIN operator
> -
>
> Key: HIVE-22888
> URL: https://issues.apache.org/jira/browse/HIVE-22888
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22888.1.patch, HIVE-22888.2.patch, 
> HIVE-22888.3.patch
>
>
> - Created extra (db, tbl, part) index on HIVE_LOCKS table;
> - Replaced inner select under checkLocks using multiple IN statements with 
> JOIN operator; 
> generated query looks like:
> {code}
> SELECT LS.* FROM (
> SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
> HL_LOCK_TYPE FROM HIVE_LOCKS
> WHERE HL_LOCK_EXT_ID < 333) LS
> INNER JOIN (
> SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE 
> HL_LOCK_EXT_ID = 333) LBC
> ON LS.HL_DB = LBC.HL_DB
> AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = 
> LBC.HL_TABLE
> AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR 
> LS.HL_PARTITION = LBC.HL_PARTITION))
> WHERE (LBC.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND 
> LBC.HL_TABLE IS NOT NULL )
> OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
> OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
> LIMIT 1;
> {code}
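For readers decoding the WHERE clause above: it encodes Hive's lock-compatibility matrix, where 'e' is exclusive, 'w' is semi-shared (write), and 'r' is shared (read). The following is a hedged Java rendering of my reading of that predicate (ignoring the table-granularity special cases in the NOT (... IS NULL ...) clauses), not the patch's actual code:

```java
// Hedged sketch of the lock-compatibility predicate in the JOIN rewrite.
// 'held' is an existing lock (alias LS); 'requested' is the lock being
// checked (alias LBC). NULL-table exceptions from the query are omitted.
public class LockConflict {
    static boolean blocks(char held, char requested) {
        switch (requested) {
            case 'e': return true;                        // exclusive conflicts with any held lock
            case 'w': return held == 'w' || held == 'e';  // write conflicts with write/exclusive
            case 'r': return held == 'e';                 // read conflicts only with exclusive
            default:  throw new IllegalArgumentException("unknown lock type: " + requested);
        }
    }

    public static void main(String[] args) {
        System.out.println(blocks('r', 'r'));  // two shared (read) locks coexist
    }
}
```

The JOIN rewrite lets the database evaluate this matrix once per candidate lock row instead of issuing multiple IN-based subselects.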



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-02-20 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22888:
--
Attachment: HIVE-22888.3.patch

> Rewrite checkLock inner select with JOIN operator
> -
>
> Key: HIVE-22888
> URL: https://issues.apache.org/jira/browse/HIVE-22888
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22888.1.patch, HIVE-22888.2.patch, 
> HIVE-22888.3.patch
>
>
> - Created extra (db, tbl, part) index on HIVE_LOCKS table;
> - Replaced inner select under checkLocks using multiple IN statements with 
> JOIN operator; 
> generated query looks like:
> {code}
> SELECT LS.* FROM (
> SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
> HL_LOCK_TYPE FROM HIVE_LOCKS
> WHERE HL_LOCK_EXT_ID < 333) LS
> INNER JOIN (
> SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE 
> HL_LOCK_EXT_ID = 333) LBC
> ON LS.HL_DB = LBC.HL_DB
> AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = 
> LBC.HL_TABLE
> AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR 
> LS.HL_PARTITION = LBC.HL_PARTITION))
> WHERE (LBC.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND 
> LBC.HL_TABLE IS NOT NULL )
> OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
> OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
> LIMIT 1;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22831) Add option in HiveStrictManagedMigration to also move tables converted to external living in old WH

2020-02-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ádám Szita updated HIVE-22831:
--
Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks for the review Peter.

> Add option in HiveStrictManagedMigration to also move tables converted to 
> external living in old WH
> ---
>
> Key: HIVE-22831
> URL: https://issues.apache.org/jira/browse/HIVE-22831
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22831.0.patch, HIVE-22831.1.patch, 
> HIVE-22831.2.patch
>
>
> HiveStrictManagedMigration supports these use cases (among others) currently:
>  * convert managed tables to external (+set external purge)
>  * move managed tables from old to new warehouse root (HDFS)
> I propose we add a feature that combines both:
>  * convert managed tables (living in the old WH) to external and move them 
> (from old managed warehouse root) to a new path ( e.g. default external 
> warehouse location)
>  * this also applies to 'external' tables living inside the old WH



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21961) Update jetty version to 9.4.x

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041116#comment-17041116
 ] 

Hive QA commented on HIVE-21961:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993898/HIVE-21961.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 200 failed/errored test(s), 16157 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.org.apache.hadoop.hive.cli.TestBeeLineDriver
 (batchId=305)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver
 (batchId=194)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver
 (batchId=195)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver
 (batchId=196)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver
 (batchId=197)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver
 (batchId=198)
org.apache.hadoop.hive.cli.TestErasureCodingHDFSCliDriver.org.apache.hadoop.hive.cli.TestErasureCodingHDFSCliDriver
 (batchId=203)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver
 (batchId=109)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver
 (batchId=110)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver
 (batchId=111)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver
 (batchId=112)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver
 (batchId=113)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver
 (batchId=114)
org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver
 (batchId=303)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
 (batchId=204)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver
 (batchId=305)
org.apache.hadoop.hive.cli.TestMiniHiveKafkaCliDriver.org.apache.hadoop.hive.cli.TestMiniHiveKafkaCliDriver
 (batchId=305)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=178)

[jira] [Commented] (HIVE-16355) Service: embedded mode should only be available if service is loaded onto the classpath

2020-02-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-16355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041114#comment-17041114
 ] 

Ádám Szita commented on HIVE-16355:
---

This patch introduced a file which is missing an ASF license header - [~kgyrtkirk] 
can you please fix it?

> Service: embedded mode should only be available if service is loaded onto the 
> classpath
> ---
>
> Key: HIVE-16355
> URL: https://issues.apache.org/jira/browse/HIVE-16355
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore, Server Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-16355.06.patch, HIVE-16355.1.patch, 
> HIVE-16355.2.patch, HIVE-16355.2.patch, HIVE-16355.3.patch, 
> HIVE-16355.4.patch, HIVE-16355.4.patch, HIVE-16355.5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I would like to relax the hard reference to 
> {{EmbeddedThriftBinaryCLIService}} to be only used in case {{service}} module 
> is loaded onto the classpath.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22376) Cancelled query still prints exception if it was stuck in waiting for lock

2020-02-20 Thread Aron Hamvas (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041102#comment-17041102
 ] 

Aron Hamvas commented on HIVE-22376:


No tests for modifying logging behaviour, and the license warning has nothing 
to do with the patch either. [~pvary], can you review?

> Cancelled query still prints exception if it was stuck in waiting for lock
> --
>
> Key: HIVE-22376
> URL: https://issues.apache.org/jira/browse/HIVE-22376
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Affects Versions: 3.1.2
>Reporter: Peter Vary
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-22376.patch
>
>
> The query waits for locks, then is cancelled.
> It prints this to the logs, which is unnecessary and misleading:
> {code}
> apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:326)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:344)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: NoSuchLockException(message:No such lock lockid:272)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$check_lock_result$check_lock_resultStandardScheme.read(ThriftHiveMetastore.java)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$check_lock_result$check_lock_resultStandardScheme.read(ThriftHiveMetastore.java)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$check_lock_result.read(ThriftHiveMetastore.java)
>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_check_lock(ThriftHiveMetastore.java:5730)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.check_lock(ThriftHiveMetastore.java:5717)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.checkLock(HiveMetaStoreClient.java:3128)
>   at sun.reflect.GeneratedMethodAccessor351.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy59.checkLock(Unknown Source)
>   at sun.reflect.GeneratedMethodAccessor351.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:)
>   at com.sun.proxy.$Proxy59.checkLock(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock(DbLockManager.java:115)
>   ... 25 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22763) 0 is accepted in 12-hour format during timestamp cast

2020-02-20 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041083#comment-17041083
 ] 

David Mollitor commented on HIVE-22763:
---

[~klcopp]  Thanks.

bq. the SQL range is 1-12 (or rather 12, 1..11)

Is it the range [1,11] or [1,12]? I would think that for AM/PM the SQL range is 
[1,12], based on "not between 1 and 12."  Can you please clarify?

I don't know that I'd care about efficiency too much, if by efficiency you mean 
execution speed.  This runs in a Spark/Tez/MR context, so the simple solution 
is just to add one more task.  If however you're interested, I would probably 
just return an {{int}} 0 here instead of re-assigning the string value and then 
having to parse it.

> 0 is accepted in 12-hour format during timestamp cast
> -
>
> Key: HIVE-22763
> URL: https://issues.apache.org/jira/browse/HIVE-22763
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22763.01.patch, HIVE-22763.01.patch, 
> HIVE-22763.01.patch, HIVE-22763.01.patch, HIVE-22763.01.patch, 
> HIVE-22763.01.patch
>
>
> Having a timestamp string in 12-hour format can be parsed if the hour is 0, 
> however, based on the [design 
> document|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit],
>  it should be rejected.
> h3. How to reproduce
> Run {code}select cast("2020-01-01 0 am 00" as timestamp format "-mm-dd 
> hh12 p.m. ss"){code}
> It shouldn't be parsed, as the hour component is 0.
> h3. Spec
> ||Pattern||Meaning||Additional details||
> |HH12|Hour of day (1-12)|Same as HH|
> |HH|Hour of day (1-12)|{panel:borderStyle=none}
> - One digit inputs are possible in a string to datetime conversion but needs 
> to be surrounded by separators.
> - In a datetime to string conversion one digit hours are prefixed with a zero.
> - Error if provided hour is not between 1 and 12.
> - Displaying an unformatted timestamp in Impala uses the HH24 format 
> regardless if it was created using HH12.
> - If no AM/PM provided then defaults to AM.
> - In string to datetime conversion, conflicts with S and 
> HH24.{panel:borderStyle=none}|
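The rule in the spec ("Error if provided hour is not between 1 and 12") can be illustrated with a minimal validator. This is a hedged sketch of the intended behavior, not Hive's actual parser code; the class and method names are illustrative:

```java
// Hedged sketch: validate an hour field parsed under the HH12 pattern.
// Per the spec above, 0 must be rejected; only 1..12 are legal, and
// one-digit inputs (e.g. "7") are allowed when surrounded by separators.
public class Hour12 {
    static int parseHour12(String field) {
        int h = Integer.parseInt(field.trim());
        if (h < 1 || h > 12) {
            throw new IllegalArgumentException("HH12 hour must be in [1,12], got: " + h);
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.println(parseHour12("7"));  // valid one-digit input
        try {
            parseHour12("0");                  // the "2020-01-01 0 am" case from the bug
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this rule the query in the reproduction steps would fail with a range error instead of silently parsing hour 0.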



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22900) Predicate Push Down Of Like Filter While Fetching Partition Data From MetaStore

2020-02-20 Thread Syed Shameerur Rahman (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041071#comment-17041071
 ] 

Syed Shameerur Rahman commented on HIVE-22900:
--

[~hashutosh] [~kgyrtkirk] [~gates] Please review.

> Predicate Push Down Of Like Filter While Fetching Partition Data From 
> MetaStore
> ---
>
> Key: HIVE-22900
> URL: https://issues.apache.org/jira/browse/HIVE-22900
> Project: Hive
>  Issue Type: New Feature
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22900.01.patch, HIVE-22900.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently PPD is disabled for like filter while fetching partition data from 
> metastore. The following patch covers all the test cases mentioned in 
> HIVE-5134
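For context on what pushing a LIKE filter down entails: the metastore must match partition values against SQL LIKE patterns, which is commonly implemented by translating the pattern into a regular expression. The sketch below is a generic, hedged illustration of that translation (the class name {{LikeToRegex}} is hypothetical, not the patch's code, and it ignores escape sequences):

```java
import java.util.regex.Pattern;

// Hedged sketch: translate a SQL LIKE pattern ('%' = any run of characters,
// '_' = exactly one character) into a Java regex. All other characters are
// quoted so regex metacharacters in partition values match literally.
public class LikeToRegex {
    static String convert(String like) {
        StringBuilder regex = new StringBuilder();
        for (char c : like.toCharArray()) {
            if (c == '%')      regex.append(".*");
            else if (c == '_') regex.append('.');
            else               regex.append(Pattern.quote(String.valueOf(c)));
        }
        return regex.toString();
    }

    public static void main(String[] args) {
        // e.g. a date-partition filter such as ds LIKE '2020-02-%'
        System.out.println("2020-02-20".matches(convert("2020-02-%")));
    }
}
```

Evaluating such a translated pattern on the metastore side, rather than fetching all partitions and filtering in the client, is what enables the pushdown this issue proposes.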



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22900) Predicate Push Down Of Like Filter While Fetching Partition Data From MetaStore

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041064#comment-17041064
 ] 

Hive QA commented on HIVE-22900:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993897/HIVE-22900.02.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18057 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20747/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20747/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20747/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993897 - PreCommit-HIVE-Build

> Predicate Push Down Of Like Filter While Fetching Partition Data From 
> MetaStore
> ---
>
> Key: HIVE-22900
> URL: https://issues.apache.org/jira/browse/HIVE-22900
> Project: Hive
>  Issue Type: New Feature
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22900.01.patch, HIVE-22900.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently PPD is disabled for like filter while fetching partition data from 
> metastore. The following patch covers all the test cases mentioned in 
> HIVE-5134



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-02-20 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22888:
--
Description: 
- Created extra (db, tbl, part) index on HIVE_LOCKS table;
- Replaced inner select under checkLocks using multiple IN statements with JOIN 
operator; 


generated query looks like:
{code}
SELECT LS.* FROM (
SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
HL_LOCK_TYPE FROM HIVE_LOCKS
WHERE HL_LOCK_EXT_ID < 333) LS
INNER JOIN (
SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE 
HL_LOCK_EXT_ID = 333) LBC
ON LS.HL_DB = LBC.HL_DB
AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = 
LBC.HL_TABLE
AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR LS.HL_PARTITION 
= LBC.HL_PARTITION))
WHERE (LBC.HL_LOCK_TYPE='e'
   AND NOT (LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND 
LBC.HL_TABLE IS NOT NULL )
OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
   AND NOT (LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
LIMIT 1;
{code}

  was:
- Created extra (db, tbl, part) index on HIVE_LOCKS table;
- Replaced inner select under checkLocks using multiple IN statements with JOIN 
operator; 


generated query looks like:
{code}
SELECT * FROM (
SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
HL_LOCK_TYPE FROM HIVE_LOCKS
WHERE HL_LOCK_EXT_ID < 333) LS
INNER JOIN (
SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE 
HL_LOCK_EXT_ID = 333) LBC
ON LS.HL_DB = LBC.HL_DB
AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = 
LBC.HL_TABLE
AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR LS.HL_PARTITION 
= LBC.HL_PARTITION))
WHERE (LBC.HL_LOCK_TYPE='e'
   AND NOT (LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND 
LBC.HL_TABLE IS NOT NULL )
OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
   AND NOT (LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
LIMIT 1;
{code}


> Rewrite checkLock inner select with JOIN operator
> -
>
> Key: HIVE-22888
> URL: https://issues.apache.org/jira/browse/HIVE-22888
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22888.1.patch, HIVE-22888.2.patch
>
>
> - Created extra (db, tbl, part) index on HIVE_LOCKS table;
> - Replaced inner select under checkLocks using multiple IN statements with 
> JOIN operator; 
> generated query looks like:
> {code}
> SELECT LS.* FROM (
> SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
> HL_LOCK_TYPE FROM HIVE_LOCKS
> WHERE HL_LOCK_EXT_ID < 333) LS
> INNER JOIN (
> SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE 
> HL_LOCK_EXT_ID = 333) LBC
> ON LS.HL_DB = LBC.HL_DB
> AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = 
> LBC.HL_TABLE
> AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR 
> LS.HL_PARTITION = LBC.HL_PARTITION))
> WHERE (LBC.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND 
> LBC.HL_TABLE IS NOT NULL )
> OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
> OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
> LIMIT 1;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-02-20 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22888:
--
Description: 
- Created extra (db, tbl, part) index on HIVE_LOCKS table;
- Replaced inner select under checkLocks using multiple IN statements with JOIN 
operator; 


generated query looks like:
{code}
SELECT * FROM (
SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
HL_LOCK_TYPE FROM HIVE_LOCKS
WHERE HL_LOCK_EXT_ID < 333) LS
INNER JOIN (
SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE 
HL_LOCK_EXT_ID = 333) LBC
ON LS.HL_DB = LBC.HL_DB
AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = 
LBC.HL_TABLE
AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR LS.HL_PARTITION 
= LBC.HL_PARTITION))
WHERE (LBC.HL_LOCK_TYPE='e'
   AND NOT (LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND 
LBC.HL_TABLE IS NOT NULL )
OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
   AND NOT (LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
LIMIT 1;
{code}

  was:
- Created extra (db, tbl, part) index on HIVE_LOCKS table;
- Replaced inner select under checkLocks using multiple IN statements with JOIN 
operator; 


generated query looks like :
{code}
SELECT LS.* FROM ( 
SELECT HL_LOCK_EXT_ID, HL_LOCK_INT_ID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_STATE, HL_LOCK_TYPE, HL_TXNID FROM HIVE_LOCKS 
WHERE HL_LOCK_EXT_ID < 14138) LS 
INNER JOIN (
SELECT HL_DB, HL_TABLE, HL_PARTITION FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 
14138) LBC 
ON LS.HL_DB = LBC.HL_DB 
AND (LS.HL_TABLE IS NULL OR LS.HL_TABLE = LBC.HL_TABLE 
AND (LS.HL_PARTITION IS NULL OR LS.HL_PARTITION = LBC.HL_PARTITION))
WHERE LBC.HL_LOCK_TYPE='e'
   OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
   OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e' 
LIMIT 1;
{code}


> Rewrite checkLock inner select with JOIN operator
> -
>
> Key: HIVE-22888
> URL: https://issues.apache.org/jira/browse/HIVE-22888
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22888.1.patch, HIVE-22888.2.patch
>
>
> - Created an extra (db, tbl, part) index on the HIVE_LOCKS table;
> - Replaced the inner select under checkLocks, which used multiple IN statements, 
> with a JOIN operator;
> The generated query looks like:
> {code}
> SELECT * FROM (
> SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
> HL_LOCK_TYPE FROM HIVE_LOCKS
> WHERE HL_LOCK_EXT_ID < 333) LS
> INNER JOIN (
> SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE 
> HL_LOCK_EXT_ID = 333) LBC
> ON LS.HL_DB = LBC.HL_DB
> AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = 
> LBC.HL_TABLE
> AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR 
> LS.HL_PARTITION = LBC.HL_PARTITION))
> WHERE (LBC.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND 
> LBC.HL_TABLE IS NOT NULL )
> OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
> OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
>AND NOT (LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
> LIMIT 1;
> {code}
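The WHERE clause above encodes Hive's lock-compatibility matrix. As a reading aid, here is a minimal Java sketch of that rule (not Hive's actual implementation; it ignores the table-level NULL special cases handled in the query):

```java
public class LockCompat {
    // 'e' = exclusive, 'w' = semi-shared (write), 'r' = shared (read).
    // Returns true when the requested lock (LBC in the query) conflicts
    // with an already-held lock (LS in the query).
    static boolean conflicts(char existing, char requested) {
        switch (requested) {
            case 'e': return true;                               // exclusive conflicts with any lock
            case 'w': return existing == 'w' || existing == 'e'; // write conflicts with write/exclusive
            case 'r': return existing == 'e';                    // read conflicts only with exclusive
            default:  return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(conflicts('r', 'w')); // false: a held read lock does not block a write request
        System.out.println(conflicts('e', 'r')); // true: a held exclusive lock blocks a read request
    }
}
```

The `LIMIT 1` in the query mirrors the short-circuit here: one conflicting row is enough to block the lock.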



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22900) Predicate Push Down Of Like Filter While Fetching Partition Data From MetaStore

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041014#comment-17041014
 ] 

Hive QA commented on HIVE-22900:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
19s{color} | {color:blue} standalone-metastore/metastore-server in master has 
185 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
48s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
22s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 2 new + 520 unchanged - 4 fixed = 522 total (was 524) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20747/dev-support/hive-personality.sh
 |
| git revision | master / 2705e93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20747/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20747/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20747/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Predicate Push Down Of Like Filter While Fetching Partition Data From 
> MetaStore
> ---
>
> Key: HIVE-22900
> URL: https://issues.apache.org/jira/browse/HIVE-22900
> Project: Hive
>  Issue Type: New Feature
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22900.01.patch, HIVE-22900.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently PPD is disabled for the LIKE filter while fetching partition data from 
> the metastore. The following patch covers all the test cases mentioned in 
> HIVE-5134.
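As background for what a metastore-side LIKE push-down has to evaluate, here is an illustrative translation of a SQL LIKE pattern into a Java regex (this is not Hive's actual code, just a sketch of the conversion):

```java
import java.util.regex.Pattern;

public class LikePushDown {
    // Illustrative only: '%' matches any character sequence, '_' exactly one
    // character; everything else is matched literally.
    static String likeToRegex(String like) {
        StringBuilder sb = new StringBuilder();
        for (char c : like.toCharArray()) {
            if (c == '%') {
                sb.append(".*");
            } else if (c == '_') {
                sb.append('.');
            } else {
                sb.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // e.g. filtering partition names such as ds=2020-02-20
        System.out.println("ds=2020-02-20".matches(likeToRegex("ds=2020%"))); // true
        System.out.println("ds=2019-12-31".matches(likeToRegex("ds=2020%"))); // false
    }
}
```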



--
This message was sent by Atlassian Jira

[jira] [Commented] (HIVE-22840) Race condition in formatters of TimestampColumnVector and DateColumnVector

2020-02-20 Thread Shubham Chaurasia (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041011#comment-17041011
 ] 

Shubham Chaurasia commented on HIVE-22840:
--

[~abstractdog] [~jcamachorodriguez]

Can you please review ? 
Moved {{CalendarUtils}} from hive-common to storage-api to prevent cyclic 
dependency (hive-common already depends on storage-api).
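The underlying hazard is that SimpleDateFormat keeps a mutable Calendar inside, so sharing one instance across threads corrupts its state. One common remedy, shown here as a sketch and not necessarily the approach taken in the patch, is a per-thread formatter:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class SafeFormatter {
    // SimpleDateFormat holds a mutable Calendar, so a shared instance races.
    // ThreadLocal gives each thread its own formatter, avoiding the
    // NumberFormatException / ArrayIndexOutOfBoundsException symptoms quoted below.
    private static final ThreadLocal<SimpleDateFormat> FMT =
        ThreadLocal.withInitial(() -> {
            SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            f.setTimeZone(TimeZone.getTimeZone("UTC"));
            return f;
        });

    public static String format(long epochMillis) {
        return FMT.get().format(new Date(epochMillis));
    }

    public static void main(String[] args) {
        System.out.println(format(0L)); // 1970-01-01 00:00:00
    }
}
```

java.time.format.DateTimeFormatter is immutable and thread-safe, and is another common replacement when proleptic-calendar semantics allow it.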

> Race condition in formatters of TimestampColumnVector and DateColumnVector 
> ---
>
> Key: HIVE-22840
> URL: https://issues.apache.org/jira/browse/HIVE-22840
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: László Bodor
>Assignee: Shubham Chaurasia
>Priority: Major
> Attachments: HIVE-22840.1.patch, HIVE-22840.2.patch, HIVE-22840.patch
>
>
> HIVE-22405 added support for the proleptic calendar. It uses Java's 
> SimpleDateFormat/Calendar APIs, which are not thread-safe and cause races in 
> some scenarios.
> As a result of those race conditions, we see some exceptions like
> {code:java}
> 1) java.lang.NumberFormatException: For input string: "" 
> OR 
> java.lang.NumberFormatException: For input string: ".821582E.821582E44"
> OR
> 2) Caused by: java.lang.ArrayIndexOutOfBoundsException: -5325980
>   at 
> sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:453)
>   at 
> java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2397)
> {code}
> This issue is to address those thread-safety issues/race conditions.
> cc [~jcamachorodriguez] [~abstractdog] [~omalley]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22840) Race condition in formatters of TimestampColumnVector and DateColumnVector

2020-02-20 Thread Shubham Chaurasia (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shubham Chaurasia updated HIVE-22840:
-
Attachment: HIVE-22840.patch

> Race condition in formatters of TimestampColumnVector and DateColumnVector 
> ---
>
> Key: HIVE-22840
> URL: https://issues.apache.org/jira/browse/HIVE-22840
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: László Bodor
>Assignee: Shubham Chaurasia
>Priority: Major
> Attachments: HIVE-22840.1.patch, HIVE-22840.2.patch, HIVE-22840.patch
>
>
> HIVE-22405 added support for the proleptic calendar. It uses Java's 
> SimpleDateFormat/Calendar APIs, which are not thread-safe and cause races in 
> some scenarios.
> As a result of those race conditions, we see some exceptions like
> {code:java}
> 1) java.lang.NumberFormatException: For input string: "" 
> OR 
> java.lang.NumberFormatException: For input string: ".821582E.821582E44"
> OR
> 2) Caused by: java.lang.ArrayIndexOutOfBoundsException: -5325980
>   at 
> sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:453)
>   at 
> java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2397)
> {code}
> This issue is to address those thread-safety issues/race conditions.
> cc [~jcamachorodriguez] [~abstractdog] [~omalley]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22585) Clean up catalog/db/table name usage

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22585?focusedWorklogId=389956&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-389956
 ]

ASF GitHub Bot logged work on HIVE-22585:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 14:18
Start Date: 20/Feb/20 14:18
Worklog Time Spent: 10m 
  Work Description: dlavati commented on pull request #876: HIVE-22585: 
Clean up catalog/db/table name usage
URL: https://github.com/apache/hive/pull/876#discussion_r382026320
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/parse/HiveTableName.java
 ##
 @@ -38,37 +38,22 @@ public HiveTableName(String catName, String dbName, String 
tableName) {
* @throws SemanticException
*/
   public static TableName of(Table table) throws SemanticException {
-return ofNullable(table.getTableName(), table.getDbName());
+return ofNullable(table.getTableName(), table.getDbName()); // todo: this 
shouldn't call nullable
   }
 
   /**
-   * Set a @{@link Table} object's table and db names based on the provided 
string.
-   * @param dbTable the dbtable string
+   * Set a @{@link Table} object's table and db names based on the provided 
tableName object.
+   * @param tableName the tableName object
* @param table the table to update
* @return the table
* @throws SemanticException
*/
-  public static Table setFrom(String dbTable, Table table) throws 
SemanticException{
-TableName name = ofNullable(dbTable);
-table.setTableName(name.getTable());
-table.setDbName(name.getDb());
+  public static Table setFrom(TableName tableName, Table table) throws 
SemanticException{
 
 Review comment:
   I went with this because Table is in ql.metadata, while TableName is in 
storage-api.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 389956)
Time Spent: 1h 10m  (was: 1h)

> Clean up catalog/db/table name usage
> 
>
> Key: HIVE-22585
> URL: https://issues.apache.org/jira/browse/HIVE-22585
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available, refactor
> Attachments: HIVE-22585.01.patch, HIVE-22585.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This is a followup to HIVE-21198 to address some additional improvement ideas 
> for the TableName object mentioned in 
> [https://github.com/apache/hive/pull/550] and attempt to remove all the fishy 
> usages of db/tablenames, as a number of places still rely on certain state 
> changes/black magic.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22585) Clean up catalog/db/table name usage

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22585?focusedWorklogId=389953&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-389953
 ]

ASF GitHub Bot logged work on HIVE-22585:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 14:08
Start Date: 20/Feb/20 14:08
Worklog Time Spent: 10m 
  Work Description: dlavati commented on pull request #876: HIVE-22585: 
Clean up catalog/db/table name usage
URL: https://github.com/apache/hive/pull/876#discussion_r382020206
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/TableExport.java
 ##
 @@ -152,7 +152,7 @@ private void writeData(PartitionIterable partitions) 
throws SemanticException {
   if (tableSpec.tableHandle.isPartitioned()) {
 if (partitions == null) {
   throw new IllegalStateException("partitions cannot be null for 
partitionTable :"
-  + tableSpec.getTableName().getTable());
+  + tableSpec.getTableName().getNotEmptyDbTable());
 
 Review comment:
   I guess using the logic of `getNotEmptyDbTable` for `toString` would make 
the most sense then.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 389953)
Time Spent: 1h  (was: 50m)

> Clean up catalog/db/table name usage
> 
>
> Key: HIVE-22585
> URL: https://issues.apache.org/jira/browse/HIVE-22585
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available, refactor
> Attachments: HIVE-22585.01.patch, HIVE-22585.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This is a followup to HIVE-21198 to address some additional improvement ideas 
> for the TableName object mentioned in 
> [https://github.com/apache/hive/pull/550] and attempt to remove all the fishy 
> usages of db/tablenames, as a number of places still rely on certain state 
> changes/black magic.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22899) Make sure qtests clean up copied files from test directories

2020-02-20 Thread Zoltan Chovan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Chovan updated HIVE-22899:
-
Attachment: HIVE-22899.5.patch

> Make sure qtests clean up copied files from test directories
> 
>
> Key: HIVE-22899
> URL: https://issues.apache.org/jira/browse/HIVE-22899
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Minor
> Attachments: HIVE-22899.2.patch, HIVE-22899.3.patch, 
> HIVE-22899.4.patch, HIVE-22899.5.patch, HIVE-22899.patch
>
>
> Several qtest files are copying schema or test files to the test directories 
> (such as ${system:test.tmp.dir} and 
> ${hiveconf:hive.metastore.warehouse.dir}), many times without changing the 
> name of the copied file. When the same file is copied by another qtest to 
> the same directory, the copy and hence the test fails. This can lead to flaky 
> tests when any two of these qtests get scheduled to the same batch.
>  
> In order to avoid these failures, we should make sure the files copied to the 
> test dirs have unique names, and we should make sure these files are cleaned 
> up by the same qtest files that copy them.
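The pattern the ticket asks for can be sketched as follows (the directory and file names are illustrative, not taken from an actual qtest):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class UniqueTestCopy {
    // Copies an illustrative schema file under a unique per-test name, then
    // cleans it up again; returns true when both steps worked.
    static boolean copyAndCleanUp() throws Exception {
        Path testTmpDir = Files.createTempDirectory("qtest");
        // A per-test unique name prevents collisions when two qtests that
        // copy the same source file land in the same batch.
        Path copy = testTmpDir.resolve("schema_mytest_" + System.nanoTime() + ".sql");
        Files.write(copy, "-- schema".getBytes());
        boolean existed = Files.exists(copy);
        // The same qtest that copied the file is responsible for removing it.
        Files.deleteIfExists(copy);
        Files.deleteIfExists(testTmpDir);
        return existed && !Files.exists(copy);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(copyAndCleanUp()); // true
    }
}
```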



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22899) Make sure qtests clean up copied files from test directories

2020-02-20 Thread Zoltan Chovan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Chovan updated HIVE-22899:
-
Attachment: (was: HIVE-22899.4.patch)

> Make sure qtests clean up copied files from test directories
> 
>
> Key: HIVE-22899
> URL: https://issues.apache.org/jira/browse/HIVE-22899
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Minor
> Attachments: HIVE-22899.2.patch, HIVE-22899.3.patch, 
> HIVE-22899.4.patch, HIVE-22899.5.patch, HIVE-22899.patch
>
>
> Several qtest files are copying schema or test files to the test directories 
> (such as ${system:test.tmp.dir} and 
> ${hiveconf:hive.metastore.warehouse.dir}), many times without changing the 
> name of the copied file. When the same file is copied by another qtest to 
> the same directory, the copy and hence the test fails. This can lead to flaky 
> tests when any two of these qtests get scheduled to the same batch.
>  
> In order to avoid these failures, we should make sure the files copied to the 
> test dirs have unique names, and we should make sure these files are cleaned 
> up by the same qtest files that copy them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22721) Add option for queries to only read from LLAP cache

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17040975#comment-17040975
 ] 

Hive QA commented on HIVE-22721:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993896/HIVE-22721.0.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18045 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20746/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20746/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20746/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993896 - PreCommit-HIVE-Build

> Add option for queries to only read from LLAP cache
> ---
>
> Key: HIVE-22721
> URL: https://issues.apache.org/jira/browse/HIVE-22721
> Project: Hive
>  Issue Type: Test
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22721.0.patch
>
>
> Testing features of the LLAP cache sometimes requires validating whether e.g. a 
> particular table/partition is cached or not.
> This is to avoid relying on counters that are dependent on the underlying 
> (ORC) file format (which may produce a different number of bytes among its 
> different versions).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22585) Clean up catalog/db/table name usage

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22585?focusedWorklogId=389930&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-389930
 ]

ASF GitHub Bot logged work on HIVE-22585:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 13:32
Start Date: 20/Feb/20 13:32
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #876: HIVE-22585: 
Clean up catalog/db/table name usage
URL: https://github.com/apache/hive/pull/876#discussion_r381992821
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/cache/results/QueryResultsCache.java
 ##
 @@ -631,13 +632,13 @@ public long getSize() {
 }
   }
 
-  public void notifyTableChanged(String dbName, String tableName, long 
updateTime) {
-LOG.debug("Table changed: {}.{}, at {}", dbName, tableName, updateTime);
+  public void notifyTableChanged(TableName tableName, long updateTime) {
+LOG.debug("Table changed: {}, at {}", tableName.getNotEmptyDbTable(), 
updateTime);
 // Invalidate all cache entries using this table.
 List entriesToInvalidate = null;
 rwLock.writeLock().lock();
 try {
-  String key = (dbName.toLowerCase() + "." + tableName.toLowerCase());
+  String key = (tableName.getNotEmptyDbTable().toLowerCase());
 
 Review comment:
   we might want to consider removing all these "toLowerCase" calls, and 
instead make it a contract for table names, so it's enforced at the time the 
TableName is created
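A hypothetical shape of that contract (class and method names are invented here for illustration; the real TableName lives in storage-api):

```java
public final class NormalizedTableName {
    private final String db;
    private final String table;

    public NormalizedTableName(String db, String table) {
        // Contract: names are normalized once, at construction time, so
        // call sites such as the cache key above never need toLowerCase().
        this.db = db.toLowerCase();
        this.table = table.toLowerCase();
    }

    public String getDbTable() {
        return db + "." + table;
    }

    public static void main(String[] args) {
        System.out.println(new NormalizedTableName("Default", "MyTable").getDbTable()); // default.mytable
    }
}
```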
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 389930)
Time Spent: 20m  (was: 10m)

> Clean up catalog/db/table name usage
> 
>
> Key: HIVE-22585
> URL: https://issues.apache.org/jira/browse/HIVE-22585
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available, refactor
> Attachments: HIVE-22585.01.patch, HIVE-22585.02.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is a followup to HIVE-21198 to address some additional improvement ideas 
> for the TableName object mentioned in 
> [https://github.com/apache/hive/pull/550] and attempt to remove all the fishy 
> usages of db/tablenames, as a number of places still rely on certain state 
> changes/black magic.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22585) Clean up catalog/db/table name usage

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22585?focusedWorklogId=389932&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-389932
 ]

ASF GitHub Bot logged work on HIVE-22585:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 13:32
Start Date: 20/Feb/20 13:32
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #876: HIVE-22585: 
Clean up catalog/db/table name usage
URL: https://github.com/apache/hive/pull/876#discussion_r381994244
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/misc/AlterTableRenameDesc.java
 ##
 @@ -33,16 +33,16 @@
 public class AlterTableRenameDesc extends AbstractAlterTableDesc {
   private static final long serialVersionUID = 1L;
 
-  private final String newName;
+  private final TableName newName;
 
 Review comment:
   nit: newName -> newTableName
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 389932)

> Clean up catalog/db/table name usage
> 
>
> Key: HIVE-22585
> URL: https://issues.apache.org/jira/browse/HIVE-22585
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available, refactor
> Attachments: HIVE-22585.01.patch, HIVE-22585.02.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This is a followup to HIVE-21198 to address some additional improvement ideas 
> for the TableName object mentioned in 
> [https://github.com/apache/hive/pull/550] and attempt to remove all the fishy 
> usages of db/tablenames, as a number of places still rely on certain state 
> changes/black magic.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22585) Clean up catalog/db/table name usage

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22585?focusedWorklogId=389931&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-389931
 ]

ASF GitHub Bot logged work on HIVE-22585:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 13:32
Start Date: 20/Feb/20 13:32
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #876: HIVE-22585: 
Clean up catalog/db/table name usage
URL: https://github.com/apache/hive/pull/876#discussion_r381993869
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/cache/results/QueryResultsCache.java
 ##
 @@ -989,7 +990,7 @@ public void accept(NotificationEvent event) {
   QueryResultsCache cache = QueryResultsCache.getInstance();
   if (cache != null) {
 long eventTime = event.getEventTime() * 1000L;
-cache.notifyTableChanged(dbName, tableName, eventTime);
+cache.notifyTableChanged(TableName.fromString(tableName, null, 
dbName), eventTime);
 
 Review comment:
   I wonder if it would make sense to leave out the catalog for now... it's 
not really used - instead of passing null everywhere we could have a separate 
method for (db, name)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 389931)
Time Spent: 0.5h  (was: 20m)

> Clean up catalog/db/table name usage
> 
>
> Key: HIVE-22585
> URL: https://issues.apache.org/jira/browse/HIVE-22585
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available, refactor
> Attachments: HIVE-22585.01.patch, HIVE-22585.02.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This is a followup to HIVE-21198 to address some additional improvement ideas 
> for the TableName object mentioned in 
> [https://github.com/apache/hive/pull/550] and attempt to remove all the fishy 
> usages of db/tablenames, as a number of places still rely on certain state 
> changes/black magic.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22585) Clean up catalog/db/table name usage

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22585?focusedWorklogId=389936&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-389936
 ]

ASF GitHub Bot logged work on HIVE-22585:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 13:32
Start Date: 20/Feb/20 13:32
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #876: HIVE-22585: 
Clean up catalog/db/table name usage
URL: https://github.com/apache/hive/pull/876#discussion_r381998918
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/parse/HiveTableName.java
 ##
 @@ -38,37 +38,22 @@ public HiveTableName(String catName, String dbName, String 
tableName) {
* @throws SemanticException
*/
   public static TableName of(Table table) throws SemanticException {
-return ofNullable(table.getTableName(), table.getDbName());
+return ofNullable(table.getTableName(), table.getDbName()); // todo: this 
shouldn't call nullable
   }
 
   /**
-   * Set a @{@link Table} object's table and db names based on the provided 
string.
-   * @param dbTable the dbtable string
+   * Set a @{@link Table} object's table and db names based on the provided 
tableName object.
+   * @param tableName the tableName object
* @param table the table to update
* @return the table
* @throws SemanticException
*/
-  public static Table setFrom(String dbTable, Table table) throws 
SemanticException{
-TableName name = ofNullable(dbTable);
-table.setTableName(name.getTable());
-table.setDbName(name.getDb());
+  public static Table setFrom(TableName tableName, Table table) throws 
SemanticException{
 
 Review comment:
   this would be better served as an instance method - I guess it can't be 
added to Table...
   how about something like `tableName.writeInto(table)`
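A hypothetical sketch of that suggestion (the Table stub below is invented for illustration; the real ql.metadata.Table has many more fields):

```java
public class WriteIntoSketch {
    // Minimal stand-in for ql.metadata.Table, for illustration only.
    static class Table {
        String dbName;
        String tableName;
    }

    static class TableName {
        private final String db;
        private final String table;

        TableName(String db, String table) {
            this.db = db;
            this.table = table;
        }

        // The suggestion: the TableName writes its own fields into the
        // Table, replacing the static HiveTableName.setFrom helper.
        void writeInto(Table t) {
            t.dbName = db;
            t.tableName = table;
        }
    }

    public static void main(String[] args) {
        Table t = new Table();
        new TableName("default", "src").writeInto(t);
        System.out.println(t.dbName + "." + t.tableName); // default.src
    }
}
```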
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 389936)
Time Spent: 50m  (was: 40m)

> Clean up catalog/db/table name usage
> 
>
> Key: HIVE-22585
> URL: https://issues.apache.org/jira/browse/HIVE-22585
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available, refactor
> Attachments: HIVE-22585.01.patch, HIVE-22585.02.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This is a followup to HIVE-21198 to address some additional improvement ideas 
> for the TableName object mentioned in 
> [https://github.com/apache/hive/pull/550] and attempt to remove all the fishy 
> usages of db/tablenames, as a number of places still rely on certain state 
> changes/black magic.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22585) Clean up catalog/db/table name usage

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22585?focusedWorklogId=389933&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-389933
 ]

ASF GitHub Bot logged work on HIVE-22585:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 13:32
Start Date: 20/Feb/20 13:32
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #876: HIVE-22585: 
Clean up catalog/db/table name usage
URL: https://github.com/apache/hive/pull/876#discussion_r381996213
 
 

 ##
 File path: 
ql/src/test/results/clientnegative/create_external_transactional.q.out
 ##
 @@ -1 +1 @@
-FAILED: SemanticException transactional_external cannot be declared 
transactional because it's an external table
+FAILED: SemanticException default.transactional_external cannot be declared 
transactional because it's an external table
 
 Review comment:
   :+1:
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 389933)
Time Spent: 40m  (was: 0.5h)

> Clean up catalog/db/table name usage
> 
>
> Key: HIVE-22585
> URL: https://issues.apache.org/jira/browse/HIVE-22585
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available, refactor
> Attachments: HIVE-22585.01.patch, HIVE-22585.02.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is a followup to HIVE-21198 to address some additional improvement ideas 
> for the TableName object mentioned in 
> [https://github.com/apache/hive/pull/550] and attempt to remove all the fishy 
> usages of db/tablenames, as a number of places still rely on certain state 
> changes/black magic.





[jira] [Work logged] (HIVE-22585) Clean up catalog/db/table name usage

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22585?focusedWorklogId=389935=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-389935
 ]

ASF GitHub Bot logged work on HIVE-22585:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 13:32
Start Date: 20/Feb/20 13:32
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #876: HIVE-22585: 
Clean up catalog/db/table name usage
URL: https://github.com/apache/hive/pull/876#discussion_r381995705
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/TableExport.java
 ##
 @@ -152,7 +152,7 @@ private void writeData(PartitionIterable partitions) 
throws SemanticException {
   if (tableSpec.tableHandle.isPartitioned()) {
 if (partitions == null) {
   throw new IllegalStateException("partitions cannot be null for 
partitionTable :"
-  + tableSpec.getTableName().getTable());
+  + tableSpec.getTableName().getNotEmptyDbTable());
 
 Review comment:
   would it make a lot of changes if we would rely on TableName's toString() 
method for cases like this? I don't think we should retain the old exception 
messages at any cost.
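The suggestion above — leaning on a single toString() instead of ad-hoc name formatting in each exception message — can be sketched roughly like this (a hypothetical value class for illustration, not Hive's actual TableName implementation):

```java
import java.util.Objects;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical sketch: a TableName value object whose toString() renders the
// fully qualified name, so callers building exception messages don't each
// re-implement the formatting.
public final class TableName {
    private final String cat;
    private final String db;
    private final String table;

    public TableName(String cat, String db, String table) {
        this.cat = cat;
        this.db = db;
        this.table = Objects.requireNonNull(table, "table");
    }

    // Qualified db.table, falling back to just the table name when db is unset.
    public String getNotEmptyDbTable() {
        return (db == null || db.isEmpty()) ? table : db + "." + table;
    }

    @Override
    public String toString() {
        // Join only the parts that are present: cat.db.table, db.table, or table.
        return Stream.of(cat, db, table)
                .filter(s -> s != null && !s.isEmpty())
                .collect(Collectors.joining("."));
    }
}
```

With such a class, `new TableName(null, "default", "src").toString()` yields `default.src`, and every call site that concatenates the name into a message stays consistent by construction.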
 



Issue Time Tracking
---

Worklog Id: (was: 389935)

> Clean up catalog/db/table name usage
> 
>
> Key: HIVE-22585
> URL: https://issues.apache.org/jira/browse/HIVE-22585
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available, refactor
> Attachments: HIVE-22585.01.patch, HIVE-22585.02.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is a followup to HIVE-21198 to address some additional improvement ideas 
> for the TableName object mentioned in 
> [https://github.com/apache/hive/pull/550] and attempt to remove all the fishy 
> usages of db/tablenames, as a number of places still rely on certain state 
> changes/black magic.





[jira] [Work logged] (HIVE-22585) Clean up catalog/db/table name usage

2020-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22585?focusedWorklogId=389934=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-389934
 ]

ASF GitHub Bot logged work on HIVE-22585:
-

Author: ASF GitHub Bot
Created on: 20/Feb/20 13:32
Start Date: 20/Feb/20 13:32
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #876: HIVE-22585: 
Clean up catalog/db/table name usage
URL: https://github.com/apache/hive/pull/876#discussion_r381997220
 
 

 ##
 File path: ql/src/test/results/clientpositive/alter_rename_table.q.out
 ##
 @@ -131,7 +131,7 @@ STAGE PLANS:
   Stage: Stage-0
 Rename Table
   table name: source.src
 
 Review comment:
   the old table name isn't qualified with the "category"
   we should be consistent in showing or not showing the category
 



Issue Time Tracking
---

Worklog Id: (was: 389934)

> Clean up catalog/db/table name usage
> 
>
> Key: HIVE-22585
> URL: https://issues.apache.org/jira/browse/HIVE-22585
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available, refactor
> Attachments: HIVE-22585.01.patch, HIVE-22585.02.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is a followup to HIVE-21198 to address some additional improvement ideas 
> for the TableName object mentioned in 
> [https://github.com/apache/hive/pull/550] and attempt to remove all the fishy 
> usages of db/tablenames, as a number of places still rely on certain state 
> changes/black magic.





[jira] [Updated] (HIVE-22897) Remove enforcing of package-info.java files from the rest of the checkstyle files

2020-02-20 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22897:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove enforcing of package-info.java files from the rest of the checkstyle 
> files
> -
>
> Key: HIVE-22897
> URL: https://issues.apache.org/jira/browse/HIVE-22897
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-22897.01.patch
>
>
> Follow-up Jira of HIVE-22876, enforcing is also present at:
> {code:java}
> ./storage-api/checkstyle/checkstyle.xml
> ./standalone-metastore/checkstyle/checkstyle.xml
> {code}
> Remove those too.





[jira] [Commented] (HIVE-22897) Remove enforcing of package-info.java files from the rest of the checkstyle files

2020-02-20 Thread Miklos Gergely (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040957#comment-17040957
 ] 

Miklos Gergely commented on HIVE-22897:
---

Merged to master, thank you [~pvary] , [~kgyrtkirk]

> Remove enforcing of package-info.java files from the rest of the checkstyle 
> files
> -
>
> Key: HIVE-22897
> URL: https://issues.apache.org/jira/browse/HIVE-22897
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-22897.01.patch
>
>
> Follow-up Jira of HIVE-22876, enforcing is also present at:
> {code:java}
> ./storage-api/checkstyle/checkstyle.xml
> ./standalone-metastore/checkstyle/checkstyle.xml
> {code}
> Remove those too.





[jira] [Commented] (HIVE-22006) Hive parquet timestamp compatibility, part 2

2020-02-20 Thread Karen Coppage (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040954#comment-17040954
 ] 

Karen Coppage commented on HIVE-22006:
--

No problem! Impala epic is IMPALA-5049 and I found 1 Spark issue: SPARK-26797

> Hive parquet timestamp compatibility, part 2
> 
>
> Key: HIVE-22006
> URL: https://issues.apache.org/jira/browse/HIVE-22006
> Project: Hive
>  Issue Type: Bug
>Affects Versions: All Versions
>Reporter: H. Vetinari
>Priority: Major
>
> The interaction between HIVE / IMPALA / SPARK writing timestamps is a major 
> source of headaches in every scenario where such interaction cannot be 
> avoided.
> HIVE-9482 added hive.parquet.timestamp.skip.conversion, which *only* affects 
> the *reading* of timestamps.
> It formulates the next steps as:
> > Later fix will change the write path to not convert, and stop the 
> > read-conversion even for files written by Hive itself.
> At the very least, HIVE needs a switch to also turn off the conversion on 
> writes. That would at least allow a setup where all three of HIVE / IMPALA / 
> SPARK can be configured not to convert on read/write, and can hence safely 
> work on the same data.
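The read-side behavior that the description is about can be illustrated in plain Java (a hypothetical sketch of the interop problem, not Hive's actual reader code): a UTC-normalized Parquet timestamp read back through the reader's local time zone yields a different wall-clock value than the one read without conversion.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

// Hypothetical sketch of the legacy read-path adjustment that
// hive.parquet.timestamp.skip.conversion disables: re-interpreting a
// UTC-normalized stored instant in the reader's local zone, versus
// reading it back as the UTC wall-clock value it was written as.
public class TimestampConversionSketch {

    static LocalDateTime readWithConversion(Instant storedUtc, ZoneId readerZone) {
        // Conversion applied: shift the instant into the reader's zone.
        return LocalDateTime.ofInstant(storedUtc, readerZone);
    }

    static LocalDateTime readWithoutConversion(Instant storedUtc) {
        // No conversion: keep the UTC wall-clock value as written.
        return LocalDateTime.ofInstant(storedUtc, ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        Instant stored = Instant.parse("2020-02-20T12:00:00Z");
        // PST in February is UTC-8, so the converted value is shifted by 8 hours.
        System.out.println(readWithConversion(stored, ZoneId.of("America/Los_Angeles"))); // 2020-02-20T04:00
        System.out.println(readWithoutConversion(stored)); // 2020-02-20T12:00
    }
}
```

A matching write-side switch would let all three engines agree on the unconverted (UTC wall-clock) interpretation in both directions.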





[jira] [Commented] (HIVE-22721) Add option for queries to only read from LLAP cache

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040940#comment-17040940
 ] 

Hive QA commented on HIVE-22721:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
53s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} llap-server in master has 90 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20746/dev-support/hive-personality.sh
 |
| git revision | master / fd08239 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20746/yetus/patch-asflicense-problems.txt
 |
| modules | C: common ql llap-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20746/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Add option for queries to only read from LLAP cache
> ---
>
> Key: HIVE-22721
> URL: https://issues.apache.org/jira/browse/HIVE-22721
> Project: Hive
>  Issue Type: Test
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22721.0.patch
>
>
> Testing features of LLAP cache sometimes requires to validate if e.g. a 
> particular table/partition is cached, or not.
> This is to avoid relying on counters that are dependent on the underlying 
> (ORC) file format (which may produce different number of bytes among its 
> different versions).





[jira] [Commented] (HIVE-22897) Remove enforcing of package-info.java files from the rest of the checkstyle files

2020-02-20 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040935#comment-17040935
 ] 

Zoltan Haindrich commented on HIVE-22897:
-

+1


> Remove enforcing of package-info.java files from the rest of the checkstyle 
> files
> -
>
> Key: HIVE-22897
> URL: https://issues.apache.org/jira/browse/HIVE-22897
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-22897.01.patch
>
>
> Follow-up Jira of HIVE-22876, enforcing is also present at:
> {code:java}
> ./storage-api/checkstyle/checkstyle.xml
> ./standalone-metastore/checkstyle/checkstyle.xml
> {code}
> Remove those too.





[jira] [Updated] (HIVE-22899) Make sure qtests clean up copied files from test directories

2020-02-20 Thread Zoltan Chovan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Chovan updated HIVE-22899:
-
Attachment: HIVE-22899.4.patch

> Make sure qtests clean up copied files from test directories
> 
>
> Key: HIVE-22899
> URL: https://issues.apache.org/jira/browse/HIVE-22899
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Minor
> Attachments: HIVE-22899.2.patch, HIVE-22899.3.patch, 
> HIVE-22899.4.patch, HIVE-22899.4.patch, HIVE-22899.patch
>
>
> Several qtest files copy schema or test files to the test directories 
> (such as ${system:test.tmp.dir} and 
> ${hiveconf:hive.metastore.warehouse.dir}), often without changing the 
> name of the copied file. When the same file is copied by another qtest to 
> the same directory, the copy and hence the test fails. This can lead to flaky 
> tests when any two of these qtests get scheduled to the same batch.
>  
> In order to avoid these failures, we should make sure the files copied to the 
> test dirs have unique names, and that these files are cleaned 
> up by the same qtest file that copied them.
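The copy-with-unique-name plus same-owner-cleanup pattern described above can be sketched in plain Java (hypothetical helper methods for illustration, not the actual qtest harness):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

// Hypothetical sketch: copy a test fixture into a shared test directory
// under a unique name, and have the same test remove it afterwards, so
// two qtests batched together never collide on the same file name.
public class UniqueCopySketch {

    public static Path copyUnique(Path source, Path testDir) throws IOException {
        // Prefix with a random UUID so concurrent/batched tests can't clash.
        Path target = testDir.resolve(UUID.randomUUID() + "_" + source.getFileName());
        return Files.copy(source, target);
    }

    public static void cleanup(Path copied) throws IOException {
        // The test that copied the file is responsible for deleting it.
        Files.deleteIfExists(copied);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("qtest");
        Path src = Files.writeString(dir.resolve("schema.sql"), "CREATE TABLE t (i INT);");
        Path copied = copyUnique(src, dir);
        System.out.println(Files.exists(copied)); // true
        cleanup(copied);
        System.out.println(Files.exists(copied)); // false
    }
}
```

In a .q file the same idea would be a uniquely named `dfs -cp` target paired with a trailing `dfs -rm` in the same script.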





[jira] [Commented] (HIVE-22897) Remove enforcing of package-info.java files from the rest of the checkstyle files

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040910#comment-17040910
 ] 

Hive QA commented on HIVE-22897:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993880/HIVE-22897.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18045 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20745/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20745/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20745/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993880 - PreCommit-HIVE-Build

> Remove enforcing of package-info.java files from the rest of the checkstyle 
> files
> -
>
> Key: HIVE-22897
> URL: https://issues.apache.org/jira/browse/HIVE-22897
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-22897.01.patch
>
>
> Follow-up Jira of HIVE-22876, enforcing is also present at:
> {code:java}
> ./storage-api/checkstyle/checkstyle.xml
> ./standalone-metastore/checkstyle/checkstyle.xml
> {code}
> Remove those too.





[jira] [Updated] (HIVE-21737) Upgrade Avro to version 1.9.2

2020-02-20 Thread Fokko Driesprong (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fokko Driesprong updated HIVE-21737:

Summary: Upgrade Avro to version 1.9.2  (was: Upgrade Avro to version 1.9.1)

> Upgrade Avro to version 1.9.2
> -
>
> Key: HIVE-21737
> URL: https://issues.apache.org/jira/browse/HIVE-21737
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ismaël Mejía
>Assignee: Fokko Driesprong
>Priority: Major
>  Labels: pull-request-available
> Attachments: 0001-HIVE-21737-Bump-Apache-Avro-to-1.9.1.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Avro 1.9.0 was released recently. It brings a lot of fixes including a leaner 
> version of Avro without Jackson in the public API. Worth the update.





[jira] [Updated] (HIVE-21737) Upgrade Avro to version 1.9.2

2020-02-20 Thread Fokko Driesprong (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fokko Driesprong updated HIVE-21737:

Description: Avro 1.9.2 was released recently. It brings a lot of fixes 
including a leaner version of Avro without Jackson in the public API. Worth the 
update.  (was: Avro 1.9.0 was released recently. It brings a lot of fixes 
including a leaner version of Avro without Jackson in the public API. Worth the 
update.)

> Upgrade Avro to version 1.9.2
> -
>
> Key: HIVE-21737
> URL: https://issues.apache.org/jira/browse/HIVE-21737
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ismaël Mejía
>Assignee: Fokko Driesprong
>Priority: Major
>  Labels: pull-request-available
> Attachments: 0001-HIVE-21737-Bump-Apache-Avro-to-1.9.1.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Avro 1.9.2 was released recently. It brings a lot of fixes including a leaner 
> version of Avro without Jackson in the public API. Worth the update.





[jira] [Commented] (HIVE-22897) Remove enforcing of package-info.java files from the rest of the checkstyle files

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040841#comment-17040841
 ] 

Hive QA commented on HIVE-22897:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} xml {color} | {color:red}  0m  2s{color} | 
{color:red} The patch has 2 ill-formed XML file(s). {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  3m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20745/dev-support/hive-personality.sh
 |
| git revision | master / fd08239 |
| xml | http://104.198.109.242/logs//PreCommit-HIVE-Build-20745/yetus/xml.txt |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20745/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20745/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Remove enforcing of package-info.java files from the rest of the checkstyle 
> files
> -
>
> Key: HIVE-22897
> URL: https://issues.apache.org/jira/browse/HIVE-22897
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-22897.01.patch
>
>
> Follow-up Jira of HIVE-22876, enforcing is also present at:
> {code:java}
> ./storage-api/checkstyle/checkstyle.xml
> ./standalone-metastore/checkstyle/checkstyle.xml
> {code}
> Remove those too.





[jira] [Commented] (HIVE-22376) Cancelled query still prints exception if it was stuck in waiting for lock

2020-02-20 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040833#comment-17040833
 ] 

Hive QA commented on HIVE-22376:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993876/HIVE-22376.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18045 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20744/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20744/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20744/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993876 - PreCommit-HIVE-Build

> Cancelled query still prints exception if it was stuck in waiting for lock
> --
>
> Key: HIVE-22376
> URL: https://issues.apache.org/jira/browse/HIVE-22376
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Affects Versions: 3.1.2
>Reporter: Peter Vary
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-22376.patch
>
>
> The query waits for locks, then is cancelled.
> It prints this to the logs, which is unnecessary and misleading:
> {code}
> apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:326)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:344)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: NoSuchLockException(message:No such lock lockid:272)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$check_lock_result$check_lock_resultStandardScheme.read(ThriftHiveMetastore.java)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$check_lock_result$check_lock_resultStandardScheme.read(ThriftHiveMetastore.java)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$check_lock_result.read(ThriftHiveMetastore.java)
>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_check_lock(ThriftHiveMetastore.java:5730)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.check_lock(ThriftHiveMetastore.java:5717)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.checkLock(HiveMetaStoreClient.java:3128)
>   at sun.reflect.GeneratedMethodAccessor351.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy59.checkLock(Unknown Source)
>   at sun.reflect.GeneratedMethodAccessor351.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:)
>   at com.sun.proxy.$Proxy59.checkLock(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock(DbLockManager.java:115)
>   ... 25 more
> {code}





[jira] [Comment Edited] (HIVE-22880) ACID: All delete event readers should ignore ORC SARGs

2020-02-20 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040731#comment-17040731
 ] 

Peter Vary edited comment on HIVE-22880 at 2/20/20 9:47 AM:


It would be a good idea to add a test case, if it is possible.


was (Author: pvary):
How hard would it be to create a test case?

> ACID: All delete event readers should ignore ORC SARGs
> --
>
> Key: HIVE-22880
> URL: https://issues.apache.org/jira/browse/HIVE-22880
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions, Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Gopal Vijayaraghavan
>Priority: Blocker
> Attachments: HIVE-22880.1.patch
>
>
> Delete delta readers should not apply any SARGs other than the ones related 
> to the transaction id ranges within the inserts.




