[jira] [Work logged] (HIVE-26995) Iceberg: Enhance time travel syntax with expressions

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26995?focusedWorklogId=841934&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841934
 ]

ASF GitHub Bot logged work on HIVE-26995:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 07:38
Start Date: 27/Jan/23 07:38
Worklog Time Spent: 10m 
  Work Description: kasakrisz commented on code in PR #3988:
URL: https://github.com/apache/hive/pull/3988#discussion_r1088657424


##
ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java:
##
@@ -1133,7 +1133,20 @@ private String processTable(QB qb, ASTNode tabref) throws SemanticException {
 
 if (asOfTimeIndex != -1 || asOfVersionIndex != -1) {
   String asOfVersion = asOfVersionIndex == -1 ? null : tabref.getChild(asOfVersionIndex).getChild(0).getText();
-  String asOfTime = asOfTimeIndex == -1 ? null : tabref.getChild(asOfTimeIndex).getChild(0).getText();
+  String asOfTime = null;
+
+  if (asOfTimeIndex != -1) {
+    ASTNode expr = (ASTNode) tabref.getChild(asOfTimeIndex).getChild(0);
+    if (expr.getChildCount() > 0) {
+      ExprNodeDesc desc = genExprNodeDesc(expr, null, false, true);
+      if (desc instanceof ExprNodeConstantDesc) {
+        ExprNodeConstantDesc c = (ExprNodeConstantDesc) desc;
+        asOfTime = String.valueOf(c.getValue());
+      }

Review Comment:
   Should an exception be thrown if the expression cannot be evaluated at 
compile time?
   
   It might be worth adding some negative test cases to cover this code path.
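The concern raised in the review can be illustrated with a small standalone Java sketch. Everything here is hypothetical — `tryFold`, `resolveAsOfTime`, and the local `SemanticException` are stand-ins, not Hive's actual constant-folding or exception classes — but it shows the shape of failing fast when an AS OF expression does not fold to a compile-time constant:

```java
// Standalone sketch: fail fast when a time-travel expression does not fold
// to a compile-time constant. All names are illustrative, not Hive's.
public class AsOfTimeSketch {

    static class SemanticException extends RuntimeException {
        SemanticException(String msg) { super(msg); }
    }

    /** Tiny constant folder: understands long literals joined by '-'. */
    static Long tryFold(String expr) {
        String[] parts = expr.split("-");
        try {
            long acc = Long.parseLong(parts[0].trim());
            for (int i = 1; i < parts.length; i++) {
                acc -= Long.parseLong(parts[i].trim());
            }
            return acc;
        } catch (NumberFormatException e) {
            return null; // not a compile-time constant
        }
    }

    /** The reviewer's suggestion: error out instead of leaving asOfTime null. */
    static String resolveAsOfTime(String expr) {
        Long v = tryFold(expr);
        if (v == null) {
            throw new SemanticException(
                "AS OF expression is not constant-foldable: " + expr);
        }
        return String.valueOf(v);
    }

    public static void main(String[] args) {
        System.out.println(resolveAsOfTime("1674800000000 - 36000000"));
        try {
            resolveAsOfTime("current_timestamp - x");
        } catch (SemanticException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The negative test cases the reviewer asks for would exercise exactly the non-foldable branch.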





Issue Time Tracking
---

Worklog Id: (was: 841934)
Time Spent: 0.5h  (was: 20m)

> Iceberg: Enhance time travel syntax with expressions
> 
>
> Key: HIVE-26995
> URL: https://issues.apache.org/jira/browse/HIVE-26995
> Project: Hive
>  Issue Type: Task
>Reporter: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Allow expressions in time travel queries, such as 
> {code}
> FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP - interval '10' hours
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26599) Fix NPE encountered in second dump cycle of optimised bootstrap

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26599?focusedWorklogId=841933&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841933
 ]

ASF GitHub Bot logged work on HIVE-26599:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 07:36
Start Date: 27/Jan/23 07:36
Worklog Time Spent: 10m 
  Work Description: pudidic merged PR #3963:
URL: https://github.com/apache/hive/pull/3963




Issue Time Tracking
---

Worklog Id: (was: 841933)
Time Spent: 1h 10m  (was: 1h)

> Fix NPE encountered in second dump cycle of optimised bootstrap
> ---
>
> Key: HIVE-26599
> URL: https://issues.apache.org/jira/browse/HIVE-26599
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Vinit Patni
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> After the reverse replication policy is created (once failover from the 
> Primary to the DR cluster completes and DR takes over), the first dump and 
> load cycle of optimised bootstrap finishes successfully, but a 
> NullPointerException in the second dump cycle halts the reverse replication 
> and is a major blocker to testing the complete replication cycle. 
> {code:java}
> Scheduled Query Executor(schedule:repl_reverse, execution_id:14)]: FAILED: 
> Execution Error, return code -101 from 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask. 
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.metric.ReplicationMetricCollector.reportStageProgress(ReplicationMetricCollector.java:192)
> at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.dumpTable(ReplDumpTask.java:1458)
> at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:961)
> at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:290)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105)
> at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357)
> at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330)
> at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246)
> at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:749)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:504)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:498)
> at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:232){code}
> Root cause analysis showed that in the second dump cycle on the DR cluster, 
> when the StageStart method is invoked, the metric corresponding to Tables is 
> not registered (it should be, since optimised bootstrap performs a selective 
> bootstrap of tables along with the incremental dump). This causes an NPE 
> later, when the progress for that metric is updated after the table 
> bootstrap completes. 
> The fix is to register the Tables metric before updating the progress.
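The fix described in the issue — register the Tables metric before reporting progress — can be sketched in isolation. This is a hypothetical standalone illustration; `registerMetric` and `reportStageProgress` below only mimic the shape of Hive's ReplicationMetricCollector, they are not its actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch of the NPE and its fix: a progress update on a metric
// that was never registered dereferences null, so register it up front.
public class MetricRegistrySketch {
    private final Map<String, Integer> progress = new HashMap<>();

    /** The fix: register a metric up front so later updates never NPE. */
    void registerMetric(String name) {
        progress.putIfAbsent(name, 0);
    }

    /** Before the fix, updating an unregistered metric hit a null value. */
    void reportStageProgress(String name, int delta) {
        Integer current = progress.get(name);
        if (current == null) {
            throw new NullPointerException("metric not registered: " + name);
        }
        progress.put(name, current + delta);
    }

    int get(String name) {
        return progress.getOrDefault(name, 0);
    }

    public static void main(String[] args) {
        MetricRegistrySketch m = new MetricRegistrySketch();
        m.registerMetric("TABLES");        // register first (the fix)
        m.reportStageProgress("TABLES", 5); // now safe to update progress
        System.out.println(m.get("TABLES"));
    }
}
```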



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26599) Fix NPE encountered in second dump cycle of optimised bootstrap

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26599?focusedWorklogId=841932&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841932
 ]

ASF GitHub Bot logged work on HIVE-26599:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 07:35
Start Date: 27/Jan/23 07:35
Worklog Time Spent: 10m 
  Work Description: pudidic commented on PR #3963:
URL: https://github.com/apache/hive/pull/3963#issuecomment-1406125663

   I'll merge this since it's a trivial change.




Issue Time Tracking
---

Worklog Id: (was: 841932)
Time Spent: 1h  (was: 50m)

> Fix NPE encountered in second dump cycle of optimised bootstrap
> ---
>
> Key: HIVE-26599
> URL: https://issues.apache.org/jira/browse/HIVE-26599
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Vinit Patni
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> After the reverse replication policy is created (once failover from the 
> Primary to the DR cluster completes and DR takes over), the first dump and 
> load cycle of optimised bootstrap finishes successfully, but a 
> NullPointerException in the second dump cycle halts the reverse replication 
> and is a major blocker to testing the complete replication cycle. 
> {code:java}
> Scheduled Query Executor(schedule:repl_reverse, execution_id:14)]: FAILED: 
> Execution Error, return code -101 from 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask. 
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.metric.ReplicationMetricCollector.reportStageProgress(ReplicationMetricCollector.java:192)
> at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.dumpTable(ReplDumpTask.java:1458)
> at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:961)
> at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:290)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105)
> at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357)
> at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330)
> at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246)
> at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:749)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:504)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:498)
> at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:232){code}
> Root cause analysis showed that in the second dump cycle on the DR cluster, 
> when the StageStart method is invoked, the metric corresponding to Tables is 
> not registered (it should be, since optimised bootstrap performs a selective 
> bootstrap of tables along with the incremental dump). This causes an NPE 
> later, when the progress for that metric is updated after the table 
> bootstrap completes. 
> The fix is to register the Tables metric before updating the progress.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26933) Cleanup dump directory for eventId which was failed in previous dump cycle

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26933?focusedWorklogId=841918&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841918
 ]

ASF GitHub Bot logged work on HIVE-26933:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 06:39
Start Date: 27/Jan/23 06:39
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3984:
URL: https://github.com/apache/hive/pull/3984#issuecomment-1406088214

   Kudos, SonarCloud Quality Gate passed!
   https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3984
   
   0 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 0 Code Smells.
   No Coverage information. No Duplication information.




Issue Time Tracking
---

Worklog Id: (was: 841918)
Time Spent: 3h  (was: 2h 50m)

> Cleanup dump directory for eventId which was failed in previous dump cycle
> --
>
> Key: HIVE-26933
> URL: https://issues.apache.org/jira/browse/HIVE-26933
> Project: Hive
>  Issue Type: Improvement
>Reporter: Harshal Patel
>Assignee: Harshal Patel
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> # If the incremental dump operation fails while dumping an event id into the 
> staging directory, the dump directory for that event id (along with the 
> _dumpmetadata file) still exists in the dump location; the event id is 
> recorded in the _events_dump file.
>  # When the user triggers the dump operation for this policy again, it 
> resumes dumping from the failed event id and tries to dump it again; but 
> because that event id's directory was already created in the previous cycle, 
> it fails with the exception
> {noformat}
> [Scheduled Query Executor(schedule:repl_policytest7, execution_id:7

[jira] [Comment Edited] (HIVE-17502) Reuse of default session should not throw an exception in LLAP w/ Tez

2023-01-26 Thread AjaykumarDev (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-17502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680855#comment-17680855
 ] 

AjaykumarDev edited comment on HIVE-17502 at 1/27/23 6:36 AM:
--

[~vgarg] [~thejas] [~sershe] [~thai.bui]  
h4. [Vineet 
Garg|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=vgarg]

We are also facing a similar issue in our Hive LLAP application, as below:
 # user1 executes a query and user2 also executes a query, both under the same 
userid. Since user1 executed the query first, user1 gets a session from the 
thread pool; the other user, however, gets the error below.
    Error : 2023-01-25T15:40:54,852 INFO [ATS Logger 0] hooks.ATSHook: Received 
pre-hook notification for 
:hive_20230125154054_3e24f02f-c0e5-4ad2-87a0-758d025cb02a 
2023-01-25T15:40:54,863 ERROR [HiveServer2-Background-Pool: Thread-228] 
exec.Task: Failed to execute tez graph. 
org.apache.hadoop.hive.ql.metadata.HiveException: The pool session 
sessionId=59675216-4e35-4872-8e22-835c50b8c23d, queueName=llap, user=hive, 
doAs=false, isOpen=true, isDefault=true, expires in 593431620ms should have 
been returned to the pool at 
org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.canWorkWithSameSession(TezSessionPoolManager.java:546)
 ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.getSession(TezSessionPoolManager.java:556)
 ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:150) 
[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) 
[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987) 
[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667) 
[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] 

This Jira was created to address the same issue, and a patch was also 
available. When can we expect this patch to land in a future Hive release, or 
is an alternative available? Resolving this issue is critical to the success 
of my company's project; the company has made a big investment in the Hive 
LLAP project and we want to resolve it as soon as possible. Please let us 
know if you need more information. 
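The lifecycle failure reported here — a pooled session requested again before it was returned — can be reduced to a small standalone sketch. The pool below is purely illustrative (it is not Hive's TezSessionPoolManager), but it reproduces the same "should have been returned to the pool" failure mode:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Standalone sketch: a pool that rejects reuse of a session that is still
// checked out, mirroring the HiveException described in this comment.
public class SessionPoolSketch {
    static class Session {
        final String id;
        Session(String id) { this.id = id; }
    }

    private final Deque<Session> idle = new ArrayDeque<>();
    private final Set<Session> inUse = new HashSet<>();

    SessionPoolSketch(int size) {
        for (int i = 0; i < size; i++) idle.add(new Session("s" + i));
    }

    /** Throws if the caller asks to reuse a session still checked out. */
    Session getSession(Session reuse) {
        if (reuse != null && inUse.contains(reuse)) {
            throw new IllegalStateException("The pool session " + reuse.id
                + " should have been returned to the pool");
        }
        Session s = idle.poll();
        if (s == null) throw new IllegalStateException("pool exhausted");
        inUse.add(s);
        return s;
    }

    void returnSession(Session s) {
        if (inUse.remove(s)) idle.add(s);
    }

    public static void main(String[] args) {
        SessionPoolSketch pool = new SessionPoolSketch(2);
        Session a = pool.getSession(null);   // first request checks out a session
        try {
            pool.getSession(a);              // second request reuses it too early
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
        pool.returnSession(a);               // correct lifecycle: return first
        pool.getSession(a);                  // now the reuse request is valid
    }
}
```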

 


was (Author: JIRAUSER298812):
[~vgarg] [~thejas] [~sershe] [~thai.bui] 

We are also facing similar issue in out Hive LLap Application as below ,
 # user1 executes the query and user2 also executes the query both users are 
using the same userid. As user1 executed the query first, he gets the session 
from thread pool however, in case of other user he gets the below error.

    Error : 2023-01-25T15:40:54,852 INFO [ATS Logger 0] hooks.ATSHook: Received 
pre-hook notification for 
:hive_20230125154054_3e24f02f-c0e5-4ad2-87a0-758d025cb02a 
2023-01-25T15:40:54,863 ERROR [HiveServer2-Background-Pool: Thread-228] 
exec.Task: Failed to execute tez graph. 
org.apache.hadoop.hive.ql.metadata.HiveException: The pool session 
sessionId=59675216-4e35-4872-8e22-835c50b8c23d, queueName=llap, user=hive, 
doAs=false, isOpen=true, isDefault=true, expires in 593431620ms should have 
been returned to the pool at 
org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.canWorkWithSameSession(TezSessionPoolManager.java:546)
 ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.getSession(TezSessionPoolManager.java:556)
 ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:150) 
[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) 
[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987) 
[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] at 
org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667) 
[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292] 

This Jira was created to address the same issue and the patch was also 
available. So, when can we expect this patch to be available in any future hive 
release or any alternative is available ? Please let me know because the 
solution for this issue is very critical for success of my company project, the 
company has made big investment for hive llap project and we want to resolve 
this issue as soon as possible. Please let us know if you need mor

[jira] [Work logged] (HIVE-26606) Expose failover states in replication metrics

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26606?focusedWorklogId=841915&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841915
 ]

ASF GitHub Bot logged work on HIVE-26606:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 05:04
Start Date: 27/Jan/23 05:04
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3990:
URL: https://github.com/apache/hive/pull/3990#issuecomment-1406025535

   Kudos, SonarCloud Quality Gate passed!
   https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3990
   
   0 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 3 Code Smells.
   No Coverage information. No Duplication information.




Issue Time Tracking
---

Worklog Id: (was: 841915)
Time Spent: 1.5h  (was: 1h 20m)

> Expose failover states in replication metrics
> -
>
> Key: HIVE-26606
> URL: https://issues.apache.org/jira/browse/HIVE-26606
> Project: Hive
>  Issue Type: Improvement
>Reporter: Teddy Choi
>Assignee: Harshal Patel
>Priority: Major
>  Labels: pull-request-available
> Attachments: 
> HIVE-26606__Expose_failover_states_in_replication_metrics1.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Expose the state of failover in replication metrics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26606) Expose failover states in replication metrics

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26606?focusedWorklogId=841913&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841913
 ]

ASF GitHub Bot logged work on HIVE-26606:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 04:08
Start Date: 27/Jan/23 04:08
Worklog Time Spent: 10m 
  Work Description: harshal-16 opened a new pull request, #3990:
URL: https://github.com/apache/hive/pull/3990

   * Added 2 New replication Types:
 - Pre Optimized BootStrap : 1st cycle of reverse replication
 - Optimized Bootstrap : 2nd cycle of reverse replication
   * Added both types into replication metric
   * Added unit test for corresponding changes
   * Added MetricMap to the PRE_OPTIMIZED_BOOTSTRAP metric collector
   * Initialized the metric collector for SKIPPED entries so that the database 
name shows up in the replication_metrics table
   * Fixed RM_DUMP_EXECUTION_ID value at dump side and updated function calls 
in tests
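The two new replication types listed in the PR description might be modeled roughly as follows. This is a hypothetical sketch — the enum and the `reverseCycleType` helper are illustrative, not the actual patch:

```java
// Illustrative model of the replication cycle types, including the two new
// reverse-replication types described in the pull request.
public class ReplicationTypeSketch {
    enum ReplicationType {
        BOOTSTRAP,
        INCREMENTAL,
        PRE_OPTIMIZED_BOOTSTRAP,  // 1st cycle of reverse replication
        OPTIMIZED_BOOTSTRAP       // 2nd cycle of reverse replication
    }

    /** Maps a reverse-replication cycle number to its type. */
    static ReplicationType reverseCycleType(int cycleNumber) {
        switch (cycleNumber) {
            case 1:  return ReplicationType.PRE_OPTIMIZED_BOOTSTRAP;
            case 2:  return ReplicationType.OPTIMIZED_BOOTSTRAP;
            default: return ReplicationType.INCREMENTAL;
        }
    }

    public static void main(String[] args) {
        System.out.println(reverseCycleType(1));
        System.out.println(reverseCycleType(2));
    }
}
```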




Issue Time Tracking
---

Worklog Id: (was: 841913)
Time Spent: 1h 20m  (was: 1h 10m)

> Expose failover states in replication metrics
> -
>
> Key: HIVE-26606
> URL: https://issues.apache.org/jira/browse/HIVE-26606
> Project: Hive
>  Issue Type: Improvement
>Reporter: Teddy Choi
>Assignee: Harshal Patel
>Priority: Major
>  Labels: pull-request-available
> Attachments: 
> HIVE-26606__Expose_failover_states_in_replication_metrics1.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Expose the state of failover in replication metrics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26606) Expose failover states in replication metrics

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26606?focusedWorklogId=841912&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841912
 ]

ASF GitHub Bot logged work on HIVE-26606:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 04:07
Start Date: 27/Jan/23 04:07
Worklog Time Spent: 10m 
  Work Description: harshal-16 closed pull request #3978: HIVE-26606: 
Expose failover states in replication metrics
URL: https://github.com/apache/hive/pull/3978




Issue Time Tracking
---

Worklog Id: (was: 841912)
Time Spent: 1h 10m  (was: 1h)

> Expose failover states in replication metrics
> -
>
> Key: HIVE-26606
> URL: https://issues.apache.org/jira/browse/HIVE-26606
> Project: Hive
>  Issue Type: Improvement
>Reporter: Teddy Choi
>Assignee: Harshal Patel
>Priority: Major
>  Labels: pull-request-available
> Attachments: 
> HIVE-26606__Expose_failover_states_in_replication_metrics1.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Expose the state of failover in replication metrics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26687) INSERT query with array type failing with SemanticException

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26687?focusedWorklogId=841910&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841910
 ]

ASF GitHub Bot logged work on HIVE-26687:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 03:37
Start Date: 27/Jan/23 03:37
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3923:
URL: https://github.com/apache/hive/pull/3923#issuecomment-1405976863

   Kudos, SonarCloud Quality Gate passed!
   https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3923
   
   0 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 0 Code Smells.
   No Coverage information. No Duplication information.




Issue Time Tracking
---

Worklog Id: (was: 841910)
Time Spent: 1h  (was: 50m)

> INSERT query with array type failing with SemanticException
> -
>
> Key: HIVE-26687
> URL: https://issues.apache.org/jira/browse/HIVE-26687
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Manthan B Y
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> *Steps to reproduce:*
> {code:java}
> DROP TABLE IF EXISTS default.tbl_oGSJ;
> CREATE TABLE default.tbl_oGSJ (c1 array);
> INSERT INTO default.tbl_oGSJ(c1) VALUES (array(55,54)); {code}
> *Error:*
> {code:java}
> Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 
> Cannot insert into target table because column number/types are different 
> 'TOK_TMP_FILE': Cannot convert column 0 from array to array. 
> (state=42000,code=4) {code}
> The same is the case for bigint and tinyint as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-17502) Reuse of default session should not throw an exception in LLAP w/ Tez

2023-01-26 Thread AjaykumarDev (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-17502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17681170#comment-17681170
 ] 

AjaykumarDev commented on HIVE-17502:
-

Any update on the above?

> Reuse of default session should not throw an exception in LLAP w/ Tez
> -
>
> Key: HIVE-17502
> URL: https://issues.apache.org/jira/browse/HIVE-17502
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tez
>Affects Versions: 2.1.1, 2.2.0
> Environment: HDP 2.6.1.0-129, Hue 4
>Reporter: Thai Bui
>Assignee: Thai Bui
>Priority: Major
> Attachments: HIVE-17502.2.patch, HIVE-17502.3.patch, HIVE-17502.patch
>
>
> Hive2 w/ LLAP on Tez doesn't allow a currently used, default session to be 
> skipped mostly because of this line 
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L365.
> However, some clients such as Hue 4, allow multiple sessions to be used per 
> user. Under this configuration, a Thrift client will send a request to either 
> reuse or open a new session. The reuse request could include the session id 
> of a currently used snippet being executed in Hue, which causes HS2 to throw 
> an exception:
> {noformat}
> 2017-09-10T17:51:36,548 INFO  [Thread-89]: tez.TezSessionPoolManager 
> (TezSessionPoolManager.java:canWorkWithSameSession(512)) - The current user: 
> hive, session user: hive
> 2017-09-10T17:51:36,549 ERROR [Thread-89]: exec.Task 
> (TezTask.java:execute(232)) - Failed to execute tez graph.
> org.apache.hadoop.hive.ql.metadata.HiveException: The pool session 
> sessionId=5b61a578-6336-41c5-860d-9838166f97fe, queueName=llap, user=hive, 
> doAs=false, isOpen=true, isDefault=true, expires in 591015330ms should have 
> been returned to the pool
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.canWorkWithSameSession(TezSessionPoolManager.java:534)
>  ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.getSession(TezSessionPoolManager.java:544)
>  ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:147) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:79) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
> {noformat}
> Note that every query is issued as a single 'hive' user to share the LLAP 
> daemon pool, a set of pre-determined number of AMs is initialized at setup 
> time. Thus, HS2 should allow new sessions from a Thrift client to be used out 
> of the pool, or an existing session to be skipped and an unused session from 
> the pool to be returned. The logic to throw an exception in the  
> `canWorkWithSameSession` doesn't make sense to me.
> I have a solution to fix this issue in my local branch at 
> https://github.com/thaibui/hive/commit/078a521b9d0906fe6c0323b63e567f6eee2f3a70.
>  When applied, the log will become like so
> {noformat}
> 2017-09-10T09:15:33,578 INFO  [Thread-239]: tez.TezSessionPoolManager 
> (TezSessionPoolManager.java:canWorkWithSameSession(533)) - Skipping default 
> session sessionId=6638b1da-0f8a-405e-85f0-9586f484e6de, queueName=llap, 
> user=hive, doAs=false, isOpen=true, isDefault=true, expires in 591868732ms 
> since it is being used.
> {noformat}
> A test case is provided in my branch to demonstrate how it works. If possible 
> I would like this patch to be applied to version 2.1, 2.2 and master. Since 
> we are using 2.1 LLAP in production with Hue 4, this patch is critical to our 
> success.
> Alternatively, if this patch is too broad in scope, I propose adding an 
> option to allow "skipping of currently used default sessions". With this new 
> option defaulting to "false", existing behavior won't change unless the option 
> is turned on.
> I will prepare an official patch if this change to master and/or the other 
> branches is acceptable. I'm not a contributor or committer; this will be my 
> first time contributing to Hive and the Apache foundation. Any early review 
> is greatly appreciated, thanks!
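The skip-instead-of-throw behavior proposed above can be sketched in plain Java. This is a hypothetical simulation of the idea only, not Hive's actual TezSessionPoolManager code; the class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: instead of throwing when a default pool session is still in use,
// skip it and keep scanning the pool for a free session, as the patch proposes.
public class SessionPoolSketch {
    static class Session {
        final String id;
        boolean inUse;
        Session(String id, boolean inUse) { this.id = id; this.inUse = inUse; }
    }

    // Returns the first session not currently in use, or null if all are busy.
    static Session pickFreeSession(List<Session> pool) {
        for (Session s : pool) {
            if (s.inUse) {
                // Old behavior: throw "should have been returned to the pool".
                // Proposed behavior: log "Skipping default session <id> since
                // it is being used." and move on.
                continue;
            }
            return s;
        }
        return null; // caller may open a brand-new session instead
    }

    public static void main(String[] args) {
        List<Session> pool = new ArrayList<>();
        pool.add(new Session("busy-1", true));
        pool.add(new Session("free-1", false));
        Session s = pickFreeSession(pool);
        System.out.println(s == null ? "none" : s.id); // prints free-1
    }
}
```

Returning null when every pooled session is busy mirrors the fallback the reporter describes: HS2 can then open a fresh session rather than failing the query.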



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-17502) Reuse of default session should not throw an exception in LLAP w/ Tez

2023-01-26 Thread AjaykumarDev (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-17502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AjaykumarDev updated HIVE-17502:

Status: In Progress  (was: Patch Available)

> Reuse of default session should not throw an exception in LLAP w/ Tez
> -
>
> Key: HIVE-17502
> URL: https://issues.apache.org/jira/browse/HIVE-17502
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tez
>Affects Versions: 2.2.0, 2.1.1
> Environment: HDP 2.6.1.0-129, Hue 4
>Reporter: Thai Bui
>Assignee: Thai Bui
>Priority: Major
> Attachments: HIVE-17502.2.patch, HIVE-17502.3.patch, HIVE-17502.patch
>
>
> Hive2 w/ LLAP on Tez doesn't allow a currently used, default session to be 
> skipped mostly because of this line 
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L365.
> However, some clients such as Hue 4, allow multiple sessions to be used per 
> user. Under this configuration, a Thrift client will send a request to either 
> reuse or open a new session. The reuse request could include the session id 
> of a currently used snippet being executed in Hue, which causes HS2 to throw 
> an exception:
> {noformat}
> 2017-09-10T17:51:36,548 INFO  [Thread-89]: tez.TezSessionPoolManager 
> (TezSessionPoolManager.java:canWorkWithSameSession(512)) - The current user: 
> hive, session user: hive
> 2017-09-10T17:51:36,549 ERROR [Thread-89]: exec.Task 
> (TezTask.java:execute(232)) - Failed to execute tez graph.
> org.apache.hadoop.hive.ql.metadata.HiveException: The pool session 
> sessionId=5b61a578-6336-41c5-860d-9838166f97fe, queueName=llap, user=hive, 
> doAs=false, isOpen=true, isDefault=true, expires in 591015330ms should have 
> been returned to the pool
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.canWorkWithSameSession(TezSessionPoolManager.java:534)
>  ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.getSession(TezSessionPoolManager.java:544)
>  ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:147) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:79) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
> {noformat}
> Note that every query is issued as a single 'hive' user to share the LLAP 
> daemon pool, a set of pre-determined number of AMs is initialized at setup 
> time. Thus, HS2 should allow new sessions from a Thrift client to be used out 
> of the pool, or an existing session to be skipped and an unused session from 
> the pool to be returned. The logic to throw an exception in the  
> `canWorkWithSameSession` doesn't make sense to me.
> I have a solution to fix this issue in my local branch at 
> https://github.com/thaibui/hive/commit/078a521b9d0906fe6c0323b63e567f6eee2f3a70.
>  When applied, the log will become like so
> {noformat}
> 2017-09-10T09:15:33,578 INFO  [Thread-239]: tez.TezSessionPoolManager 
> (TezSessionPoolManager.java:canWorkWithSameSession(533)) - Skipping default 
> session sessionId=6638b1da-0f8a-405e-85f0-9586f484e6de, queueName=llap, 
> user=hive, doAs=false, isOpen=true, isDefault=true, expires in 591868732ms 
> since it is being used.
> {noformat}
> A test case is provided in my branch to demonstrate how it works. If possible 
> I would like this patch to be applied to version 2.1, 2.2 and master. Since 
> we are using 2.1 LLAP in production with Hue 4, this patch is critical to our 
> success.
> Alternatively, if this patch is too broad in scope, I propose adding an 
> option to allow "skipping of currently used default sessions". With this new 
> option defaulting to "false", existing behavior won't change unless the option 
> is turned on.
> I will prepare an official patch if this change to master and/or the other 
> branches is acceptable. I'm not a contributor or committer; this will be my 
> first time contributing to Hive and the Apache foundation. Any early review 
> is greatly appreciated, thanks!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26687) INSERT query with array type failing with SemanticException

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26687?focusedWorklogId=841905&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841905
 ]

ASF GitHub Bot logged work on HIVE-26687:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 02:48
Start Date: 27/Jan/23 02:48
Worklog Time Spent: 10m 
  Work Description: manthanmtg commented on code in PR #3923:
URL: https://github.com/apache/hive/pull/3923#discussion_r1088541693


##
ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java:
##
@@ -8722,8 +8722,18 @@ private ExprNodeDesc handleConversion(StructField tableField, ColumnInfo rowFiel
       // need to do some conversions here
       conversion.set(true);
       if (tableFieldTypeInfo.getCategory() != Category.PRIMITIVE) {
-        // cannot convert to complex types
-        column = null;
+        // handle array in case of complex types
+        String array_type_prefix = "array<";

Review Comment:
   Sure, will update
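The diff under review special-cases array types by their type-string prefix. A minimal self-contained sketch of that idea follows; the names here are illustrative only and this is not Hive's actual SemanticAnalyzer code, which works on TypeInfo objects rather than raw strings:

```java
// Sketch: when both the table column type and the query column type are
// "array<...>" strings, compare their element types instead of rejecting
// all complex types outright.
public class ArrayTypeCheckSketch {
    static final String ARRAY_PREFIX = "array<";

    // Extracts the element type from a type string like "array<int>",
    // or returns null if the string is not an array type.
    static String elementType(String typeName) {
        if (typeName != null && typeName.startsWith(ARRAY_PREFIX) && typeName.endsWith(">")) {
            return typeName.substring(ARRAY_PREFIX.length(), typeName.length() - 1);
        }
        return null;
    }

    // Trivially compatible when element types match exactly; a real
    // implementation would also allow implicit conversions (e.g. int -> bigint).
    static boolean sameArrayType(String a, String b) {
        String ea = elementType(a), eb = elementType(b);
        return ea != null && ea.equals(eb);
    }

    public static void main(String[] args) {
        System.out.println(sameArrayType("array<int>", "array<int>"));    // prints true
        System.out.println(sameArrayType("array<int>", "array<string>")); // prints false
    }
}
```

String-prefix matching is fragile for nested types such as `array<array<int>>`, which is one reason the reviewer asks for the check to be reworked.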





Issue Time Tracking
---

Worklog Id: (was: 841905)
Time Spent: 50m  (was: 40m)

> INSERT query with array type failing with SemanticException
> -
>
> Key: HIVE-26687
> URL: https://issues.apache.org/jira/browse/HIVE-26687
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Manthan B Y
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> *Steps to reproduce:*
> {code:java}
> DROP TABLE IF EXISTS default.tbl_oGSJ;
> CREATE TABLE default.tbl_oGSJ (c1 array);
> INSERT INTO default.tbl_oGSJ(c1) VALUES (array(55,54)); {code}
> *Error:*
> {code:java}
> Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 
> Cannot insert into target table because column number/types are different 
> 'TOK_TMP_FILE': Cannot convert column 0 from array to array. 
> (state=42000,code=4) {code}
> The same is the case for bigint and tinyint as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-26948) Backport HIVE-21456 to branch-3

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-26948:
--
Labels: pull-request-available  (was: )

> Backport HIVE-21456 to branch-3
> ---
>
> Key: HIVE-26948
> URL: https://issues.apache.org/jira/browse/HIVE-26948
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore, Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HIVE-21456 adds support to connect to Hive metastore over http transport. 
> This is a very useful feature especially in cloud based environments. 
> Creating this ticket to backport it to branch-3.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26948) Backport HIVE-21456 to branch-3

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26948?focusedWorklogId=841896&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841896
 ]

ASF GitHub Bot logged work on HIVE-26948:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 01:03
Start Date: 27/Jan/23 01:03
Worklog Time Spent: 10m 
  Work Description: vihangk1 opened a new pull request, #3989:
URL: https://github.com/apache/hive/pull/3989

   
   
   ### What changes were proposed in this pull request?
   This PR backports HIVE-21456 to branch-3. To resolve some conflicts, the 
following changes were made in addition to the original PR, compared to the 
master branch.
   1. pom.xml --> updates the httpcomponents library version to 4.4.13, which is 
the same as in the master branch.
   2. TestSSL --> All the tests were marked as Ignored before this 
PR. This PR marks individual tests in the suite as Ignored to keep the same 
behavior as before, and then adds a new test which works for the http 
metastore client.
   
   
   ### Why are the changes needed?
   Backport http transport support for Hive metastore in branch-3 so that users 
don't have to do a major upgrade to get this improvement.
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   
   ### How was this patch tested?
   Backported the tests from the original PR and made sure that they work.
   




Issue Time Tracking
---

Worklog Id: (was: 841896)
Remaining Estimate: 0h
Time Spent: 10m

> Backport HIVE-21456 to branch-3
> ---
>
> Key: HIVE-26948
> URL: https://issues.apache.org/jira/browse/HIVE-26948
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore, Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HIVE-21456 adds support to connect to Hive metastore over http transport. 
> This is a very useful feature especially in cloud based environments. 
> Creating this ticket to backport it to branch-3.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-25616) Backport HIVE-24741 to Hive 2, 3

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25616?focusedWorklogId=841885&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841885
 ]

ASF GitHub Bot logged work on HIVE-25616:
-

Author: ASF GitHub Bot
Created on: 27/Jan/23 00:21
Start Date: 27/Jan/23 00:21
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on PR #3602:
URL: https://github.com/apache/hive/pull/3602#issuecomment-1405849812

   This pull request has been automatically marked as stale because it has not 
had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the dev@hive.apache.org list if the patch is in 
need of reviews.




Issue Time Tracking
---

Worklog Id: (was: 841885)
Time Spent: 3h 50m  (was: 3h 40m)

> Backport HIVE-24741 to Hive 2, 3 
> -
>
> Key: HIVE-25616
> URL: https://issues.apache.org/jira/browse/HIVE-25616
> Project: Hive
>  Issue Type: Improvement
>Reporter: Neelesh Srinivas Salian
>Assignee: Neelesh Srinivas Salian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> HIVE-24741 adds a major improvement to the `{{get_partitions_ps_with_auth}}` 
> API that is used by Spark to retrieve all partitions of the table.
> This has caused problems in Spark 3 - running on a Hive 2.3.x metastore. 
> We patched this in my org, and it helped us get past problems reading 
> metadata for large partitioned tables through Spark.
> [~vihangk1], thoughts?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-26989) Fix predicate pushdown for Timestamp with TZ

2023-01-26 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan resolved HIVE-26989.
-
Resolution: Fixed

> Fix predicate pushdown for Timestamp with TZ
> 
>
> Key: HIVE-26989
> URL: https://issues.apache.org/jira/browse/HIVE-26989
> Project: Hive
>  Issue Type: Task
>  Components: Hive, Iceberg integration
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Running a query which is filtering for {{TIMESTAMP WITH LOCAL TIME ZONE}} 
> returns the correct results but the predicate is not pushed to Iceberg.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26995) Iceberg: Enhance time travel syntax with expressions

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26995?focusedWorklogId=841867&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841867
 ]

ASF GitHub Bot logged work on HIVE-26995:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 21:12
Start Date: 26/Jan/23 21:12
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3988:
URL: https://github.com/apache/hive/pull/3988#issuecomment-1405657612

   Kudos, SonarCloud Quality Gate passed!
   [Quality Gate dashboard](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3988)

   Bugs: 0 (rating A)
   Vulnerabilities: 0 (rating A)
   Security Hotspots: 0 (rating A)
   Code Smells: 0 (rating A)
   No Coverage information
   No Duplication information
   




Issue Time Tracking
---

Worklog Id: (was: 841867)
Time Spent: 20m  (was: 10m)

> Iceberg: Enhance time travel syntax with expressions
> 
>
> Key: HIVE-26995
> URL: https://issues.apache.org/jira/browse/HIVE-26995
> Project: Hive
>  Issue Type: Task
>Reporter: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Allow expressions in time travel queries, such as 
> {code}
> FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP - interval '10' hours
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26400) Provide docker images for Hive

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26400?focusedWorklogId=841862&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841862
 ]

ASF GitHub Bot logged work on HIVE-26400:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 20:49
Start Date: 26/Jan/23 20:49
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on code in PR #3448:
URL: https://github.com/apache/hive/pull/3448#discussion_r1088324666


##
dev-support/docker/Dockerfile:
##
@@ -0,0 +1,53 @@
+#

Review Comment:
   I think it should be under `packaging/src/docker`





Issue Time Tracking
---

Worklog Id: (was: 841862)
Time Spent: 7h  (was: 6h 50m)

> Provide docker images for Hive
> --
>
> Key: HIVE-26400
> URL: https://issues.apache.org/jira/browse/HIVE-26400
> Project: Hive
>  Issue Type: Sub-task
>  Components: Build Infrastructure
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Blocker
>  Labels: hive-4.0.0-must, pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Make Apache Hive able to run inside a docker container in pseudo-distributed 
> mode, with MySQL/Derby as its backing database, providing the following:
>  * Quick-start/Debugging/Prepare a test env for Hive;
>  * Tools to build target image with specified version of Hive and its 
> dependencies;
>  * Images can be used as the basis for the Kubernetes operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-26 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko resolved HIVE-26794.
---
Fix Version/s: 4.0.0
   Resolution: Fixed

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> Instead of creating a fixed size connection pool for TxnHandler#MutexAPI, the 
> pool can be given a more dynamic size, because: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore; these tasks are 
> not user-facing;
>  * A fixed size connection pool, the same size as the pool used in ObjectStore, 
> is a waste for the non-leader instances in the warehouse; 
>  
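The idle-retirement idea can be sketched outside of HikariCP. HikariCP implements this internally via its `minimumIdle` and `idleTimeout` settings; the eviction loop below is a hypothetical illustration of the policy, not the library's actual code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: keep at most `minimumIdle` idle connections and retire any
// connection idle longer than `idleTimeoutMs`, so the mutex pool shrinks
// on non-leader Metastore instances.
public class IdleRetirementSketch {
    static class PooledConn {
        final long idleSinceMs;
        PooledConn(long idleSinceMs) { this.idleSinceMs = idleSinceMs; }
    }

    // Removes idle connections above the floor that exceeded the timeout;
    // the deque is ordered oldest-idle first. Returns how many were retired.
    static int retireIdle(Deque<PooledConn> idle, int minimumIdle, long idleTimeoutMs, long nowMs) {
        int retired = 0;
        while (idle.size() > minimumIdle) {
            PooledConn oldest = idle.peekFirst();
            if (oldest == null || nowMs - oldest.idleSinceMs < idleTimeoutMs) {
                break; // remaining connections are still within the timeout
            }
            idle.pollFirst(); // a real pool would close() the connection here
            retired++;
        }
        return retired;
    }

    public static void main(String[] args) {
        Deque<PooledConn> idle = new ArrayDeque<>();
        idle.add(new PooledConn(0));       // idle for 10 minutes
        idle.add(new PooledConn(0));       // idle for 10 minutes
        idle.add(new PooledConn(590_000)); // idle for only 10 seconds
        int retired = retireIdle(idle, 1, 300_000, 600_000);
        System.out.println(retired); // prints 2: the long-idle connections above the floor
    }
}
```

With `minimumIdle=2` and `idleTimeout=5min` as in the patch, the pool stays small on the many Metastore instances that rarely take the mutex, while the leader's pool grows on demand.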



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-26 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17681117#comment-17681117
 ] 

Denys Kuzmenko commented on HIVE-26794:
---

Merged to master
[~dengzh] thanks for the patch and [~cnauroth] for the review!

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> Instead of creating a fixed size connection pool for TxnHandler#MutexAPI, the 
> pool can be given a more dynamic size, because: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore; these tasks are 
> not user-facing;
>  * A fixed size connection pool, the same size as the pool used in ObjectStore, 
> is a waste for the non-leader instances in the warehouse; 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26794?focusedWorklogId=841860&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841860
 ]

ASF GitHub Bot logged work on HIVE-26794:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 20:42
Start Date: 26/Jan/23 20:42
Worklog Time Spent: 10m 
  Work Description: deniskuzZ merged PR #3817:
URL: https://github.com/apache/hive/pull/3817




Issue Time Tracking
---

Worklog Id: (was: 841860)
Time Spent: 5h 40m  (was: 5.5h)

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> Instead of creating a fixed size connection pool for TxnHandler#MutexAPI, the 
> pool can be given a more dynamic size, because: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore; these tasks are 
> not user-facing;
>  * A fixed size connection pool, the same size as the pool used in ObjectStore, 
> is a waste for the non-leader instances in the warehouse; 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26794?focusedWorklogId=841853&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841853
 ]

ASF GitHub Bot logged work on HIVE-26794:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 20:34
Start Date: 26/Jan/23 20:34
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on code in PR #3817:
URL: https://github.com/apache/hive/pull/3817#discussion_r1088313402


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java:
##
@@ -6188,6 +6188,12 @@ public Connection getConnection(String username, String password) throws SQLExce
     connectionProps.setProperty("user", username);
     connectionProps.setProperty("password", password);
     Connection conn = driver.connect(connString, connectionProps);
+    String prepareStmt = dbProduct != null ? dbProduct.getPrepareTxnStmt() : null;

Review Comment:
   👍 





Issue Time Tracking
---

Worklog Id: (was: 841853)
Time Spent: 5.5h  (was: 5h 20m)

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Instead of creating a fixed size connection pool for TxnHandler#MutexAPI, the 
> pool can be given a more dynamic size, because: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore; these tasks are 
> not user-facing;
>  * A fixed size connection pool, the same size as the pool used in ObjectStore, 
> is a waste for the non-leader instances in the warehouse; 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26794?focusedWorklogId=841850&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841850
 ]

ASF GitHub Bot logged work on HIVE-26794:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 20:33
Start Date: 26/Jan/23 20:33
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on code in PR #3817:
URL: https://github.com/apache/hive/pull/3817#discussion_r1088312414


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/datasource/HikariCPDataSourceProvider.java:
##
@@ -72,6 +72,17 @@ public DataSource create(Configuration hdpConfig, int maxPoolSize) throws SQLExc
       config.setPoolName(poolName);
     }
 
+    // It's kind of a waste to create a fixed size connection pool as same as the TxnHandler#connPool,
+    // TxnHandler#connPoolMutex is mostly used for MutexAPI that is primarily designed to
+    // provide coarse-grained mutex support to maintenance tasks running inside the Metastore,
+    // add minimumIdle=2 and idleTimeout=5min to the pool, so that the connection pool can retire
+    // the idle connection aggressively, this will make Metastore more scalable especially if
+    // there is a leader in the warehouse.
+    if ("mutex".equals(poolName)) {

Review Comment:
   👍 





Issue Time Tracking
---

Worklog Id: (was: 841850)
Time Spent: 5h 20m  (was: 5h 10m)

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Instead of creating a fixed size connection pool for TxnHandler#MutexAPI, the 
> pool can be assigned to a more dynamic size pool due to: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore, these tasks are 
> not user faced;
>  * A fixed size connection pool as same as the pool used in ObjectStore is a 
> waste for other non leaders in the warehouse; 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-26661) Support partition filter for char and varchar types on Hive metastore

2023-01-26 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko resolved HIVE-26661.
---
Fix Version/s: 4.0.0
   Resolution: Fixed

> Support partition filter for char and varchar types on Hive metastore
> -
>
> Key: HIVE-26661
> URL: https://issues.apache.org/jira/browse/HIVE-26661
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0-alpha-1
>Reporter: Wechar
>Assignee: Wechar
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> HMS stores partition values as strings in its backend database, so it supports 
> an "equal" filter for string (including date) and integer types, and other 
> filters (for example, greater than) only for the string type.
> The char and varchar types can also be considered strings, so we can support 
> the partition filter for them on HMS.
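The string-comparison semantics described above can be sketched in plain Java. This is an illustrative sketch of the observation, not HMS filter-pushdown code:

```java
// Sketch: HMS stores partition values as strings, so char/varchar filters
// can be pushed down as plain string comparisons, just as string-typed
// partitions already are.
public class PartitionFilterSketch {
    // Lexicographic comparison, valid for string/varchar values (and for dates
    // in yyyy-MM-dd form, whose lexicographic order matches chronological
    // order). SQL char is blank-padded, so trailing spaces may need trimming
    // before comparing.
    static boolean greaterThan(String partitionValue, String literal) {
        return partitionValue.compareTo(literal) > 0;
    }

    public static void main(String[] args) {
        // varchar partition values compared as strings
        System.out.println(greaterThan("banana", "apple"));          // prints true
        System.out.println(greaterThan("2022-01-01", "2023-01-01")); // prints false
    }
}
```

Lexicographic order does not match numeric order (for example, "10" < "9" as strings), which is why the description limits range filters to string-like types.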



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-26661) Support partition filter for char and varchar types on Hive metastore

2023-01-26 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17681110#comment-17681110
 ] 

Denys Kuzmenko commented on HIVE-26661:
---

Merged to master.
[~wechar] thanks for the patch and [~cnauroth] for the review!

> Support partition filter for char and varchar types on Hive metastore
> -
>
> Key: HIVE-26661
> URL: https://issues.apache.org/jira/browse/HIVE-26661
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0-alpha-1
>Reporter: Wechar
>Assignee: Wechar
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> HMS stores partition values as strings in the backend database, so it
> supports equality filters for string (including date) and integer types, and
> other filters (for example, greater than) only for the string type.
> The char and varchar types can also be considered strings, so we can support
> partition filters for them on HMS.





[jira] [Work logged] (HIVE-26661) Support partition filter for char and varchar types on Hive metastore

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26661?focusedWorklogId=841848&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841848
 ]

ASF GitHub Bot logged work on HIVE-26661:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 20:23
Start Date: 26/Jan/23 20:23
Worklog Time Spent: 10m 
  Work Description: deniskuzZ merged PR #3696:
URL: https://github.com/apache/hive/pull/3696




Issue Time Tracking
---

Worklog Id: (was: 841848)
Time Spent: 1h 10m  (was: 1h)

> Support partition filter for char and varchar types on Hive metastore
> -
>
> Key: HIVE-26661
> URL: https://issues.apache.org/jira/browse/HIVE-26661
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0-alpha-1
>Reporter: Wechar
>Assignee: Wechar
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> HMS stores partition values as strings in the backend database, so it
> supports equality filters for string (including date) and integer types, and
> other filters (for example, greater than) only for the string type.
> The char and varchar types can also be considered strings, so we can support
> partition filters for them on HMS.





[jira] [Work logged] (HIVE-26661) Support partition filter for char and varchar types on Hive metastore

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26661?focusedWorklogId=841849&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841849
 ]

ASF GitHub Bot logged work on HIVE-26661:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 20:23
Start Date: 26/Jan/23 20:23
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on PR #3696:
URL: https://github.com/apache/hive/pull/3696#issuecomment-1405606340

   Thanks @wecharyu! Merged.




Issue Time Tracking
---

Worklog Id: (was: 841849)
Time Spent: 1h 20m  (was: 1h 10m)

> Support partition filter for char and varchar types on Hive metastore
> -
>
> Key: HIVE-26661
> URL: https://issues.apache.org/jira/browse/HIVE-26661
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0-alpha-1
>Reporter: Wechar
>Assignee: Wechar
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> HMS stores partition values as strings in the backend database, so it
> supports equality filters for string (including date) and integer types, and
> other filters (for example, greater than) only for the string type.
> The char and varchar types can also be considered strings, so we can support
> partition filters for them on HMS.





[jira] [Work logged] (HIVE-26687) INSERT query with array type failing with SemanticException

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26687?focusedWorklogId=841847&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841847
 ]

ASF GitHub Bot logged work on HIVE-26687:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 20:15
Start Date: 26/Jan/23 20:15
Worklog Time Spent: 10m 
  Work Description: soumyakanti3578 commented on code in PR #3923:
URL: https://github.com/apache/hive/pull/3923#discussion_r1088298367


##
ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java:
##
@@ -8722,8 +8722,18 @@ private ExprNodeDesc handleConversion(StructField 
tableField, ColumnInfo rowFiel
   // need to do some conversions here
   conversion.set(true);
   if (tableFieldTypeInfo.getCategory() != Category.PRIMITIVE) {
-// cannot convert to complex types
-column = null;
+// handle array in case of complex types
+String array_type_prefix = "array<";

Review Comment:
   nit: For variable names, please use camelCase. 





Issue Time Tracking
---

Worklog Id: (was: 841847)
Time Spent: 40m  (was: 0.5h)

> INSERT query with array type failing with SemanticException
> -
>
> Key: HIVE-26687
> URL: https://issues.apache.org/jira/browse/HIVE-26687
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Manthan B Y
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> *Steps to reproduce:*
> {code:java}
> DROP TABLE IF EXISTS default.tbl_oGSJ;
> CREATE TABLE default.tbl_oGSJ (c1 array);
> INSERT INTO default.tbl_oGSJ(c1) VALUES (array(55,54)); {code}
> *Error:*
> {code:java}
> Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 
> Cannot insert into target table because column number/types are different 
> 'TOK_TMP_FILE': Cannot convert column 0 from array to array. 
> (state=42000,code=4) {code}
> The same is the case for bigint and tinyint as well.
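
The fix discussed in the review above hinges on recognizing array type names textually (the `array<` prefix in the snippet). A hedged standalone sketch of that kind of check (helper names are hypothetical, not the actual SemanticAnalyzer code):

```java
public class ArrayTypeCheckSketch {
    private static final String ARRAY_TYPE_PREFIX = "array<";

    // Recognize a Hive type name such as "array<int>" textually.
    static boolean isArrayType(String typeName) {
        return typeName.startsWith(ARRAY_TYPE_PREFIX) && typeName.endsWith(">");
    }

    // Extract the element type, e.g. "int" from "array<int>".
    static String elementType(String typeName) {
        return typeName.substring(ARRAY_TYPE_PREFIX.length(), typeName.length() - 1);
    }

    public static void main(String[] args) {
        System.out.println(isArrayType("array<int>") + " " + elementType("array<int>"));
    }
}
```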





[jira] [Work logged] (HIVE-26537) Deprecate older APIs in the HMS

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26537?focusedWorklogId=841846&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841846
 ]

ASF GitHub Bot logged work on HIVE-26537:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 20:02
Start Date: 26/Jan/23 20:02
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3599:
URL: https://github.com/apache/hive/pull/3599#issuecomment-1405576629

   Kudos, SonarCloud Quality Gate passed!
   [Quality Gate passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3599)

   [1 Bug](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3599&resolved=false&types=BUG)
   [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3599&resolved=false&types=VULNERABILITY)
   [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3599&resolved=false&types=SECURITY_HOTSPOT)
   [92 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3599&resolved=false&types=CODE_SMELL)
   No Coverage information
   No Duplication information
   
   




Issue Time Tracking
---

Worklog Id: (was: 841846)
Time Spent: 4h 50m  (was: 4h 40m)

> Deprecate older APIs in the HMS
> ---
>
> Key: HIVE-26537
> URL: https://issues.apache.org/jira/browse/HIVE-26537
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2
>Reporter: Sai Hemanth Gantasala
>Assignee: Sai Hemanth Gantasala
>Priority: Critical
>  Labels: hive-4.0.0-must, pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> This Jira is to track the clean-up work (deprecate older APIs and point the
> HMS client to the newer APIs) in the Hive Metastore server.
> More details will be added here soon.





[jira] [Work logged] (HIVE-26957) Add convertCharset(s, from, to) function

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26957?focusedWorklogId=841845&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841845
 ]

ASF GitHub Bot logged work on HIVE-26957:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 19:46
Start Date: 26/Jan/23 19:46
Worklog Time Spent: 10m 
  Work Description: soumyakanti3578 commented on PR #3982:
URL: https://github.com/apache/hive/pull/3982#issuecomment-1405537138

   There are also some unused imports which should be removed. It's also a good 
idea to check the [code 
smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3982&resolved=false&types=CODE_SMELL)
 and resolve them whenever possible.




Issue Time Tracking
---

Worklog Id: (was: 841845)
Time Spent: 1h 20m  (was: 1h 10m)

> Add convertCharset(s, from, to) function
> 
>
> Key: HIVE-26957
> URL: https://issues.apache.org/jira/browse/HIVE-26957
> Project: Hive
>  Issue Type: New Feature
>Reporter: Bingye Chen
>Assignee: Bingye Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Add convertCharset(s, from, to) function.
> The function converts the string `s` from the `from` charset to the `to` 
> charset. It is already implemented in ClickHouse.
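
Independent of Hive's UDF plumbing, the core conversion can be sketched with java.nio (a simplified sketch under assumed semantics, not the actual GenericUDFConvertCharset code; error handling omitted):

```java
import java.nio.ByteBuffer;
import java.nio.charset.Charset;

public class ConvertCharsetSketch {
    // Re-encode a string: decode its bytes in the 'from' charset, then
    // encode the characters into the 'to' charset.
    static String convertCharset(String s, String from, String to) {
        Charset fromCs = Charset.forName(from);
        Charset toCs = Charset.forName(to);
        ByteBuffer encoded = toCs.encode(fromCs.decode(ByteBuffer.wrap(s.getBytes(fromCs))));
        byte[] out = new byte[encoded.remaining()];
        encoded.get(out);
        return new String(out, toCs);
    }

    public static void main(String[] args) {
        // ASCII text survives a UTF-8 -> US-ASCII round trip unchanged.
        System.out.println(convertCharset("TestConvertCharset1", "UTF-8", "US-ASCII"));
    }
}
```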





[jira] [Work logged] (HIVE-26957) Add convertCharset(s, from, to) function

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26957?focusedWorklogId=841843&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841843
 ]

ASF GitHub Bot logged work on HIVE-26957:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 19:37
Start Date: 26/Jan/23 19:37
Worklog Time Spent: 10m 
  Work Description: soumyakanti3578 commented on code in PR #3982:
URL: https://github.com/apache/hive/pull/3982#discussion_r1088264787


##
ql/src/test/queries/clientpositive/udf_convert_charset.q:
##
@@ -0,0 +1,9 @@
+DESCRIBE FUNCTION convertCharset;
+DESC FUNCTION EXTENDED convertCharset;
+
+explain select convertCharset('TestConvertCharset1', 'UTF-8', 'US-ASCII');
+
+select
+convertCharset('TestConvertCharset1', 'UTF-8', 'US-ASCII'),
+convertCharset('TestConvertCharset2', cast('UTF-8' as varchar(10)), 
'US-ASCII'),
+convertCharset('TestConvertCharset3', cast('UTF-8' as char(5)), 'US-ASCII');

Review Comment:
   nit: please add a new line at the end of the file





Issue Time Tracking
---

Worklog Id: (was: 841843)
Time Spent: 1h 10m  (was: 1h)

> Add convertCharset(s, from, to) function
> 
>
> Key: HIVE-26957
> URL: https://issues.apache.org/jira/browse/HIVE-26957
> Project: Hive
>  Issue Type: New Feature
>Reporter: Bingye Chen
>Assignee: Bingye Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Add convertCharset(s, from, to) function.
> The function converts the string `s` from the `from` charset to the `to` 
> charset. It is already implemented in ClickHouse.





[jira] [Work logged] (HIVE-26957) Add convertCharset(s, from, to) function

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26957?focusedWorklogId=841842&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841842
 ]

ASF GitHub Bot logged work on HIVE-26957:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 19:34
Start Date: 26/Jan/23 19:34
Worklog Time Spent: 10m 
  Work Description: soumyakanti3578 commented on code in PR #3982:
URL: https://github.com/apache/hive/pull/3982#discussion_r1088261639


##
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConvertCharset.java:
##
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.udf.generic;
+
+import java.nio.ByteBuffer;
+import java.nio.CharBuffer;
+import java.nio.charset.CharacterCodingException;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetDecoder;
+import java.nio.charset.CharsetEncoder;
+import java.nio.charset.CodingErrorAction;
+
+import org.apache.hadoop.hive.ql.exec.Description;
+import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
+import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
+import org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
+import org.apache.hadoop.hive.serde2.objectinspector.ConstantObjectInspector;
+import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
+import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector.Category;
+import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
+import 
org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
+import 
org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils;
+import 
org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.PrimitiveGrouping;
+
+@Description(name = "convertCharset", value = "_FUNC_(str, str, str) - 
Converts the first argument from the second argument character set to the third 
argument character set", extended =
+"Possible options for the character set are 'US-ASCII', 'ISO-8859-1',\n"
++ "'UTF-8', 'UTF-16BE', 'UTF-16LE', and 'UTF-16'. If either argument\n"
++ "is null, the result will also be null") public class 
GenericUDFConvertCharset extends GenericUDF {
+  private transient CharsetEncoder encoder = null;
+  private transient CharsetDecoder decoder = null;
+  private transient PrimitiveObjectInspector stringOI = null;
+  private transient PrimitiveObjectInspector fromCharsetOI = null;
+  private transient PrimitiveObjectInspector toCharsetOI = null;
+
+  @Override public ObjectInspector initialize(ObjectInspector[] arguments) 
throws UDFArgumentException {
+if (arguments.length != 3) {
+  throw new UDFArgumentLengthException("ConvertCharset() requires exactly 
three arguments");
+}
+
+if (arguments[0].getCategory() != ObjectInspector.Category.PRIMITIVE
+|| PrimitiveObjectInspectorUtils.PrimitiveGrouping.STRING_GROUP
+!= PrimitiveObjectInspectorUtils.getPrimitiveGrouping(
+((PrimitiveObjectInspector) arguments[0]).getPrimitiveCategory())) {
+  throw new UDFArgumentTypeException(0, "The first argument to 
ConvertCharset() must be a string/varchar");
+}
+
+stringOI = (PrimitiveObjectInspector) arguments[0];
+
+if (arguments[1].getCategory() != ObjectInspector.Category.PRIMITIVE
+|| PrimitiveObjectInspectorUtils.PrimitiveGrouping.STRING_GROUP
+!= PrimitiveObjectInspectorUtils.getPrimitiveGrouping(
+((PrimitiveObjectInspector) arguments[1]).getPrimitiveCategory())) {
+  throw new UDFArgumentTypeException(1, "The second argument to 
ConvertCharset() must be a string/varchar");
+}
+
+fromCharsetOI = (PrimitiveObjectInspector) arguments[1];
+
+if (arguments[2].getCategory() != ObjectInspector.Category.PRIMITIVE
+|| PrimitiveObjectInspectorUtils.PrimitiveGrouping.STRING_GROUP
+!= PrimitiveObjectInspectorUtils.getPrimitiveGrouping(
+((PrimitiveObjectInspec

[jira] [Updated] (HIVE-26995) Iceberg: Enhance time travel syntax with expressions

2023-01-26 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-26995:
--
Description: 
Allow expressions in time travel queries, such as 
{code}
FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP - interval '10' hours
{code}

  was:
Use expressions in time travel queries, such as 
{code}
FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP - interval '10' hours
{code}


> Iceberg: Enhance time travel syntax with expressions
> 
>
> Key: HIVE-26995
> URL: https://issues.apache.org/jira/browse/HIVE-26995
> Project: Hive
>  Issue Type: Task
>Reporter: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Allow expressions in time travel queries, such as 
> {code}
> FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP - interval '10' hours
> {code}
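
An expression such as `CURRENT_TIMESTAMP - interval '10' hours` has to be folded to a single constant instant at compile time before the snapshot lookup. A hedged sketch of just that arithmetic (plain java.time, not Hive's ExprNodeConstantDesc machinery):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class AsOfTimeSketch {
    // Fold "now minus N hours" into the fixed instant a time-travel
    // snapshot lookup would receive.
    static Instant asOf(Instant now, long hoursBack) {
        return now.minus(hoursBack, ChronoUnit.HOURS);
    }

    public static void main(String[] args) {
        System.out.println(asOf(Instant.parse("2023-01-27T10:00:00Z"), 10));
    }
}
```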





[jira] [Updated] (HIVE-26995) Iceberg: Enhance time travel syntax with expressions

2023-01-26 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-26995:
--
Description: 
Use expressions in time travel queries, such as 
{code}
FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP - interval '10' hours
{code}

> Iceberg: Enhance time travel syntax with expressions
> 
>
> Key: HIVE-26995
> URL: https://issues.apache.org/jira/browse/HIVE-26995
> Project: Hive
>  Issue Type: Task
>Reporter: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Use expressions in time travel queries, such as 
> {code}
> FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP - interval '10' hours
> {code}





[jira] [Updated] (HIVE-26995) Iceberg: Enhance time travel syntax with expressions

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-26995:
--
Labels: pull-request-available  (was: )

> Iceberg: Enhance time travel syntax with expressions
> 
>
> Key: HIVE-26995
> URL: https://issues.apache.org/jira/browse/HIVE-26995
> Project: Hive
>  Issue Type: Task
>Reporter: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-26995) Iceberg: Enhance time travel syntax with expressions

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26995?focusedWorklogId=841840&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841840
 ]

ASF GitHub Bot logged work on HIVE-26995:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 19:26
Start Date: 26/Jan/23 19:26
Worklog Time Spent: 10m 
  Work Description: deniskuzZ opened a new pull request, #3988:
URL: https://github.com/apache/hive/pull/3988

   
   
   ### What changes were proposed in this pull request?
   
   
   
   ### Why are the changes needed?
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   
   ### How was this patch tested?
   
   




Issue Time Tracking
---

Worklog Id: (was: 841840)
Remaining Estimate: 0h
Time Spent: 10m

> Iceberg: Enhance time travel syntax with expressions
> 
>
> Key: HIVE-26995
> URL: https://issues.apache.org/jira/browse/HIVE-26995
> Project: Hive
>  Issue Type: Task
>Reporter: Denys Kuzmenko
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-26994) Upgrade DBCP to DBCP2 in branch-3

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26994?focusedWorklogId=841838&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841838
 ]

ASF GitHub Bot logged work on HIVE-26994:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 19:18
Start Date: 26/Jan/23 19:18
Worklog Time Spent: 10m 
  Work Description: Aggarwal-Raghav opened a new pull request, #3987:
URL: https://github.com/apache/hive/pull/3987

   
   ### What changes were proposed in this pull request?
   
   Upgrade commons-dbcp to commons-dbcp2 version 2.7.0.
   
   
   ### Why are the changes needed?
   
The commons-dbcp version in branch-3 is antiquated, so it is upgraded to commons-dbcp2.
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   NO
   
   
   ### How was this patch tested?
   
   On local machine
   




Issue Time Tracking
---

Worklog Id: (was: 841838)
Remaining Estimate: 0h
Time Spent: 10m

> Upgrade DBCP to DBCP2 in branch-3
> -
>
> Key: HIVE-26994
> URL: https://issues.apache.org/jira/browse/HIVE-26994
> Project: Hive
>  Issue Type: Improvement
>Reporter: Raghav Aggarwal
>Assignee: Raghav Aggarwal
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (HIVE-26994) Upgrade DBCP to DBCP2 in branch-3

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-26994:
--
Labels: pull-request-available  (was: )

> Upgrade DBCP to DBCP2 in branch-3
> -
>
> Key: HIVE-26994
> URL: https://issues.apache.org/jira/browse/HIVE-26994
> Project: Hive
>  Issue Type: Improvement
>Reporter: Raghav Aggarwal
>Assignee: Raghav Aggarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (HIVE-26994) Upgrade DBCP to DBCP2 in branch-3

2023-01-26 Thread Raghav Aggarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raghav Aggarwal updated HIVE-26994:
---
Affects Version/s: (was: 3.2.0)

> Upgrade DBCP to DBCP2 in branch-3
> -
>
> Key: HIVE-26994
> URL: https://issues.apache.org/jira/browse/HIVE-26994
> Project: Hive
>  Issue Type: Improvement
>Reporter: Raghav Aggarwal
>Assignee: Raghav Aggarwal
>Priority: Major
>






[jira] [Assigned] (HIVE-26994) Upgrade DBCP to DBCP2 in branch-3

2023-01-26 Thread Raghav Aggarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raghav Aggarwal reassigned HIVE-26994:
--


> Upgrade DBCP to DBCP2 in branch-3
> -
>
> Key: HIVE-26994
> URL: https://issues.apache.org/jira/browse/HIVE-26994
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Raghav Aggarwal
>Assignee: Raghav Aggarwal
>Priority: Major
>






[jira] [Work logged] (HIVE-26989) Fix predicate pushdown for Timestamp with TZ

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26989?focusedWorklogId=841836&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841836
 ]

ASF GitHub Bot logged work on HIVE-26989:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 18:58
Start Date: 26/Jan/23 18:58
Worklog Time Spent: 10m 
  Work Description: ramesh0201 merged PR #3985:
URL: https://github.com/apache/hive/pull/3985




Issue Time Tracking
---

Worklog Id: (was: 841836)
Time Spent: 40m  (was: 0.5h)

> Fix predicate pushdown for Timestamp with TZ
> 
>
> Key: HIVE-26989
> URL: https://issues.apache.org/jira/browse/HIVE-26989
> Project: Hive
>  Issue Type: Task
>  Components: Hive, Iceberg integration
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Running a query which is filtering for {{TIMESTAMP WITH LOCAL TIME ZONE}} 
> returns the correct results but the predicate is not pushed to Iceberg.





[jira] [Work logged] (HIVE-26035) Explore moving to directsql for ObjectStore::addPartitions

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26035?focusedWorklogId=841829&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841829
 ]

ASF GitHub Bot logged work on HIVE-26035:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 18:34
Start Date: 26/Jan/23 18:34
Worklog Time Spent: 10m 
  Work Description: saihemanth-cloudera commented on code in PR #3905:
URL: https://github.com/apache/hive/pull/3905#discussion_r1087110063


##
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java:
##
@@ -692,6 +692,8 @@ public enum ConfVars {
 "Default transaction isolation level for identity generation."),
 
DATANUCLEUS_USE_LEGACY_VALUE_STRATEGY("datanucleus.rdbms.useLegacyNativeValueStrategy",
 "datanucleus.rdbms.useLegacyNativeValueStrategy", true, ""),
+DATANUCLEUS_QUERY_SQL_ALLOWALL("datanucleus.query.sql.allowAll", 
"datanucleus.query.sql.allowAll",
+true, "Allow insert, update and delete operations from JDO SQL"),

Review Comment:
   Can this description be more detailed? Like the performance impact of this 
config.



##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DirectSqlInsertPart.java:
##
@@ -0,0 +1,835 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.metastore;
+
+import static org.apache.commons.lang3.StringUtils.repeat;
+import static org.apache.hadoop.hive.metastore.Batchable.NO_BATCHING;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import javax.jdo.PersistenceManager;
+
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.model.MColumnDescriptor;
+import org.apache.hadoop.hive.metastore.model.MFieldSchema;
+import org.apache.hadoop.hive.metastore.model.MOrder;
+import org.apache.hadoop.hive.metastore.model.MPartition;
+import org.apache.hadoop.hive.metastore.model.MPartitionColumnPrivilege;
+import org.apache.hadoop.hive.metastore.model.MPartitionPrivilege;
+import org.apache.hadoop.hive.metastore.model.MSerDeInfo;
+import org.apache.hadoop.hive.metastore.model.MStorageDescriptor;
+import org.apache.hadoop.hive.metastore.model.MStringList;
+import org.datanucleus.ExecutionContext;
+import org.datanucleus.api.jdo.JDOPersistenceManager;
+import org.datanucleus.identity.DatastoreId;
+import org.datanucleus.metadata.AbstractClassMetaData;
+import org.datanucleus.metadata.IdentityType;
+
+/**
+ * This class contains the methods to insert into tables on the underlying 
database using direct SQL
+ */
+class DirectSqlInsertPart {
+  private final PersistenceManager pm;
+  private final DatabaseProduct dbType;
+  private final int batchSize;
+
+  public DirectSqlInsertPart(PersistenceManager pm, DatabaseProduct dbType, 
int batchSize) {
+this.pm = pm;
+this.dbType = dbType;
+this.batchSize = batchSize;
+  }
+
+  /**
+   * Interface to execute multiple row insert query in batch for direct SQL
+   */
+  interface BatchExecutionContext {
+void execute(String batchQueryText, int batchRowCount, int 
batchParamCount) throws MetaException;
+  }
+
+  private Long getDataStoreId(Class modelClass) throws MetaException {

Review Comment:
   Can you add java docs for all the newly added methods?
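
The BatchExecutionContext quoted above revolves around chunking many rows into multi-row INSERT statements. A simplified, hypothetical sketch of that batching (query text with `?` parameter markers only; not the actual DirectSqlInsertPart logic):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BatchInsertSketch {
    // Split rowCount rows into chunks of at most batchSize and build one
    // multi-row INSERT statement per chunk.
    static List<String> buildBatches(String table, int colCount, int rowCount, int batchSize) {
        String row = "(" + String.join(", ", Collections.nCopies(colCount, "?")) + ")";
        List<String> queries = new ArrayList<>();
        for (int start = 0; start < rowCount; start += batchSize) {
            int rows = Math.min(batchSize, rowCount - start);
            queries.add("INSERT INTO " + table + " VALUES "
                + String.join(", ", Collections.nCopies(rows, row)));
        }
        return queries;
    }

    public static void main(String[] args) {
        // 5 rows with batchSize 2 -> 3 statements (2 + 2 + 1 rows).
        System.out.println(buildBatches("PARTITIONS", 2, 5, 2));
    }
}
```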



##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DirectSqlInsertPart.java:
##
@@ -0,0 +1,835 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ 

[jira] [Work logged] (HIVE-26988) Apache website add redirects to search engine cached pages

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26988?focusedWorklogId=841827&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841827
 ]

ASF GitHub Bot logged work on HIVE-26988:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 18:16
Start Date: 26/Jan/23 18:16
Worklog Time Spent: 10m 
  Work Description: simhadri-g commented on code in PR #3:
URL: https://github.com/apache/hive-site/pull/3#discussion_r1088190111


##
content/Developement/gettingStarted.md:
##
@@ -2,6 +2,7 @@
 title: "GettingStarted"
 date: 2023-01-10T12:35:11+05:30
 draft: false
+aliases: [/GettingStarted]

Review Comment:
   You are right, the alias is not needed for the Getting Started page. But I 
think we will need it for the other pages, at least until Google crawls the 
hive website and updates its search index.
   
   The second part, pointing the wiki to 
https://cwiki.apache.org/confluence/display/Hive/GettingStarted  is being 
addressed here by Mahesh - https://issues.apache.org/jira/browse/HIVE-26983
   
   Thanks! 





Issue Time Tracking
---

Worklog Id: (was: 841827)
Time Spent: 1h  (was: 50m)

> Apache website add redirects to search engine cached pages
> --
>
> Key: HIVE-26988
> URL: https://issues.apache.org/jira/browse/HIVE-26988
> Project: Hive
>  Issue Type: Bug
>  Components: Website
>Reporter: Simhadri Govindappa
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> 1. Some of the links are broken. 
> 2. The search engine has cached a few pages - 
> Example: search engine points to [https://hive.apache.org/mailing_lists.html] 
> but this page is moved to [https://hive.apache.org//community/mailinglists/] .
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26988) Apache website add redirects to search engine cached pages

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26988?focusedWorklogId=841826&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841826
 ]

ASF GitHub Bot logged work on HIVE-26988:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 18:14
Start Date: 26/Jan/23 18:14
Worklog Time Spent: 10m 
  Work Description: simhadri-g commented on code in PR #3:
URL: https://github.com/apache/hive-site/pull/3#discussion_r1088190111


##
content/Developement/gettingStarted.md:
##
@@ -2,6 +2,7 @@
 title: "GettingStarted"
 date: 2023-01-10T12:35:11+05:30
 draft: false
+aliases: [/GettingStarted]

Review Comment:
   You are right, the aliases are not needed for the Getting Started page. We 
need them for the other pages, at least until Google crawls the hive website 
and updates its search index.
   
   Pointing the wiki to 
https://cwiki.apache.org/confluence/display/Hive/GettingStarted  is being 
addressed here by Mahesh - https://issues.apache.org/jira/browse/HIVE-26983
   
   Thanks! 





Issue Time Tracking
---

Worklog Id: (was: 841826)
Time Spent: 50m  (was: 40m)

> Apache website add redirects to search engine cached pages
> --
>
> Key: HIVE-26988
> URL: https://issues.apache.org/jira/browse/HIVE-26988
> Project: Hive
>  Issue Type: Bug
>  Components: Website
>Reporter: Simhadri Govindappa
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> 1. Some of the links are broken. 
> 2. The search engine has cached a few pages - 
> Example: search engine points to [https://hive.apache.org/mailing_lists.html] 
> but this page is moved to [https://hive.apache.org//community/mailinglists/] .
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-26928) LlapIoImpl::getParquetFooterBuffersFromCache throws exception when metadata cache is disabled

2023-01-26 Thread Simhadri Govindappa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simhadri Govindappa resolved HIVE-26928.

Resolution: Fixed

Change merged to master. 
Thank you [~dkuzmenko]  and [~ayushtkn]  for the review!!!

> LlapIoImpl::getParquetFooterBuffersFromCache throws exception when metadata 
> cache is disabled
> -
>
> Key: HIVE-26928
> URL: https://issues.apache.org/jira/browse/HIVE-26928
> Project: Hive
>  Issue Type: Improvement
>  Components: Iceberg integration
>Reporter: Rajesh Balamohan
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When metadata / LLAP cache is disabled, "iceberg + parquet" throws the 
> following error. "{color:#5a656d}hive.llap.io.memory.mode=none"{color}
> It should check for "metadatacache" correctly or fix it in LlapIoImpl.
>  
> {noformat}
> Caused by: java.lang.NullPointerException: Metadata cache must not be null
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:897)
>     at 
> org.apache.hadoop.hive.llap.io.api.impl.LlapIoImpl.getParquetFooterBuffersFromCache(LlapIoImpl.java:467)
>     at 
> org.apache.iceberg.mr.hive.vector.HiveVectorizedReader.parquetRecordReader(HiveVectorizedReader.java:227)
>     at 
> org.apache.iceberg.mr.hive.vector.HiveVectorizedReader.reader(HiveVectorizedReader.java:162)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65)
>     at 
> org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77)
>     at 
> org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196)
>     at 
> org.apache.iceberg.mr.mapreduce.IcebergInputFormat$IcebergRecordReader.openVectorized(IcebergInputFormat.java:331)
>     at 
> org.apache.iceberg.mr.mapreduce.IcebergInputFormat$IcebergRecordReader.open(IcebergInputFormat.java:377)
>     at 
> org.apache.iceberg.mr.mapreduce.IcebergInputFormat$IcebergRecordReader.nextTask(IcebergInputFormat.java:270)
>     at 
> org.apache.iceberg.mr.mapreduce.IcebergInputFormat$IcebergRecordReader.initialize(IcebergInputFormat.java:266)
>     at 
> org.apache.iceberg.mr.mapred.AbstractMapredIcebergRecordReader.(AbstractMapredIcebergRecordReader.java:40)
>     at 
> org.apache.iceberg.mr.hive.vector.HiveIcebergVectorizedRecordReader.(HiveIcebergVectorizedRecordReader.java:41)
>  {noformat}
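The stack trace above shows `Preconditions.checkNotNull` firing because the metadata cache is `null` when `hive.llap.io.memory.mode=none`. The defensive pattern the ticket suggests can be sketched as treating a disabled cache as a miss rather than a fatal error; `FooterCacheGuard` and its method are illustrative names under that assumption, not Hive's actual classes:

```java
import java.util.Optional;

public class FooterCacheGuard {
  // Stand-in for LLAP's metadata cache; null models hive.llap.io.memory.mode=none.
  private final Object metadataCache;

  public FooterCacheGuard(Object metadataCache) {
    this.metadataCache = metadataCache;
  }

  /**
   * Returns a cached footer only when caching is enabled; a disabled cache is
   * treated as a miss instead of a Preconditions failure.
   */
  public Optional<byte[]> cachedFooter(byte[] freshFooter) {
    if (metadataCache == null) {
      return Optional.empty(); // caller falls back to reading the footer directly
    }
    return Optional.of(freshFooter); // placeholder for a real cache lookup
  }
}
```

Returning `Optional.empty()` lets the Parquet reader degrade gracefully to a direct footer read instead of throwing an NPE.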



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26957) Add convertCharset(s, from, to) function

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26957?focusedWorklogId=841746&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841746
 ]

ASF GitHub Bot logged work on HIVE-26957:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 13:52
Start Date: 26/Jan/23 13:52
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3982:
URL: https://github.com/apache/hive/pull/3982#issuecomment-1405036525

   Kudos, SonarCloud Quality Gate passed!    [![Quality Gate 
passed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/passed-16px.png
 'Quality Gate 
passed')](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3982)
   
   
[![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png
 
'Bug')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3982&resolved=false&types=BUG)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3982&resolved=false&types=BUG)
 [0 
Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3982&resolved=false&types=BUG)
  
   
[![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png
 
'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3982&resolved=false&types=VULNERABILITY)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3982&resolved=false&types=VULNERABILITY)
 [0 
Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3982&resolved=false&types=VULNERABILITY)
  
   [![Security 
Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png
 'Security 
Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3982&resolved=false&types=SECURITY_HOTSPOT)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3982&resolved=false&types=SECURITY_HOTSPOT)
 [0 Security 
Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3982&resolved=false&types=SECURITY_HOTSPOT)
  
   [![Code 
Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png
 'Code 
Smell')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3982&resolved=false&types=CODE_SMELL)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3982&resolved=false&types=CODE_SMELL)
 [9 Code 
Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3982&resolved=false&types=CODE_SMELL)
   
   [![No Coverage 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/NoCoverageInfo-16px.png
 'No Coverage 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3982&metric=coverage&view=list)
 No Coverage information  
   [![No Duplication 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/NoDuplicationInfo-16px.png
 'No Duplication 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3982&metric=duplicated_lines_density&view=list)
 No Duplication information
   
   




Issue Time Tracking
---

Worklog Id: (was: 841746)
Time Spent: 50m  (was: 40m)

> Add convertCharset(s, from, to) function
> 
>
> Key: HIVE-26957
> URL: https://issues.apache.org/jira/browse/HIVE-26957
> Project: Hive
>  Issue Type: New Feature
>Reporter: Bingye Chen
>Assignee: Bingye Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Add convertCharset(s, from, to) function.
> The function converts the string `s` from the `from` charset to the `to` 
> charset.It is already implemented in clickhouse.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26935) Expose root cause of MetaException to client sides

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26935?focusedWorklogId=841732&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841732
 ]

ASF GitHub Bot logged work on HIVE-26935:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 12:39
Start Date: 26/Jan/23 12:39
Worklog Time Spent: 10m 
  Work Description: zabetak commented on code in PR #3938:
URL: https://github.com/apache/hive/pull/3938#discussion_r1087810684


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java:
##
@@ -203,11 +203,12 @@ public Result invokeInternal(final Object proxy, final 
Method method, final Obje
 }
   }
 
-  if (retryCount >= retryLimit) {
+  Throwable rootCause = ExceptionUtils.getRootCause(caughtException);
+  String errorMessage = ExceptionUtils.getMessage(caughtException) +

Review Comment:
   Users are not necessarily developers; throwing exceptions back to them 
should not be the norm. Consider for instance some of the most popular DBMSs 
(Postgres, Oracle, MSSQL, etc.); I don't think any of them exposes raw 
exceptions to the user.
   
   This is a minor comment, not really blocking, but I wanted to mention that 
propagating exceptions to the client is not the ideal solution.





Issue Time Tracking
---

Worklog Id: (was: 841732)
Time Spent: 1h 50m  (was: 1h 40m)

> Expose root cause of MetaException to client sides
> --
>
> Key: HIVE-26935
> URL: https://issues.apache.org/jira/browse/HIVE-26935
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0-alpha-2
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> MetaException is generated by thrift, and only the {{message}} field will be 
> transported to the client. We should expose the root cause in the message to 
> the clients, with the following advantages:
>  * More friendly for user troubleshooting
>  * Some root causes are unrecoverable; exposing them can avoid unnecessary 
> retries.
> *How to Reproduce:*
>  - Step 1: Disable direct sql for HMS for our test case.
>  - Step 2: Add an illegal {{PART_COL_STATS}} for a partition,
>  - Step 3: Try to {{drop table}} with Spark.
> The exception in Hive metastore is:
> {code:sh}
> 2023-01-11T17:13:51,259 ERROR [Metastore-Handler-Pool: Thread-39]: 
> metastore.ObjectStore (ObjectStore.java:run(4369)) - 
> javax.jdo.JDOUserException: One or more instances could not be deleted
> at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:625)
>  ~[datanucleus-api-jdo-5.2.8.jar:?]
> at 
> org.datanucleus.api.jdo.JDOQuery.deletePersistentInternal(JDOQuery.java:530) 
> ~[datanucleus-api-jdo-5.2.8.jar:?]
> at 
> org.datanucleus.api.jdo.JDOQuery.deletePersistentAll(JDOQuery.java:499) 
> ~[datanucleus-api-jdo-5.2.8.jar:?]
> at 
> org.apache.hadoop.hive.metastore.QueryWrapper.deletePersistentAll(QueryWrapper.java:108)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsNoTxn(ObjectStore.java:4207)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.access$1000(ObjectStore.java:285)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$7.run(ObjectStore.java:3086) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.Batchable.runBatched(Batchable.java:74) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsViaJdo(ObjectStore.java:3074)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.access$400(ObjectStore.java:285) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:3058)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:3050)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:4362)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsInternal(ObjectStore.java:3061)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitions(ObjectStore.java:3040)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>

[jira] [Work logged] (HIVE-26935) Expose root cause of MetaException to client sides

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26935?focusedWorklogId=841729&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841729
 ]

ASF GitHub Bot logged work on HIVE-26935:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 12:33
Start Date: 26/Jan/23 12:33
Worklog Time Spent: 10m 
  Work Description: zabetak commented on code in PR #3938:
URL: https://github.com/apache/hive/pull/3938#discussion_r1087803841


##
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java:
##
@@ -264,15 +264,18 @@ public Object run() throws MetaException {
 return ret;
   }
 
-  private static boolean isRecoverableMetaException(MetaException e) {
-String m = e.getMessage();
-if (m == null) {
+  public static boolean isRecoverableMessage(String exceptionMsg) {

Review Comment:
   I would rather leave things as they are right now, as far as this method is 
concerned. If we want to change the behavior of the server, let's discuss it 
under a dedicated JIRA.





Issue Time Tracking
---

Worklog Id: (was: 841729)
Time Spent: 1h 40m  (was: 1.5h)

> Expose root cause of MetaException to client sides
> --
>
> Key: HIVE-26935
> URL: https://issues.apache.org/jira/browse/HIVE-26935
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0-alpha-2
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> MetaException is generated by thrift, and only the {{message}} field will be 
> transported to the client. We should expose the root cause in the message to 
> the clients, with the following advantages:
>  * More friendly for user troubleshooting
>  * Some root causes are unrecoverable; exposing them can avoid unnecessary 
> retries.
> *How to Reproduce:*
>  - Step 1: Disable direct sql for HMS for our test case.
>  - Step 2: Add an illegal {{PART_COL_STATS}} for a partition,
>  - Step 3: Try to {{drop table}} with Spark.
> The exception in Hive metastore is:
> {code:sh}
> 2023-01-11T17:13:51,259 ERROR [Metastore-Handler-Pool: Thread-39]: 
> metastore.ObjectStore (ObjectStore.java:run(4369)) - 
> javax.jdo.JDOUserException: One or more instances could not be deleted
> at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:625)
>  ~[datanucleus-api-jdo-5.2.8.jar:?]
> at 
> org.datanucleus.api.jdo.JDOQuery.deletePersistentInternal(JDOQuery.java:530) 
> ~[datanucleus-api-jdo-5.2.8.jar:?]
> at 
> org.datanucleus.api.jdo.JDOQuery.deletePersistentAll(JDOQuery.java:499) 
> ~[datanucleus-api-jdo-5.2.8.jar:?]
> at 
> org.apache.hadoop.hive.metastore.QueryWrapper.deletePersistentAll(QueryWrapper.java:108)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsNoTxn(ObjectStore.java:4207)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.access$1000(ObjectStore.java:285)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$7.run(ObjectStore.java:3086) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.Batchable.runBatched(Batchable.java:74) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsViaJdo(ObjectStore.java:3074)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.access$400(ObjectStore.java:285) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:3058)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:3050)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:4362)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsInternal(ObjectStore.java:3061)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitions(ObjectStore.java:3040)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_332]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_332]
> at 
> sun.reflect.Dele

[jira] [Work logged] (HIVE-26935) Expose root cause of MetaException to client sides

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26935?focusedWorklogId=841726&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841726
 ]

ASF GitHub Bot logged work on HIVE-26935:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 12:30
Start Date: 26/Jan/23 12:30
Worklog Time Spent: 10m 
  Work Description: zabetak commented on code in PR #3938:
URL: https://github.com/apache/hive/pull/3938#discussion_r1087799467


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java:
##
@@ -203,11 +203,12 @@ public Result invokeInternal(final Object proxy, final 
Method method, final Obje
 }
   }
 
-  if (retryCount >= retryLimit) {
+  Throwable rootCause = ExceptionUtils.getRootCause(caughtException);
+  String errorMessage = ExceptionUtils.getMessage(caughtException) +
+  (rootCause == null ? "" : ("\nRoot cause: " + rootCause));
+  if (!RetryingMetaStoreClient.isRecoverableMessage(errorMessage) ||  
retryCount >= retryLimit) {
 LOG.error("HMSHandler Fatal error: " + 
ExceptionUtils.getStackTrace(caughtException));
-MetaException me = new MetaException(caughtException.toString());
-me.initCause(caughtException);

Review Comment:
   I know, but the exception is more useful for the server than for the 
client, thus it should be complete. The people who will debug the problem are 
looking in the server logs, not in the client logs.
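The diff above concatenates the top-level exception message with its root cause before deciding whether to retry. The root-cause walk that `ExceptionUtils.getRootCause` performs can be sketched in plain Java as follows; this is a simplified stand-in, not the commons-lang implementation:

```java
public class RootCauseMessage {

  /** Walks the cause chain to its deepest element; null when there is none. */
  static Throwable rootCause(Throwable t) {
    Throwable cur = t;
    // Guard against self-referential cause chains to avoid an infinite loop.
    while (cur.getCause() != null && cur.getCause() != cur) {
      cur = cur.getCause();
    }
    return cur == t ? null : cur;
  }

  /** Client-facing message: the top-level summary plus the root cause, if any. */
  static String errorMessage(Throwable caught) {
    String msg = caught.getClass().getSimpleName() + ": " + caught.getMessage();
    Throwable root = rootCause(caught);
    return root == null ? msg : msg + "\nRoot cause: " + root;
  }
}
```

With the root cause appended, the retry logic can match unrecoverable causes (e.g. constraint violations) that the top-level `MetaException` message alone would hide.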





Issue Time Tracking
---

Worklog Id: (was: 841726)
Time Spent: 1.5h  (was: 1h 20m)

> Expose root cause of MetaException to client sides
> --
>
> Key: HIVE-26935
> URL: https://issues.apache.org/jira/browse/HIVE-26935
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0-alpha-2
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> MetaException is generated by thrift, and only {{message}} filed will be 
> transport to client, we should expose the root cause in message to the 
> clients with following advantages:
>  * More friendly for user troubleshooting
>  * Some root cause is unrecoverable, exposing it can disable the unnecessary 
> retry.
> *How to Reproduce:*
>  - Step 1: Disable direct sql for HMS for our test case.
>  - Step 2: Add an illegal {{PART_COL_STATS}} for a partition,
>  - Step 3: Try to {{drop table}} with Spark.
> The exception in Hive metastore is:
> {code:sh}
> 2023-01-11T17:13:51,259 ERROR [Metastore-Handler-Pool: Thread-39]: 
> metastore.ObjectStore (ObjectStore.java:run(4369)) - 
> javax.jdo.JDOUserException: One or more instances could not be deleted
> at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:625)
>  ~[datanucleus-api-jdo-5.2.8.jar:?]
> at 
> org.datanucleus.api.jdo.JDOQuery.deletePersistentInternal(JDOQuery.java:530) 
> ~[datanucleus-api-jdo-5.2.8.jar:?]
> at 
> org.datanucleus.api.jdo.JDOQuery.deletePersistentAll(JDOQuery.java:499) 
> ~[datanucleus-api-jdo-5.2.8.jar:?]
> at 
> org.apache.hadoop.hive.metastore.QueryWrapper.deletePersistentAll(QueryWrapper.java:108)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsNoTxn(ObjectStore.java:4207)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.access$1000(ObjectStore.java:285)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$7.run(ObjectStore.java:3086) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.Batchable.runBatched(Batchable.java:74) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsViaJdo(ObjectStore.java:3074)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.access$400(ObjectStore.java:285) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:3058)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:3050)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:4362)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsInternal(ObjectStore.

[jira] [Work logged] (HIVE-26988) Apache website add redirects to search engine cached pages

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26988?focusedWorklogId=841718&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841718
 ]

ASF GitHub Bot logged work on HIVE-26988:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 11:59
Start Date: 26/Jan/23 11:59
Worklog Time Spent: 10m 
  Work Description: zabetak commented on code in PR #3:
URL: https://github.com/apache/hive-site/pull/3#discussion_r1087749617


##
content/Developement/gettingStarted.md:
##
@@ -2,6 +2,7 @@
 title: "GettingStarted"
 date: 2023-01-10T12:35:11+05:30
 draft: false
+aliases: [/GettingStarted]

Review Comment:
   Was there ever a link:
   `https://hive.apache.org/GettingStarted`?
   
   I just tried the following and it does not work either.
   `https://apache.github.io/hive-site/GettingStarted`



##
content/Developement/gettingStarted.md:
##
@@ -2,6 +2,7 @@
 title: "GettingStarted"
 date: 2023-01-10T12:35:11+05:30
 draft: false
+aliases: [/GettingStarted]

Review Comment:
   I think we don't need an alias. We just need to fix:
   
https://github.com/apache/hive-site/blob/6097c3f2cf8e0186a39f172ac8165509b1457a6e/content/Developement/gettingStarted.md?plain=1#L55
   
   to point to the wiki:
   https://cwiki.apache.org/confluence/display/Hive/GettingStarted





Issue Time Tracking
---

Worklog Id: (was: 841718)
Time Spent: 40m  (was: 0.5h)

> Apache website add redirects to search engine cached pages
> --
>
> Key: HIVE-26988
> URL: https://issues.apache.org/jira/browse/HIVE-26988
> Project: Hive
>  Issue Type: Bug
>  Components: Website
>Reporter: Simhadri Govindappa
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> 1. Some of the links are broken. 
> 2. The search engine has cached a few pages - 
> Example: search engine points to [https://hive.apache.org/mailing_lists.html] 
> but this page is moved to [https://hive.apache.org//community/mailinglists/] .
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26984) Deprecate public HiveConf constructors

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26984?focusedWorklogId=841714&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841714
 ]

ASF GitHub Bot logged work on HIVE-26984:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 11:22
Start Date: 26/Jan/23 11:22
Worklog Time Spent: 10m 
  Work Description: zabetak commented on code in PR #3983:
URL: https://github.com/apache/hive/pull/3983#discussion_r1087707504


##
common/src/java/org/apache/hadoop/hive/conf/HiveConf.java:
##
@@ -6322,29 +6322,67 @@ public ZooKeeperHiveHelper getZKConfig() {
   .trustStorePassword(trustStorePassword).build();
   }
 
+  public static HiveConf create() {
+return new HiveConf();
+  }
+
+  public static HiveConf create(Class cls) {
+return new HiveConf(cls);
+  }
+
+  public static HiveConf create(Configuration other, Class cls) {
+return new HiveConf(other, cls);
+  }
+
+
+  public static HiveConf create(HiveConf other) {
+return new HiveConf(other);
+  }
+
+  /**
+   * Instantiating HiveConf is deprecated. Please use
+   * HiveConf#create() to construct a Configuration,
+   * this method will become private eventually.
+   * @deprecated Please use create method instead.
+   */
   public HiveConf() {

Review Comment:
   Please include the `@Deprecated` annotation; it is useful for some purposes.
   
   The "Instantiating HiveConf is deprecated" phrase is redundant. IDEs and 
compilers will automatically generate similar warnings when this method is called.
   
   The whole "Instantiating...eventually" paragraph concerns deprecation, so it 
should be associated with the `@deprecated` tag.
   
   Based on [1] and the comments above, I would suggest the following:
   ```java
 /**
  * @deprecated This method will become private eventually; Use {@link 
#create()} instead. 
  */
 @Deprecated
 public HiveConf() {
   ```
   
   [1] 
https://docs.oracle.com/javase/7/docs/technotes/guides/javadoc/deprecation/deprecation.html
   



##
common/src/java/org/apache/hadoop/hive/conf/HiveConf.java:
##
@@ -6322,29 +6322,67 @@ public ZooKeeperHiveHelper getZKConfig() {
   .trustStorePassword(trustStorePassword).build();
   }
 
+  public static HiveConf create() {
+return new HiveConf();
+  }
+
+  public static HiveConf create(Class cls) {
+return new HiveConf(cls);
+  }
+
+  public static HiveConf create(Configuration other, Class cls) {
+return new HiveConf(other, cls);
+  }
+
+
+  public static HiveConf create(HiveConf other) {
+return new HiveConf(other);
+  }
+
+  /**
+   * Instantiating HiveConf is deprecated. Please use
+   * HiveConf#create() to construct a Configuration,
+   * this method will become private eventually.

Review Comment:
   If we make those private then we are breaking inheritance, which is fine by 
me, but I wanted to mention it just in case.
   
   For sanity reasons I would also attempt to run the precommit tests with all 
the constructors private to see if there are any hidden reflection calls or 
other exotic usages.





Issue Time Tracking
---

Worklog Id: (was: 841714)
Time Spent: 40m  (was: 0.5h)

> Deprecate public HiveConf constructors
> --
>
> Key: HIVE-26984
> URL: https://issues.apache.org/jira/browse/HIVE-26984
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> From time to time we investigate configuration object problems that are hard 
> to investigate. We can improve this area, e.g. with HIVE-26985, but first, we 
> need to introduce a public static factory method to hook into the creation 
> process. I can see this pattern in another projects as well, like: 
> HBaseConfiguration.
> Creating custom HiveConf subclasses can be useful because putting optional 
> logic (say, if-else branches) into the original HiveConf object's hot 
> codepaths can instantly make it less performant.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26990) Upgrade Iceberg to 1.1.0

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26990?focusedWorklogId=841708&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841708
 ]

ASF GitHub Bot logged work on HIVE-26990:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 10:41
Start Date: 26/Jan/23 10:41
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3968:
URL: https://github.com/apache/hive/pull/3968#issuecomment-1404824743

   Kudos, SonarCloud Quality Gate passed!
   (https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3968)

   Bugs: 0
   Vulnerabilities: 0
   Security Hotspots: 0
   Code Smells: 3
   No Coverage information
   No Duplication information




Issue Time Tracking
---

Worklog Id: (was: 841708)
Remaining Estimate: 0h
Time Spent: 10m

> Upgrade Iceberg to 1.1.0
> 
>
> Key: HIVE-26990
> URL: https://issues.apache.org/jira/browse/HIVE-26990
> Project: Hive
>  Issue Type: Improvement
>  Components: Iceberg integration
>Reporter: Zsolt Miskolczi
>Assignee: Zsolt Miskolczi
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Iceberg integration currently uses Iceberg 1.0.0
> Upgrade it to 1.1.0 to be able to utilise new features.





[jira] [Updated] (HIVE-26990) Upgrade Iceberg to 1.1.0

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-26990:
--
Labels: pull-request-available  (was: )

> Upgrade Iceberg to 1.1.0
> 
>
> Key: HIVE-26990
> URL: https://issues.apache.org/jira/browse/HIVE-26990
> Project: Hive
>  Issue Type: Improvement
>  Components: Iceberg integration
>Reporter: Zsolt Miskolczi
>Assignee: Zsolt Miskolczi
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Iceberg integration currently uses Iceberg 1.0.0
> Upgrade it to 1.1.0 to be able to utilise new features.





[jira] [Assigned] (HIVE-26991) typos in method and field names

2023-01-26 Thread Michal Lorek (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michal Lorek reassigned HIVE-26991:
---


> typos in method and field names
> ---
>
> Key: HIVE-26991
> URL: https://issues.apache.org/jira/browse/HIVE-26991
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning, Query Processor
>Reporter: Michal Lorek
>Assignee: Michal Lorek
>Priority: Minor
>
> typos in
> ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLTask.java
> [ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java|https://github.com/apache/hive/pull/3945/files#diff-ecf0bd4d24a899907e8d368d37d3f4945fd1a323a9da9b607b8444ab9793140d]





[jira] [Work started] (HIVE-26990) Upgrade Iceberg to 1.1.0

2023-01-26 Thread Zsolt Miskolczi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-26990 started by Zsolt Miskolczi.
--
> Upgrade Iceberg to 1.1.0
> 
>
> Key: HIVE-26990
> URL: https://issues.apache.org/jira/browse/HIVE-26990
> Project: Hive
>  Issue Type: Improvement
>  Components: Iceberg integration
>Reporter: Zsolt Miskolczi
>Assignee: Zsolt Miskolczi
>Priority: Major
>
> Iceberg integration currently uses Iceberg 1.0.0
> Upgrade it to 1.1.0 to be able to utilise new features.





[jira] [Assigned] (HIVE-26990) Upgrade Iceberg to 1.1.0

2023-01-26 Thread Zsolt Miskolczi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Miskolczi reassigned HIVE-26990:
--


> Upgrade Iceberg to 1.1.0
> 
>
> Key: HIVE-26990
> URL: https://issues.apache.org/jira/browse/HIVE-26990
> Project: Hive
>  Issue Type: Improvement
>  Components: Iceberg integration
>Reporter: Zsolt Miskolczi
>Assignee: Zsolt Miskolczi
>Priority: Major
>
> Iceberg integration currently uses Iceberg 1.0.0
> Upgrade it to 1.1.0 to be able to utilise new features.





[jira] [Commented] (HIVE-26986) A DAG created by OperatorGraph is not equal to the Tez DAG.

2023-01-26 Thread Seonggon Namgung (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680933#comment-17680933
 ] 

Seonggon Namgung commented on HIVE-26986:
-

The attached image files ("Query71 TezDAG.png" and "Query71 OperatorGraph.png") 
show the Tez DAG and the OperatorGraph of TPC-DS query71.
I set tez.generate.debug.artifacts to obtain a dot file of the Tez DAG.
The OperatorGraph was created after ParallelEdgeFixer was applied.

The number of clusters in the OperatorGraph is 10, but the number of vertices 
in the Tez DAG is 12.
The difference comes from cluster 3 of the OperatorGraph, which contains 3 TS 
operators and a UNION operator.

The current OperatorGraph creates a singleton cluster for each operator and 
merges the parent operator's cluster into the child operator's cluster unless the 
parent operator is a ReduceSink.
As a result, a cluster can end up with multiple root operators, which cannot 
form a single vertex in the Tez DAG.
This mismatch between the Tez DAG and the OperatorGraph produces false positives 
when detecting parallel edges and leads to the insertion of unnecessary 
concentrator ReduceSinks.
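The merging rule described above can be modeled with a small standalone sketch. This is a hypothetical illustration, not Hive's actual OperatorGraph code; the operator names and the "RS" prefix check stand in for a real ReduceSink test:

```java
import java.util.*;

// Sketch of the clustering rule: each operator starts in its own cluster,
// and a parent's cluster is merged into the child's cluster unless the
// parent is a ReduceSink.
public class ClusterSketch {
    public record Op(String name, List<Op> parents) {}

    public static Map<Op, Set<Op>> cluster(List<Op> ops) {
        Map<Op, Set<Op>> clusterOf = new HashMap<>();
        for (Op op : ops) {
            Set<Op> singleton = new HashSet<>();
            singleton.add(op);
            clusterOf.put(op, singleton);
        }
        for (Op op : ops) {
            for (Op parent : op.parents()) {
                if (!parent.name().startsWith("RS")) {  // not a ReduceSink
                    Set<Op> merged = clusterOf.get(op);
                    merged.addAll(clusterOf.get(parent));
                    for (Op member : merged) {
                        clusterOf.put(member, merged);
                    }
                }
            }
        }
        return clusterOf;
    }

    public static void main(String[] args) {
        // Three table scans feeding one UNION, as in query71's cluster 3.
        Op ts1 = new Op("TS1", List.of());
        Op ts2 = new Op("TS2", List.of());
        Op ts3 = new Op("TS3", List.of());
        Op union = new Op("UNION", List.of(ts1, ts2, ts3));
        Map<Op, Set<Op>> c = cluster(List.of(ts1, ts2, ts3, union));
        // All four operators collapse into one cluster with three root
        // (TS) operators -- something a single Tez vertex cannot represent.
        System.out.println(c.get(union).size()); // prints 4
    }
}
```

Under this rule the three TS roots and the UNION end up in one cluster, whereas Tez keeps them as separate vertices, which is exactly the 10-vs-12 discrepancy reported above.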

> A DAG created by OperatorGraph is not equal to the Tez DAG.
> ---
>
> Key: HIVE-26986
> URL: https://issues.apache.org/jira/browse/HIVE-26986
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0-alpha-2
>Reporter: Seonggon Namgung
>Assignee: Seonggon Namgung
>Priority: Major
> Attachments: Query71 OperatorGraph.png, Query71 TezDAG.png
>
>
> A DAG created by OperatorGraph is not equal to the corresponding DAG that is 
> submitted to Tez.
> Because of this problem, ParallelEdgeFixer reports a pair of normal edges to 
> a parallel edge.
> We observe this problem by comparing OperatorGraph and Tez DAG when running 
> TPC-DS query 71 on 1TB ORC format managed table.





[jira] [Updated] (HIVE-26986) A DAG created by OperatorGraph is not equal to the Tez DAG.

2023-01-26 Thread Seonggon Namgung (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seonggon Namgung updated HIVE-26986:

Attachment: Query71 OperatorGraph.png
Query71 TezDAG.png

> A DAG created by OperatorGraph is not equal to the Tez DAG.
> ---
>
> Key: HIVE-26986
> URL: https://issues.apache.org/jira/browse/HIVE-26986
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0-alpha-2
>Reporter: Seonggon Namgung
>Assignee: Seonggon Namgung
>Priority: Major
> Attachments: Query71 OperatorGraph.png, Query71 TezDAG.png
>
>
> A DAG created by OperatorGraph is not equal to the corresponding DAG that is 
> submitted to Tez.
> Because of this problem, ParallelEdgeFixer reports a pair of normal edges to 
> a parallel edge.
> We observe this problem by comparing OperatorGraph and Tez DAG when running 
> TPC-DS query 71 on 1TB ORC format managed table.





[jira] [Work logged] (HIVE-26989) Fix predicate pushdown for Timestamp with TZ

2023-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26989?focusedWorklogId=841680&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841680
 ]

ASF GitHub Bot logged work on HIVE-26989:
-

Author: ASF GitHub Bot
Created on: 26/Jan/23 08:41
Start Date: 26/Jan/23 08:41
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3985:
URL: https://github.com/apache/hive/pull/3985#issuecomment-1404697050

   Kudos, SonarCloud Quality Gate passed!
   (https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3985)

   Bugs: 0
   Vulnerabilities: 0
   Security Hotspots: 0
   Code Smells: 0
   No Coverage information
   No Duplication information




Issue Time Tracking
---

Worklog Id: (was: 841680)
Time Spent: 0.5h  (was: 20m)

> Fix predicate pushdown for Timestamp with TZ
> 
>
> Key: HIVE-26989
> URL: https://issues.apache.org/jira/browse/HIVE-26989
> Project: Hive
>  Issue Type: Task
>  Components: Hive, Iceberg integration
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Running a query which is filtering for {{TIMESTAMP WITH LOCAL TIME ZONE}} 
> returns the correct results but the predicate is not pushed to Iceberg.


