[jira] [Work logged] (HIVE-24063) SqlFunctionConverter#getHiveUDF handles cast before getting FunctionInfo

2020-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24063?focusedWorklogId=499899&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499899
 ]

ASF GitHub Bot logged work on HIVE-24063:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 13/Oct/20 07:58
Start Date: 13/Oct/20 07:58
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk merged pull request #1421:
URL: https://github.com/apache/hive/pull/1421


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 499899)
Time Spent: 20m  (was: 10m)

> SqlFunctionConverter#getHiveUDF handles cast before getting FunctionInfo
> ------------------------------------------------------------------------
>
> Key: HIVE-24063
> URL: https://issues.apache.org/jira/browse/HIVE-24063
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When the current SqlOperator is a SqlCastFunction, FunctionRegistry.getFunctionInfo 
> would return null. But when hive.allow.udf.load.on.demand is enabled, HiveServer2 
> also asks the metastore for the function definition, and an exception stack trace 
> like the following can be seen in the HiveServer2 log:
> INFO exec.FunctionRegistry: Unable to look up default.cast in metastore
> org.apache.hadoop.hive.ql.metadata.HiveException: NoSuchObjectException(message:Function @hive#default.cast does not exist)
>  at org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:5495) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.exec.Registry.getFunctionInfoFromMetastoreNoLock(Registry.java:788) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.exec.Registry.getQualifiedFunctionInfo(Registry.java:657) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.exec.Registry.getFunctionInfo(Registry.java:351) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:597) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.optimizer.calcite.translator.SqlFunctionConverter.getHiveUDF(SqlFunctionConverter.java:158) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.optimizer.calcite.rules.PartitionPrune$ExtractPartPruningPredicate.visitCall(PartitionPrune.java:112) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.optimizer.calcite.rules.PartitionPrune$ExtractPartPruningPredicate.visitCall(PartitionPrune.java:68) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:191) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.optimizer.calcite.rules.PartitionPrune$ExtractPartPruningPredicate.visitCall(PartitionPrune.java:134) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.optimizer.calcite.rules.PartitionPrune$ExtractPartPruningPredicate.visitCall(PartitionPrune.java:68) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:191) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.optimizer.calcite.rules.PartitionPrune$ExtractPartPruningPredicate.visitCall(PartitionPrune.java:134) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.optimizer.calcite.rules.PartitionPrune$ExtractPartPruningPredicate.visitCall(PartitionPrune.java:68) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:191) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.optimizer.calcite.rules.PartitionPrune$ExtractPartPruningPredicate.visitCall(PartitionPrune.java:134) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] 
>  
> So it may be better to handle the explicit cast before getting the FunctionInfo 
> from the Registry. Even if there is no cast in the query, the method 
> handleExplicitCast returns null quickly when op.kind is not SqlKind.CAST.
>  
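A minimal Java sketch of the reordering described in the issue. Only SqlFunctionConverter#getHiveUDF, handleExplicitCast, FunctionRegistry.getFunctionInfo, op.kind, and SqlKind.CAST come from the issue text; the Object return types and the lookupFunctionInfo helper are illustrative stand-ins, and the Calcite classes are assumed to be on the classpath.

  // Hedged sketch: resolve an explicit CAST before consulting the function
  // registry, so hive.allow.udf.load.on.demand never triggers a metastore
  // RPC for "cast". Stand-in types replace Hive's FunctionInfo machinery.
  import org.apache.calcite.rex.RexCall;
  import org.apache.calcite.sql.SqlKind;
  import org.apache.calcite.sql.SqlOperator;

  public final class CastBeforeLookupSketch {

    static Object getHiveUDF(SqlOperator op, RexCall call) {
      // 1. Handle the explicit cast first; this returns null quickly when
      //    op.kind is not SqlKind.CAST, so non-cast calls are unaffected.
      Object castExpr = handleExplicitCast(op, call);
      if (castExpr != null) {
        return castExpr;
      }
      // 2. Only non-cast operators reach the registry lookup, so the
      //    on-demand metastore fallback is never exercised for "cast".
      return lookupFunctionInfo(op.getName());
    }

    // Stand-in for SqlFunctionConverter#handleExplicitCast.
    static Object handleExplicitCast(SqlOperator op, RexCall call) {
      if (op.getKind() != SqlKind.CAST) {
        return null;
      }
      return call; // the real method builds the cast expression here
    }

    // Stand-in for FunctionRegistry.getFunctionInfo(name).
    static Object lookupFunctionInfo(String name) {
      return name;
    }
  }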



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-24063) SqlFunctionConverter#getHiveUDF handles cast before getting FunctionInfo

2020-08-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24063?focusedWorklogId=473816&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-473816
 ]

ASF GitHub Bot logged work on HIVE-24063:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 24/Aug/20 10:06
Start Date: 24/Aug/20 10:06
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 opened a new pull request #1421:
URL: https://github.com/apache/hive/pull/1421


   …g FunctionInfo
   
   
   
   ### What changes were proposed in this pull request?
   SqlFunctionConverter#getHiveUDF handles cast before getting FunctionInfo
   
   
   
   ### Why are the changes needed?
   When hive.allow.udf.load.on.demand is enabled, another RPC call will be 
made to the metastore for the cast definition when getting FunctionInfo, but there is 
no need to do this; a hedged sketch of the lookup fallback that is avoided follows below.
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   
   
   ### How was this patch tested?
   Included tests
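   
   A simplified, hypothetical Java sketch of the on-demand lookup fallback this change avoids, reconstructed only from the stack trace in the issue description; the class, field, and helper names below are illustrative and do not mirror the actual Registry code.
   
   // Hypothetical illustration of the on-demand fallback implied by the stack
   // trace: a name missing from the local registry falls through to a
   // metastore RPC when hive.allow.udf.load.on.demand is enabled.
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   
   public final class OnDemandLookupSketch {
   
     private final Map<String, Object> localFunctions = new ConcurrentHashMap<>();
     private final boolean loadOnDemand; // mirrors hive.allow.udf.load.on.demand
   
     OnDemandLookupSketch(boolean loadOnDemand) {
       this.loadOnDemand = loadOnDemand;
     }
   
     Object getFunctionInfo(String name) {
       Object info = localFunctions.get(name);
       if (info == null && loadOnDemand) {
         // Extra round trip: stand-in for the metastore call in the stack
         // trace, which the patch skips for CAST by resolving the cast first.
         info = fetchFromMetastore(name);
       }
       return info;
     }
   
     private Object fetchFromMetastore(String name) {
       // Placeholder for the Hive.getFunction(db, name) call in the trace;
       // returns null here because "cast" is not a metastore function.
       return null;
     }
   }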
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 473816)
Remaining Estimate: 0h
Time Spent: 10m
