Override the getResultType method to return Types.SQL_TIMESTAMP.
That should work around the issue.
Flink 1.10 will fix this problem.
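For reference, a sketch of what that override might look like against the Flink 1.9 ScalarFunction API (assuming the 1.9-era getResultType(Class<?>[]) hook; verify against your exact version):

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.functions.ScalarFunction;

import java.sql.Timestamp;

public class UTC2Local extends ScalarFunction {

    public Timestamp eval(Timestamp s) {
        // Shift forward 8 hours (28,800,000 ms) to convert UTC to UTC+8.
        return new Timestamp(s.getTime() + 28800000L);
    }

    // Declare the result type explicitly so the blink planner sees
    // SQL_TIMESTAMP (millisecond precision) instead of inferring the
    // unsupported TIMESTAMP(9).
    @Override
    public TypeInformation<?> getResultType(Class<?>[] signature) {
        return Types.SQL_TIMESTAMP;
    }
}
```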

Best,
Jingsong Lee


------------------------------------------------------------------
From:守护 <346531...@qq.com>
Send Time: Thursday, September 5, 2019, 12:11
To: JingsongLee <lzljs3620...@aliyun.com>; user-zh <user-zh@flink.apache.org>
Subject: Re: Re: TIMESTAMP type error for a user-defined UDF in blink SQL on Flink 1.9

Where do I declare the DataType, and does it require importing a package? Please advise. My UDF code is as follows:

import org.apache.flink.table.functions.ScalarFunction;
import java.sql.Timestamp;


public class UTC2Local extends ScalarFunction {
    public Timestamp eval(Timestamp s) {
        // Shift forward 8 hours (28,800,000 ms) to convert UTC to UTC+8.
        long timestamp = s.getTime() + 28800000;
        return new Timestamp(timestamp);
    }

}



------------------ Original Message ------------------
From: "JingsongLee" <lzljs3620...@aliyun.com.INVALID>;
Send Time: Thursday, September 5, 2019, 11:55
To: "user-zh" <user-zh@flink.apache.org>;
Subject: Re: TIMESTAMP type error for a user-defined UDF in blink SQL on Flink 1.9

Did you declare a DataType? What does your code look like?
Only precision <= 3 is currently supported, so you have to express the type as DataTypes.TIMESTAMP(3).
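A minimal illustration of declaring the millisecond-precision type (a sketch; in 1.9, DataTypes lives in org.apache.flink.table.api and requires the Flink table dependency on the classpath):

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class TimestampPrecision {
    public static void main(String[] args) {
        // TIMESTAMP(3) = millisecond precision, the maximum the blink
        // planner accepts in Flink 1.9; TIMESTAMP(9) is rejected.
        DataType ts = DataTypes.TIMESTAMP(3);
        System.out.println(ts);
    }
}
```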

Best,
Jingsong Lee


------------------------------------------------------------------
From:守护 <346531...@qq.com>
Send Time: Thursday, September 5, 2019, 11:48
To:user-zh <user-zh@flink.apache.org>
Subject: TIMESTAMP type error for a user-defined UDF in blink SQL on Flink 1.9

Hi all,


Use case: In Flink 1.9, a UDF created via flinkSQL works without problems, but switching to blinkSQL with the same UDF raises a TIMESTAMP type error. The UDF does something very simple: it adds 8 hours to a timestamp. The error message is as follows:


org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Not support dataType: TIMESTAMP(9)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
        at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
        at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
        at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:122)
        at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:227)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
        at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
Caused by: java.lang.RuntimeException: Not support dataType: TIMESTAMP(9)
        at org.apache.flink.table.dataformat.DataFormatConverters.getConverterForDataType(DataFormatConverters.java:248)
        at org.apache.flink.table.planner.codegen.CodeGenUtils$.isConverterIdentity(CodeGenUtils.scala:661)
        at org.apache.flink.table.planner.codegen.CodeGenUtils$.genToInternal(CodeGenUtils.scala:669)
        at org.apache.flink.table.planner.codegen.CodeGenUtils$.genToInternal(CodeGenUtils.scala:665)
        at org.apache.flink.table.planner.codegen.CodeGenUtils$.genToInternalIfNeeded(CodeGenUtils.scala:687)
        at org.apache.flink.table.planner.codegen.calls.ScalarFunctionCallGen.generate(ScalarFunctionCallGen.scala:79)
        at org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:737)
        at org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:451)
        at org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:51)
        at org.apache.calcite.rex.RexCall.accept(RexCall.java:191)
        at org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:131)
        at org.apache.flink.table.planner.codegen.CalcCodeGenerator$$anonfun$5.apply(CalcCodeGenerator.scala:150)
        at org.apache.flink.table.planner.codegen.CalcCodeGenerator$$anonfun$5.apply(CalcCodeGenerator.scala:150)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
