wuchong commented on a change in pull request #12725:
URL: https://github.com/apache/flink/pull/12725#discussion_r443372461
##########
File path:
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/CatalogTableSchemaResolver.java
##########
@@ -76,16 +78,16 @@ public TableSchema resolve(TableSchema tableSchema) {
 		for (int i = 0; i < tableSchema.getFieldCount(); ++i) {
 			TableColumn tableColumn = tableSchema.getTableColumns().get(i);
 			DataType fieldType = fieldTypes[i];
-			if (tableColumn.isGenerated() && isProctimeType(tableColumn.getExpr().get(), tableSchema)) {
+			DataType exprType = null;
+			if (tableColumn.isGenerated()) {
+				exprType = resolveExpressionDataType(tableColumn.getExpr().get(), tableSchema);
+			}
+
+			if (exprType != null && isProctime(exprType)) {
 				if (fieldNames[i].equals(rowtime)) {
 					throw new TableException("Watermark can not be defined for a processing time attribute column.");
 				}
-				TimestampType originalType = (TimestampType) fieldType.getLogicalType();
-				LogicalType proctimeType = new TimestampType(
-					originalType.isNullable(),
-					TimestampKind.PROCTIME,
-					originalType.getPrecision());
-				fieldType = TypeConversions.fromLogicalToDataType(proctimeType);
+				fieldType = exprType;
Review comment:
The issue is that the return type of functions (built-in or UDF) may change in the future; e.g., we are planning to change the return type of `CURRENT_TIMESTAMP` to `TIMESTAMP WITH LOCAL TIME ZONE`.
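The reviewer's point can be illustrated with a minimal sketch. This uses hypothetical stand-in types, not Flink's actual `DataType`/`TimestampType` classes: rebuilding the field type by hand bakes in today's assumption that the expression yields `TIMESTAMP`, while copying whatever the resolver returned keeps working if the function's return type later changes (e.g. to `TIMESTAMP_LTZ`).

```java
import java.util.function.Supplier;

public class ProctimeTypeSketch {
    // Hypothetical stand-in for a resolved logical type (not a Flink class).
    record ExprType(String name, boolean nullable, int precision) {}

    // Old approach: rebuild the type by hand, hardcoding the TIMESTAMP family.
    static ExprType hardcoded(ExprType fieldType) {
        return new ExprType("TIMESTAMP", fieldType.nullable(), fieldType.precision());
    }

    // New approach: use whatever type the expression resolver produced.
    static ExprType fromResolver(Supplier<ExprType> resolveExpr) {
        return resolveExpr.get();
    }

    public static void main(String[] args) {
        // Suppose the function's return type changes to TIMESTAMP_LTZ in a
        // future version, as planned for CURRENT_TIMESTAMP.
        ExprType resolved = new ExprType("TIMESTAMP_LTZ", false, 3);

        // The hardcoded rebuild silently falls back to plain TIMESTAMP ...
        System.out.println(hardcoded(resolved).name());      // prints TIMESTAMP
        // ... while the resolver-derived type tracks the change.
        System.out.println(fromResolver(() -> resolved).name()); // prints TIMESTAMP_LTZ
    }
}
```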
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]