wuchong commented on a change in pull request #12725:
URL: https://github.com/apache/flink/pull/12725#discussion_r443391977
##########
File path:
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/CatalogTableSchemaResolver.java
##########
@@ -76,16 +78,16 @@ public TableSchema resolve(TableSchema tableSchema) {
 		for (int i = 0; i < tableSchema.getFieldCount(); ++i) {
 			TableColumn tableColumn = tableSchema.getTableColumns().get(i);
 			DataType fieldType = fieldTypes[i];
-			if (tableColumn.isGenerated() && isProctimeType(tableColumn.getExpr().get(), tableSchema)) {
+			DataType exprType = null;
+			if (tableColumn.isGenerated()) {
+				exprType = resolveExpressionDataType(tableColumn.getExpr().get(), tableSchema);
+			}
+
+			if (exprType != null && isProctime(exprType)) {
 				if (fieldNames[i].equals(rowtime)) {
 					throw new TableException("Watermark can not be defined for a processing time attribute column.");
 				}
-				TimestampType originalType = (TimestampType) fieldType.getLogicalType();
-				LogicalType proctimeType = new TimestampType(
-					originalType.isNullable(),
-					TimestampKind.PROCTIME,
-					originalType.getPrecision());
-				fieldType = TypeConversions.fromLogicalToDataType(proctimeType);
+				fieldType = exprType;
Review comment:
Yes.. We didn't have a public discussion about this, and some people may have thought the type is snapshotted at CREATE TABLE time. Maybe we should write this down in the documentation.
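
To make the behavioral difference concrete, here is a minimal, self-contained toy sketch (hypothetical types, not Flink's actual `TimestampType`/`DataType` classes) contrasting the two paths in the diff: the old code rebuilt the proctime type from the *declared* field type, while the new code takes the type *resolved from the column expression* as-is, so properties like nullability come from resolution rather than from the declaration:

```java
// Hypothetical, self-contained sketch (toy types, NOT Flink's real classes)
// contrasting old vs. new derivation of fieldType for a generated
// proctime column, as in the diff above.
public class ProctimeResolutionSketch {

    enum TimestampKind { REGULAR, PROCTIME }

    static final class TimestampType {
        final boolean nullable;
        final TimestampKind kind;
        final int precision;
        TimestampType(boolean nullable, TimestampKind kind, int precision) {
            this.nullable = nullable;
            this.kind = kind;
            this.precision = precision;
        }
    }

    // Old path: take the declared field type and rebuild it, forcing PROCTIME kind.
    static TimestampType oldPath(TimestampType declaredFieldType) {
        return new TimestampType(
            declaredFieldType.nullable, TimestampKind.PROCTIME, declaredFieldType.precision);
    }

    // New path: use the type resolved from the column expression directly.
    static TimestampType newPath(TimestampType resolvedExprType) {
        return resolvedExprType;
    }

    public static void main(String[] args) {
        // Declared as a nullable TIMESTAMP(3); the expression resolves
        // to a NOT NULL proctime type.
        TimestampType declared = new TimestampType(true, TimestampKind.REGULAR, 3);
        TimestampType resolved = new TimestampType(false, TimestampKind.PROCTIME, 3);

        TimestampType oldResult = oldPath(declared);
        TimestampType newResult = newPath(resolved);

        // Both end up PROCTIME, but only the new path carries the
        // nullability resolved from the expression itself.
        System.out.println("old: kind=" + oldResult.kind + " nullable=" + oldResult.nullable);
        System.out.println("new: kind=" + newResult.kind + " nullable=" + newResult.nullable);
    }
}
```

This is exactly why the "snapshotted at CREATE TABLE" intuition breaks down: the column's type is an output of expression resolution, not a frozen copy of the declaration.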
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]