dawidwys commented on a change in pull request #12725:
URL: https://github.com/apache/flink/pull/12725#discussion_r443371169
##########
File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/CatalogTableSchemaResolver.java
##########
@@ -76,16 +78,16 @@ public TableSchema resolve(TableSchema tableSchema) {
 		for (int i = 0; i < tableSchema.getFieldCount(); ++i) {
 			TableColumn tableColumn = tableSchema.getTableColumns().get(i);
 			DataType fieldType = fieldTypes[i];
-			if (tableColumn.isGenerated() && isProctimeType(tableColumn.getExpr().get(), tableSchema)) {
+			DataType exprType = null;
+			if (tableColumn.isGenerated()) {
+				exprType = resolveExpressionDataType(tableColumn.getExpr().get(), tableSchema);
+			}
+
+			if (exprType != null && isProctime(exprType)) {
 				if (fieldNames[i].equals(rowtime)) {
 					throw new TableException("Watermark can not be defined for a processing time attribute column.");
 				}
-				TimestampType originalType = (TimestampType) fieldType.getLogicalType();
-				LogicalType proctimeType = new TimestampType(
-					originalType.isNullable(),
-					TimestampKind.PROCTIME,
-					originalType.getPrecision());
-				fieldType = TypeConversions.fromLogicalToDataType(proctimeType);
+				fieldType = exprType;
Review comment:
I was also thinking about it, and I can do that. Nevertheless, in my opinion this requires careful consideration: what is the point of the data type of a computed column if we completely ignore it?
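
To make the question concrete, here is a minimal sketch of where that declared data type comes from (assuming the 1.11-era `TableSchema.Builder` API; the schema and field names are purely illustrative). With `fieldType = exprType`, the `TIMESTAMP(3)` declared for `proc` below would be silently replaced by whatever `resolveExpressionDataType` returns:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;

public class ComputedColumnTypeExample {

	public static void main(String[] args) {
		// Hypothetical schema: the builder makes the caller declare a
		// DataType for the computed column next to its expression string.
		TableSchema schema = TableSchema.builder()
			.field("id", DataTypes.INT())
			// Declared type: TIMESTAMP(3); expression: PROCTIME().
			// With this change the resolver overwrites the declared type
			// with the type resolved from the expression, so the
			// TIMESTAMP(3) written here is effectively ignored.
			.field("proc", DataTypes.TIMESTAMP(3), "PROCTIME()")
			.build();

		System.out.println(schema);
	}
}
```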
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]