jelly-1203 commented on a change in pull request #18017:
URL: https://github.com/apache/flink/pull/18017#discussion_r766554531



##########
File path: flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/MergeTableLikeUtil.java
##########
@@ -426,6 +433,14 @@ private void appendDerivedColumns(
                         }
                     }
 
+                    Set<String> physicalFieldNames = physicalFieldNamesToTypes.keySet();
+                    Set<String> metadataFieldNames = metadataFieldNamesToTypes.keySet();
+                    final Set<String> result = new LinkedHashSet<>(physicalFieldNames);
+                    result.retainAll(metadataFieldNames);
+                    if (!result.isEmpty()) {
+                        throw new ValidationException(
+                                "A field name conflict exists between a field of the regular type and a field of the Metadata type.");
+                    }

Review comment:
       > may be we can just check duplication when put the new Column to the columns, at the end of this function?
   
   hi @wenlong88 
   Thanks for your review and comment. I do not think the duplication check can be deferred until the new column is put into the columns at the end of this function, for two reasons:
   1. If a computed column or metadata column uses the OVERWRITING merge strategy, duplicate field names are allowed for columns of the same kind, so a blanket duplicate check at the end would reject valid cases.
   2. When the physical columns and the metadata columns are merged into accessibleFieldNamesToTypes via putAll, a metadata column that repeats a physical column name silently overwrites the physical entry, so a computed column generated from that map may not resolve to the expected field. See the sketch after this list.
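   
   For illustration, here is a minimal, self-contained sketch of the two behaviors described above. The map names mirror physicalFieldNamesToTypes, metadataFieldNamesToTypes and accessibleFieldNamesToTypes from MergeTableLikeUtil, but the value type (String instead of LogicalType), the column values, and the exception are simplified placeholders, not the real planner code:
   
   ```java
   import java.util.LinkedHashMap;
   import java.util.LinkedHashSet;
   import java.util.Map;
   import java.util.Set;
   
   public class FieldNameConflictSketch {
   
       public static void main(String[] args) {
           // Simplified stand-ins for the maps in MergeTableLikeUtil;
           // the real code maps field names to LogicalType.
           Map<String, String> physicalFieldNamesToTypes = new LinkedHashMap<>();
           physicalFieldNamesToTypes.put("ts", "TIMESTAMP(3)");
           physicalFieldNamesToTypes.put("user_id", "BIGINT");
   
           Map<String, String> metadataFieldNamesToTypes = new LinkedHashMap<>();
           // Same name as a physical column, different type.
           metadataFieldNamesToTypes.put("ts", "TIMESTAMP_LTZ(3)");
   
           // Deferred merge: putAll lets the metadata entry silently win,
           // so a computed column referring to "ts" would see the metadata type.
           Map<String, String> accessibleFieldNamesToTypes = new LinkedHashMap<>();
           accessibleFieldNamesToTypes.putAll(physicalFieldNamesToTypes);
           accessibleFieldNamesToTypes.putAll(metadataFieldNamesToTypes);
           System.out.println(accessibleFieldNamesToTypes.get("ts")); // TIMESTAMP_LTZ(3)
   
           // Up-front check, as in the diff: intersect the key sets and fail fast.
           Set<String> conflicts = new LinkedHashSet<>(physicalFieldNamesToTypes.keySet());
           conflicts.retainAll(metadataFieldNamesToTypes.keySet());
           if (!conflicts.isEmpty()) {
               // The real code throws ValidationException; IllegalStateException
               // keeps this sketch self-contained.
               throw new IllegalStateException(
                       "A field name conflict exists between a regular column and a metadata column: "
                               + conflicts);
           }
       }
   }
   ```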



