twalthr commented on code in PR #19490:
URL: https://github.com/apache/flink/pull/19490#discussion_r852601520
##########
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/operations/MergeTableLikeUtilTest.java:
##########
@@ -431,6 +433,100 @@ public void mergeOverwritingMetadataColumnsDuplicate() {
assertThat(mergedSchema, equalTo(expectedSchema));
}
+    @Test
+    public void mergeIncludingMetadataColumnWithSameMetadataKey() {
+        TableSchema sourceSchema = TableSchema.builder().build();
+
+        List<SqlNode> derivedColumns =
+                Arrays.asList(
+                        metadataColumn("one", DataTypes.BOOLEAN(), true),
+                        metadataColumn("two", DataTypes.STRING(), "one", false));
+
+        Map<FeatureOption, MergingStrategy> mergingStrategies = getDefaultMergingStrategies();
+        mergingStrategies.put(FeatureOption.METADATA, MergingStrategy.INCLUDING);
+
+        thrown.expect(ValidationException.class);
+        thrown.expectMessage(
Review Comment:
How about we add this validation logic to `DefaultSchemaResolver`? This way
we avoid checking for duplicate metadata keys in three locations and fail as
early as possible.
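
To sketch what I have in mind (the method name, its placement, and the error
message below are only an illustration, not existing resolver code): the
resolver could collect the metadata keys while resolving the columns and fail
on the first clash, treating a metadata column without an explicit key as
using its own name as the key.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.table.api.Schema.UnresolvedColumn;
import org.apache.flink.table.api.Schema.UnresolvedMetadataColumn;
import org.apache.flink.table.api.ValidationException;

class MetadataKeyValidationSketch {

    // Illustrative only: a possible duplicate-metadata-key check; name and
    // message are assumptions, not the current Flink implementation.
    static void validateDistinctMetadataKeys(List<UnresolvedColumn> columns) {
        final Map<String, String> keyToColumn = new HashMap<>();
        for (UnresolvedColumn column : columns) {
            if (!(column instanceof UnresolvedMetadataColumn)) {
                continue;
            }
            final UnresolvedMetadataColumn metadataColumn = (UnresolvedMetadataColumn) column;
            // a metadata column without an explicit key uses its own name as the key
            final String metadataKey =
                    metadataColumn.getMetadataKey() != null
                            ? metadataColumn.getMetadataKey()
                            : metadataColumn.getName();
            final String clashingColumn =
                    keyToColumn.put(metadataKey, metadataColumn.getName());
            if (clashingColumn != null) {
                throw new ValidationException(
                        String.format(
                                "Columns '%s' and '%s' both reference metadata key '%s'.",
                                clashingColumn, metadataColumn.getName(), metadataKey));
            }
        }
    }
}
```

`MergeTableLikeUtil` and the SQL-to-operation conversion could then rely on
the resolver performing this check instead of duplicating it.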
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]