Airblader commented on a change in pull request #16142:
URL: https://github.com/apache/flink/pull/16142#discussion_r661204303



##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaTableITCase.java
##########
@@ -300,6 +306,240 @@ public void testKafkaTableWithMultipleTopics() throws Exception {
         topics.forEach(super::deleteTestTopic);
     }
 
+    @Test
+    public void testKafkaSinkWithMetadataIncludeTopicOption() {
+        if (isLegacyConnector) {
+            return;
+        }
+        // we always use a different topic name for each parameterized topic,
+        // in order to make sure the topic can be created.
+        final String topic = "metadata_topic_" + format;
+        createTestTopic(topic, 1, 1);
+
+        // ---------- Produce an event time stream into Kafka -------------------
+        String groupId = getStandardProps().getProperty("group.id");
+        String bootstraps = getBootstrapServers();
+
+        final String createTable =
+                String.format(
+                        "CREATE TABLE kafka (\n"
+                                + "  `physical_1` STRING,\n"
+                                + "  `physical_2` INT,\n"
+                                // metadata fields are out of order on purpose
+                                // offset is ignored because it might not be deterministic
+                                + "  `timestamp-type` STRING METADATA VIRTUAL,\n"
+                                + "  `timestamp` TIMESTAMP(3) METADATA,\n"
+                                + "  `leader-epoch` INT METADATA VIRTUAL,\n"
+                                + "  `headers` MAP<STRING, BYTES> METADATA,\n"
+                                + "  `partition` INT METADATA VIRTUAL,\n"
+                                + "  `topic` STRING METADATA,\n"
+                                + "  `physical_3` BOOLEAN\n"
+                                + ") WITH (\n"
+                                + "  'connector' = 'kafka',\n"
+                                + "  'topic' = '%s',\n"
+                                + "  'properties.bootstrap.servers' = '%s',\n"
+                                + "  'properties.group.id' = '%s',\n"
+                                + "  'scan.startup.mode' = 'earliest-offset',\n"
+                                + "  %s\n"
+                                + ")",
+                        topic, bootstraps, groupId, formatOptions());
+
+        tEnv.executeSql(createTable);
+
+        String initialValues =
+                String.format(
+                        "INSERT INTO kafka\n"
+                                + "VALUES\n"
+                                + " ('data 1', 1, TIMESTAMP '2020-03-08 13:12:11.123', MAP['k1', X'C0FFEE', 'k2', X'BABE'], '%s', TRUE),\n"
+                                + " ('data 2', 2, TIMESTAMP '2020-03-09 13:12:11.123', CAST(NULL AS MAP<STRING, BYTES>), '%s', FALSE),\n"
+                                + " ('data 3', 3, TIMESTAMP '2020-03-10 13:12:11.123', MAP['k1', X'10', 'k2', X'20'], '%s', TRUE)",
+                        topic, topic, topic);
+        try {
+            tEnv.executeSql(initialValues).await();
+            fail(
+                    "Unable to create the Kafka sink table with table option 'topic' and metadata column 'topic'.");
+        } catch (Exception e) {
+            assertTrue(e instanceof ValidationException);
+            assertEquals(
+                    String.format(
+                            "Invalid metadata key '%s' in column 'topic' of table 'default_catalog.default_database.kafka'. "
+                                    + "The %s class '%s' supports the following metadata keys for writing:\n%s",
+                            TOPIC.key,
+                            DynamicTableSink.class.getSimpleName(),
+                            KafkaDynamicSink.class.getName(),
+                            String.join("\n", Arrays.asList(HEADERS.key, TIMESTAMP.key))),
+                    e.getMessage());
+        }
+
+        // ------------- cleanup -------------------
+
+        deleteTestTopic(topic);

Review comment:
       This was marked as resolved, but I don't see any changes here. If an assertion fails or an unexpected exception is thrown in the body above, the cleanup will never run.

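The cleanup concern above can be sketched with a minimal, self-contained Java example (the class name, `deleteTestTopic` stand-in, and topic string are illustrative assumptions, not the test's actual helpers): moving the cleanup into a `finally` block guarantees it runs even when an assertion in the test body fails.

```java
public class CleanupSketch {
    static boolean topicDeleted = false;

    // Stand-in for the test harness helper; only records that it ran.
    static void deleteTestTopic(String topic) {
        topicDeleted = true;
    }

    static void runTest(String topic) {
        try {
            // Simulate a wrong assertion inside the test body.
            throw new AssertionError("assertEquals mismatch");
        } finally {
            // Runs on success, assertion failure, or unexpected exception alike.
            deleteTestTopic(topic);
        }
    }

    public static void main(String[] args) {
        try {
            runTest("metadata_topic_json");
        } catch (AssertionError expected) {
            // The failure still propagates to the test runner...
        }
        // ...but the topic was cleaned up regardless.
        System.out.println("topicDeleted=" + topicDeleted);
    }
}
```

Alternatively, a JUnit `@After` method would achieve the same guarantee without restructuring each test.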
##########
File path: flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaOptions.java
##########
@@ -144,7 +144,7 @@ private KafkaOptions() {}
                     .noDefaultValue()
                     .withDescription(
                             "Topic names from which the table is read. Either 
'topic' or 'topic-pattern' must be set for source. "
-                                    + "Option 'topic' is required for sink.");
+                                    + "Option 'topic' is optional for sink 
through specifying the 'topic' metadata column.");

Review comment:
       This comment has been marked as resolved, but the documentation still seems to differ from the description in the ConfigOption.
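As a hedged illustration of the semantics the changed description and the ITCase above suggest (this helper and its precedence rules are assumptions for illustration, not the connector's actual code): the sink needs a topic per record, taken either from the table option or from the row's writable 'topic' metadata column, and the test expects setting both to be rejected.

```java
public class TopicResolutionSketch {
    // Hedged sketch of the rule implied by the diff: the table option and a
    // writable 'topic' metadata column are alternatives; the ITCase expects a
    // ValidationException when both are declared.
    static String resolveTopic(String optionTopic, String metadataTopic) {
        if (optionTopic != null && metadataTopic != null) {
            throw new IllegalStateException(
                    "The 'topic' option and a writable 'topic' metadata column are exclusive.");
        }
        if (metadataTopic != null) {
            return metadataTopic;
        }
        if (optionTopic != null) {
            return optionTopic;
        }
        throw new IllegalStateException(
                "Either the 'topic' option or the 'topic' metadata column must be set.");
    }

    public static void main(String[] args) {
        System.out.println(resolveTopic("orders", null));       // from the table option
        System.out.println(resolveTopic(null, "per_row_topic")); // from row metadata
    }
}
```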




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

