[ https://issues.apache.org/jira/browse/FLINK-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255022#comment-16255022 ]
ASF GitHub Bot commented on FLINK-8014:
---------------------------------------
Github user fhueske commented on a diff in the pull request:
https://github.com/apache/flink/pull/4990#discussion_r151363113
--- Diff: flink-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/Kafka010JsonTableSink.java ---
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.kafka;
+
+import org.apache.flink.api.common.serialization.SerializationSchema;
+import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner;
+import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;
+import org.apache.flink.types.Row;
+
+import java.util.Properties;
+
+/**
+ * Kafka 0.10 {@link KafkaTableSink} that serializes data in JSON format.
+ */
+public class Kafka010JsonTableSink extends KafkaJsonTableSink {
+
+ /**
+ * Creates {@link KafkaTableSink} to write table rows as JSON-encoded records to a Kafka 0.10
+ * topic with fixed partition assignment.
+ *
+ * <p>Each parallel TableSink instance will write its rows to a single Kafka partition.</p>
+ * <ul>
+ * <li>If the number of Kafka partitions is less than the number of sink instances, different
+ * sink instances will write to the same partition.</li>
+ * <li>If the number of Kafka partitions is higher than the number of sink instances, some
+ * Kafka partitions won't receive data.</li>
+ * </ul>
+ *
+ * @param topic topic in Kafka to which table is written
+ * @param properties properties to connect to Kafka
+ */
+ public Kafka010JsonTableSink(String topic, Properties properties) {
+ super(topic, properties, new FlinkFixedPartitioner<>());
+ }
+
+ /**
+ * Creates {@link KafkaTableSink} to write table rows as JSON-encoded records to a Kafka 0.10
+ * topic with custom partition assignment.
+ *
+ * @param topic topic in Kafka to which table is written
+ * @param properties properties to connect to Kafka
+ * @param partitioner Kafka partitioner
+ */
+ public Kafka010JsonTableSink(String topic, Properties properties, FlinkKafkaPartitioner<Row> partitioner) {
+ super(topic, properties, partitioner);
+ }
+
+ @Override
+ protected FlinkKafkaProducerBase<Row> createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, FlinkKafkaPartitioner<Row> partitioner) {
+ return new FlinkKafkaProducer010<>(topic, serializationSchema, properties, partitioner);
+ }
+
+ @Override
+ protected Kafka09JsonTableSink createCopy() {
--- End diff ---
Oh, yes.
> Add Kafka010JsonTableSink
> -------------------------
>
> Key: FLINK-8014
> URL: https://issues.apache.org/jira/browse/FLINK-8014
> Project: Flink
> Issue Type: Improvement
> Components: Table API & SQL
> Affects Versions: 1.4.0
> Reporter: Fabian Hueske
> Assignee: Fabian Hueske
> Fix For: 1.4.0
>
>
> We offer a TableSource for JSON-encoded Kafka 0.10 topics but no TableSink.
> Since the required base classes are already there, a
> {{Kafka010JsonTableSink}} can be easily added.
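A minimal usage sketch (not part of the patch above): the broker address, topic, and the "Orders" table name are placeholders, and the snippet assumes the two-argument constructor from the diff together with the Flink 1.4 Table API pattern ({{TableEnvironment.getTableEnvironment}} and {{Table#writeToSink}}).
{code:java}
import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.Kafka010JsonTableSink;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class Kafka010JsonSinkExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

        // Kafka connection properties (placeholder broker address).
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");

        // "Orders" is assumed to be an append-only table registered elsewhere.
        Table result = tEnv.scan("Orders");

        // Write the table rows as JSON-encoded records to a Kafka 0.10 topic,
        // using the fixed partition assignment of the two-argument constructor.
        result.writeToSink(new Kafka010JsonTableSink("orders-topic", props));

        env.execute();
    }
}
{code}
The three-argument constructor would be used instead when a custom {{FlinkKafkaPartitioner}} is needed rather than the fixed partition assignment.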
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)