[ https://issues.apache.org/jira/browse/FLINK-8983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16512660#comment-16512660 ]
ASF GitHub Bot commented on FLINK-8983:
---------------------------------------
Github user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/6083#discussion_r195469740
--- Diff: flink-end-to-end-tests/flink-confluent-schema-registry/src/main/java/AvroDeserializationConfluentSchema.java ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
+import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
+import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
+import io.confluent.kafka.serializers.KafkaAvroDecoder;
+import org.apache.avro.generic.GenericData;
+import tech.allegro.schema.json2avro.converter.JsonAvroConverter;
+
+import java.io.IOException;
+
+/**
+ * Deserialization schema that uses the Confluent Schema Registry to deserialize Avro records into type {@code T}.
+ */
+public class AvroDeserializationConfluentSchema<T> implements DeserializationSchema<T> {
+
+ private static final long serialVersionUID = 1L;
+
+ private Class<T> avroType;
+ private final String schemaRegistryUrl;
+ private final int identityMapCapacity;
+ private KafkaAvroDecoder kafkaAvroDecoder;
+
+ private ObjectMapper mapper;
+
+ private JsonAvroConverter jsonAvroConverter;
+
+ public AvroDeserializationConfluentSchema(Class<T> avroType, String schemaRegistryUrl) {
+ this(avroType, schemaRegistryUrl, AbstractKafkaAvroSerDeConfig.MAX_SCHEMAS_PER_SUBJECT_DEFAULT);
+ }
+
+ public AvroDeserializationConfluentSchema(Class<T> avroType, String schemaRegistryUrl, int identityMapCapacity) {
+ this.avroType = avroType;
+ this.schemaRegistryUrl = schemaRegistryUrl;
+ this.identityMapCapacity = identityMapCapacity;
+ }
+
+ @Override
+ public T deserialize(byte[] message) throws IOException {
+ // Create the decoder lazily on first use; the registry client and decoder are not serializable.
+ if (kafkaAvroDecoder == null) {
+ SchemaRegistryClient schemaRegistryClient = new CachedSchemaRegistryClient(this.schemaRegistryUrl, this.identityMapCapacity);
+ this.kafkaAvroDecoder = new KafkaAvroDecoder(schemaRegistryClient);
--- End diff ---
Why do we use the `KafkaAvroDecoder` instead of the `KafkaAvroDeserializer`?
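For reference: `KafkaAvroDecoder` implements the legacy `kafka.serializer.Decoder` interface of the old Scala consumer, whereas `KafkaAvroDeserializer` implements `org.apache.kafka.common.serialization.Deserializer` from the new Kafka clients API. A minimal sketch of the same lazy-initialization block written against `KafkaAvroDeserializer` (the `kafkaAvroDeserializer` field and the `null` topic argument are assumptions for illustration, not code from this PR):
```java
// Hypothetical field, replacing kafkaAvroDecoder:
// private KafkaAvroDeserializer kafkaAvroDeserializer;

if (kafkaAvroDeserializer == null) {
    SchemaRegistryClient schemaRegistryClient =
        new CachedSchemaRegistryClient(this.schemaRegistryUrl, this.identityMapCapacity);
    this.kafkaAvroDeserializer = new KafkaAvroDeserializer(schemaRegistryClient);
}
// The schema id is embedded in the Confluent wire format, so the topic
// argument is not needed to resolve the writer schema; null is passed here.
GenericData.Record record =
    (GenericData.Record) kafkaAvroDeserializer.deserialize(null, message);
```
Both classes resolve the writer schema through the same `SchemaRegistryClient`, so the swap should be behavior-preserving for value deserialization.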
> End-to-end test: Confluent schema registry
> ------------------------------------------
>
> Key: FLINK-8983
> URL: https://issues.apache.org/jira/browse/FLINK-8983
> Project: Flink
> Issue Type: Sub-task
> Components: Kafka Connector, Tests
> Reporter: Till Rohrmann
> Assignee: Yazdan Shirvany
> Priority: Critical
>
> It would be good to add an end-to-end test which verifies that Flink is able
> to work together with the Confluent schema registry. In order to do that we
> have to set up a Kafka cluster and write a Flink job which reads Avro records
> from Kafka, resolving their schema via the Confluent schema registry.
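A minimal sketch of such a job, wiring the schema from this PR into a Kafka source (the `FlinkKafkaConsumer011` connector, topic name, bootstrap servers, and registry URL are placeholder assumptions):
```java
import java.util.Properties;

import org.apache.avro.generic.GenericData;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class ConfluentSchemaRegistryReadJob {

	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		Properties props = new Properties();
		props.setProperty("bootstrap.servers", "localhost:9092");
		props.setProperty("group.id", "schema-registry-e2e");

		// Read Avro records whose schemas are registered with the Confluent
		// schema registry running at the given URL, then print them.
		env.addSource(new FlinkKafkaConsumer011<>(
				"test-avro-input",
				new AvroDeserializationConfluentSchema<>(GenericData.Record.class, "http://localhost:8081"),
				props))
			.print();

		env.execute("Confluent schema registry end-to-end test");
	}
}
```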
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)