[ https://issues.apache.org/jira/browse/FLINK-20379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17325364#comment-17325364 ]

Flink Jira Bot commented on FLINK-20379:
----------------------------------------

This issue is assigned but has not received an update in 7 days, so it has been 
labeled "stale-assigned". If you are still working on the issue, please post an 
update and remove the label. If you are no longer working on the issue, please 
unassign it so that someone else may work on it. In 7 days the issue will be 
automatically unassigned.

> Update KafkaRecordDeserializationSchema to enable reuse of 
> DeserializationSchema and KafkaDeserializationSchema
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-20379
>                 URL: https://issues.apache.org/jira/browse/FLINK-20379
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.12.0
>            Reporter: Stephan Ewen
>            Assignee: Jiangjie Qin
>            Priority: Critical
>              Labels: pull-request-available, stale-assigned
>             Fix For: 1.13.0, 1.12.3
>
>
> The new Kafka Connector defines its own deserialization schema and is 
> incompatible with the existing library of deserializers.
> That means that users cannot use any of Flink's formats (Avro, JSON, CSV, 
> Protobuf, Confluent Schema Registry, ...) with the new Kafka Connector.
> I think we should change the new Kafka Connector to use the existing 
> deserialization classes, so that all formats can be used and users can reuse 
> their deserializer implementations.
> It would also be good to support the existing KafkaDeserializationSchema. 
> Otherwise, all users will need to migrate their sources again.
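
For illustration, here is a minimal sketch of what such reuse could look like 
with the new KafkaSource builder, assuming an adapter factory method 
(valueOnly) on KafkaRecordDeserializationSchema that wraps a plain 
DeserializationSchema; the method name and the broker/topic values are 
illustrative assumptions, not an API confirmed by this ticket.

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;

public class ReuseExistingSchema {
    public static void main(String[] args) {
        // Assumed adapter: wraps an existing DeserializationSchema (here
        // the stock SimpleStringSchema) so that any of Flink's formats can
        // be plugged into the new Kafka connector unchanged.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")   // placeholder address
                .setTopics("input-topic")             // placeholder topic
                .setDeserializer(
                        KafkaRecordDeserializationSchema.valueOnly(
                                new SimpleStringSchema()))
                .build();
        // The source can then be passed to
        // StreamExecutionEnvironment#fromSource(...) as usual.
    }
}

A similar of(...) adapter could presumably wrap an existing 
KafkaDeserializationSchema, so that key-aware deserializers also carry over 
without a rewrite.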



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
