[ https://issues.apache.org/jira/browse/FLINK-11249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16747937#comment-16747937 ]

Piotr Nowojski commented on FLINK-11249:
----------------------------------------

I have realised one more issue: this fix alone might not fully solve the 
problem. With this fix in place, users will be able to upgrade their jobs from 
the {{0.11}} connector to the universal one, but what happens if they upgrade 
the Kafka brokers at any point in time? If a user stops the Kafka brokers, 
upgrades them and then restarts them, does this process preserve the pending 
transactions that Flink has already "pre-committed"? Or are they automatically 
aborted? If they are automatically aborted, we might have data loss from our 
perspective.
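
To make the question concrete: below is a minimal probe with the plain Kafka 
client (the broker address, topic name and transactional id are made up) of 
the two-phase split that the exactly-once producer relies on. The "pre-commit" 
is essentially a flushed but still-open transaction, and the experiment is 
whether it survives a broker bounce. Flink's internals differ (it resumes 
transactions across restarts), but the broker-side semantics are the same.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PreCommitProbe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Fixed transactional.id, as Flink assigns one per producer instance.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "upgrade-probe-txn");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("probe-topic", "key", "value"));
            // "Pre-commit": the data is flushed to the brokers, but the
            // transaction stays open; Flink calls commitTransaction() only
            // after the checkpoint completes.
            producer.flush();

            System.out.println("Pre-committed; bounce/upgrade the brokers now, then press ENTER.");
            System.in.read();

            // Does this still succeed, or has the pending transaction been
            // aborted in the meantime (e.g. by the transaction timeout)?
            producer.commitTransaction();
            System.out.println("Transaction survived the restart.");
        }
    }
}
{code}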

If "pre committed" transactions are aborted during the Kafka brokers upgrades, 
we would need "clean stop with savepoint" feature to handle this user story. I 
guess this needs more experiments and more testing.

CC [~tzulitai] [~aljoscha]

> FlinkKafkaProducer011 can not be migrated to FlinkKafkaProducer
> ---------------------------------------------------------------
>
>                 Key: FLINK-11249
>                 URL: https://issues.apache.org/jira/browse/FLINK-11249
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Kafka Connector
>    Affects Versions: 1.7.0, 1.7.1
>            Reporter: Piotr Nowojski
>            Assignee: vinoyang
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.7.2, 1.8.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> As reported by a user on the mailing list in "How to migrate Kafka Producer?" 
> (on 18th December 2018), {{FlinkKafkaProducer011}} cannot be migrated to 
> {{FlinkKafkaProducer}}, and the same problem can occur in future Kafka 
> producer versions/refactorings.
> The issue is that the {{ListState<FlinkKafkaProducer.NextTransactionalIdHint> 
> FlinkKafkaProducer#nextTransactionalIdHintState}} field is serialized using 
> Java serialization, which causes problems/collisions between 
> {{FlinkKafkaProducer011.NextTransactionalIdHint}} and 
> {{FlinkKafkaProducer.NextTransactionalIdHint}}.
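> A toy, self-contained illustration of the collision (the class names below 
> are made up, but the mechanism is the same): Java serialization embeds the 
> fully-qualified name of the nested class into the written bytes, so state 
> written against one enclosing class cannot be read back as the other:
> {code:java}
> import java.io.*;
> 
> public class HintCollisionDemo {
>     // Stand-ins for FlinkKafkaProducer011.NextTransactionalIdHint and
>     // FlinkKafkaProducer.NextTransactionalIdHint.
>     static class Producer011 { static class Hint implements Serializable { long nextFreeId = 42; } }
>     static class Producer { static class Hint implements Serializable { long nextFreeId = 42; } }
> 
>     public static void main(String[] args) throws Exception {
>         ByteArrayOutputStream bytes = new ByteArrayOutputStream();
>         ObjectOutputStream out = new ObjectOutputStream(bytes);
>         out.writeObject(new Producer011.Hint());
>         out.flush();
> 
>         // The stream literally contains "HintCollisionDemo$Producer011$Hint",
>         // so it cannot be deserialized as Producer.Hint: the cast below
>         // throws ClassCastException, and a renamed or removed class would
>         // fail even earlier with ClassNotFoundException.
>         Producer.Hint migrated = (Producer.Hint) new ObjectInputStream(
>                 new ByteArrayInputStream(bytes.toByteArray())).readObject();
>         System.out.println(migrated);
>     }
> }
> {code}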
> To fix that, we probably need to release new versions of those classes that 
> rewrite/upgrade this state field to a new one that doesn't rely on Java 
> serialization. After this, we could drop the support for the old field, 
> which in turn will allow users to upgrade from the 0.11 connector to the 
> universal one.
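> A hedged sketch of that direction ({{NextTransactionalIdHintSerializer}} and 
> the {{-v2}} state name below are illustrative, not final): register the hint 
> state under a new descriptor backed by a hand-written {{TypeSerializer}}, 
> and copy the entries from the old, Java-serialized field over on restore:
> {code:java}
> // Fragment of initializeState(FunctionInitializationContext context); sketch only.
> ListStateDescriptor<NextTransactionalIdHint> oldDescriptor =
>         new ListStateDescriptor<>(
>                 "next-transactional-id-hint",
>                 TypeInformation.of(NextTransactionalIdHint.class)); // generic/Java serialization
> 
> ListStateDescriptor<NextTransactionalIdHint> newDescriptor =
>         new ListStateDescriptor<>(
>                 "next-transactional-id-hint-v2",
>                 new NextTransactionalIdHintSerializer()); // writes plain primitives, stable format
> 
> ListState<NextTransactionalIdHint> oldState =
>         context.getOperatorStateStore().getUnionListState(oldDescriptor);
> nextTransactionalIdHintState =
>         context.getOperatorStateStore().getUnionListState(newDescriptor);
> 
> // One-off migration, so the old field (and Java serialization) can later
> // be dropped entirely.
> for (NextTransactionalIdHint hint : oldState.get()) {
>     nextTransactionalIdHintState.add(hint);
> }
> oldState.clear();
> {code}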
> One bright side is that, technically speaking, our {{FlinkKafkaProducer011}} 
> has the same compatibility matrix as the universal connector (it is forward & 
> backward compatible with the same Kafka versions), so for the time being 
> users can stick with {{FlinkKafkaProducer011}}.
> FYI [~tzulitai] [~yanghua]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
