AHeise commented on PR #111:
URL: https://github.com/apache/flink-connector-kafka/pull/111#issuecomment-2306382878

   > The interface is released in Flink 1.20. I saw other connectors, for 
example, Cassandra and JDBC (I also need to bump the flink version for them) 
have been using Flink 1.19. Could we use the similar way to do the dependency 
management?
   
   The way this interface is designed will always result in a breaking change 
on the connector side. Any connector that uses the interface will need to 
release a specific version just for Flink 1.20.
   
   From the interface description and usage, I'm inferring that we need to do:
   
   ```java
   class KafkaSource ... implements LineageVertexProvider
   ```
   
   Because it's designed as a base interface that the source itself implements, 
trying to load this jar on 1.19 will cause ClassLoader errors. It's similar to 
adding new abstract methods to the Source class directly and implementing them 
in KafkaSource.
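   To make the failure mode concrete, here is a small probe (a sketch, not part of any Flink API; it assumes the fully-qualified interface name from the 1.20 lineage package, and it simplifies things in that the real failure when loading `KafkaSource` would surface as a linkage error such as `NoClassDefFoundError` rather than `ClassNotFoundException`):

   ```java
   public class MissingInterfaceDemo {
       // Probes whether the 1.20 lineage interface is on the classpath.
       // On a 1.19 runtime this returns false: any class implementing the
       // interface fails during class loading/linking, not at the point
       // where a lineage method would be called.
       static boolean lineageInterfacePresent() {
           try {
               Class.forName(
                   "org.apache.flink.streaming.api.lineage.LineageVertexProvider");
               return true;
           } catch (ClassNotFoundException e) {
               return false;
           }
       }

       public static void main(String[] args) {
           System.out.println(lineageInterfacePresent()
                   ? "1.20+: safe to use LineageVertexProvider"
                   : "1.19: the interface cannot even be loaded");
       }
   }
   ```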
   
   We have two options:
   * Bump to 1.20 as suggested. That means new features and bugfixes in the 
Kafka connector would no longer be directly available to users on 1.19. We 
could add a feature branch for maintaining older releases, but that would at 
least double the number of connector releases until we phase out 1.19. 
@dannycranmer has found that even getting a single release out regularly is 
quite an effort because we have too few PMCs involved in the connector 
ecosystem. So very likely we would effectively stop developing the connector 
for 1.19 unless something critical pops up.
   * Extend the interfaces to avoid locking ourselves to 1.20. We could achieve 
full backward compatibility with something like
   ```java
   class KafkaSourceLineageVertexProvider
           implements ExternalLineageVertexProvider<KafkaSource> {
       @Override
       public LineageVertex getLineageVertex(KafkaSource source) { ... }
   }
   ```
   and find the respective implementations through class-loader scans or via 
SPI. Loading this class on 1.19 would fail, but there is also no need to load 
it there. We would also need to adjust `TableLineageUtils` to look for these 
external providers.
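   
   A sketch of how such an SPI lookup could work. `ExternalLineageVertexProvider` follows the hypothetical interface above (not an existing Flink API), and `LineageVertex` is stubbed as `Object` to keep the example self-contained:
   
   ```java
   import java.util.Optional;
   import java.util.ServiceLoader;

   // Hypothetical SPI mirroring the sketch above.
   interface ExternalLineageVertexProvider {
       boolean supports(Object source);
       Object getLineageVertex(Object source);
   }

   public class LineageProviderLookup {
       // TableLineageUtils-style lookup: scan the providers registered in
       // META-INF/services/ExternalLineageVertexProvider for one that handles
       // the given source. On 1.19 nothing is registered, so the lineage
       // classes are never loaded and no ClassLoader error can occur.
       static Optional<ExternalLineageVertexProvider> findProvider(Object source) {
           for (ExternalLineageVertexProvider provider
                   : ServiceLoader.load(ExternalLineageVertexProvider.class)) {
               if (provider.supports(source)) {
                   return Optional.of(provider);
               }
           }
           return Optional.empty();
       }

       public static void main(String[] args) {
           // With no provider registered, the lookup degrades gracefully.
           System.out.println("provider found: "
                   + findProvider(new Object()).isPresent());
       }
   }
   ```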
   
   In general, since this is a Table API feature, it also feels more natural to 
extend `SourceProvider` instead of `Source`, similarly to 
`ParallelismProvider`. That would solve the issue for DataStream users as well, 
but it's too complicated to explain that Table API users wouldn't be able to 
use the new jar with 1.19 🙃 .
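   
   A rough sketch of the `ParallelismProvider`-style variant. All interfaces here are simplified stand-ins with illustrative signatures, not the real Flink types:
   
   ```java
   import java.util.Optional;

   // Simplified stand-ins for the Flink interfaces; real signatures differ.
   interface SourceProvider { }
   interface LineageVertex { String name(); }

   // Optional mixin on the provider, analogous to ParallelismProvider: planner
   // code checks for it with instanceof, so providers that don't implement it
   // keep working unchanged.
   interface LineageVertexProvider {
       LineageVertex getLineageVertex();
   }

   class KafkaSourceProvider implements SourceProvider, LineageVertexProvider {
       @Override
       public LineageVertex getLineageVertex() {
           return () -> "kafka-source";
       }
   }

   public class MixinDemo {
       static Optional<LineageVertex> lineageOf(SourceProvider provider) {
           return provider instanceof LineageVertexProvider p
                   ? Optional.of(p.getLineageVertex())
                   : Optional.empty();
       }

       public static void main(String[] args) {
           System.out.println(lineageOf(new KafkaSourceProvider())
                   .map(LineageVertex::name)
                   .orElse("none"));
       }
   }
   ```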


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
