ijuma commented on pull request #34089:
URL: https://github.com/apache/spark/pull/34089#issuecomment-1034234832


   > How? Spark releases a single artifact of kafka data source so the only way 
to mitigate is injecting it in runtime, and it would require binary 
compatibility on kafka-clients.
   
   Right, so if you build your app by including Spark via a Maven/Gradle 
dependency, you can specify a kafka-clients dependency on your project to 
override the version used. Is there a reason why this is not possible with 
Spark?
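
   For instance, such an override in the consuming application might look 
like this in Maven (the version number is illustrative):

   ```xml
   <!-- In the application's pom.xml: declaring kafka-clients directly
        makes it "nearer" than the transitive version pulled in via
        spark-sql-kafka-0-10, so Maven's dependency mediation picks it -->
   <dependencies>
     <dependency>
       <groupId>org.apache.kafka</groupId>
       <artifactId>kafka-clients</artifactId>
       <version>2.8.1</version>
     </dependency>
   </dependencies>
   ```

   Gradle users can achieve the same with a strict version constraint on 
`org.apache.kafka:kafka-clients`.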
   
   > Does Kafka guarantee binary compatibility between majors and minors for 
kafka-clients?
   
   Not always, but the question is whether there is binary compatibility for 
the APIs that Spark uses. I suspect the answer is yes for 3.0, but it would be 
good to verify (as I am not deeply familiar with Spark's code). This could be 
done by compiling Spark with Apache Kafka 3.1.0 (as it is after this PR) and 
then running it with kafka-clients 2.8.x at runtime.
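
   One way to sketch that runtime check, assuming a local `spark-submit` and 
a pre-downloaded older kafka-clients jar (the paths, version, and application 
class here are hypothetical):

   ```shell
   # App/Spark compiled against kafka-clients 3.1.0; inject 2.8.x at runtime.
   # userClassPathFirst makes the supplied jar shadow the bundled version.
   spark-submit \
     --jars /path/to/kafka-clients-2.8.1.jar \
     --conf spark.driver.userClassPathFirst=true \
     --conf spark.executor.userClassPathFirst=true \
     --class com.example.KafkaStreamingJob \
     my-app.jar
   ```

   If the job runs without `NoSuchMethodError`/`NoClassDefFoundError` from 
the Kafka classes, the APIs Spark touches are binary compatible across those 
versions.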


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
