HeartSaVioR commented on pull request #34089: URL: https://github.com/apache/spark/pull/34089#issuecomment-1034277243
> Right, so if you build your app by including Spark via a Maven/Gradle dependency, you can specify a kafka-clients dependency on your project to override the version used. Is there a reason why this is not possible with Spark?

It is possible, but it is not something we will test hard before releasing. Choosing the default must be done as if we ourselves had a production system directly affected by it, because that is what end users are facing. If we could do the opposite, compiling Spark against Kafka 2.8.1 and using it with kafka-clients 3.1.x at runtime, I would feel safer. Even then, I wouldn't recommend it unless users really need it, and I'd say "it's at your own risk".

> > Could you please elaborate more on this? Does this mean Kafka 3.0 has different default values for functionality-related configurations? If so, it sounds like end users of Kafka 3.0 should understand what is going on before moving on, and end users of Spark will have to do this in what is technically a "minor version upgrade".
>
> Yes.

OK, this backs up my concern. This is something we totally missed. I don't see any effort in this PR toward thinking hard about the possible impacts we will get. That is why I care more about major releases: everyone dealing with semver knows that bumping the major version is a once-in-years chance to make breaking changes, so breaking changes are deferred from minor releases to major ones. The Spark community is not aiming to release a major version, so end users tend not to expect any breaking changes. And technically, we are making them.
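For readers wondering what the dependency override mentioned above looks like in practice, here is a minimal, untested sketch for a Maven project. The version numbers are illustrative only (kafka-clients 2.8.1 is taken from the discussion; the Spark artifact and version are assumptions, not taken from this PR):

```xml
<!-- Hypothetical pom.xml fragment: declaring kafka-clients as a direct
     dependency overrides the transitive version pulled in by the Spark
     Kafka connector, per Maven's "nearest definition" mediation rule. -->
<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql-kafka-0-10_2.12</artifactId>
    <version>3.2.1</version> <!-- illustrative version -->
  </dependency>
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.8.1</version> <!-- pin the client version explicitly -->
  </dependency>
</dependencies>
```

As the comment notes, such an override is possible but sits outside the combination the Spark project tests before releasing, so it is at the user's own risk.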
